* Re: [MPTCP] [PATCH] squash-to: mptcp: Implement MPTCP receive path
@ 2019-06-14 13:37 Matthieu Baerts
From: Matthieu Baerts @ 2019-06-14 13:37 UTC (permalink / raw)
To: mptcp
Hi Paolo, Mat,
On 13/06/2019 20:23, Mat Martineau wrote:
>
>
> On Thu, 13 Jun 2019, Paolo Abeni wrote:
>
>> Using an atomic type for this field does not really protect
>> against concurrent read/modify/update cycles, and it makes the code
>> less readable; move to a plain u64.
>>
>> We already hold the required lock in the relevant contexts.
>
> Thanks for removing the extra complexity, Paolo. This looks ready to merge.
Thank you for the patch and the review!
- 00ace0e59bd4: "squashed" into "mptcp: Implement MPTCP receive path"
- signed-off already there
- no conflict
- 90e253ed5838..9ff7aba894d0: result
Note that when I ran the tests after applying the patch, and without
the KASAN and PROVE_LOCKING kconfig options, I got this error:
00:12:41.843 # ns2 MPTCP -> ns2 (10.0.2.1:10025) TCP [ OK ]
00:12:42.874 # ns2 TCP -> ns2 (10.0.2.1:10026) MPTCP [ OK ]
00:12:43.907 # ns2 MPTCP -> ns3 (10.0.2.2:10027) MPTCP main_loop_s:
timed out
00:13:14.949 # copyfd_io: poll timed out (events: POLLIN 1, POLLOUT 4)
00:13:14.950 # [ FAIL ] client exit code 2, server 2
00:13:14.950 # \nnetns ns3 socket stat for 10027:
00:13:14.970 # State Recv-Q Send-Q Local Address:Port
Peer Address:Port
00:13:14.970 # SYN-RECV 0 0 10.0.2.2:10027
10.0.2.1:47614 timer:(on,1.550ms,4) cwnd:2 retrans:4/0 reordering:0
00:13:14.972 # \nnetns ns2 socket stat for 10027:
00:13:14.982 # State Recv-Q Send-Q Local Address:Port
Peer Address:Port
00:13:14.983 # FIN-WAIT-1 0 56569 10.0.2.1:47614
10.0.2.2:10027 timer:(on,22sec,7) rto:25.72 cwnd:1 ssthresh:7
retrans:7/0 reordering:0
00:13:14.987 # ns2 MPTCP -> ns3 (10.0.3.2:10028) MPTCP [ OK ]
00:13:16.020 # ns2 MPTCP -> ns3 (10.0.3.2:10029) TCP [ OK ]
I *think* it looks the same as the one I reported a few days ago.
https://lists.01.org/pipermail/mptcp/2019-June/001278.html
Could it be related to what Florian fixed in his last patch, e9d3a8a4a460
("mptcp: selftests: increase timeout and return error on ping failure")?
https://lists.01.org/pipermail/mptcp/2019-June/001286.html
Cheers,
Matt
--
Matthieu Baerts | R&D Engineer
matthieu.baerts(a)tessares.net
Tessares SA | Hybrid Access Solutions
www.tessares.net
1 Avenue Jean Monnet, 1348 Louvain-la-Neuve, Belgium
* Re: [MPTCP] [PATCH] squash-to: mptcp: Implement MPTCP receive path
@ 2019-06-14 14:23 Matthieu Baerts
From: Matthieu Baerts @ 2019-06-14 14:23 UTC (permalink / raw)
To: mptcp
Hi Paolo,
On 14/06/2019 15:51, Matthieu Baerts wrote:
> Hi Paolo,
>
> Thank you for the quick answer!
>
> On 14/06/2019 15:45, Paolo Abeni wrote:
>> Hi,
>>
>> On Fri, 2019-06-14 at 15:37 +0200, Matthieu Baerts wrote:
>>> Note that when I ran the tests after applying the patch, and without
>>> the KASAN and PROVE_LOCKING kconfig options, I got this error:
>>
>> That is quite unexpected; the patch should not impact functionality in
>> any way.
>>
>>> 00:12:41.843 # ns2 MPTCP -> ns2 (10.0.2.1:10025) TCP [ OK ]
>>> 00:12:42.874 # ns2 TCP -> ns2 (10.0.2.1:10026) MPTCP [ OK ]
>>> 00:12:43.907 # ns2 MPTCP -> ns3 (10.0.2.2:10027) MPTCP main_loop_s:
>>> timed out
>>> 00:13:14.949 # copyfd_io: poll timed out (events: POLLIN 1, POLLOUT 4)
>>> 00:13:14.950 # [ FAIL ] client exit code 2, server 2
>>> 00:13:14.950 # \nnetns ns3 socket stat for 10027:
>>> 00:13:14.970 # State Recv-Q Send-Q Local Address:Port
>>> Peer Address:Port
>>> 00:13:14.970 # SYN-RECV 0 0 10.0.2.2:10027
>>> 10.0.2.1:47614 timer:(on,1.550ms,4) cwnd:2 retrans:4/0 reordering:0
>>> 00:13:14.972 # \nnetns ns2 socket stat for 10027:
>>> 00:13:14.982 # State Recv-Q Send-Q Local Address:Port
>>> Peer Address:Port
>>> 00:13:14.983 # FIN-WAIT-1 0 56569 10.0.2.1:47614
>>> 10.0.2.2:10027 timer:(on,22sec,7) rto:25.72 cwnd:1 ssthresh:7
>>> retrans:7/0 reordering:0
>>> 00:13:14.987 # ns2 MPTCP -> ns3 (10.0.3.2:10028) MPTCP [ OK ]
>>> 00:13:16.020 # ns2 MPTCP -> ns3 (10.0.3.2:10029) TCP [ OK ]
>>>
>>> I *think* it looks the same as the one I reported a few days ago.
>>>
>>> https://lists.01.org/pipermail/mptcp/2019-June/001278.html
>>
>> Indeed it looks similar...
>>
>>> Could it be related to what Florian fixed in his last patch,
>>> e9d3a8a4a460 ("mptcp: selftests: increase timeout and return error on
>>> ping failure")?
>>>
>>> https://lists.01.org/pipermail/mptcp/2019-June/001286.html
>>
>> ... and you should not see that on top of Florian's patch. Just to
>> double-check: is your testbed up to date (i.e. does the script include
>> Florian's patch)?
>
> Yes, it is up to date: I used a CI job to re-create the "export" branch
> (after having checked that the compilation was OK both with and without
> CONFIG_MPTCP) and then launched the tests from the "export" branch.
>
> So I was on the tag export/20190614T131503 and the patch is included:
>
> https://github.com/multipath-tcp/mptcp_net-next/commit/56c6c088766c04549eeef30ea86200350486ee26#diff-c550c086600c85e4bbd654545d2d9f8eR222
>
> I am going to run new tests in a few minutes. I will check if I still
> have the issue or not.
Tests are now passing with the new export branch, which includes the
last 4 patches from Peter.
We might be hitting the same bug as before, which also did not happen
every time.
Cheers,
Matt
> Cheers,
> Matt
>
>> Cheers,
>>
>> Paolo
>>
>
* Re: [MPTCP] [PATCH] squash-to: mptcp: Implement MPTCP receive path
@ 2019-06-14 13:51 Matthieu Baerts
From: Matthieu Baerts @ 2019-06-14 13:51 UTC (permalink / raw)
To: mptcp
Hi Paolo,
Thank you for the quick answer!
On 14/06/2019 15:45, Paolo Abeni wrote:
> Hi,
>
> On Fri, 2019-06-14 at 15:37 +0200, Matthieu Baerts wrote:
>> Note that when I ran the tests after applying the patch, and without
>> the KASAN and PROVE_LOCKING kconfig options, I got this error:
>
> That is quite unexpected; the patch should not impact functionality in
> any way.
>
>> 00:12:41.843 # ns2 MPTCP -> ns2 (10.0.2.1:10025) TCP [ OK ]
>> 00:12:42.874 # ns2 TCP -> ns2 (10.0.2.1:10026) MPTCP [ OK ]
>> 00:12:43.907 # ns2 MPTCP -> ns3 (10.0.2.2:10027) MPTCP main_loop_s:
>> timed out
>> 00:13:14.949 # copyfd_io: poll timed out (events: POLLIN 1, POLLOUT 4)
>> 00:13:14.950 # [ FAIL ] client exit code 2, server 2
>> 00:13:14.950 # \nnetns ns3 socket stat for 10027:
>> 00:13:14.970 # State Recv-Q Send-Q Local Address:Port
>> Peer Address:Port
>> 00:13:14.970 # SYN-RECV 0 0 10.0.2.2:10027
>> 10.0.2.1:47614 timer:(on,1.550ms,4) cwnd:2 retrans:4/0 reordering:0
>> 00:13:14.972 # \nnetns ns2 socket stat for 10027:
>> 00:13:14.982 # State Recv-Q Send-Q Local Address:Port
>> Peer Address:Port
>> 00:13:14.983 # FIN-WAIT-1 0 56569 10.0.2.1:47614
>> 10.0.2.2:10027 timer:(on,22sec,7) rto:25.72 cwnd:1 ssthresh:7
>> retrans:7/0 reordering:0
>> 00:13:14.987 # ns2 MPTCP -> ns3 (10.0.3.2:10028) MPTCP [ OK ]
>> 00:13:16.020 # ns2 MPTCP -> ns3 (10.0.3.2:10029) TCP [ OK ]
>>
>> I *think* it looks the same as the one I reported a few days ago.
>>
>> https://lists.01.org/pipermail/mptcp/2019-June/001278.html
>
> Indeed it looks similar...
>
>> Could it be related to what Florian fixed in his last patch,
>> e9d3a8a4a460 ("mptcp: selftests: increase timeout and return error on
>> ping failure")?
>>
>> https://lists.01.org/pipermail/mptcp/2019-June/001286.html
>
> ... and you should not see that on top of Florian's patch. Just to
> double-check: is your testbed up to date (i.e. does the script include
> Florian's patch)?
Yes, it is up to date: I used a CI job to re-create the "export" branch
(after having checked that the compilation was OK both with and without
CONFIG_MPTCP) and then launched the tests from the "export" branch.
So I was on the tag export/20190614T131503 and the patch is included:
https://github.com/multipath-tcp/mptcp_net-next/commit/56c6c088766c04549eeef30ea86200350486ee26#diff-c550c086600c85e4bbd654545d2d9f8eR222
I am going to run new tests in a few minutes. I will check if I still
have the issue or not.
Cheers,
Matt
> Cheers,
>
> Paolo
>
* Re: [MPTCP] [PATCH] squash-to: mptcp: Implement MPTCP receive path
@ 2019-06-14 13:45 Paolo Abeni
From: Paolo Abeni @ 2019-06-14 13:45 UTC (permalink / raw)
To: mptcp
Hi,
On Fri, 2019-06-14 at 15:37 +0200, Matthieu Baerts wrote:
> Note that when I ran the tests after applying the patch, and without
> the KASAN and PROVE_LOCKING kconfig options, I got this error:
That is quite unexpected; the patch should not impact functionality in
any way.
> 00:12:41.843 # ns2 MPTCP -> ns2 (10.0.2.1:10025) TCP [ OK ]
> 00:12:42.874 # ns2 TCP -> ns2 (10.0.2.1:10026) MPTCP [ OK ]
> 00:12:43.907 # ns2 MPTCP -> ns3 (10.0.2.2:10027) MPTCP main_loop_s:
> timed out
> 00:13:14.949 # copyfd_io: poll timed out (events: POLLIN 1, POLLOUT 4)
> 00:13:14.950 # [ FAIL ] client exit code 2, server 2
> 00:13:14.950 # \nnetns ns3 socket stat for 10027:
> 00:13:14.970 # State Recv-Q Send-Q Local Address:Port
> Peer Address:Port
> 00:13:14.970 # SYN-RECV 0 0 10.0.2.2:10027
> 10.0.2.1:47614 timer:(on,1.550ms,4) cwnd:2 retrans:4/0 reordering:0
> 00:13:14.972 # \nnetns ns2 socket stat for 10027:
> 00:13:14.982 # State Recv-Q Send-Q Local Address:Port
> Peer Address:Port
> 00:13:14.983 # FIN-WAIT-1 0 56569 10.0.2.1:47614
> 10.0.2.2:10027 timer:(on,22sec,7) rto:25.72 cwnd:1 ssthresh:7
> retrans:7/0 reordering:0
> 00:13:14.987 # ns2 MPTCP -> ns3 (10.0.3.2:10028) MPTCP [ OK ]
> 00:13:16.020 # ns2 MPTCP -> ns3 (10.0.3.2:10029) TCP [ OK ]
>
> I *think* it looks the same as the one I reported a few days ago.
>
> https://lists.01.org/pipermail/mptcp/2019-June/001278.html
Indeed it looks similar...
> Could it be related to what Florian fixed in his last patch,
> e9d3a8a4a460 ("mptcp: selftests: increase timeout and return error on
> ping failure")?
>
> https://lists.01.org/pipermail/mptcp/2019-June/001286.html
... and you should not see that on top of Florian's patch. Just to
double-check: is your testbed up to date (i.e. does the script include
Florian's patch)?
Cheers,
Paolo
* Re: [MPTCP] [PATCH] squash-to: mptcp: Implement MPTCP receive path
@ 2019-06-13 18:23 Mat Martineau
From: Mat Martineau @ 2019-06-13 18:23 UTC (permalink / raw)
To: mptcp
On Thu, 13 Jun 2019, Paolo Abeni wrote:
> Using an atomic type for this field does not really protect
> against concurrent read/modify/update cycles, and it makes the code
> less readable; move to a plain u64.
>
> We already hold the required lock in the relevant contexts.
Thanks for removing the extra complexity, Paolo. This looks ready to
merge.
Mat
>
> Signed-off-by: Paolo Abeni <pabeni(a)redhat.com>
> ---
> net/mptcp/options.c | 2 +-
> net/mptcp/protocol.c | 8 ++++----
> net/mptcp/protocol.h | 2 +-
> 3 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/net/mptcp/options.c b/net/mptcp/options.c
> index d285a33cd480..92cb0cb48f0c 100644
> --- a/net/mptcp/options.c
> +++ b/net/mptcp/options.c
> @@ -319,7 +319,7 @@ static bool mptcp_established_options_dss(struct sock *sk, struct sk_buff *skb,
>
> msk = mptcp_sk(subflow_ctx(sk)->conn);
> if (msk) {
> - opts->ext_copy.data_ack = atomic64_read(&msk->ack_seq);
> + opts->ext_copy.data_ack = msk->ack_seq;
> } else {
> crypto_key_sha1(subflow_ctx(sk)->remote_key, NULL,
> &opts->ext_copy.data_ack);
> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
> index 941988387ea0..91e913636c45 100644
> --- a/net/mptcp/protocol.c
> +++ b/net/mptcp/protocol.c
> @@ -435,7 +435,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
> }
>
> ssn = tcp_sk(ssk)->copied_seq - subflow->ssn_offset;
> - old_ack = atomic64_read(&msk->ack_seq);
> + old_ack = msk->ack_seq;
>
> if (unlikely(before(ssn, subflow->map_subflow_seq))) {
> /* Mapping covers data later in the subflow stream,
> @@ -501,7 +501,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
> ack_seq = get_mapped_dsn(subflow);
>
> if (before64(old_ack, ack_seq))
> - atomic64_set(&msk->ack_seq, ack_seq);
> + msk->ack_seq = ack_seq;
>
> if (!before(tcp_sk(ssk)->copied_seq - subflow->ssn_offset,
> subflow->map_subflow_seq + subflow->map_data_len)) {
> @@ -635,7 +635,7 @@ static struct sock *mptcp_accept(struct sock *sk, int flags, int *err,
> crypto_key_sha1(msk->remote_key, NULL, &ack_seq);
> msk->write_seq = subflow->idsn + 1;
> ack_seq++;
> - atomic64_set(&msk->ack_seq, ack_seq);
> + msk->ack_seq = ack_seq;
> subflow->map_seq = ack_seq;
> subflow->map_subflow_seq = 1;
> subflow->rel_write_seq = 1;
> @@ -760,7 +760,7 @@ void mptcp_finish_connect(struct sock *sk, int mp_capable)
> crypto_key_sha1(msk->remote_key, NULL, &ack_seq);
> msk->write_seq = subflow->idsn + 1;
> ack_seq++;
> - atomic64_set(&msk->ack_seq, ack_seq);
> + msk->ack_seq = ack_seq;
> subflow->map_seq = ack_seq;
> subflow->map_subflow_seq = 1;
> subflow->rel_write_seq = 1;
> diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
> index a4f0e7d3bd62..4bc1a094aad6 100644
> --- a/net/mptcp/protocol.h
> +++ b/net/mptcp/protocol.h
> @@ -43,7 +43,7 @@ struct mptcp_sock {
> u64 local_key;
> u64 remote_key;
> u64 write_seq;
> - atomic64_t ack_seq;
> + u64 ack_seq;
> u32 token;
> struct list_head conn_list;
> struct socket *subflow; /* outgoing connect/listener/!mp_capable */
> --
> 2.20.1
>
> _______________________________________________
> mptcp mailing list
> mptcp(a)lists.01.org
> https://lists.01.org/mailman/listinfo/mptcp
>
--
Mat Martineau
Intel
* [MPTCP] [PATCH] squash-to: mptcp: Implement MPTCP receive path
@ 2019-06-13 15:25 Paolo Abeni
From: Paolo Abeni @ 2019-06-13 15:25 UTC (permalink / raw)
To: mptcp
Using an atomic type for this field does not really protect against
concurrent read/modify/update cycles, and it makes the code less
readable; move to a plain u64.
We already hold the required lock in the relevant contexts.
Signed-off-by: Paolo Abeni <pabeni(a)redhat.com>
---
net/mptcp/options.c | 2 +-
net/mptcp/protocol.c | 8 ++++----
net/mptcp/protocol.h | 2 +-
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/net/mptcp/options.c b/net/mptcp/options.c
index d285a33cd480..92cb0cb48f0c 100644
--- a/net/mptcp/options.c
+++ b/net/mptcp/options.c
@@ -319,7 +319,7 @@ static bool mptcp_established_options_dss(struct sock *sk, struct sk_buff *skb,
msk = mptcp_sk(subflow_ctx(sk)->conn);
if (msk) {
- opts->ext_copy.data_ack = atomic64_read(&msk->ack_seq);
+ opts->ext_copy.data_ack = msk->ack_seq;
} else {
crypto_key_sha1(subflow_ctx(sk)->remote_key, NULL,
&opts->ext_copy.data_ack);
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 941988387ea0..91e913636c45 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -435,7 +435,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
}
ssn = tcp_sk(ssk)->copied_seq - subflow->ssn_offset;
- old_ack = atomic64_read(&msk->ack_seq);
+ old_ack = msk->ack_seq;
if (unlikely(before(ssn, subflow->map_subflow_seq))) {
/* Mapping covers data later in the subflow stream,
@@ -501,7 +501,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
ack_seq = get_mapped_dsn(subflow);
if (before64(old_ack, ack_seq))
- atomic64_set(&msk->ack_seq, ack_seq);
+ msk->ack_seq = ack_seq;
if (!before(tcp_sk(ssk)->copied_seq - subflow->ssn_offset,
subflow->map_subflow_seq + subflow->map_data_len)) {
@@ -635,7 +635,7 @@ static struct sock *mptcp_accept(struct sock *sk, int flags, int *err,
crypto_key_sha1(msk->remote_key, NULL, &ack_seq);
msk->write_seq = subflow->idsn + 1;
ack_seq++;
- atomic64_set(&msk->ack_seq, ack_seq);
+ msk->ack_seq = ack_seq;
subflow->map_seq = ack_seq;
subflow->map_subflow_seq = 1;
subflow->rel_write_seq = 1;
@@ -760,7 +760,7 @@ void mptcp_finish_connect(struct sock *sk, int mp_capable)
crypto_key_sha1(msk->remote_key, NULL, &ack_seq);
msk->write_seq = subflow->idsn + 1;
ack_seq++;
- atomic64_set(&msk->ack_seq, ack_seq);
+ msk->ack_seq = ack_seq;
subflow->map_seq = ack_seq;
subflow->map_subflow_seq = 1;
subflow->rel_write_seq = 1;
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index a4f0e7d3bd62..4bc1a094aad6 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -43,7 +43,7 @@ struct mptcp_sock {
u64 local_key;
u64 remote_key;
u64 write_seq;
- atomic64_t ack_seq;
+ u64 ack_seq;
u32 token;
struct list_head conn_list;
struct socket *subflow; /* outgoing connect/listener/!mp_capable */
--
2.20.1