* [MPTCP] Re: [PATCH] mptcp: Add DATA_FIN transmission and handling
From: Paolo Abeni @ 2019-11-07  8:42 UTC
  To: mptcp

On Wed, 2019-11-06 at 16:37 -0800, Mat Martineau wrote:
> On Wed, 6 Nov 2019, Paolo Abeni wrote:
> > I think we don't need a workqueue here: subflow could simply set
> > (WRITE_ONCE) one/some msk level field[s].
> > 
> > I don't think we need to deal with subflows concurrently writing
> > different values there - it will be a protocol violation, the stream
> > will be corrupted anyway.
> 
> The goal of the workqueue would be to process the DATA_FIN (and ack it if 
> the recv side is caught up) even when there's no pending send/recv from 
> userspace to wake up. That means the subflow error queue wouldn't work 
> either.

I mean: I think we can update msk->incoming_data_fin from
mptcp_incoming_options() even without locking, just using WRITE_ONCE().

Additionally, we could use cmpxchg() if we want to detect concurrent
modification from multiple subflows, but that looks more like a debug
feature than a functional requirement to me.
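
Something like this, as a rough sketch - the msk-level field names
(incoming_data_fin, data_fin_seq) are placeholders, and they would need
to be plain members rather than bitfields for WRITE_ONCE() to work:

	/* sketch: in mptcp_incoming_options(), lockless from the subflow */
	if (msk && mp_opt->data_fin) {
		/* readers must pair these with READ_ONCE() */
		WRITE_ONCE(msk->incoming_data_fin, true);
		WRITE_ONCE(msk->data_fin_seq, mp_opt->data_seq);
	}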

> > [...]
> > > @@ -383,6 +383,13 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
> > >  	lock_sock(ssk);
> > >  	mptcp_clean_una(sk);
> > >  	timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
> > > +
> > > +	if ((1 << sk->sk_state) & ~(TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)) {
> > > +		ret = sk_stream_wait_connect(sk, &timeo);
> > > +		if (ret != 0)
> > > +			goto put_out_subflow;
> > > +	}
> > > +
> > >  	while (msg_data_left(msg)) {
> > >  		ret = mptcp_sendmsg_frag(sk, ssk, msg, NULL, &timeo, &mss_now,
> > >  					 &size_goal);
> > 
> > I think waiting is not needed here?!? We could just bail out with
> > ENOTCONN if the state is >= TCP_FIN_WAIT1. Such a check could be
> > located before selecting the egress subflow: it should be outside
> > the ssk lock.
> 
> I agree that the sk state check and possible waiting need to happen
> outside the ssk lock - thank you for spotting that.
> 
> Waiting isn't needed for the states at the end of the connection, but I 
> saw that TCP sendmsg/do_tcp_sendpages verified the connection state prior 
> to sending in this way. 

Uhm... overall this looks like a bugfix; perhaps it could be placed into
its own patch for review's sake...

> What if I move this check very early in the 
> function, before the fallback sock_sendmsg()? If there's an attempt to 
> send before the handshake is done, we don't want to be waiting in the 
> regular TCP sendmsg().

I think you are right, otherwise we would use the subflow as a fallback
even if the MP_CAPABLE handshake is still running and will complete
successfully.

I additionally think we need something similar even for recvmsg(), for
similar reasons.
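
E.g. something like this near the top of mptcp_sendmsg(), before the
fallback path is considered (rough sketch only; recvmsg() would get the
equivalent with sock_rcvtimeo()):

	/* sketch: wait for the MPTCP-level handshake before touching the
	 * fallback path, so we never end up blocking in plain TCP
	 * sendmsg()
	 */
	if ((1 << sk->sk_state) & ~(TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)) {
		long timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
		int ret = sk_stream_wait_connect(sk, &timeo);

		if (ret)
			return ret;
	}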

Side note: with v1 MP_CAPABLE we will have a similar problem, e.g. we
can start recvmsg() when the MP_CAPABLE handshake is not yet completed
but the TCP connection is established (because v1 has a fourth
MP_CAPABLE message). In that scenario we will use the subflow as an
MPTCP-enabled one - and block with no data.

Later the MP_CAPABLE handshake can even complete unsuccessfully, and we
would have to fall back to plain TCP while still waiting in the MPTCP-
specific part of mptcp_recvmsg() - that will be fun ;)

Cheers,

Paolo


* [MPTCP] Re: [PATCH] mptcp: Add DATA_FIN transmission and handling
From: Mat Martineau @ 2019-11-08 19:08 UTC
  To: mptcp

On Fri, 8 Nov 2019, Matthieu Baerts wrote:

> Hi Mat,
>
> On 06/11/2019 05:50, Mat Martineau wrote:
>> Send and process received DATA_FIN options. This requires addition of a
>> state machine to the MPTCP socket to handle the closing process. DATA_FIN
>> is sent along with a DSS mapping for the final data transmitted, or on a
>> bare ACK if there is no data to transmit.
>> 
>> This is still a work in progress, including these areas:
>>
>>   * Need to finish handling of DSS_ACKs for the state machine.
>>
>>   * Need to double-check whether it's correct to bypass inet_shutdown
>>     as the current code does.
>>
>>   * A DATA_FIN that arrives on a packet without data payload (unless it's
>>     a TCP FIN) is discarded by tcp_data_queue() and not propagated to
>>     the MPTCP socket. This is detected in mptcp_incoming_options() but
>>     not yet handled. The skb with the DATA_FIN could be cloned and
>>     placed in the subflow error queue, or a workqueue could handle it.
>> 
>> Signed-off-by: Mat Martineau <mathew.j.martineau(a)linux.intel.com>
>
> Yesterday at the meeting we talked a bit about this patch. Here is what we 
> said:
>
>
>    - DATA_FIN:
>        - Mat is working on it
>        - It still seems too big for the "initial submission" with the
> minimal feature
>        - because sending the DATA_FIN might be required by some
> implementations (which otherwise would not close the MPTCP connection),
> it could be good to at least send the DATA_FIN, without the full support.
>
>        - *Mat*: maybe for the initial submission, part 2, where only 1 
> subflow is supported, could we send the DATA_FIN just before closing the 
> connection? With the minimal support.
>
>
> What do you think about that?
>

For the single-subflow case, that's a good point. It will probably work 
well enough to set DATA_FIN whenever FIN is sent. I'll focus on that first 
and then see if there's also a single-subflow shortcut for acking the 
peer's DATA_FIN.
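
As a rough sketch of that shortcut, in the path that writes the DSS
mapping (assuming the skb is available at that point):

	/* sketch: single-subflow case - mirror the TCP-level FIN at the
	 * MPTCP level
	 */
	if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
		mpext->data_fin = 1;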

--
Mat Martineau
Intel


* [MPTCP] Re: [PATCH] mptcp: Add DATA_FIN transmission and handling
From: Matthieu Baerts @ 2019-11-08 15:02 UTC
  To: mptcp

Hi Mat,

On 06/11/2019 05:50, Mat Martineau wrote:
> Send and process received DATA_FIN options. This requires addition of a
> state machine to the MPTCP socket to handle the closing process. DATA_FIN
> is sent along with a DSS mapping for the final data transmitted, or on a
> bare ACK if there is no data to transmit.
> 
> This is still a work in progress, including these areas:
> 
>   * Need to finish handling of DSS_ACKs for the state machine.
> 
>   * Need to double-check whether it's correct to bypass inet_shutdown
>     as the current code does.
> 
>   * A DATA_FIN that arrives on a packet without data payload (unless it's
>     a TCP FIN) is discarded by tcp_data_queue() and not propagated to
>     the MPTCP socket. This is detected in mptcp_incoming_options() but
>     not yet handled. The skb with the DATA_FIN could be cloned and
>     placed in the subflow error queue, or a workqueue could handle it.
> 
> Signed-off-by: Mat Martineau <mathew.j.martineau(a)linux.intel.com>

Yesterday at the meeting we talked a bit about this patch. Here is what 
we said:


     - DATA_FIN:
         - Mat is working on it
         - It still seems too big for the "initial submission" with the
           minimal feature
         - because sending the DATA_FIN might be required by some
           implementations (which otherwise would not close the MPTCP
           connection), it could be good to at least send the DATA_FIN,
           without the full support.

         - *Mat*: maybe for the initial submission, part 2, where only 1
           subflow is supported, could we send the DATA_FIN just before
           closing the connection? With the minimal support.


What do you think about that?

Cheers and take care,
Matt
-- 
Matthieu Baerts | R&D Engineer
matthieu.baerts(a)tessares.net
Tessares SA | Hybrid Access Solutions
www.tessares.net
1 Avenue Jean Monnet, 1348 Louvain-la-Neuve, Belgium


* [MPTCP] Re: [PATCH] mptcp: Add DATA_FIN transmission and handling
From: Mat Martineau @ 2019-11-07  0:37 UTC
  To: mptcp

On Wed, 6 Nov 2019, Paolo Abeni wrote:

> Hi,
>
> Thanks for doing this!
>
> I have quite a few comments below... As a general note, this patch
> looks quite complex; what about splitting it into a [small] series? I
> think it would be nice:
>
> - introducing the state machine in a separate patch,
> - possibly another one to deal with state update on fallback,
> - possibly a separate patch to fix status update on listen
> - data fin handling code
>
> On Tue, 2019-11-05 at 20:50 -0800, Mat Martineau wrote:
>> @@ -632,6 +663,18 @@ void mptcp_incoming_options(struct sock *sk, struct sk_buff *skb,
>>
>>  	mpext->data_fin = mp_opt->data_fin;
>>
>> +	if (mp_opt->data_fin &&
>> +	    TCP_SKB_CB(skb)->seq == TCP_SKB_CB(skb)->end_seq) {
>> +		/* The DATA_FIN is on a packet that will be discarded by
>> +		 * tcp_data_queue() and will not get propagated to the MPTCP
>> +		 * socket.
>> +		 *
>> +		 * Use workqueue or subflow error queue?
>> +		 */
>> +		pr_warn("Ignored DATA_FIN");
>> +	}
>> +
>> +
>>  	if (msk)
>>  		mptcp_pm_fully_established(msk);
>>  }
>
> I think we don't need a workqueue here: subflow could simply set
> (WRITE_ONCE) one/some msk level field[s].
>
> I don't think we need to deal with subflows concurrently writing
> different values there - it will be a protocol violation, the stream
> will be corrupted anyway.

The goal of the workqueue would be to process the DATA_FIN (and ack it if 
the recv side is caught up) even when there's no pending send/recv from 
userspace to wake up. That means the subflow error queue wouldn't work 
either.

>
> [...]
>> @@ -383,6 +383,13 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
>>  	lock_sock(ssk);
>>  	mptcp_clean_una(sk);
>>  	timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
>> +
>> +	if ((1 << sk->sk_state) & ~(TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)) {
>> +		ret = sk_stream_wait_connect(sk, &timeo);
>> +		if (ret != 0)
>> +			goto put_out_subflow;
>> +	}
>> +
>>  	while (msg_data_left(msg)) {
>>  		ret = mptcp_sendmsg_frag(sk, ssk, msg, NULL, &timeo, &mss_now,
>>  					 &size_goal);
>
> I think waiting is not needed here?!? We could just bail out with
> ENOTCONN if the state is >= TCP_FIN_WAIT1. Such a check could be
> located before selecting the egress subflow: it should be outside the
> ssk lock.

I agree that the sk state check and possible waiting need to happen
outside the ssk lock - thank you for spotting that.

Waiting isn't needed for the states at the end of the connection, but I 
saw that TCP sendmsg/do_tcp_sendpages verified the connection state prior 
to sending in this way. What if I move this check very early in the 
function, before the fallback sock_sendmsg()? If there's an attempt to 
send before the handshake is done, we don't want to be waiting in the 
regular TCP sendmsg().


>
> [...]
>> @@ -403,6 +410,7 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
>>  			mptcp_reset_timer(sk);
>>  	}
>>
>> +put_out_subflow:
>>  	release_sock(ssk);
>>
>>  out:
>> @@ -464,6 +472,41 @@ static void mptcp_wait_data(struct sock *sk, long *timeo)
>>  	remove_wait_queue(sk_sleep(sk), &wait);
>>  }
>>
>> +static int mptcp_close_state(int oldstate, bool fin, bool ack)
>> +{
>> +	switch (oldstate) {
>> +	case TCP_FIN_WAIT1:
>> +		if (fin && ack)
>> +			return TCP_TIME_WAIT;
>> +		else if (fin)
>> +			return TCP_FIN_WAIT2;
>> +		else
>> +			return TCP_CLOSING;
>
> As we are in 'TCP_FIN_WAIT1' we have already sent a DATA_FIN, so I
> think we should:
>
> - move to TCP_FIN_WAIT2 when the fin is acked (snd_una == data_fin_seq)
> - move to TCP_CLOSING when we receive a DATA_FIN
>

You're correct - that's what I was supposed to do based on the state
diagram in RFC 6824 appendix C.

> Is the 'ack' argument true when there is no unacked data at the
> MPTCP level?!?

It will be up to the caller to set 'ack' to true only when the sequence 
number of the DATA_FIN is acked by the peer. That isn't implemented yet.

>
>> +	case TCP_FIN_WAIT2:
>> +		if (fin)
>> +			return TCP_TIME_WAIT;
>> +		else
>> +			return oldstate;
>
> When receiving the data fin in the TCP_FIN_WAIT2 state, I think we can
> move directly to TCP_CLOSED and let the subflow handle the time wait.

The appendix C diagram transitions from TIME_WAIT to CLOSED when all the 
subflows have closed, but the state isn't explained further in RFC 6824. 
In RFC 6824bis, there is another mention of TIME_WAIT (section 3.3.3) and 
break-before-make - but this doesn't make sense to me since no DATA_FINs 
have been exchanged in a break-before-make scenario and the MPTCP-level 
connection would still be ESTABLISHED.

Since the DATA_FIN has been acked by the time we are exiting FIN_WAIT2 or 
CLOSING, there isn't anything to keep track of at the MPTCP level for 
retransmission purposes. I think you're right that we don't need 
TIME_WAIT.

>
> It's not clear to me if the 'fin' argument refers to the 'ingress' or
> 'egress' fin (or has a different meaning according to the current sk
> state).

The 'fin' and 'ack' arguments are referring to incoming DATA_FIN/DATA_ACK. 
I can modify those names to clarify.

>
>> +	case TCP_CLOSING:
>> +		if (ack)
>> +			return TCP_TIME_WAIT;
>> +		else
>> +			return oldstate;
>
> I think here we have to check (snd_una == data_fin_seq), and move
> directly to TCP_CLOSED - the subflows will handle the time wait

I was planning to have the caller check that.

>
>> +	case TCP_LAST_ACK:
>> +		if (ack)
>> +			return TCP_CLOSE;
>> +		else
>> +			return oldstate;
>
>> +	case TCP_ESTABLISHED:
>> +		return TCP_CLOSE_WAIT;
>
> I think we should enter TCP_CLOSE_WAIT iff we received a data fin, and
> it looks like we need two additional cases:
> - TCP_CLOSE_WAIT -> TCP_LAST_ACK, when we send data fin

I handled that transition in mptcp_stream_shutdown, but it should be here 
for clarity.

> - TCP_LAST_ACK -> TCP_CLOSED when our fin is acked

That's what is above in the TCP_LAST_ACK case.

>
> Finally, I think we can avoid TCP_TIME_WAIT altogether, and we could
> handle some additional corner cases:
> - all subflows are closed (due to a TCP reset) -> move the msk state to
> TCP_CLOSE
> - what if we receive a TCP FIN on all subflows, but no MPTCP data fin?

Either of these could be a break-before-make scenario, but for now we do 
need to change the MPTCP state to TCP_CLOSE when all subflows are gone.
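
Putting your suggestions together, this is roughly the shape I'll aim
for (sketch only: argument names changed as discussed, TIME_WAIT
dropped, and the caller sets rcvd_ack only once snd_una has reached
data_fin_seq; CLOSE_WAIT -> LAST_ACK stays in the shutdown path since
it is triggered by sending a DATA_FIN, not receiving one):

	static int mptcp_close_state(int oldstate, bool rcvd_fin, bool rcvd_ack)
	{
		switch (oldstate) {
		case TCP_ESTABLISHED:
			/* only an incoming DATA_FIN moves us along */
			return rcvd_fin ? TCP_CLOSE_WAIT : oldstate;
		case TCP_FIN_WAIT1:
			if (rcvd_fin && rcvd_ack)
				return TCP_CLOSE;
			if (rcvd_ack)
				return TCP_FIN_WAIT2;
			if (rcvd_fin)
				return TCP_CLOSING;
			return oldstate;
		case TCP_FIN_WAIT2:
			/* no TIME_WAIT: the subflows handle it */
			return rcvd_fin ? TCP_CLOSE : oldstate;
		case TCP_CLOSING:
		case TCP_LAST_ACK:
			return rcvd_ack ? TCP_CLOSE : oldstate;
		default:
			return oldstate;
		}
	}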

>
> [...]
>> @@ -545,6 +588,20 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
>>  				continue;
>>  			}
>>  		}
>> +
>> +		if (!subflow->map_valid && subflow->incoming_data_fin) {
>> +			int newstate;
>> +
>> +			// TODO: Send on all subflows?
>> +			newstate = mptcp_close_state(sk->sk_state, true, false);
>> +			msk->ack_seq++;
>> +			subflow->incoming_data_fin = 0;
>> +			subflow->send_data_fin = 1;
>> +			subflow->data_fin_seq = msk->write_seq;
>
> The above means that the msk socket will send a DATA_FIN after
> receiving one, regardless of shutdown(), right? I think that is not
> required?!?

You're right - need to check newstate here and only send DATA_FIN in 
CLOSE_WAIT.

>
> [...]
>> @@ -747,11 +804,11 @@ static void mptcp_close(struct sock *sk, long timeout)
>>  	struct mptcp_sock *msk = mptcp_sk(sk);
>>  	struct socket *ssk = NULL;
>>
>> -	mptcp_token_destroy(msk->token);
>> -	inet_sk_state_store(sk, TCP_CLOSE);
>> -
>>  	lock_sock(sk);
>>
>> +	inet_sk_state_store(sk, TCP_CLOSE);
>> +	mptcp_token_destroy(msk->token);
>> +
>
> Don't we need to apply the above state machine here? And possibly use
> sk_stream_wait_close()? (With appropriate state handling, we can
> possibly reuse it as-is.)

Yes, need the state machine here.

>
>>  	if (msk->subflow) {
>>  		ssk = msk->subflow;
>>  		msk->subflow = NULL;
>
> [...]
>> @@ -1262,6 +1330,18 @@ static int mptcp_shutdown(struct socket *sock, int how)
>>  	pr_debug("sk=%p, how=%d", msk, how);
>>
>>  	lock_sock(sock->sk);
>> +
>> +	if (sock->sk->sk_state == TCP_ESTABLISHED) {
>> +		inet_sk_state_store(sock->sk, TCP_FIN_WAIT1);
>> +	} else if (sock->sk->sk_state == TCP_CLOSE_WAIT) {
>> +		inet_sk_state_store(sock->sk, TCP_LAST_ACK);
>> +	} else if (sock->sk->sk_state != TCP_CLOSE) {
>> +		pr_warn("Shutdown from unexpected state %d",
>> +			sock->sk->sk_state);
>> +		release_sock(sock->sk);
>> +		return -EALREADY;
>> +	}
>> +
>
> I think we can re-use the state machine for the above transition?
> Should we check for (how & SEND_SHUTDOWN) before doing any MPTCP-level
> action?
> Should we apply the state changes only if !fallback, and possibly copy
> the fallback socket status?

Yes, yes, and yes.

>
>
>>  	ssock = __mptcp_fallback_get_ref(msk);
>>  	if (ssock) {
>>  		release_sock(sock->sk);
>> @@ -1276,6 +1356,8 @@ static int mptcp_shutdown(struct socket *sock, int how)
>>
>>  		tcp_socket = mptcp_subflow_tcp_socket(subflow);
>>  		pr_debug("conn_list->subflow=%p", subflow);
>> +		subflow->send_data_fin = 1;
>> +		subflow->data_fin_seq = msk->write_seq;
>>  		ret = kernel_sock_shutdown(tcp_socket, how);
>>  	}
>>  	release_sock(sock->sk);
>
> I think here we must avoid shutting down the subflows (but likely we
> can shut them down when the msk transitions to TCP_FIN_WAIT2).
>
> Additionally, we can consider calling tcp_close() on all subflows when
> the msk transitions to the TCP_CLOSE state.

Ok, both of those are part of the state diagram but not reflected in the 
code.

>
>> @@ -1303,7 +1385,7 @@ void mptcp_proto_init(void)
>>  	mptcp_stream_ops.accept = mptcp_stream_accept;
>>  	mptcp_stream_ops.getname = mptcp_getname;
>>  	mptcp_stream_ops.listen = mptcp_listen;
>> -	mptcp_stream_ops.shutdown = mptcp_shutdown;
>> +	mptcp_stream_ops.shutdown = mptcp_stream_shutdown;
>>
>>  	if (percpu_counter_init(&mptcp_sockets_allocated, 0, GFP_KERNEL))
>>  		panic("Failed to allocate MPTCP pcpu counter\n");
>> diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
>> index 8fe5f9383d38..cf90014974ea 100644
>> --- a/net/mptcp/protocol.h
>> +++ b/net/mptcp/protocol.h
>> @@ -217,9 +217,12 @@ struct mptcp_subflow_context {
>>  		map_valid : 1,
>>  		backup : 1,
>>  		data_avail : 1,
>> -		rx_eof : 1;
>> +		rx_eof : 1,
>> +		incoming_data_fin : 1,
>> +		send_data_fin : 1;
>>  	u32	remote_nonce;
>>  	u64	thmac;
>> +	u64	data_fin_seq;
>>  	u32	local_nonce;
>>  	u32	remote_token;
>>  	u8	hmac[MPTCPOPT_HMAC_LEN];
>
> Possibly we can move this field to the msk? And possibly we don't need
> send_data_fin?

Yeah, data_fin_seq can be moved up to msk.

>
> If we have a DSS, we send the data fin iff the DSS map matches the
> msk data_fin_seq; if this is not a data packet (no DSS), we send the
> data_fin (once???) if msk->write_seq == data_fin_seq.

It's that "once" part that made me add send_data_fin. 
mptcp_write_data_fin() has this logic.
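
Roughly like this (sketch, not the exact code; 'mpext' stands in for
whatever extension struct is in scope at that point):

	/* sketch: latch so the DATA_FIN is emitted only once */
	if (subflow->send_data_fin &&
	    READ_ONCE(msk->write_seq) == subflow->data_fin_seq) {
		mpext->data_fin = 1;
		subflow->send_data_fin = 0;	/* the "once" part */
	}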

Thanks for your review!

--
Mat Martineau
Intel


* [MPTCP] Re: [PATCH] mptcp: Add DATA_FIN transmission and handling
From: Paolo Abeni @ 2019-11-06 10:58 UTC
  To: mptcp

Hi,

Thanks for doing this!

I have quite a few comments below... As a general note, this patch
looks quite complex; what about splitting it into a [small] series? I
think it would be nice:

- introducing the state machine in a separate patch, 
- possibly another one to deal with state update on fallback, 
- possibly a separate patch to fix status update on listen
- data fin handling code

On Tue, 2019-11-05 at 20:50 -0800, Mat Martineau wrote:
> @@ -632,6 +663,18 @@ void mptcp_incoming_options(struct sock *sk, struct sk_buff *skb,
>  
>  	mpext->data_fin = mp_opt->data_fin;
>  
> +	if (mp_opt->data_fin &&
> +	    TCP_SKB_CB(skb)->seq == TCP_SKB_CB(skb)->end_seq) {
> +		/* The DATA_FIN is on a packet that will be discarded by
> +		 * tcp_data_queue() and will not get propagated to the MPTCP
> +		 * socket.
> +		 *
> +		 * Use workqueue or subflow error queue?
> +		 */
> +		pr_warn("Ignored DATA_FIN");
> +	}
> +
> +
>  	if (msk)
>  		mptcp_pm_fully_established(msk);
>  }

I think we don't need a workqueue here: subflow could simply set
(WRITE_ONCE) one/some msk level field[s].

I don't think we need to deal with subflows concurrently writing
different values there - it will be a protocol violation, the stream
will be corrupted anyway.

[...]
> @@ -383,6 +383,13 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
>  	lock_sock(ssk);
>  	mptcp_clean_una(sk);
>  	timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
> +
> +	if ((1 << sk->sk_state) & ~(TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)) {
> +		ret = sk_stream_wait_connect(sk, &timeo);
> +		if (ret != 0)
> +			goto put_out_subflow;
> +	}
> +
>  	while (msg_data_left(msg)) {
>  		ret = mptcp_sendmsg_frag(sk, ssk, msg, NULL, &timeo, &mss_now,
>  					 &size_goal);

I think waiting is not needed here?!? We could just bail out with
ENOTCONN if the state is >= TCP_FIN_WAIT1. Such a check could be located
before selecting the egress subflow: it should be outside the ssk lock.
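
I.e. roughly (sketch; note CLOSE_WAIT must stay writable, so a state
mask is safer than a plain '>=' comparison):

	/* sketch: sending is only legal in ESTABLISHED and CLOSE_WAIT */
	if (!((1 << sk->sk_state) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)))
		return -ENOTCONN;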

[...]
> @@ -403,6 +410,7 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
>  			mptcp_reset_timer(sk);
>  	}
>  
> +put_out_subflow:
>  	release_sock(ssk);
>  
>  out:
> @@ -464,6 +472,41 @@ static void mptcp_wait_data(struct sock *sk, long *timeo)
>  	remove_wait_queue(sk_sleep(sk), &wait);
>  }
>  
> +static int mptcp_close_state(int oldstate, bool fin, bool ack)
> +{
> +	switch (oldstate) {
> +	case TCP_FIN_WAIT1:
> +		if (fin && ack)
> +			return TCP_TIME_WAIT;
> +		else if (fin)
> +			return TCP_FIN_WAIT2;
> +		else
> +			return TCP_CLOSING;

As we are in 'TCP_FIN_WAIT1' we have already sent a DATA_FIN, so I
think we should:

- move to TCP_FIN_WAIT2 when the fin is acked (snd_una == data_fin_seq)
- move to TCP_CLOSING when we receive a DATA_FIN

Is the 'ack' argument true when there is no unacked data at the
MPTCP level?!?

> +	case TCP_FIN_WAIT2:
> +		if (fin)
> +			return TCP_TIME_WAIT;
> +		else
> +			return oldstate;

When receiving the data fin in the TCP_FIN_WAIT2 state, I think we can
move directly to TCP_CLOSED and let the subflow handle the time wait.

It's not clear to me if the 'fin' argument refers to the 'ingress' or
'egress' fin (or has a different meaning according to the current sk
state).

> +	case TCP_CLOSING:
> +		if (ack)
> +			return TCP_TIME_WAIT;
> +		else
> +			return oldstate;

I think here we have to check (snd_una == data_fin_seq), and move
directly to TCP_CLOSED - the subflows will handle the time wait

> +	case TCP_LAST_ACK:
> +		if (ack)
> +			return TCP_CLOSE;
> +		else
> +			return oldstate;

> +	case TCP_ESTABLISHED:
> +		return TCP_CLOSE_WAIT;

I think we should enter TCP_CLOSE_WAIT iff we received a data fin, and
it looks like we need two additional cases: 
- TCP_CLOSE_WAIT -> TCP_LAST_ACK, when we send data fin
- TCP_LAST_ACK -> TCP_CLOSED when our fin is acked

Finally, I think we can avoid TCP_TIME_WAIT altogether, and we could
handle some additional corner cases:
- all subflows are closed (due to a TCP reset) -> move the msk state to
TCP_CLOSE
- what if we receive a TCP FIN on all subflows, but no MPTCP data fin?

[...]
> @@ -545,6 +588,20 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
>  				continue;
>  			}
>  		}
> +
> +		if (!subflow->map_valid && subflow->incoming_data_fin) {
> +			int newstate;
> +
> +			// TODO: Send on all subflows?
> +			newstate = mptcp_close_state(sk->sk_state, true, false);
> +			msk->ack_seq++;
> +			subflow->incoming_data_fin = 0;
> +			subflow->send_data_fin = 1;
> +			subflow->data_fin_seq = msk->write_seq;

The above means that the msk socket will send a DATA_FIN after
receiving one, regardless of shutdown(), right? I think that is not
required?!?

[...]
> @@ -747,11 +804,11 @@ static void mptcp_close(struct sock *sk, long timeout)
>  	struct mptcp_sock *msk = mptcp_sk(sk);
>  	struct socket *ssk = NULL;
>  
> -	mptcp_token_destroy(msk->token);
> -	inet_sk_state_store(sk, TCP_CLOSE);
> -
>  	lock_sock(sk);
>  
> +	inet_sk_state_store(sk, TCP_CLOSE);
> +	mptcp_token_destroy(msk->token);
> +

Don't we need to apply the above state machine here? And possibly use
sk_stream_wait_close()? (With appropriate state handling, we can
possibly reuse it as-is.)

>  	if (msk->subflow) {
>  		ssk = msk->subflow;
>  		msk->subflow = NULL;
 
[...]
> @@ -1262,6 +1330,18 @@ static int mptcp_shutdown(struct socket *sock, int how)
>  	pr_debug("sk=%p, how=%d", msk, how);
>  
>  	lock_sock(sock->sk);
> +
> +	if (sock->sk->sk_state == TCP_ESTABLISHED) {
> +		inet_sk_state_store(sock->sk, TCP_FIN_WAIT1);
> +	} else if (sock->sk->sk_state == TCP_CLOSE_WAIT) {
> +		inet_sk_state_store(sock->sk, TCP_LAST_ACK);
> +	} else if (sock->sk->sk_state != TCP_CLOSE) {
> +		pr_warn("Shutdown from unexpected state %d",
> +			sock->sk->sk_state);
> +		release_sock(sock->sk);
> +		return -EALREADY;
> +	}
> +

I think we can re-use the state machine for the above transition?
Should we check for (how & SEND_SHUTDOWN) before doing any MPTCP-level
action?
Should we apply the state changes only if !fallback, and possibly copy
the fallback socket status?


>  	ssock = __mptcp_fallback_get_ref(msk);
>  	if (ssock) {
>  		release_sock(sock->sk);
> @@ -1276,6 +1356,8 @@ static int mptcp_shutdown(struct socket *sock, int how)
>  
>  		tcp_socket = mptcp_subflow_tcp_socket(subflow);
>  		pr_debug("conn_list->subflow=%p", subflow);
> +		subflow->send_data_fin = 1;
> +		subflow->data_fin_seq = msk->write_seq;
>  		ret = kernel_sock_shutdown(tcp_socket, how);
>  	}
>  	release_sock(sock->sk);

I think here we must avoid shutting down the subflows (but likely we
can shut them down when the msk transitions to TCP_FIN_WAIT2).

Additionally, we can consider calling tcp_close() on all subflows when
the msk transitions to the TCP_CLOSE state.

> @@ -1303,7 +1385,7 @@ void mptcp_proto_init(void)
>  	mptcp_stream_ops.accept = mptcp_stream_accept;
>  	mptcp_stream_ops.getname = mptcp_getname;
>  	mptcp_stream_ops.listen = mptcp_listen;
> -	mptcp_stream_ops.shutdown = mptcp_shutdown;
> +	mptcp_stream_ops.shutdown = mptcp_stream_shutdown;
>  
>  	if (percpu_counter_init(&mptcp_sockets_allocated, 0, GFP_KERNEL))
>  		panic("Failed to allocate MPTCP pcpu counter\n");
> diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
> index 8fe5f9383d38..cf90014974ea 100644
> --- a/net/mptcp/protocol.h
> +++ b/net/mptcp/protocol.h
> @@ -217,9 +217,12 @@ struct mptcp_subflow_context {
>  		map_valid : 1,
>  		backup : 1,
>  		data_avail : 1,
> -		rx_eof : 1;
> +		rx_eof : 1,
> +		incoming_data_fin : 1,
> +		send_data_fin : 1;
>  	u32	remote_nonce;
>  	u64	thmac;
> +	u64	data_fin_seq;
>  	u32	local_nonce;
>  	u32	remote_token;
>  	u8	hmac[MPTCPOPT_HMAC_LEN];

Possibly we can move this field to the msk? And possibly we don't need
send_data_fin?

If we have a DSS, we send the data fin iff the DSS map matches the msk
data_fin_seq; if this is not a data packet (no DSS), we send the
data_fin (once???) if msk->write_seq == data_fin_seq.

Cheers,

Paolo


* [MPTCP] Re: [PATCH] mptcp: Add DATA_FIN transmission and handling
From: Florian Westphal @ 2019-11-06  9:29 UTC
  To: mptcp

Mat Martineau <mathew.j.martineau(a)linux.intel.com> wrote:
>  * A DATA_FIN that arrives on a packet without data payload (unless it's
>    a TCP FIN) is discarded by tcp_data_queue() and not propagated to
>    the MPTCP socket. This is detected in mptcp_incoming_options() but
>    not yet handled. The skb with the DATA_FIN could be cloned and
>    placed in the subflow error queue, or a workqueue could handle it.

I have a pending patch that adds an ack workqueue to deal with the
case where we get an mptcp ack for an mptcp socket that has exceeded
its wmem limit; it can be reused for this too.

I'll send it soon.
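
(For reference, the general shape is the standard deferral pattern -
the sketch below is illustrative only, not the pending patch; assume
msk->work has been set up with INIT_WORK() at socket init time:)

	/* illustrative sketch: process-context worker for the msk */
	static void mptcp_worker(struct work_struct *work)
	{
		struct mptcp_sock *msk = container_of(work, struct mptcp_sock, work);
		struct sock *sk = (struct sock *)msk;

		lock_sock(sk);
		/* process the deferred DATA_FIN / ack here */
		release_sock(sk);
		sock_put(sk); /* paired with the sock_hold() below */
	}

	/* from mptcp_incoming_options(), softirq context: take a
	 * reference only if the work was not already pending
	 */
	if (schedule_work(&msk->work))
		sock_hold((struct sock *)msk);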

