* [MPTCP] Re: [PATCH] squashto: mptcp: schedule worker when subflow is closed
@ 2021-02-10 16:04 Paolo Abeni
From: Paolo Abeni @ 2021-02-10 16:04 UTC
  To: mptcp


On Wed, 2021-02-10 at 13:47 +0100, Florian Westphal wrote:
> When the remote side closes a subflow, we should schedule the worker to
> dispose of the subflow in a timely manner.
> 
> Otherwise, the SF_CLOSED event won't be generated until the mptcp
> socket itself is closing or the local side is closing another subflow.
> 
> As noted by Paolo and Matthieu, a subflow that moves to TCP_CLOSE state
> might still have data in its rx queue.
> 
> Add a helper to only schedule the worker once the subflow's receive
> queue is empty.
> 
> For subflows that still have data, we can do the same schedule check
> in subflow_check_data_avail().
> 
> In case multiple subflows close at the same time, also make the worker
> skip subflows that are TCP_CLOSE with pending data.
> 
> Signed-off-by: Florian Westphal <fw@strlen.de>
> ---
>  net/mptcp/protocol.c |  4 ++++
>  net/mptcp/subflow.c  | 30 ++++++++++++++++++++++--------
>  2 files changed, 26 insertions(+), 8 deletions(-)
> 
> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
> index 646f91e1c792..d49e0efbdd50 100644
> --- a/net/mptcp/protocol.c
> +++ b/net/mptcp/protocol.c
> @@ -2172,6 +2172,10 @@ static void __mptcp_close_subflow(struct mptcp_sock *msk)
>  		if (inet_sk_state_load(ssk) != TCP_CLOSE)
>  			continue;
>  
> +		/* 'subflow_data_ready' will re-sched once rx queue is empty */
> +		if (!skb_queue_empty_lockless(&ssk->sk_receive_queue))
> +			continue;
> +
>  		mptcp_close_ssk((struct sock *)msk, ssk, subflow);
>  	}
>  }
> diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
> index 16c391ee8e0f..06e233410e0e 100644
> --- a/net/mptcp/subflow.c
> +++ b/net/mptcp/subflow.c
> @@ -945,6 +945,22 @@ static void mptcp_subflow_discard_data(struct sock *ssk, struct sk_buff *skb,
>  		subflow->map_valid = 0;
>  }
>  
> +/* sched mptcp worker to remove the subflow if no more data is pending */
> +static void subflow_sched_work_if_closed(struct mptcp_sock *msk, struct sock *ssk)
> +{
> +	struct sock *sk = (struct sock *)msk;
> +
> +	if (likely(ssk->sk_state != TCP_CLOSE))
> +		return;
> +
> +	if (skb_queue_empty(&ssk->sk_receive_queue) &&
> +	    !test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags)) {
> +		sock_hold(sk);
> +		if (!schedule_work(&msk->work))
> +			sock_put(sk);
> +	}
> +}
> +
>  static bool subflow_check_data_avail(struct sock *ssk)
>  {
>  	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
> @@ -983,11 +999,11 @@ static bool subflow_check_data_avail(struct sock *ssk)
>  		}
>  
>  		if (status != MAPPING_OK)
> -			return false;
> +			goto no_data;
>  
>  		skb = skb_peek(&ssk->sk_receive_queue);
>  		if (WARN_ON_ONCE(!skb))
> -			return false;
> +			goto no_data;
>  
>  		/* if msk lacks the remote key, this subflow must provide an
>  		 * MP_CAPABLE-based mapping
> @@ -1021,6 +1037,9 @@ static bool subflow_check_data_avail(struct sock *ssk)
>  	}
>  	return true;
>  
> +no_data:
> +	subflow_sched_work_if_closed(msk, ssk);
> +	return false;
>  fatal:
>  	/* fatal protocol error, close the socket */
>  	/* This barrier is coupled with smp_rmb() in tcp_poll() */
> @@ -1445,12 +1464,7 @@ static void subflow_state_change(struct sock *sk)
>  	if (mptcp_subflow_data_available(sk))
>  		mptcp_data_ready(parent, sk);
>  
> -	if (sk->sk_state == TCP_CLOSE &&
> -	    !test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &mptcp_sk(parent)->flags)) {
> -		sock_hold(parent);
> -		if (!schedule_work(&mptcp_sk(parent)->work))
> -			sock_put(parent);
> -	}
> +	subflow_sched_work_if_closed(mptcp_sk(parent), sk);
>  
>  	if (__mptcp_check_fallback(mptcp_sk(parent)) &&
>  	    !subflow->rx_eof && subflow_is_done(sk)) {

LGTM!

@Matttbe: could you please double check this one vs issues/154?

Thanks!

Paolo

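The core of the change above is a guard: schedule the MPTCP worker at most once, and only after a closed subflow's receive queue has drained. In the kernel this is built from test_and_set_bit() on MPTCP_WORK_CLOSE_SUBFLOW plus the sock_hold()/schedule_work()/sock_put() refcount dance. Below is a minimal userspace sketch of that scheduling guard, using plain C11 atomics standing in for the kernel primitives; all names are hypothetical and this is not the kernel implementation.

/* Userspace analogue of the guard added by this patch: schedule the
 * worker at most once, and only when a closed subflow has drained its
 * receive queue.  Names are hypothetical; this is not the kernel code.
 */
#include <stdatomic.h>
#include <stdio.h>

enum sf_state { SF_ESTABLISHED, SF_CLOSE };

struct subflow {
        enum sf_state state;
        int rx_pending;                 /* bytes still queued for the reader */
};

struct msk {
        atomic_flag work_close_subflow; /* stands in for MPTCP_WORK_CLOSE_SUBFLOW */
        int work_scheduled;             /* stands in for schedule_work(&msk->work) */
};

/* Called from both the state-change and the data-available paths,
 * like subflow_sched_work_if_closed() in the patch.
 */
static void sched_work_if_closed(struct msk *msk, struct subflow *sf)
{
        if (sf->state != SF_CLOSE)
                return;                 /* nothing to dispose of yet */

        if (sf->rx_pending)
                return;                 /* wait until the rx queue is drained */

        /* test-and-set: only the first caller schedules the worker */
        if (!atomic_flag_test_and_set(&msk->work_close_subflow))
                msk->work_scheduled++;
}

int main(void)
{
        struct msk m = { .work_close_subflow = ATOMIC_FLAG_INIT };
        struct subflow sf = { .state = SF_CLOSE, .rx_pending = 100 };

        sched_work_if_closed(&m, &sf);  /* skipped: data still pending */
        sf.rx_pending = 0;
        sched_work_if_closed(&m, &sf);  /* schedules the worker once */
        sched_work_if_closed(&m, &sf);  /* no-op: already marked */

        printf("worker scheduled %d time(s)\n", m.work_scheduled);
        return 0;
}

The atomic test-and-set is what keeps concurrent callers from scheduling the worker twice; the receive-queue check defers disposal until pending data has been consumed, matching the re-scheduling that the patch adds to subflow_check_data_avail() and the skip added to __mptcp_close_subflow().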

* [MPTCP] Re: [PATCH] squashto: mptcp: schedule worker when subflow is closed
@ 2021-02-10 19:06 Matthieu Baerts
From: Matthieu Baerts @ 2021-02-10 19:06 UTC
  To: mptcp


Hi Florian, Paolo, Mat,

On 10/02/2021 13:47, Florian Westphal wrote:
> When the remote side closes a subflow, we should schedule the worker to
> dispose of the subflow in a timely manner.
> 
> Otherwise, the SF_CLOSED event won't be generated until the mptcp
> socket itself is closing or the local side is closing another subflow.
> 
> As noted by Paolo and Matthieu, a subflow that moves to TCP_CLOSE state
> might still have data in its rx queue.
> 
> Add a helper to only schedule the worker once the subflow's receive
> queue is empty.
> 
> For subflows that still have data, we can do the same schedule check
> in subflow_check_data_avail().
> 
> In case multiple subflows close at the same time, also make the worker
> skip subflows that are TCP_CLOSE with pending data.

Thank you for the patch and reviews!

- 0629948c0ae6: "squashed" in "mptcp: schedule worker when subflow is closed"
- Results: 9c096197d99f..36c9c494faa9

Tests + export are in progress!

Cheers,
Matt
-- 
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net


* [MPTCP] Re: [PATCH] squashto: mptcp: schedule worker when subflow is closed
@ 2021-02-10 18:03 Mat Martineau
From: Mat Martineau @ 2021-02-10 18:03 UTC
  To: mptcp


On Wed, 10 Feb 2021, Matthieu Baerts wrote:

> Hi Paolo,
>
> On 10/02/2021 17:04, Paolo Abeni wrote:
>> On Wed, 2021-02-10 at 13:47 +0100, Florian Westphal wrote:
>>> When the remote side closes a subflow, we should schedule the worker to
>>> dispose of the subflow in a timely manner.
>>> 
>>> Otherwise, the SF_CLOSED event won't be generated until the mptcp
>>> socket itself is closing or the local side is closing another subflow.
>>> 
>>> As noted by Paolo and Matthieu, a subflow that moves to TCP_CLOSE state
>>> might still have data in its rx queue.
>
> (...)
>
>> 
>> LGTM!
>> 
>> @Matttbe: could you please double check this one vs issues/154?
>
> Sure! I should have commented here instead of on GH :)
> I ran this for more than 3 hours and was not able to reproduce the issue!
> I just stopped the loop.
>
> It looks good to me but I prefer to wait for Mat's ACK as he was the reviewer 
> of the original patch. I can always remove his RvB tag and re-add it later if 
> needed.
>
> Thanks for your work!
>

Ack from me as well - please squash and keep the RvB!


--
Mat Martineau
Intel


* [MPTCP] Re: [PATCH] squashto: mptcp: schedule worker when subflow is closed
@ 2021-02-10 16:11 Matthieu Baerts
From: Matthieu Baerts @ 2021-02-10 16:11 UTC
  To: mptcp


Hi Paolo,

On 10/02/2021 17:04, Paolo Abeni wrote:
> On Wed, 2021-02-10 at 13:47 +0100, Florian Westphal wrote:
>> When the remote side closes a subflow, we should schedule the worker to
>> dispose of the subflow in a timely manner.
>>
>> Otherwise, the SF_CLOSED event won't be generated until the mptcp
>> socket itself is closing or the local side is closing another subflow.
>>
>> As noted by Paolo and Matthieu, a subflow that moves to TCP_CLOSE state
>> might still have data in its rx queue.

(...)

> 
> LGTM!
> 
> @Matttbe: could you please double check this one vs issues/154?

Sure! I should have commented here instead of on GH :)
I ran this for more than 3 hours and was not able to reproduce the issue!
I just stopped the loop.

It looks good to me but I prefer to wait for Mat's ACK as he was the 
reviewer of the original patch. I can always remove his RvB tag and 
re-add it later if needed.

Thanks for your work!

Cheers,
Matt
-- 
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net

