* [PATCH net-next] tcp: replace LINUX_MIB_TCPOFODROP with LINUX_MIB_TCPRMEMFULLDROP for drops due to receive buffer full
@ 2018-06-27 11:50 Yafang Shao
  2018-06-27 14:48 ` Eric Dumazet
  0 siblings, 1 reply; 6+ messages in thread
From: Yafang Shao @ 2018-06-27 11:50 UTC (permalink / raw)
  To: edumazet, davem; +Cc: netdev, linux-kernel, Yafang Shao

When sk_rmem_alloc is larger than the receive buffer and we can't
schedule more memory for it, the skb will be dropped.

In the above situation, if this skb was headed for the ofo queue,
LINUX_MIB_TCPOFODROP is incremented to track the drop,
but if it was headed for the receive queue, there is no record at all.

So LINUX_MIB_TCPOFODROP is replaced with LINUX_MIB_TCPRMEMFULLDROP to track
this behavior.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 include/uapi/linux/snmp.h |  2 +-
 net/ipv4/proc.c           |  2 +-
 net/ipv4/tcp_input.c      | 10 +++++++---
 3 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/include/uapi/linux/snmp.h b/include/uapi/linux/snmp.h
index 97517f3..3a83322 100644
--- a/include/uapi/linux/snmp.h
+++ b/include/uapi/linux/snmp.h
@@ -243,7 +243,6 @@ enum
 	LINUX_MIB_TCPRETRANSFAIL,		/* TCPRetransFail */
 	LINUX_MIB_TCPRCVCOALESCE,		/* TCPRcvCoalesce */
 	LINUX_MIB_TCPOFOQUEUE,			/* TCPOFOQueue */
-	LINUX_MIB_TCPOFODROP,			/* TCPOFODrop */
 	LINUX_MIB_TCPOFOMERGE,			/* TCPOFOMerge */
 	LINUX_MIB_TCPCHALLENGEACK,		/* TCPChallengeACK */
 	LINUX_MIB_TCPSYNCHALLENGE,		/* TCPSYNChallenge */
@@ -280,6 +279,7 @@ enum
 	LINUX_MIB_TCPDELIVEREDCE,		/* TCPDeliveredCE */
 	LINUX_MIB_TCPACKCOMPRESSED,		/* TCPAckCompressed */
 	LINUX_MIB_TCPZEROWINDOWDROP,		/* TCPZeroWindowDrop */
+	LINUX_MIB_TCPRMEMFULLDROP,              /* TCPRmemFullDrop */
 	__LINUX_MIB_MAX
 };
 
diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
index 225ef34..43ee02d 100644
--- a/net/ipv4/proc.c
+++ b/net/ipv4/proc.c
@@ -251,7 +251,6 @@ static int sockstat_seq_show(struct seq_file *seq, void *v)
 	SNMP_MIB_ITEM("TCPRetransFail", LINUX_MIB_TCPRETRANSFAIL),
 	SNMP_MIB_ITEM("TCPRcvCoalesce", LINUX_MIB_TCPRCVCOALESCE),
 	SNMP_MIB_ITEM("TCPOFOQueue", LINUX_MIB_TCPOFOQUEUE),
-	SNMP_MIB_ITEM("TCPOFODrop", LINUX_MIB_TCPOFODROP),
 	SNMP_MIB_ITEM("TCPOFOMerge", LINUX_MIB_TCPOFOMERGE),
 	SNMP_MIB_ITEM("TCPChallengeACK", LINUX_MIB_TCPCHALLENGEACK),
 	SNMP_MIB_ITEM("TCPSYNChallenge", LINUX_MIB_TCPSYNCHALLENGE),
@@ -288,6 +287,7 @@ static int sockstat_seq_show(struct seq_file *seq, void *v)
 	SNMP_MIB_ITEM("TCPDeliveredCE", LINUX_MIB_TCPDELIVEREDCE),
 	SNMP_MIB_ITEM("TCPAckCompressed", LINUX_MIB_TCPACKCOMPRESSED),
 	SNMP_MIB_ITEM("TCPZeroWindowDrop", LINUX_MIB_TCPZEROWINDOWDROP),
+	SNMP_MIB_ITEM("TCPRmemFullDrop", LINUX_MIB_TCPRMEMFULLDROP),
 	SNMP_MIB_SENTINEL
 };
 
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 9c5b341..d11abb8 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -4442,7 +4442,7 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb)
 	tcp_ecn_check_ce(sk, skb);
 
 	if (unlikely(tcp_try_rmem_schedule(sk, skb, skb->truesize))) {
-		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPOFODROP);
+		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPRMEMFULLDROP);
 		tcp_drop(sk, skb);
 		return;
 	}
@@ -4611,8 +4611,10 @@ int tcp_send_rcvq(struct sock *sk, struct msghdr *msg, size_t size)
 	skb->data_len = data_len;
 	skb->len = size;
 
-	if (tcp_try_rmem_schedule(sk, skb, skb->truesize))
+	if (tcp_try_rmem_schedule(sk, skb, skb->truesize)) {
+		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPRMEMFULLDROP);
 		goto err_free;
+	}
 
 	err = skb_copy_datagram_from_iter(skb, 0, &msg->msg_iter, size);
 	if (err)
@@ -4677,8 +4679,10 @@ static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
 queue_and_out:
 		if (skb_queue_len(&sk->sk_receive_queue) == 0)
 			sk_forced_mem_schedule(sk, skb->truesize);
-		else if (tcp_try_rmem_schedule(sk, skb, skb->truesize))
+		else if (tcp_try_rmem_schedule(sk, skb, skb->truesize)) {
+			NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPRMEMFULLDROP);
 			goto drop;
+		}
 
 		eaten = tcp_queue_rcv(sk, skb, 0, &fragstolen);
 		tcp_rcv_nxt_update(tp, TCP_SKB_CB(skb)->end_seq);
-- 
1.8.3.1
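
For readers skimming the archive, the asymmetry the changelog describes is
easiest to see with the two pre-patch call sites side by side. The following
is a condensed, editorial illustration based on the unpatched tcp_input.c
visible in the diff context above; it is not part of the submission.

/* Out-of-order path: a failed rmem schedule is already accounted. */
static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb)
{
	if (unlikely(tcp_try_rmem_schedule(sk, skb, skb->truesize))) {
		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPOFODROP);
		tcp_drop(sk, skb);
		return;
	}
	/* ... insert skb into the out-of-order queue ... */
}

/* In-order path: the same failure drops the skb silently. */
static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
{
	/* ... */
queue_and_out:
	if (skb_queue_len(&sk->sk_receive_queue) == 0)
		sk_forced_mem_schedule(sk, skb->truesize);
	else if (tcp_try_rmem_schedule(sk, skb, skb->truesize))
		goto drop;	/* no counter is incremented here */
	/* ... */
}

tcp_send_rcvq() has the same silent "goto err_free" on failure, which is why
the patch touches three call sites in tcp_input.c.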



* Re: [PATCH net-next] tcp: replace LINUX_MIB_TCPOFODROP with LINUX_MIB_TCPRMEMFULLDROP for drops due to receive buffer full
  2018-06-27 11:50 [PATCH net-next] tcp: replace LINUX_MIB_TCPOFODROP with LINUX_MIB_TCPRMEMFULLDROP for drops due to receive buffer full Yafang Shao
@ 2018-06-27 14:48 ` Eric Dumazet
  2018-06-27 15:14   ` Yafang Shao
  0 siblings, 1 reply; 6+ messages in thread
From: Eric Dumazet @ 2018-06-27 14:48 UTC (permalink / raw)
  To: Yafang Shao, edumazet, davem; +Cc: netdev, linux-kernel



On 06/27/2018 04:50 AM, Yafang Shao wrote:
> When sk_rmem_alloc is larger than the receive buffer and we can't
> schedule more memory for it, the skb will be dropped.
> 
> In the above situation, if this skb was headed for the ofo queue,
> LINUX_MIB_TCPOFODROP is incremented to track the drop,
> but if it was headed for the receive queue, there is no record at all.
> 
> So LINUX_MIB_TCPOFODROP is replaced with LINUX_MIB_TCPRMEMFULLDROP to track
> this behavior.


Hi Yafang

I do not want to remove TCPOFODrop and mix multiple causes in one single counter.

Please take a look at commit a6df1ae9383697c for the reasoning.

Thanks.


* Re: [PATCH net-next] tcp: replace LINUX_MIB_TCPOFODROP with LINUX_MIB_TCPRMEMFULLDROP for drops due to receive buffer full
  2018-06-27 14:48 ` Eric Dumazet
@ 2018-06-27 15:14   ` Yafang Shao
  2018-06-27 15:27     ` Eric Dumazet
  0 siblings, 1 reply; 6+ messages in thread
From: Yafang Shao @ 2018-06-27 15:14 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Eric Dumazet, David Miller, netdev, LKML

On Wed, Jun 27, 2018 at 10:48 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
>
> On 06/27/2018 04:50 AM, Yafang Shao wrote:
>> When sk_rmem_alloc is larger than the receive buffer and we can't
>> schedule more memory for it, the skb will be dropped.
>>
>> In the above situation, if this skb was headed for the ofo queue,
>> LINUX_MIB_TCPOFODROP is incremented to track the drop,
>> but if it was headed for the receive queue, there is no record at all.
>>
>> So LINUX_MIB_TCPOFODROP is replaced with LINUX_MIB_TCPRMEMFULLDROP to track
>> this behavior.
>
>
> Hi Yafang
>
> I do not want to remove TCPOFODrop and mix multiple causes in one single counter.
>
> Please take a look at commit a6df1ae9383697c for the reasoning.
>

Got it!

What about introducing a new counter, say TCPRcvQFullDrop?

Thanks
Yafang


* Re: [PATCH net-next] tcp: replace LINUX_MIB_TCPOFODROP with LINUX_MIB_TCPRMEMFULLDROP for drops due to receive buffer full
  2018-06-27 15:14   ` Yafang Shao
@ 2018-06-27 15:27     ` Eric Dumazet
  2018-06-27 15:38       ` Yafang Shao
  0 siblings, 1 reply; 6+ messages in thread
From: Eric Dumazet @ 2018-06-27 15:27 UTC (permalink / raw)
  To: Yafang Shao; +Cc: Eric Dumazet, David Miller, netdev, LKML



On 06/27/2018 08:14 AM, Yafang Shao wrote:
 
> Got it!
> 
> What about introducing a new counter, say TCPRcvQFullDrop?

tcp_try_rmem_schedule() can fail for many different reasons,
not related to how occupied the socket receive queue is.
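
To illustrate that remark: a rough paraphrase of tcp_try_rmem_schedule() as it
looked around this time (details differ between kernel versions, so treat this
as a sketch rather than the exact code). A failure here can come from the
rcvbuf limit, from sk_rmem_schedule() refusing memory under global tcp_mem
pressure, or from the pruning fallbacks giving up, and the memory charged to
sk_rmem_alloc may largely sit in the ofo queue rather than the receive queue.

static int tcp_try_rmem_schedule(struct sock *sk, struct sk_buff *skb,
				 unsigned int size)
{
	if (atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf ||
	    !sk_rmem_schedule(sk, skb, size)) {
		/* Over the rcvbuf limit, or no global TCP memory available:
		 * try to make room before giving up.
		 */
		if (tcp_prune_queue(sk) < 0)
			return -1;

		while (!sk_rmem_schedule(sk, skb, size)) {
			if (!tcp_prune_ofo_queue(sk))
				return -1;
		}
	}
	return 0;
}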


* Re: [PATCH net-next] tcp: replace LINUX_MIB_TCPOFODROP with LINUX_MIB_TCPRMEMFULLDROP for drops due to receive buffer full
  2018-06-27 15:27     ` Eric Dumazet
@ 2018-06-27 15:38       ` Yafang Shao
  2018-06-27 15:52         ` Eric Dumazet
  0 siblings, 1 reply; 6+ messages in thread
From: Yafang Shao @ 2018-06-27 15:38 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Eric Dumazet, David Miller, netdev, LKML

On Wed, Jun 27, 2018 at 11:27 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
>
> On 06/27/2018 08:14 AM, Yafang Shao wrote:
>
>> Got it!
>>
>> What about introducing a new counter, say TCPRcvQFullDrop?
>
> tcp_try_rmem_schedule() can fail for many different reasons,
> not related to how occupied the socket receive queue is.

Yes. So TCPRcvQDrop would be more specific?

Thanks
Yafang


* Re: [PATCH net-next] tcp: replace LINUX_MIB_TCPOFODROP with LINUX_MIB_TCPRMEMFULLDROP for drops due to receive buffer full
  2018-06-27 15:38       ` Yafang Shao
@ 2018-06-27 15:52         ` Eric Dumazet
  0 siblings, 0 replies; 6+ messages in thread
From: Eric Dumazet @ 2018-06-27 15:52 UTC (permalink / raw)
  To: Yafang Shao, Eric Dumazet; +Cc: Eric Dumazet, David Miller, netdev, LKML



On 06/27/2018 08:38 AM, Yafang Shao wrote:
> On Wed, Jun 27, 2018 at 11:27 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
>>
>>
>> On 06/27/2018 08:14 AM, Yafang Shao wrote:
>>
>>> Got it!
>>>
>>> What about introducing a new counter, say TCPRcvQFullDrop?
>>
>> tcp_try_rmem_schedule() can fail for many different reasons,
>> not related to how occupied the socket receive queue is.
> 
> Yes. So TCPRcvQDrop would be more specific?

Yes, this looks better.
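
For completeness, a sketch of how the in-order call sites from the patch above
might look with the name agreed on here, leaving TCPOFODrop untouched. The
identifier and string spellings are assumptions based on the name proposed in
this thread, not taken from a merged commit.

/* include/uapi/linux/snmp.h */
	LINUX_MIB_TCPRCVQDROP,			/* TCPRcvQDrop */

/* net/ipv4/proc.c */
	SNMP_MIB_ITEM("TCPRcvQDrop", LINUX_MIB_TCPRCVQDROP),

/* net/ipv4/tcp_input.c: tcp_data_queue() and tcp_send_rcvq() only */
	else if (tcp_try_rmem_schedule(sk, skb, skb->truesize)) {
		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPRCVQDROP);
		goto drop;
	}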



