* [PATCH v2 net-next 0/2] tcp: reduce cpu usage when SO_SNDBUF is set
@ 2016-06-22 15:32 Jason Baron
  2016-06-22 15:32 ` [PATCH v2 net-next 1/2] tcp: replace smp_mb__after_atomic() with smp_mb() in tcp_poll() Jason Baron
  2016-06-22 15:32 ` [PATCH v2 net-next 2/2] tcp: reduce cpu usage when SO_SNDBUF is set Jason Baron
  0 siblings, 2 replies; 9+ messages in thread
From: Jason Baron @ 2016-06-22 15:32 UTC (permalink / raw)
  To: eric.dumazet, davem; +Cc: netdev

Hi,

I've added a patch to upgrade smp_mb__after_atomic() to an smp_mb(),
as Eric Dumazet pointed out. I have also combined the clearing of
SOCK_SHORT_WRITE with the clearing of SOCK_QUEUE_SHRUNK.

Thanks,

-Jason

v2:
-upgrade smp_mb__after_atomic to smp_mb() in tcp_poll()
-combine clear of SOCK_SHORT_WRITE with SOCK_QUEUE_SHRUNK

Jason Baron (2):
  tcp: replace smp_mb__after_atomic() with smp_mb() in tcp_poll()
  tcp: reduce cpu usage when SO_SNDBUF is set

 include/net/sock.h   |  6 ++++++
 net/ipv4/tcp.c       | 26 +++++++++++++++++++-------
 net/ipv4/tcp_input.c |  5 +++--
 3 files changed, 28 insertions(+), 9 deletions(-)

-- 
2.6.1


* [PATCH v2 net-next 1/2] tcp: replace smp_mb__after_atomic() with smp_mb() in tcp_poll()
  2016-06-22 15:32 [PATCH v2 net-next 0/2] tcp: reduce cpu usage when SO_SNDBUF is set Jason Baron
@ 2016-06-22 15:32 ` Jason Baron
  2016-06-22 15:32 ` [PATCH v2 net-next 2/2] tcp: reduce cpu usage when SO_SNDBUF is set Jason Baron
  1 sibling, 0 replies; 9+ messages in thread
From: Jason Baron @ 2016-06-22 15:32 UTC (permalink / raw)
  To: eric.dumazet, davem; +Cc: netdev

From: Jason Baron <jbaron@akamai.com>

sock_reset_flag() maps to __clear_bit(), not the atomic clear_bit(), so we
need a full smp_mb() there; smp_mb__after_atomic() is not sufficient.
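
For reference, sock_reset_flag() is (roughly, per include/net/sock.h of
this era) a plain, non-atomic bit operation, so a barrier that is only
defined to order against a preceding atomic RMW operation gives no
guarantee here:

    static inline void sock_reset_flag(struct sock *sk, enum sock_flags flag)
    {
        __clear_bit(flag, &sk->sk_flags);   /* non-atomic, not an RMW op */
    }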

Fixes: 3c7151275c0c ("tcp: add memory barriers to write space paths")
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Jason Baron <jbaron@akamai.com>
---
 net/ipv4/tcp_input.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 94d4aff97523..3ba526ecdeb9 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -4987,7 +4987,7 @@ static void tcp_check_space(struct sock *sk)
 	if (sock_flag(sk, SOCK_QUEUE_SHRUNK)) {
 		sock_reset_flag(sk, SOCK_QUEUE_SHRUNK);
 		/* pairs with tcp_poll() */
-		smp_mb__after_atomic();
+		smp_mb();
 		if (sk->sk_socket &&
 		    test_bit(SOCK_NOSPACE, &sk->sk_socket->flags))
 			tcp_new_space(sk);
-- 
2.6.1


* [PATCH v2 net-next 2/2] tcp: reduce cpu usage when SO_SNDBUF is set
  2016-06-22 15:32 [PATCH v2 net-next 0/2] tcp: reduce cpu usage when SO_SNDBUF is set Jason Baron
  2016-06-22 15:32 ` [PATCH v2 net-next 1/2] tcp: replace smp_mb__after_atomic() with smp_mb() in tcp_poll() Jason Baron
@ 2016-06-22 15:32 ` Jason Baron
  2016-06-22 17:34   ` Eric Dumazet
  1 sibling, 1 reply; 9+ messages in thread
From: Jason Baron @ 2016-06-22 15:32 UTC (permalink / raw)
  To: eric.dumazet, davem; +Cc: netdev

From: Jason Baron <jbaron@akamai.com>

When SO_SNDBUF is set and we are under tcp memory pressure, the effective
write buffer space can be much lower than what was set using SO_SNDBUF. For
example, we may have set the buffer to 100 KB, but we may only be able to
write 10 KB. In this scenario, poll()/select()/epoll() continuously return
POLLOUT, each followed by -EAGAIN from write(), resulting in a tight loop.
Note that epoll in edge-triggered mode does not have this issue, since it
only triggers once data has been ack'd. There is no issue when SO_SNDBUF is
not set, since the tcp layer will auto-tune sk->sndbuf.
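
To illustrate the tight loop, here is a minimal sketch of the affected
level-triggered pattern (fd is assumed to be a connected, non-blocking TCP
socket with SO_SNDBUF set; buf/len are whatever the application is sending):

    #include <errno.h>
    #include <poll.h>
    #include <unistd.h>

    static void send_all(int fd, const char *buf, size_t len)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLOUT };

        while (len) {
            poll(&pfd, 1, -1);               /* keeps reporting POLLOUT...      */
            ssize_t n = write(fd, buf, len); /* ...but allocation fails under   */
            if (n < 0) {                     /* memory pressure                 */
                if (errno == EAGAIN)
                    continue;                /* spin straight back into poll()  */
                return;
            }
            buf += n;
            len -= n;
        }
    }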

Introduce a new socket flag, SOCK_SHORT_WRITE, to mark the socket when we
have a short write due to memory pressure. By then testing for
SOCK_SHORT_WRITE in tcp_poll(), we hold off POLLOUT until a non-zero amount
of data has been ack'd. In a previous approach,
http://marc.info/?l=linux-netdev&m=143930393211782&w=2, I had introduced a
new field in 'struct sock' to solve this issue, but it's undesirable to add
bloat to 'struct sock'. We could also address this issue by waiting for the
write buffer to become completely empty, but that may reduce throughput,
since the buffer would sit empty while waiting for subsequent writes. This
change brings poll()/select() and level-triggered epoll in line with the
current epoll edge-trigger behavior.

We guarantee that SOCK_SHORT_WRITE eventually clears: whenever we set
SOCK_SHORT_WRITE, we ensure that sk_wmem_queued is non-zero and that
SOCK_NOSPACE is also set (in sk_stream_wait_memory()), so a later ack that
shrinks the write queue will reach tcp_check_space() and clear the flag.

I tested this patch using 10,000 sockets, setting SO_SNDBUF on the server
side to induce tcp memory pressure. A single server thread reduced its cpu
usage from 100% to 19% while maintaining the same level of throughput.
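
For reference, the server-side SO_SNDBUF clamp is just the usual
setsockopt() call from <sys/socket.h>; the value below is illustrative
rather than the exact test configuration:

    int sndbuf = 100 * 1024;   /* kernel roughly doubles this internally */

    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) < 0)
        perror("setsockopt(SO_SNDBUF)");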

Note: This is not a new issue; it has been this way for some time.

Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Jason Baron <jbaron@akamai.com>
---
 include/net/sock.h   |  6 ++++++
 net/ipv4/tcp.c       | 26 +++++++++++++++++++-------
 net/ipv4/tcp_input.c |  3 ++-
 3 files changed, 27 insertions(+), 8 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index 649d2a8c17fc..616e8e1a5d5d 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -741,6 +741,7 @@ enum sock_flags {
 	SOCK_FILTER_LOCKED, /* Filter cannot be changed anymore */
 	SOCK_SELECT_ERR_QUEUE, /* Wake select on error queue */
 	SOCK_RCU_FREE, /* wait rcu grace period in sk_destruct() */
+	SOCK_SHORT_WRITE, /* Couldn't fill sndbuf due to memory pressure */
 };
 
 #define SK_FLAGS_TIMESTAMP ((1UL << SOCK_TIMESTAMP) | (1UL << SOCK_TIMESTAMPING_RX_SOFTWARE))
@@ -1114,6 +1115,11 @@ static inline bool sk_stream_is_writeable(const struct sock *sk)
 	       sk_stream_memory_free(sk);
 }
 
+static inline void sk_set_short_write(struct sock *sk)
+{
+	if (sk->sk_wmem_queued > 0 && sk_stream_is_writeable(sk))
+		sock_set_flag(sk, SOCK_SHORT_WRITE);
+}
 
 static inline bool sk_has_memory_pressure(const struct sock *sk)
 {
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 5c7ed147449c..4577a90d7d87 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -517,7 +517,8 @@ unsigned int tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
 			mask |= POLLIN | POLLRDNORM;
 
 		if (!(sk->sk_shutdown & SEND_SHUTDOWN)) {
-			if (sk_stream_is_writeable(sk)) {
+			if (sk_stream_is_writeable(sk) &&
+			    !sock_flag(sk, SOCK_SHORT_WRITE)) {
 				mask |= POLLOUT | POLLWRNORM;
 			} else {  /* send SIGIO later */
 				sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk);
@@ -529,7 +530,8 @@ unsigned int tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
 				 * pairs with the input side.
 				 */
 				smp_mb__after_atomic();
-				if (sk_stream_is_writeable(sk))
+				if (sk_stream_is_writeable(sk) &&
+				    !sock_flag(sk, SOCK_SHORT_WRITE))
 					mask |= POLLOUT | POLLWRNORM;
 			}
 		} else
@@ -917,8 +919,10 @@ new_segment:
 
 			skb = sk_stream_alloc_skb(sk, 0, sk->sk_allocation,
 						  skb_queue_empty(&sk->sk_write_queue));
-			if (!skb)
+			if (!skb) {
+				sk_set_short_write(sk);
 				goto wait_for_memory;
+			}
 
 			skb_entail(sk, skb);
 			copy = size_goal;
@@ -933,8 +937,10 @@ new_segment:
 			tcp_mark_push(tp, skb);
 			goto new_segment;
 		}
-		if (!sk_wmem_schedule(sk, copy))
+		if (!sk_wmem_schedule(sk, copy)) {
+			sk_set_short_write(sk);
 			goto wait_for_memory;
+		}
 
 		if (can_coalesce) {
 			skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
@@ -1176,8 +1182,10 @@ new_segment:
 						  select_size(sk, sg),
 						  sk->sk_allocation,
 						  skb_queue_empty(&sk->sk_write_queue));
-			if (!skb)
+			if (!skb) {
+				sk_set_short_write(sk);
 				goto wait_for_memory;
+			}
 
 			process_backlog = true;
 			/*
@@ -1214,8 +1222,10 @@ new_segment:
 			int i = skb_shinfo(skb)->nr_frags;
 			struct page_frag *pfrag = sk_page_frag(sk);
 
-			if (!sk_page_frag_refill(sk, pfrag))
+			if (!sk_page_frag_refill(sk, pfrag)) {
+				sk_set_short_write(sk);
 				goto wait_for_memory;
+			}
 
 			if (!skb_can_coalesce(skb, i, pfrag->page,
 					      pfrag->offset)) {
@@ -1228,8 +1238,10 @@ new_segment:
 
 			copy = min_t(int, copy, pfrag->size - pfrag->offset);
 
-			if (!sk_wmem_schedule(sk, copy))
+			if (!sk_wmem_schedule(sk, copy)) {
+				sk_set_short_write(sk);
 				goto wait_for_memory;
+			}
 
 			err = skb_copy_to_page_nocache(sk, &msg->msg_iter, skb,
 						       pfrag->page,
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 3ba526ecdeb9..e216f41a325b 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -4985,7 +4985,8 @@ static void tcp_new_space(struct sock *sk)
 static void tcp_check_space(struct sock *sk)
 {
 	if (sock_flag(sk, SOCK_QUEUE_SHRUNK)) {
-		sock_reset_flag(sk, SOCK_QUEUE_SHRUNK);
+		sk->sk_flags &= ~((1UL << SOCK_QUEUE_SHRUNK) |
+				  (1UL << SOCK_SHORT_WRITE));
 		/* pairs with tcp_poll() */
 		smp_mb();
 		if (sk->sk_socket &&
-- 
2.6.1


* Re: [PATCH v2 net-next 2/2] tcp: reduce cpu usage when SO_SNDBUF is set
  2016-06-22 15:32 ` [PATCH v2 net-next 2/2] tcp: reduce cpu usage when SO_SNDBUF is set Jason Baron
@ 2016-06-22 17:34   ` Eric Dumazet
  2016-06-22 18:18     ` Jason Baron
  0 siblings, 1 reply; 9+ messages in thread
From: Eric Dumazet @ 2016-06-22 17:34 UTC (permalink / raw)
  To: Jason Baron; +Cc: davem, netdev

On Wed, 2016-06-22 at 11:32 -0400, Jason Baron wrote:
> From: Jason Baron <jbaron@akamai.com>
> 
> When SO_SNDBUF is set and we are under tcp memory pressure, the effective
> write buffer space can be much lower than what was set using SO_SNDBUF. For
> example, we may have set the buffer to 100 KB, but we may only be able to
> write 10 KB. In this scenario, poll()/select()/epoll() continuously return
> POLLOUT, each followed by -EAGAIN from write(), resulting in a tight loop.
> Note that epoll in edge-triggered mode does not have this issue, since it
> only triggers once data has been ack'd. There is no issue when SO_SNDBUF is
> not set, since the tcp layer will auto-tune sk->sndbuf.


Still, generating one POLLOUT event per incoming ACK will not please
epoll() users in edge-trigger mode.

The host is under global memory pressure, so we probably want to drain
socket queues _and_ reduce cpu pressure.

A strategy that ensures all sockets converge to small amounts of queued
memory ASAP is simply the best answer.

Letting big SO_SNDBUF offenders hog memory while their queues are big is
not going to help the sockets that cannot get ACKs (elephants get more
ACKs than mice, so they have a better chance of succeeding with their new
allocations).

Your patch adds a lot of complex logic to tcp_sendmsg() and
tcp_sendpage().


I would prefer a simpler patch like:


 net/ipv4/tcp.c |   26 +++++++++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)

diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 5c7ed147449c..68bf035180d5 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -370,6 +370,26 @@ static int retrans_to_secs(u8 retrans, int timeout, int rto_max)
 	return period;
 }
 
+static bool tcp_throttle_writes(const struct sock *sk)
+{
+	return unlikely(tcp_memory_pressure &&
+			sk->sk_wmem_queued >= sysctl_tcp_wmem[0]);
+}
+
+static bool tcp_is_writeable(const struct sock *sk)
+{
+	if (tcp_throttle_writes(sk))
+		return false;
+	return sk_stream_is_writeable(sk);
+}
+
+static void tcp_write_space(struct sock *sk)
+{
+	if (tcp_throttle_writes(sk))
+		return;
+	sk_stream_write_space(sk);
+}
+
 /* Address-family independent initialization for a tcp_sock.
  *
  * NOTE: A lot of things set to zero explicitly by call to
@@ -412,7 +432,7 @@ void tcp_init_sock(struct sock *sk)
 
 	sk->sk_state = TCP_CLOSE;
 
-	sk->sk_write_space = sk_stream_write_space;
+	sk->sk_write_space = tcp_write_space;
 	sock_set_flag(sk, SOCK_USE_WRITE_QUEUE);
 
 	icsk->icsk_sync_mss = tcp_sync_mss;
@@ -517,7 +537,7 @@ unsigned int tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
 			mask |= POLLIN | POLLRDNORM;
 
 		if (!(sk->sk_shutdown & SEND_SHUTDOWN)) {
-			if (sk_stream_is_writeable(sk)) {
+			if (tcp_is_writeable(sk)) {
 				mask |= POLLOUT | POLLWRNORM;
 			} else {  /* send SIGIO later */
 				sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk);
@@ -529,7 +549,7 @@ unsigned int tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
 				 * pairs with the input side.
 				 */
 				smp_mb__after_atomic();
-				if (sk_stream_is_writeable(sk))
+				if (tcp_is_writeable(sk))
 					mask |= POLLOUT | POLLWRNORM;
 			}
 		} else


* Re: [PATCH v2 net-next 2/2] tcp: reduce cpu usage when SO_SNDBUF is set
  2016-06-22 17:34   ` Eric Dumazet
@ 2016-06-22 18:18     ` Jason Baron
  2016-06-22 18:43       ` Eric Dumazet
  0 siblings, 1 reply; 9+ messages in thread
From: Jason Baron @ 2016-06-22 18:18 UTC (permalink / raw)
  To: Eric Dumazet, davem; +Cc: netdev



On 06/22/2016 01:34 PM, Eric Dumazet wrote:
> On Wed, 2016-06-22 at 11:32 -0400, Jason Baron wrote:
>> From: Jason Baron <jbaron@akamai.com>
>>
>> When SO_SNDBUF is set and we are under tcp memory pressure, the effective
>> write buffer space can be much lower than what was set using SO_SNDBUF. For
>> example, we may have set the buffer to 100 KB, but we may only be able to
>> write 10 KB. In this scenario, poll()/select()/epoll() continuously return
>> POLLOUT, each followed by -EAGAIN from write(), resulting in a tight loop.
>> Note that epoll in edge-triggered mode does not have this issue, since it
>> only triggers once data has been ack'd. There is no issue when SO_SNDBUF is
>> not set, since the tcp layer will auto-tune sk->sndbuf.
>
> Still, generating one POLLOUT event per incoming ACK will not please
> epoll() users in edge-trigger mode.
>
> The host is under global memory pressure, so we probably want to drain
> socket queues _and_ reduce cpu pressure.
>
> A strategy that ensures all sockets converge to small amounts of queued
> memory ASAP is simply the best answer.
>
> Letting big SO_SNDBUF offenders hog memory while their queues are big is
> not going to help the sockets that cannot get ACKs (elephants get more
> ACKs than mice, so they have a better chance of succeeding with their new
> allocations).
>
> Your patch adds a lot of complex logic to tcp_sendmsg() and
> tcp_sendpage().
>
>
> I would prefer a simpler patch like:
>
>

Ok, fair enough. I'm going to assume that you will submit this as
a formal patch.

For 1/2, getting the memory barrier right, should I re-submit that as a
separate patch?

Thanks,

-Jason


* Re: [PATCH v2 net-next 2/2] tcp: reduce cpu usage when SO_SNDBUF is set
  2016-06-22 18:18     ` Jason Baron
@ 2016-06-22 18:43       ` Eric Dumazet
  2016-06-22 18:51         ` Eric Dumazet
  0 siblings, 1 reply; 9+ messages in thread
From: Eric Dumazet @ 2016-06-22 18:43 UTC (permalink / raw)
  To: Jason Baron; +Cc: davem, netdev

On Wed, 2016-06-22 at 14:18 -0400, Jason Baron wrote:
> 

> 
> For 1/2, getting the memory barrier right, should I re-submit that as a
> separate patch?

Are you sure a full memory barrier (smp_mb()) is needed?

Maybe smp_wmb() would be enough?

(And smp_rmb() in tcp_poll()?)


* Re: [PATCH v2 net-next 2/2] tcp: reduce cpu usage when SO_SNDBUF is set
  2016-06-22 18:43       ` Eric Dumazet
@ 2016-06-22 18:51         ` Eric Dumazet
  2016-06-22 19:20           ` Jason Baron
  0 siblings, 1 reply; 9+ messages in thread
From: Eric Dumazet @ 2016-06-22 18:51 UTC (permalink / raw)
  To: Jason Baron; +Cc: davem, netdev

On Wed, 2016-06-22 at 11:43 -0700, Eric Dumazet wrote:
> On Wed, 2016-06-22 at 14:18 -0400, Jason Baron wrote:
> > 
> 
> > 
> > For 1/2, getting the memory barrier right, should I re-submit that as a
> > separate patch?
> 
> Are you sure a full memory barrier (smp_mb()) is needed?
> 
> Maybe smp_wmb() would be enough?
> 
> (And smp_rmb() in tcp_poll()?)

Well, in tcp_poll() smp_mb__after_atomic() is fine, as it follows
set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);

(although we might add a comment explaining why we should keep
sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk) before the set_bit()!)

But presumably smp_wmb() would be enough in tcp_check_space().


* Re: [PATCH v2 net-next 2/2] tcp: reduce cpu usage when SO_SNDBUF is set
  2016-06-22 18:51         ` Eric Dumazet
@ 2016-06-22 19:20           ` Jason Baron
  2016-06-22 20:15             ` Eric Dumazet
  0 siblings, 1 reply; 9+ messages in thread
From: Jason Baron @ 2016-06-22 19:20 UTC (permalink / raw)
  To: Eric Dumazet, Jason Baron; +Cc: davem, netdev



On 06/22/2016 02:51 PM, Eric Dumazet wrote:
> On Wed, 2016-06-22 at 11:43 -0700, Eric Dumazet wrote:
>> On Wed, 2016-06-22 at 14:18 -0400, Jason Baron wrote:
>>> For 1/2, getting the memory barrier right, should I re-submit that as a
>>> separate patch?
>> Are you sure a full memory barrier (smp_mb()) is needed?
>>
>> Maybe smp_wmb() would be enough?
>>
>> (And smp_rmb() in tcp_poll()?)
> Well, in tcp_poll() smp_mb__after_atomic() is fine, as it follows
> set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
>
> (although we might add a comment explaining why we should keep
> sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk) before the set_bit()!)
>
> But presumably smp_wmb() would be enough in tcp_check_space().
>
>
>
>

hmm, I think we need the smp_mb() there. From
tcp_poll() we have:

1) set_bit(SOCK_NOSPACE, ...)  (write)
2) smp_mb__after_atomic();
3) if (sk_stream_is_writeable(sk)) (read)

while in tcp_check_space() it's:

1) the state that sk_stream_is_writeable() cares about (write)
2) smp_mb();
3) if (sk->sk_socket && test_bit(SOCK_NOSPACE, ...)) (read)

So if we can show that there are sufficient barriers
for #1 (directly above), maybe it can be downgraded or
eliminated. But it would still seem somewhat fragile.
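
To spell the concern out, this is the classic store/load ("SB") pattern;
a reduced model (not the actual kernel code, with the socket flags and
write-space state replaced by plain variables) looks like:

    int nospace;        /* stands in for SOCK_NOSPACE                 */
    int queue_shrunk;   /* stands in for the acked/shrunk wmem state  */

    void poller(void)   /* tcp_poll() side */
    {
        WRITE_ONCE(nospace, 1);
        smp_mb();       /* order the store above against the load below */
        if (!READ_ONCE(queue_shrunk))
            ;           /* report !POLLOUT and rely on a later wakeup   */
    }

    void acker(void)    /* tcp_check_space() side */
    {
        WRITE_ONCE(queue_shrunk, 1);
        smp_mb();       /* ditto: store vs. later load */
        if (READ_ONCE(nospace))
            ;           /* wake the poller */
    }

Without a full barrier on both sides, the two loads can both observe the
old values (store->load reordering) and the wakeup is lost; smp_wmb()
only orders store->store, so it cannot close that window.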

Note I didn't observe any missing wakeups here, but I
just wanted to make sure we didn't miss any, since they
can be quite hard to debug.

Thanks,

-Jason


* Re: [PATCH v2 net-next 2/2] tcp: reduce cpu usage when SO_SNDBUF is set
  2016-06-22 19:20           ` Jason Baron
@ 2016-06-22 20:15             ` Eric Dumazet
  0 siblings, 0 replies; 9+ messages in thread
From: Eric Dumazet @ 2016-06-22 20:15 UTC (permalink / raw)
  To: Jason Baron; +Cc: davem, netdev

On Wed, 2016-06-22 at 15:20 -0400, Jason Baron wrote:


> hmm, I think we need the smp_mb() there. From
> tcp_poll() we have:
> 
> 1) set_bit(SOCK_NOSPACE, ...)  (write)
> 2) smp_mb__after_atomic();
> 3) if (sk_stream_is_writeable(sk)) (read)
> 
> while in tcp_check_space() it's:
> 
> 1) the state that sk_stream_is_writeable() cares about (write)
> 2) smp_mb();
> 3) if (sk->sk_socket && test_bit(SOCK_NOSPACE, ...)) (read)


Oh right, thanks for checking.

> 
> So if we can show that there are sufficient barriers
> for #1 (directly above), maybe it can be downgraded or
> eliminated. But it would still seem somewhat fragile.
> 
> Note I didn't observe any missing wakeups here, but I
> just wanted to make sure we didn't miss any, since they
> can be quite hard to debug.

