bpf.vger.kernel.org archive mirror
* [PATCH bpf v2 00/11] bpf sockmap fixes
@ 2023-03-27 17:54 John Fastabend
  2023-03-27 17:54 ` [PATCH bpf v2 01/11] bpf: sockmap, pass skb ownership through read_skb John Fastabend
                   ` (12 more replies)
  0 siblings, 13 replies; 24+ messages in thread
From: John Fastabend @ 2023-03-27 17:54 UTC (permalink / raw)
  To: cong.wang, jakub, daniel, lmb, edumazet
  Cc: john.fastabend, bpf, netdev, ast, andrii, will

This series has fixes for sockmap found while running it against NGINX TCP
tests and also on an underprovisioned VM, so that we hit error (ENOMEM)
cases regularly.

The first 3 patches fix cases related to ENOMEM that were either
causing splats or data hangs.

Then patches 4-7 resolve cases found when running NGINX with its sockets
assigned to sockmap. These mostly have to do with handling fin/shutdown
incorrectly and ensuring epoll_wait works as expected.

Patches 8 and 9 extract some of the logic used by the sockmap_listen tests
so that we can use it in other tests; it didn't make much sense to me to
add tests to the sockmap_listen cases when what we are testing here are
*basic* send/recv cases.

Finally patches 10 and 11 add the new tests to ensure we handle
ioctl(FIONREAD) and shutdown correctly.

To test the series I ran the NGINX compliance tests and the sockmap
selftests. For now our compliance test just runs with SK_PASS.

There are some more things to be done here, but these 11 patches
stand on their own in my opinion and fix issues we are hitting in
CI now. For bpf-next we can fixup/improve the selftests to use the
ASSERT_* macros in sockmap_helpers, streamline some of the testing, and
add more tests. We are also still debugging a few additional flakes;
patches for those are coming soon.

v2: use skb_queue_empty instead of *_empty_lockless (Eric)
    oops incorrectly updated copied_seq on DROP case (Eric)
    added test for drop case copied_seq update

John Fastabend (11):
  bpf: sockmap, pass skb ownership through read_skb
  bpf: sockmap, convert schedule_work into delayed_work
  bpf: sockmap, improved check for empty queue
  bpf: sockmap, handle fin correctly
  bpf: sockmap, TCP data stall on recv before accept
  bpf: sockmap, wake up polling after data copy
  bpf: sockmap incorrectly handling copied_seq
  bpf: sockmap, pull socket helpers out of listen test for general use
  bpf: sockmap, build helper to create connected socket pair
  bpf: sockmap, test shutdown() correctly exits epoll and recv()=0
  bpf: sockmap, test FIONREAD returns correct bytes in rx buffer

 include/linux/skmsg.h                         |   2 +-
 include/net/tcp.h                             |   1 +
 net/core/skmsg.c                              |  58 ++-
 net/core/sock_map.c                           |   3 +-
 net/ipv4/tcp.c                                |   9 -
 net/ipv4/tcp_bpf.c                            |  81 +++-
 net/ipv4/udp.c                                |   5 +-
 net/unix/af_unix.c                            |   5 +-
 .../selftests/bpf/prog_tests/sockmap_basic.c  | 119 +++++-
 .../bpf/prog_tests/sockmap_helpers.h          | 374 ++++++++++++++++++
 .../selftests/bpf/prog_tests/sockmap_listen.c | 352 +----------------
 .../bpf/progs/test_sockmap_pass_prog.c        |  32 ++
 12 files changed, 659 insertions(+), 382 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h
 create mode 100644 tools/testing/selftests/bpf/progs/test_sockmap_pass_prog.c

-- 
2.33.0



* [PATCH bpf v2 01/11] bpf: sockmap, pass skb ownership through read_skb
  2023-03-27 17:54 [PATCH bpf v2 00/11] bpf sockmap fixes John Fastabend
@ 2023-03-27 17:54 ` John Fastabend
  2023-03-28 10:42   ` Jakub Sitnicki
  2023-03-27 17:54 ` [PATCH bpf v2 02/11] bpf: sockmap, convert schedule_work into delayed_work John Fastabend
                   ` (11 subsequent siblings)
  12 siblings, 1 reply; 24+ messages in thread
From: John Fastabend @ 2023-03-27 17:54 UTC (permalink / raw)
  To: cong.wang, jakub, daniel, lmb, edumazet
  Cc: john.fastabend, bpf, netdev, ast, andrii, will

The read_skb hook calls consume_skb() now, but this means that if the
recv_actor program wants to use the skb it needs to inc the ref cnt
so that the consume_skb() doesn't kfree the sk_buff.

This is problematic because in some error cases under memory pressure
we may need to linearize the sk_buff from sk_psock_skb_ingress_enqueue().
Then we get this,

 skb_linearize()
   __pskb_pull_tail()
     pskb_expand_head()
       BUG_ON(skb_shared(skb))

Because we incremented the users refcnt from sk_psock_verdict_recv(), the
skb arrives with refcnt > 1 and we trip the BUG_ON().

To fix, let's simply pass ownership of the sk_buff through the read_skb
call. Then we can drop the consume_skb() from the read_skb handlers and
assume the verdict recv does any required kfree.
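
For illustration, a recv_actor under the new contract looks roughly like
this (a sketch; example_recv_actor() and handle_skb() are illustrative
stand-ins, not functions from this series):

 static int example_recv_actor(struct sock *sk, struct sk_buff *skb)
 {
	int err;

	/* The actor owns skb now: no skb_get() is needed and the caller
	 * will not consume_skb() behind our back.
	 */
	err = handle_skb(skb);
	if (err < 0)
		kfree_skb(skb);	/* the owner must free on drop/error */
	return err;
 }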

Bug found while testing in our CI which runs in VMs that hit memory
constraints rather regularly. William tested TCP read_skb handlers.

[  106.536188] ------------[ cut here ]------------
[  106.536197] kernel BUG at net/core/skbuff.c:1693!
[  106.536479] invalid opcode: 0000 [#1] PREEMPT SMP PTI
[  106.536726] CPU: 3 PID: 1495 Comm: curl Not tainted 5.19.0-rc5 #1
[  106.537023] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ArchLinux 1.16.0-1 04/01/2014
[  106.537467] RIP: 0010:pskb_expand_head+0x269/0x330
[  106.538585] RSP: 0018:ffffc90000138b68 EFLAGS: 00010202
[  106.538839] RAX: 000000000000003f RBX: ffff8881048940e8 RCX: 0000000000000a20
[  106.539186] RDX: 0000000000000002 RSI: 0000000000000000 RDI: ffff8881048940e8
[  106.539529] RBP: ffffc90000138be8 R08: 00000000e161fd1a R09: 0000000000000000
[  106.539877] R10: 0000000000000018 R11: 0000000000000000 R12: ffff8881048940e8
[  106.540222] R13: 0000000000000003 R14: 0000000000000000 R15: ffff8881048940e8
[  106.540568] FS:  00007f277dde9f00(0000) GS:ffff88813bd80000(0000) knlGS:0000000000000000
[  106.540954] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  106.541227] CR2: 00007f277eeede64 CR3: 000000000ad3e000 CR4: 00000000000006e0
[  106.541569] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  106.541915] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[  106.542255] Call Trace:
[  106.542383]  <IRQ>
[  106.542487]  __pskb_pull_tail+0x4b/0x3e0
[  106.542681]  skb_ensure_writable+0x85/0xa0
[  106.542882]  sk_skb_pull_data+0x18/0x20
[  106.543084]  bpf_prog_b517a65a242018b0_bpf_skskb_http_verdict+0x3a9/0x4aa9
[  106.543536]  ? migrate_disable+0x66/0x80
[  106.543871]  sk_psock_verdict_recv+0xe2/0x310
[  106.544258]  ? sk_psock_write_space+0x1f0/0x1f0
[  106.544561]  tcp_read_skb+0x7b/0x120
[  106.544740]  tcp_data_queue+0x904/0xee0
[  106.544931]  tcp_rcv_established+0x212/0x7c0
[  106.545142]  tcp_v4_do_rcv+0x174/0x2a0
[  106.545326]  tcp_v4_rcv+0xe70/0xf60
[  106.545500]  ip_protocol_deliver_rcu+0x48/0x290
[  106.545744]  ip_local_deliver_finish+0xa7/0x150

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Reported-by: William Findlay <will@isovalent.com>
Tested-by: William Findlay <will@isovalent.com>
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 net/core/skmsg.c   | 2 --
 net/ipv4/tcp.c     | 1 -
 net/ipv4/udp.c     | 5 +----
 net/unix/af_unix.c | 5 +----
 4 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 53d0251788aa..2b6d9519ff29 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -1180,8 +1180,6 @@ static int sk_psock_verdict_recv(struct sock *sk, struct sk_buff *skb)
 	int ret = __SK_DROP;
 	int len = skb->len;
 
-	skb_get(skb);
-
 	rcu_read_lock();
 	psock = sk_psock(sk);
 	if (unlikely(!psock)) {
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 33f559f491c8..6572962b0237 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1770,7 +1770,6 @@ int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 		WARN_ON_ONCE(!skb_set_owner_sk_safe(skb, sk));
 		tcp_flags = TCP_SKB_CB(skb)->tcp_flags;
 		used = recv_actor(sk, skb);
-		consume_skb(skb);
 		if (used < 0) {
 			if (!copied)
 				copied = used;
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 9592fe3e444a..04e8c6385246 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1832,10 +1832,7 @@ int udp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 	}
 
 	WARN_ON_ONCE(!skb_set_owner_sk_safe(skb, sk));
-	copied = recv_actor(sk, skb);
-	kfree_skb(skb);
-
-	return copied;
+	return recv_actor(sk, skb);
 }
 EXPORT_SYMBOL(udp_read_skb);
 
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index f0c2293f1d3b..a5dd2ee0cfed 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -2554,10 +2554,7 @@ static int unix_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 	if (!skb)
 		return err;
 
-	copied = recv_actor(sk, skb);
-	kfree_skb(skb);
-
-	return copied;
+	return recv_actor(sk, skb);
 }
 
 /*
-- 
2.33.0



* [PATCH bpf v2 02/11] bpf: sockmap, convert schedule_work into delayed_work
  2023-03-27 17:54 [PATCH bpf v2 00/11] bpf sockmap fixes John Fastabend
  2023-03-27 17:54 ` [PATCH bpf v2 01/11] bpf: sockmap, pass skb ownership through read_skb John Fastabend
@ 2023-03-27 17:54 ` John Fastabend
  2023-03-28 12:09   ` Jakub Sitnicki
  2023-03-27 17:54 ` [PATCH bpf v2 03/11] bpf: sockmap, improved check for empty queue John Fastabend
                   ` (10 subsequent siblings)
  12 siblings, 1 reply; 24+ messages in thread
From: John Fastabend @ 2023-03-27 17:54 UTC (permalink / raw)
  To: cong.wang, jakub, daniel, lmb, edumazet
  Cc: john.fastabend, bpf, netdev, ast, andrii, will

Sk_buffs are fed into sockmap verdict programs either from a strparser
(when the user might want to decide how framing of skb is done by attaching
another parser program) or directly through tcp_read_sock. The
tcp_read_sock path is the preferred method for performance when the BPF
logic is a stream verdict program.

The flow for Cilium's common use case with a stream verdict program is,

 tcp_read_sock()
  sk_psock_verdict_recv
    ret = bpf_prog_run_pin_on_cpu()
    sk_psock_verdict_apply(sock, skb, ret)
     // if system is under memory pressure or app is slow we may
     // need to queue skb. Do this queuing through ingress_skb and
     // then kick timer to wake up handler
     skb_queue_tail(ingress_skb, skb)
     schedule_work(work);


The work queue is wired up to sk_psock_backlog(). This will then walk the
ingress_skb skb list that holds our sk_buffs that could not be handled,
but should be OK to run at some later point. However, it's possible that
the workqueue doing this work still hits an error when sending the skb.
When this happens the skbuff is requeued on a temporary 'state' struct
kept with the workqueue. This is necessary because it's possible to
partially send an skbuff before hitting an error and we need to know how
and where to restart when the workqueue runs next.

Now for the trouble, we don't rekick the workqueue. This can cause a
stall where the skbuff we just cached on the state variable might never
be sent. This happens when it's the last packet in a flow and no further
packets come along that would cause the system to kick the workqueue from
that side.

To fix this we could do a simple schedule_work(), but while under memory
pressure it makes sense to back off some instead of continuing to retry
repeatedly. So instead convert schedule_work to schedule_delayed_work and
add backoff logic that reschedules from the backlog queue on errors. It's
not obvious what a good backoff value is, so use '1'.
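
For reference, the resulting pattern is just the standard delayed
workqueue API; a sketch of the calls used by this patch:

 INIT_DELAYED_WORK(&psock->work, sk_psock_backlog);

 /* normal kick: run the backlog as soon as possible */
 schedule_delayed_work(&psock->work, 0);

 /* on -EAGAIN from the xmit path: back off one jiffy before retrying */
 schedule_delayed_work(&psock->work, 1);

 /* teardown */
 cancel_delayed_work_sync(&psock->work);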

While testing, we observed some flakes when running the NGINX compliance
test with sockmap; we attributed these failed tests to this bug and the
subsequent issues.

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Tested-by: William Findlay <will@isovalent.com>
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 include/linux/skmsg.h |  2 +-
 net/core/skmsg.c      | 19 ++++++++++++-------
 net/core/sock_map.c   |  3 ++-
 3 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index 84f787416a54..904ff9a32ad6 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -105,7 +105,7 @@ struct sk_psock {
 	struct proto			*sk_proto;
 	struct mutex			work_mutex;
 	struct sk_psock_work_state	work_state;
-	struct work_struct		work;
+	struct delayed_work		work;
 	struct rcu_work			rwork;
 };
 
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 2b6d9519ff29..96a6a3a74a67 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -481,7 +481,7 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
 	}
 out:
 	if (psock->work_state.skb && copied > 0)
-		schedule_work(&psock->work);
+		schedule_delayed_work(&psock->work, 0);
 	return copied;
 }
 EXPORT_SYMBOL_GPL(sk_msg_recvmsg);
@@ -639,7 +639,8 @@ static void sk_psock_skb_state(struct sk_psock *psock,
 
 static void sk_psock_backlog(struct work_struct *work)
 {
-	struct sk_psock *psock = container_of(work, struct sk_psock, work);
+	struct delayed_work *dwork = to_delayed_work(work);
+	struct sk_psock *psock = container_of(dwork, struct sk_psock, work);
 	struct sk_psock_work_state *state = &psock->work_state;
 	struct sk_buff *skb = NULL;
 	bool ingress;
@@ -679,6 +680,10 @@ static void sk_psock_backlog(struct work_struct *work)
 				if (ret == -EAGAIN) {
 					sk_psock_skb_state(psock, state, skb,
 							   len, off);
+
+					/* Delay slightly to prioritize any
+					 * other work that might be here. */
+					schedule_delayed_work(&psock->work, 1);
 					goto end;
 				}
 				/* Hard errors break pipe and stop xmit. */
@@ -733,7 +738,7 @@ struct sk_psock *sk_psock_init(struct sock *sk, int node)
 	INIT_LIST_HEAD(&psock->link);
 	spin_lock_init(&psock->link_lock);
 
-	INIT_WORK(&psock->work, sk_psock_backlog);
+	INIT_DELAYED_WORK(&psock->work, sk_psock_backlog);
 	mutex_init(&psock->work_mutex);
 	INIT_LIST_HEAD(&psock->ingress_msg);
 	spin_lock_init(&psock->ingress_lock);
@@ -822,7 +827,7 @@ static void sk_psock_destroy(struct work_struct *work)
 
 	sk_psock_done_strp(psock);
 
-	cancel_work_sync(&psock->work);
+	cancel_delayed_work_sync(&psock->work);
 	mutex_destroy(&psock->work_mutex);
 
 	psock_progs_drop(&psock->progs);
@@ -937,7 +942,7 @@ static int sk_psock_skb_redirect(struct sk_psock *from, struct sk_buff *skb)
 	}
 
 	skb_queue_tail(&psock_other->ingress_skb, skb);
-	schedule_work(&psock_other->work);
+	schedule_delayed_work(&psock_other->work, 0);
 	spin_unlock_bh(&psock_other->ingress_lock);
 	return 0;
 }
@@ -1017,7 +1022,7 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
 			spin_lock_bh(&psock->ingress_lock);
 			if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
 				skb_queue_tail(&psock->ingress_skb, skb);
-				schedule_work(&psock->work);
+				schedule_delayed_work(&psock->work, 0);
 				err = 0;
 			}
 			spin_unlock_bh(&psock->ingress_lock);
@@ -1048,7 +1053,7 @@ static void sk_psock_write_space(struct sock *sk)
 	psock = sk_psock(sk);
 	if (likely(psock)) {
 		if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
-			schedule_work(&psock->work);
+			schedule_delayed_work(&psock->work, 0);
 		write_space = psock->saved_write_space;
 	}
 	rcu_read_unlock();
diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index a68a7290a3b2..d38267201892 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -1624,9 +1624,10 @@ void sock_map_close(struct sock *sk, long timeout)
 		rcu_read_unlock();
 		sk_psock_stop(psock);
 		release_sock(sk);
-		cancel_work_sync(&psock->work);
+		cancel_delayed_work_sync(&psock->work);
 		sk_psock_put(sk, psock);
 	}
+
 	/* Make sure we do not recurse. This is a bug.
 	 * Leak the socket instead of crashing on a stack overflow.
 	 */
-- 
2.33.0



* [PATCH bpf v2 03/11] bpf: sockmap, improved check for empty queue
  2023-03-27 17:54 [PATCH bpf v2 00/11] bpf sockmap fixes John Fastabend
  2023-03-27 17:54 ` [PATCH bpf v2 01/11] bpf: sockmap, pass skb ownership through read_skb John Fastabend
  2023-03-27 17:54 ` [PATCH bpf v2 02/11] bpf: sockmap, convert schedule_work into delayed_work John Fastabend
@ 2023-03-27 17:54 ` John Fastabend
  2023-03-29 12:24   ` Jakub Sitnicki
  2023-03-27 17:54 ` [PATCH bpf v2 04/11] bpf: sockmap, handle fin correctly John Fastabend
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 24+ messages in thread
From: John Fastabend @ 2023-03-27 17:54 UTC (permalink / raw)
  To: cong.wang, jakub, daniel, lmb, edumazet
  Cc: john.fastabend, bpf, netdev, ast, andrii, will

We noticed some rare sk_buffs were stepping past the queue when the system
was under memory pressure. The general theory is to skip enqueueing
sk_buffs when it's not necessary, which is the normal case with a system
that is properly provisioned for the task: no memory pressure and enough
CPU assigned.

But, if we hit an ENOMEM error when enqueueing the sk_buff into the
sockmap receive queue, we push it onto a delayed workqueue to retry later.
When a new sk_buff is received we then check if that queue is empty.
However, there is a problem with simply checking the queue length. When a
sk_buff is being processed from the ingress queue but is not yet on the
sockmap msg receive queue, it's possible to also receive a sk_buff through
the normal path. That path will check the ingress queue, find it empty,
and skip ahead of the pkt still being processed.
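
Roughly, the interleaving that lets a pkt skip ahead looks like this
(a sketch of the race, not literal code):

 /* backlog workqueue                 receive path (sock lock held)
  *
  * skb1 = dequeue(ingress_skb)
  *                                   skb_queue_empty(ingress_skb) == true
  *                                   submit skb2 direct to msg queue
  * submit skb1 to msg queue          <- skb2 now sits ahead of skb1 (OOO)
  */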

Previously we used the sock lock from both contexts, which made the
problem harder to hit, but not impossible.

To fix, also check the 'state' variable, where we cache a partially
processed sk_buff. This catches the majority of cases. But, we also need
to take the mutex lock around this check because we can't have both code
paths running concurrently and still check sensibly. We could perhaps do
this with atomic bit checks, but we are already here due to memory
pressure, so slowing things down a bit seems OK and it is simpler to just
grab a lock.

To reproduce the issue we ran the NGINX compliance test with sockmap and
observed some flakes in our testing that we attributed to this issue.

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Tested-by: William Findlay <will@isovalent.com>
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 net/core/skmsg.c | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 96a6a3a74a67..34de0605694e 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -985,6 +985,7 @@ EXPORT_SYMBOL_GPL(sk_psock_tls_strp_read);
 static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
 				  int verdict)
 {
+	struct sk_psock_work_state *state;
 	struct sock *sk_other;
 	int err = 0;
 	u32 len, off;
@@ -1001,13 +1002,28 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
 
 		skb_bpf_set_ingress(skb);
 
+		/* We need to grab the mutex here because an in-flight skb is
+		 * in one of the following states: on ingress_skb, cached in
+		 * state->skb, or being processed by the backlog, in which
+		 * case both state->skb and ingress_skb may look empty. The
+		 * troublesome case is when the skb has been dequeued from
+		 * ingress_skb or taken from state->skb, as we can not easily
+		 * test for it. Maybe we could be clever with flags and
+		 * resolve this, but being clever got us here in the first
+		 * place. Note this is done under the sock lock, and backlog
+		 * conditions mean we are already hitting ENOMEM or other
+		 * performance hindering cases, so let's grab the mutex.
+		 */
+		mutex_lock(&psock->work_mutex);
+		state = &psock->work_state;
+
 		/* If the queue is empty then we can submit directly
 		 * into the msg queue. If its not empty we have to
 		 * queue work otherwise we may get OOO data. Otherwise,
 		 * if sk_psock_skb_ingress errors will be handled by
 		 * retrying later from workqueue.
 		 */
-		if (skb_queue_empty(&psock->ingress_skb)) {
+		if (skb_queue_empty(&psock->ingress_skb) && likely(!state->skb)) {
 			len = skb->len;
 			off = 0;
 			if (skb_bpf_strparser(skb)) {
@@ -1028,9 +1044,11 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
 			spin_unlock_bh(&psock->ingress_lock);
 			if (err < 0) {
 				skb_bpf_redirect_clear(skb);
+				mutex_unlock(&psock->work_mutex);
 				goto out_free;
 			}
 		}
+		mutex_unlock(&psock->work_mutex);
 		break;
 	case __SK_REDIRECT:
 		err = sk_psock_skb_redirect(psock, skb);
-- 
2.33.0



* [PATCH bpf v2 04/11] bpf: sockmap, handle fin correctly
  2023-03-27 17:54 [PATCH bpf v2 00/11] bpf sockmap fixes John Fastabend
                   ` (2 preceding siblings ...)
  2023-03-27 17:54 ` [PATCH bpf v2 03/11] bpf: sockmap, improved check for empty queue John Fastabend
@ 2023-03-27 17:54 ` John Fastabend
  2023-04-03 11:11   ` Jakub Sitnicki
  2023-03-27 17:54 ` [PATCH bpf v2 05/11] bpf: sockmap, TCP data stall on recv before accept John Fastabend
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 24+ messages in thread
From: John Fastabend @ 2023-03-27 17:54 UTC (permalink / raw)
  To: cong.wang, jakub, daniel, lmb, edumazet
  Cc: john.fastabend, bpf, netdev, ast, andrii, will

The sockmap code is returning EAGAIN after a FIN packet is received and no
more data is on the receive queue. Correct behavior is to return 0 to the
user, and the user can then close the socket. The EAGAIN causes many apps
to retry, which masks the problem. Eventually the socket is evicted from
the sockmap because it's released from the sockmap sock free handling. The
issue creates a delay and can cause some errors on the application side.

To fix this, check on the sk_msg_recvmsg side: if the length is zero and
the FIN flag is set, then return zero. A selftest will be added to check
this condition.
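
From the application's point of view the fixed behavior is the normal TCP
end-of-stream convention (a sketch; fd is the sockmap-managed socket):

 char buf[512];
 ssize_t n = recv(fd, buf, sizeof(buf), 0);

 if (n == 0)
	close(fd);	/* FIN seen and queue drained: stream is done */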

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Tested-by: William Findlay <will@isovalent.com>
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 net/ipv4/tcp_bpf.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index cf26d65ca389..3a0f43f3afd8 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -174,6 +174,24 @@ static int tcp_msg_wait_data(struct sock *sk, struct sk_psock *psock,
 	return ret;
 }
 
+static bool is_next_msg_fin(struct sk_psock *psock)
+{
+	struct scatterlist *sge;
+	struct sk_msg *msg_rx;
+	int i;
+
+	msg_rx = sk_psock_peek_msg(psock);
+	i = msg_rx->sg.start;
+	sge = sk_msg_elem(msg_rx, i);
+	if (!sge->length) {
+		struct sk_buff *skb = msg_rx->skb;
+
+		if (skb && TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
+			return true;
+	}
+	return false;
+}
+
 static int tcp_bpf_recvmsg_parser(struct sock *sk,
 				  struct msghdr *msg,
 				  size_t len,
@@ -193,6 +211,19 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
 	lock_sock(sk);
 msg_bytes_ready:
 	copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
+	/* The typical case for EFAULT is the socket was gracefully
+	 * shutdown with a FIN pkt, so check for that here. The other
+	 * case, some error on copy_page_to_iter, would be unexpected.
+	 * On FIN correct the return code to zero.
+	 */
+	if (copied == -EFAULT) {
+		bool is_fin = is_next_msg_fin(psock);
+
+		if (is_fin) {
+			copied = 0;
+			goto out;
+		}
+	}
 	if (!copied) {
 		long timeo;
 		int data;
-- 
2.33.0



* [PATCH bpf v2 05/11] bpf: sockmap, TCP data stall on recv before accept
  2023-03-27 17:54 [PATCH bpf v2 00/11] bpf sockmap fixes John Fastabend
                   ` (3 preceding siblings ...)
  2023-03-27 17:54 ` [PATCH bpf v2 04/11] bpf: sockmap, handle fin correctly John Fastabend
@ 2023-03-27 17:54 ` John Fastabend
  2023-03-27 17:54 ` [PATCH bpf v2 06/11] bpf: sockmap, wake up polling after data copy John Fastabend
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 24+ messages in thread
From: John Fastabend @ 2023-03-27 17:54 UTC (permalink / raw)
  To: cong.wang, jakub, daniel, lmb, edumazet
  Cc: john.fastabend, bpf, netdev, ast, andrii, will

A common mechanism to put a TCP socket into the sockmap is to hook the
BPF_SOCK_OPS_{ACTIVE_PASSIVE}_ESTABLISHED_CB event with a BPF program
that can map the socket info to the correct BPF verdict parser. When
the user adds the socket to the map the psock is created and the new
ops are assigned to ensure the verdict program will 'see' the sk_buffs
as they arrive.

Part of this process hooks the sk_data_ready op with a BPF specific
handler to wake up the BPF verdict program when data is ready to read.
The logic is simple enough (posted here for easy reading)

 static void sk_psock_verdict_data_ready(struct sock *sk)
 {
	struct socket *sock = sk->sk_socket;

	if (unlikely(!sock || !sock->ops || !sock->ops->read_skb))
		return;
	sock->ops->read_skb(sk, sk_psock_verdict_recv);
 }

The oversight here is that sk->sk_socket is not assigned until the
application accept()s the new socket. However, it's entirely OK for the
peer application to do a connect() followed immediately by sends. The
socket on the receiver side sits on the backlog queue of the listening
socket until it's accepted and the data is queued up. If the application
never accepts the socket or is slow to do so, the session will eventually
hit data limits and be rate limited. But, importantly for BPF sockmap
hooks, when this data is received the TCP stack does the sk_data_ready()
call, yet the read_skb() for this data is never called because sk_socket
is missing. The data sits on the sk_receive_queue.

Then, once the socket is accepted, if we never receive more data from the
peer there will be no further sk_data_ready calls and all the data remains
on the sk_receive_queue. The user then calls recvmsg after accept(), and
for TCP sockets in sockmap we use the tcp_bpf_recvmsg_parser() handler.
The handler checks for data in the sk_msg ingress queue, expecting that
the BPF program has already run from the sk_data_ready hook and enqueued
the data as needed. So we are stuck.

To fix, do an unlikely check in the recvmsg handler for data on the
sk_receive_queue and, if it exists, wake up data_ready. We hold the sock
lock in both read_skb and recvmsg, so we should avoid having multiple
runners.
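
The triggering sequence is simply an eager sender (a sketch with error
handling omitted; srv, lfd and msg are illustrative):

 /* client */
 int c = socket(AF_INET, SOCK_STREAM, 0);

 connect(c, (struct sockaddr *)&srv, sizeof(srv));
 send(c, "hello", 5, 0);		/* server has not accept()ed yet */

 /* server, some time later */
 int fd = accept(lfd, NULL, NULL);

 recvmsg(fd, &msg, 0);			/* could stall here before this fix */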

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 net/ipv4/tcp_bpf.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index 3a0f43f3afd8..2c75bbcbefed 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -209,6 +209,26 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
 		return tcp_recvmsg(sk, msg, len, flags, addr_len);
 
 	lock_sock(sk);
+
+	/* We may have received data on the sk_receive_queue pre-accept and
+	 * then we can not use read_skb in this context because we haven't
+	 * assigned a sk_socket yet so have no link to the ops. The work-around
+	 * is to check the sk_receive_queue and in these cases read skbs off
+	 * queue again. The read_skb hook is not running at this point because
+	 * of lock_sock so we avoid having multiple runners in read_skb.
+	 */
+	if (unlikely(!skb_queue_empty(&sk->sk_receive_queue))) {
+		tcp_data_ready(sk);
+		/* This handles the ENOMEM errors if we both receive data
+		 * pre accept and are already under memory pressure. At least
+		 * let the user know to retry.
+		 */
+		if (unlikely(!skb_queue_empty(&sk->sk_receive_queue))) {
+			copied = -EAGAIN;
+			goto out;
+		}
+	}
+
 msg_bytes_ready:
 	copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
 	/* The typical case for EFAULT is the socket was gracefully
-- 
2.33.0



* [PATCH bpf v2 06/11] bpf: sockmap, wake up polling after data copy
  2023-03-27 17:54 [PATCH bpf v2 00/11] bpf sockmap fixes John Fastabend
                   ` (4 preceding siblings ...)
  2023-03-27 17:54 ` [PATCH bpf v2 05/11] bpf: sockmap, TCP data stall on recv before accept John Fastabend
@ 2023-03-27 17:54 ` John Fastabend
  2023-03-27 17:54 ` [PATCH bpf v2 07/11] bpf: sockmap incorrectly handling copied_seq John Fastabend
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 24+ messages in thread
From: John Fastabend @ 2023-03-27 17:54 UTC (permalink / raw)
  To: cong.wang, jakub, daniel, lmb, edumazet
  Cc: john.fastabend, bpf, netdev, ast, andrii, will

When the TCP stack has data ready to read, sk_data_ready() is called.
Sockmap overwrites this with its own handler to call into the BPF verdict
program. But, the original TCP socket had sock_def_readable, which would
additionally wake up any user space waiters with sk_wake_async().

Sockmap saved the callback when the socket was created, so call the saved
data ready callback; then we can wake up any epoll() logic waiting on the
read.

Note we gate the call on 'copied >= 0' to account for returning 0 when a
FIN is received, because we need to wake up the user for this as well so
they can do the recvmsg() -> 0 and detect the shutdown.
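
The user-visible effect is that a plain epoll loop now wakes up as
expected (a sketch; assumes the fd was stored in ev.data.fd at
EPOLL_CTL_ADD time):

 char buf[512];
 struct epoll_event ev;

 if (epoll_wait(epfd, &ev, 1, -1) == 1) {
	/* wakes for queued data and, because the saved data_ready is
	 * called on copied >= 0, also on FIN so recv() can return 0
	 */
	ssize_t n = recv(ev.data.fd, buf, sizeof(buf), 0);
 }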

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 net/core/skmsg.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 34de0605694e..10e5481da662 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -1230,10 +1230,20 @@ static int sk_psock_verdict_recv(struct sock *sk, struct sk_buff *skb)
 static void sk_psock_verdict_data_ready(struct sock *sk)
 {
 	struct socket *sock = sk->sk_socket;
+	int copied;
 
 	if (unlikely(!sock || !sock->ops || !sock->ops->read_skb))
 		return;
-	sock->ops->read_skb(sk, sk_psock_verdict_recv);
+	copied = sock->ops->read_skb(sk, sk_psock_verdict_recv);
+	if (copied >= 0) {
+		struct sk_psock *psock;
+
+		rcu_read_lock();
+		psock = sk_psock(sk);
+		if (psock)
+			psock->saved_data_ready(sk);
+		rcu_read_unlock();
+	}
 }
 
 void sk_psock_start_verdict(struct sock *sk, struct sk_psock *psock)
-- 
2.33.0



* [PATCH bpf v2 07/11] bpf: sockmap incorrectly handling copied_seq
  2023-03-27 17:54 [PATCH bpf v2 00/11] bpf sockmap fixes John Fastabend
                   ` (5 preceding siblings ...)
  2023-03-27 17:54 ` [PATCH bpf v2 06/11] bpf: sockmap, wake up polling after data copy John Fastabend
@ 2023-03-27 17:54 ` John Fastabend
  2023-03-27 17:54 ` [PATCH bpf v2 08/11] bpf: sockmap, pull socket helpers out of listen test for general use John Fastabend
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 24+ messages in thread
From: John Fastabend @ 2023-03-27 17:54 UTC (permalink / raw)
  To: cong.wang, jakub, daniel, lmb, edumazet
  Cc: john.fastabend, bpf, netdev, ast, andrii, will

The read_skb() logic increments tcp->copied_seq, which is used for, among
other things, calculating how many outstanding bytes can be read by the
application. This results in application errors: if the application does
an ioctl(FIONREAD) we return zero, because the result is calculated from
the copied_seq value.

To fix this we move the tcp->copied_seq accounting into the recv handler
so that we update it when the recvmsg() hook is called and data is in
fact copied into user buffers. This gives an accurate FIONREAD value as
expected and improves ACK handling. Before, we were calling
tcp_rcv_space_adjust(), which would update the 'number of bytes copied to
user in last RTT', which is wrong for programs returning SK_PASS. The
bytes are only copied to the user when recvmsg is handled.
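
For example, the broken FIONREAD path is the usual one (sketch):

 int avail = 0;

 ioctl(fd, FIONREAD, &avail);
 /* before this fix: 0 even with unread SK_PASS data, because read_skb
  * had already advanced copied_seq; after: the bytes still waiting
  * for recvmsg()
  */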

Doing the fix for recvmsg is straightforward, but fixing redirect and
SK_DROP pkts is a bit trickier. Build a tcp_eat_skb() helper and then
call this from the skmsg handlers. This fixes another issue where a
broken socket with a BPF program doing a resubmit could hang the
receiver. This happened because although read_skb() consumed the skb
through sock_drop() it did not update the copied_seq. Now if a single
recv socket is redirecting to many sockets (for example for load
balancing) the receiver sk will be hung even though we might expect it
to continue. The hang comes from not updating the copied_seq numbers and
the memory pressure resulting from that.

We have a slight layering problem in calling tcp_eat_skb even if it's not
a TCP socket. To fix that we could refactor and create per-type receiver
handlers. I decided this is more work than we want in the fix and we
already have some small tweaks depending on the caller that use the
helper skb_bpf_strparser(). So we extend that a bit and always set the
strparser bit when it is in use; then we can gate the copied_seq updates
on this.

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 include/net/tcp.h  |  3 +++
 net/core/skmsg.c   |  7 +++++--
 net/ipv4/tcp.c     | 10 +---------
 net/ipv4/tcp_bpf.c | 28 +++++++++++++++++++++++++++-
 4 files changed, 36 insertions(+), 12 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index db9f828e9d1e..674044b8bdaf 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1467,6 +1467,8 @@ static inline void tcp_adjust_rcv_ssthresh(struct sock *sk)
 }
 
 void tcp_cleanup_rbuf(struct sock *sk, int copied);
+void __tcp_cleanup_rbuf(struct sock *sk, int copied);
+
 
 /* We provision sk_rcvbuf around 200% of sk_rcvlowat.
  * If 87.5 % (7/8) of the space has been consumed, we want to override
@@ -2321,6 +2323,7 @@ struct sk_psock;
 struct proto *tcp_bpf_get_proto(struct sock *sk, struct sk_psock *psock);
 int tcp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore);
 void tcp_bpf_clone(const struct sock *sk, struct sock *newsk);
+void tcp_eat_skb(struct sock *sk, struct sk_buff *skb);
 #endif /* CONFIG_BPF_SYSCALL */
 
 int tcp_bpf_sendmsg_redir(struct sock *sk, bool ingress,
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 10e5481da662..b141b422697c 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -1051,11 +1051,14 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
 		mutex_unlock(&psock->work_mutex);
 		break;
 	case __SK_REDIRECT:
+		tcp_eat_skb(psock->sk, skb);
 		err = sk_psock_skb_redirect(psock, skb);
 		break;
 	case __SK_DROP:
 	default:
 out_free:
+		tcp_eat_skb(psock->sk, skb);
+		skb_bpf_redirect_clear(skb);
 		sock_drop(psock->sk, skb);
 	}
 
@@ -1100,8 +1103,7 @@ static void sk_psock_strp_read(struct strparser *strp, struct sk_buff *skb)
 		skb_dst_drop(skb);
 		skb_bpf_redirect_clear(skb);
 		ret = bpf_prog_run_pin_on_cpu(prog, skb);
-		if (ret == SK_PASS)
-			skb_bpf_set_strparser(skb);
+		skb_bpf_set_strparser(skb);
 		ret = sk_psock_map_verd(ret, skb_bpf_redirect_fetch(skb));
 		skb->sk = NULL;
 	}
@@ -1207,6 +1209,7 @@ static int sk_psock_verdict_recv(struct sock *sk, struct sk_buff *skb)
 	psock = sk_psock(sk);
 	if (unlikely(!psock)) {
 		len = 0;
+		tcp_eat_skb(sk, skb);
 		sock_drop(sk, skb);
 		goto out;
 	}
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 6572962b0237..e2594d8e3429 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1568,7 +1568,7 @@ static int tcp_peek_sndq(struct sock *sk, struct msghdr *msg, int len)
  * calculation of whether or not we must ACK for the sake of
  * a window update.
  */
-static void __tcp_cleanup_rbuf(struct sock *sk, int copied)
+void __tcp_cleanup_rbuf(struct sock *sk, int copied)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	bool time_to_ack = false;
@@ -1783,14 +1783,6 @@ int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 			break;
 		}
 	}
-	WRITE_ONCE(tp->copied_seq, seq);
-
-	tcp_rcv_space_adjust(sk);
-
-	/* Clean up data we have read: This will do ACK frames. */
-	if (copied > 0)
-		__tcp_cleanup_rbuf(sk, copied);
-
 	return copied;
 }
 EXPORT_SYMBOL(tcp_read_skb);
diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index 2c75bbcbefed..6fb9978d72e3 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -11,6 +11,24 @@
 #include <net/inet_common.h>
 #include <net/tls.h>
 
+void tcp_eat_skb(struct sock *sk, struct sk_buff *skb)
+{
+	struct tcp_sock *tcp;
+	int copied;
+
+	if (!skb || !skb->len || !sk_is_tcp(sk))
+		return;
+
+	if (skb_bpf_strparser(skb))
+		return;
+
+	tcp = tcp_sk(sk);
+	copied = tcp->copied_seq + skb->len;
+	WRITE_ONCE(tcp->copied_seq, copied);
+	tcp_rcv_space_adjust(sk);
+	__tcp_cleanup_rbuf(sk, skb->len);
+}
+
 static int bpf_tcp_ingress(struct sock *sk, struct sk_psock *psock,
 			   struct sk_msg *msg, u32 apply_bytes, int flags)
 {
@@ -198,8 +216,10 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
 				  int flags,
 				  int *addr_len)
 {
+	struct tcp_sock *tcp = tcp_sk(sk);
+	u32 seq = tcp->copied_seq;
 	struct sk_psock *psock;
-	int copied;
+	int copied = 0;
 
 	if (unlikely(flags & MSG_ERRQUEUE))
 		return inet_recv_error(sk, msg, len, addr_len);
@@ -241,9 +261,11 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
 
 		if (is_fin) {
 			copied = 0;
+			seq++;
 			goto out;
 		}
 	}
+	seq += copied;
 	if (!copied) {
 		long timeo;
 		int data;
@@ -281,6 +303,10 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
 		copied = -EAGAIN;
 	}
 out:
+	WRITE_ONCE(tcp->copied_seq, seq);
+	tcp_rcv_space_adjust(sk);
+	if (copied > 0)
+		__tcp_cleanup_rbuf(sk, copied);
 	release_sock(sk);
 	sk_psock_put(sk, psock);
 	return copied;
-- 
2.33.0



* [PATCH bpf v2 08/11] bpf: sockmap, pull socket helpers out of listen test for general use
  2023-03-27 17:54 [PATCH bpf v2 00/11] bpf sockmap fixes John Fastabend
                   ` (6 preceding siblings ...)
  2023-03-27 17:54 ` [PATCH bpf v2 07/11] bpf: sockmap incorrectly handling copied_seq John Fastabend
@ 2023-03-27 17:54 ` John Fastabend
  2023-03-27 17:54 ` [PATCH bpf v2 09/11] bpf: sockmap, build helper to create connected socket pair John Fastabend
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 24+ messages in thread
From: John Fastabend @ 2023-03-27 17:54 UTC (permalink / raw)
  To: cong.wang, jakub, daniel, lmb, edumazet
  Cc: john.fastabend, bpf, netdev, ast, andrii, will

No functional change here; we merely pull the helpers in sockmap_listen.c
into a header file so we can use them in other programs. The tests we are
about to add aren't really _listen tests, so it doesn't make sense to add
them there.
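
With the helpers in a header, a new test can use them directly; a sketch
using only helpers moved in this patch (example_basic_setup() is
illustrative):

 #include "sockmap_helpers.h"

 static void example_basic_setup(void)
 {
	struct sockaddr_storage addr;
	socklen_t len;
	int s;

	init_addr_loopback(AF_INET, &addr, &len);
	s = xsocket(AF_INET, SOCK_STREAM, 0);
	xbind(s, sockaddr(&addr), len);
	xlisten(s, 1);
	xclose(s);
 }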

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 .../bpf/prog_tests/sockmap_helpers.h          | 249 ++++++++++++++++++
 .../selftests/bpf/prog_tests/sockmap_listen.c | 245 +----------------
 2 files changed, 250 insertions(+), 244 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h

diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h b/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h
new file mode 100644
index 000000000000..bff56844e745
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h
@@ -0,0 +1,249 @@
+#ifndef __SOCKMAP_HELPERS__
+#define __SOCKMAP_HELPERS__
+
+#define IO_TIMEOUT_SEC 30
+#define MAX_STRERR_LEN 256
+#define MAX_TEST_NAME 80
+
+#define __always_unused	__attribute__((__unused__))
+
+#define _FAIL(errnum, fmt...)                                                  \
+	({                                                                     \
+		error_at_line(0, (errnum), __func__, __LINE__, fmt);           \
+		CHECK_FAIL(true);                                              \
+	})
+#define FAIL(fmt...) _FAIL(0, fmt)
+#define FAIL_ERRNO(fmt...) _FAIL(errno, fmt)
+#define FAIL_LIBBPF(err, msg)                                                  \
+	({                                                                     \
+		char __buf[MAX_STRERR_LEN];                                    \
+		libbpf_strerror((err), __buf, sizeof(__buf));                  \
+		FAIL("%s: %s", (msg), __buf);                                  \
+	})
+
+/* Wrappers that fail the test on error and report it. */
+
+#define xaccept_nonblock(fd, addr, len)                                        \
+	({                                                                     \
+		int __ret =                                                    \
+			accept_timeout((fd), (addr), (len), IO_TIMEOUT_SEC);   \
+		if (__ret == -1)                                               \
+			FAIL_ERRNO("accept");                                  \
+		__ret;                                                         \
+	})
+
+#define xbind(fd, addr, len)                                                   \
+	({                                                                     \
+		int __ret = bind((fd), (addr), (len));                         \
+		if (__ret == -1)                                               \
+			FAIL_ERRNO("bind");                                    \
+		__ret;                                                         \
+	})
+
+#define xclose(fd)                                                             \
+	({                                                                     \
+		int __ret = close((fd));                                       \
+		if (__ret == -1)                                               \
+			FAIL_ERRNO("close");                                   \
+		__ret;                                                         \
+	})
+
+#define xconnect(fd, addr, len)                                                \
+	({                                                                     \
+		int __ret = connect((fd), (addr), (len));                      \
+		if (__ret == -1)                                               \
+			FAIL_ERRNO("connect");                                 \
+		__ret;                                                         \
+	})
+
+#define xgetsockname(fd, addr, len)                                            \
+	({                                                                     \
+		int __ret = getsockname((fd), (addr), (len));                  \
+		if (__ret == -1)                                               \
+			FAIL_ERRNO("getsockname");                             \
+		__ret;                                                         \
+	})
+
+#define xgetsockopt(fd, level, name, val, len)                                 \
+	({                                                                     \
+		int __ret = getsockopt((fd), (level), (name), (val), (len));   \
+		if (__ret == -1)                                               \
+			FAIL_ERRNO("getsockopt(" #name ")");                   \
+		__ret;                                                         \
+	})
+
+#define xlisten(fd, backlog)                                                   \
+	({                                                                     \
+		int __ret = listen((fd), (backlog));                           \
+		if (__ret == -1)                                               \
+			FAIL_ERRNO("listen");                                  \
+		__ret;                                                         \
+	})
+
+#define xsetsockopt(fd, level, name, val, len)                                 \
+	({                                                                     \
+		int __ret = setsockopt((fd), (level), (name), (val), (len));   \
+		if (__ret == -1)                                               \
+			FAIL_ERRNO("setsockopt(" #name ")");                   \
+		__ret;                                                         \
+	})
+
+#define xsend(fd, buf, len, flags)                                             \
+	({                                                                     \
+		ssize_t __ret = send((fd), (buf), (len), (flags));             \
+		if (__ret == -1)                                               \
+			FAIL_ERRNO("send");                                    \
+		__ret;                                                         \
+	})
+
+#define xrecv_nonblock(fd, buf, len, flags)                                    \
+	({                                                                     \
+		ssize_t __ret = recv_timeout((fd), (buf), (len), (flags),      \
+					     IO_TIMEOUT_SEC);                  \
+		if (__ret == -1)                                               \
+			FAIL_ERRNO("recv");                                    \
+		__ret;                                                         \
+	})
+
+#define xsocket(family, sotype, flags)                                         \
+	({                                                                     \
+		int __ret = socket(family, sotype, flags);                     \
+		if (__ret == -1)                                               \
+			FAIL_ERRNO("socket");                                  \
+		__ret;                                                         \
+	})
+
+#define xbpf_map_delete_elem(fd, key)                                          \
+	({                                                                     \
+		int __ret = bpf_map_delete_elem((fd), (key));                  \
+		if (__ret < 0)                                               \
+			FAIL_ERRNO("map_delete");                              \
+		__ret;                                                         \
+	})
+
+#define xbpf_map_lookup_elem(fd, key, val)                                     \
+	({                                                                     \
+		int __ret = bpf_map_lookup_elem((fd), (key), (val));           \
+		if (__ret < 0)                                               \
+			FAIL_ERRNO("map_lookup");                              \
+		__ret;                                                         \
+	})
+
+#define xbpf_map_update_elem(fd, key, val, flags)                              \
+	({                                                                     \
+		int __ret = bpf_map_update_elem((fd), (key), (val), (flags));  \
+		if (__ret < 0)                                               \
+			FAIL_ERRNO("map_update");                              \
+		__ret;                                                         \
+	})
+
+#define xbpf_prog_attach(prog, target, type, flags)                            \
+	({                                                                     \
+		int __ret =                                                    \
+			bpf_prog_attach((prog), (target), (type), (flags));    \
+		if (__ret < 0)                                               \
+			FAIL_ERRNO("prog_attach(" #type ")");                  \
+		__ret;                                                         \
+	})
+
+#define xbpf_prog_detach2(prog, target, type)                                  \
+	({                                                                     \
+		int __ret = bpf_prog_detach2((prog), (target), (type));        \
+		if (__ret < 0)                                               \
+			FAIL_ERRNO("prog_detach2(" #type ")");                 \
+		__ret;                                                         \
+	})
+
+#define xpthread_create(thread, attr, func, arg)                               \
+	({                                                                     \
+		int __ret = pthread_create((thread), (attr), (func), (arg));   \
+		errno = __ret;                                                 \
+		if (__ret)                                                     \
+			FAIL_ERRNO("pthread_create");                          \
+		__ret;                                                         \
+	})
+
+#define xpthread_join(thread, retval)                                          \
+	({                                                                     \
+		int __ret = pthread_join((thread), (retval));                  \
+		errno = __ret;                                                 \
+		if (__ret)                                                     \
+			FAIL_ERRNO("pthread_join");                            \
+		__ret;                                                         \
+	})
+
+static inline int poll_read(int fd, unsigned int timeout_sec)
+{
+	struct timeval timeout = { .tv_sec = timeout_sec };
+	fd_set rfds;
+	int r;
+
+	FD_ZERO(&rfds);
+	FD_SET(fd, &rfds);
+
+	r = select(fd + 1, &rfds, NULL, NULL, &timeout);
+	if (r == 0)
+		errno = ETIME;
+
+	return r == 1 ? 0 : -1;
+}
+
+static inline int accept_timeout(int fd, struct sockaddr *addr, socklen_t *len,
+				 unsigned int timeout_sec)
+{
+	if (poll_read(fd, timeout_sec))
+		return -1;
+
+	return accept(fd, addr, len);
+}
+
+static inline int recv_timeout(int fd, void *buf, size_t len, int flags,
+			       unsigned int timeout_sec)
+{
+	if (poll_read(fd, timeout_sec))
+		return -1;
+
+	return recv(fd, buf, len, flags);
+}
+
+static inline void init_addr_loopback4(struct sockaddr_storage *ss, socklen_t *len)
+{
+	struct sockaddr_in *addr4 = memset(ss, 0, sizeof(*ss));
+
+	addr4->sin_family = AF_INET;
+	addr4->sin_port = 0;
+	addr4->sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	*len = sizeof(*addr4);
+}
+
+static inline void init_addr_loopback6(struct sockaddr_storage *ss, socklen_t *len)
+{
+	struct sockaddr_in6 *addr6 = memset(ss, 0, sizeof(*ss));
+
+	addr6->sin6_family = AF_INET6;
+	addr6->sin6_port = 0;
+	addr6->sin6_addr = in6addr_loopback;
+	*len = sizeof(*addr6);
+}
+
+static inline void init_addr_loopback(int family, struct sockaddr_storage *ss,
+			       socklen_t *len)
+{
+	switch (family) {
+	case AF_INET:
+		init_addr_loopback4(ss, len);
+		return;
+	case AF_INET6:
+		init_addr_loopback6(ss, len);
+		return;
+	default:
+		FAIL("unsupported address family %d", family);
+	}
+}
+
+static inline struct sockaddr *sockaddr(struct sockaddr_storage *ss)
+{
+	return (struct sockaddr *)ss;
+}
+
+#endif // __SOCKMAP_HELPERS__
diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
index 567e07c19ecc..0f0cddd4e15e 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
@@ -26,250 +26,7 @@
 #include "test_progs.h"
 #include "test_sockmap_listen.skel.h"
 
-#define IO_TIMEOUT_SEC 30
-#define MAX_STRERR_LEN 256
-#define MAX_TEST_NAME 80
-
-#define __always_unused	__attribute__((__unused__))
-
-#define _FAIL(errnum, fmt...)                                                  \
-	({                                                                     \
-		error_at_line(0, (errnum), __func__, __LINE__, fmt);           \
-		CHECK_FAIL(true);                                              \
-	})
-#define FAIL(fmt...) _FAIL(0, fmt)
-#define FAIL_ERRNO(fmt...) _FAIL(errno, fmt)
-#define FAIL_LIBBPF(err, msg)                                                  \
-	({                                                                     \
-		char __buf[MAX_STRERR_LEN];                                    \
-		libbpf_strerror((err), __buf, sizeof(__buf));                  \
-		FAIL("%s: %s", (msg), __buf);                                  \
-	})
-
-/* Wrappers that fail the test on error and report it. */
-
-#define xaccept_nonblock(fd, addr, len)                                        \
-	({                                                                     \
-		int __ret =                                                    \
-			accept_timeout((fd), (addr), (len), IO_TIMEOUT_SEC);   \
-		if (__ret == -1)                                               \
-			FAIL_ERRNO("accept");                                  \
-		__ret;                                                         \
-	})
-
-#define xbind(fd, addr, len)                                                   \
-	({                                                                     \
-		int __ret = bind((fd), (addr), (len));                         \
-		if (__ret == -1)                                               \
-			FAIL_ERRNO("bind");                                    \
-		__ret;                                                         \
-	})
-
-#define xclose(fd)                                                             \
-	({                                                                     \
-		int __ret = close((fd));                                       \
-		if (__ret == -1)                                               \
-			FAIL_ERRNO("close");                                   \
-		__ret;                                                         \
-	})
-
-#define xconnect(fd, addr, len)                                                \
-	({                                                                     \
-		int __ret = connect((fd), (addr), (len));                      \
-		if (__ret == -1)                                               \
-			FAIL_ERRNO("connect");                                 \
-		__ret;                                                         \
-	})
-
-#define xgetsockname(fd, addr, len)                                            \
-	({                                                                     \
-		int __ret = getsockname((fd), (addr), (len));                  \
-		if (__ret == -1)                                               \
-			FAIL_ERRNO("getsockname");                             \
-		__ret;                                                         \
-	})
-
-#define xgetsockopt(fd, level, name, val, len)                                 \
-	({                                                                     \
-		int __ret = getsockopt((fd), (level), (name), (val), (len));   \
-		if (__ret == -1)                                               \
-			FAIL_ERRNO("getsockopt(" #name ")");                   \
-		__ret;                                                         \
-	})
-
-#define xlisten(fd, backlog)                                                   \
-	({                                                                     \
-		int __ret = listen((fd), (backlog));                           \
-		if (__ret == -1)                                               \
-			FAIL_ERRNO("listen");                                  \
-		__ret;                                                         \
-	})
-
-#define xsetsockopt(fd, level, name, val, len)                                 \
-	({                                                                     \
-		int __ret = setsockopt((fd), (level), (name), (val), (len));   \
-		if (__ret == -1)                                               \
-			FAIL_ERRNO("setsockopt(" #name ")");                   \
-		__ret;                                                         \
-	})
-
-#define xsend(fd, buf, len, flags)                                             \
-	({                                                                     \
-		ssize_t __ret = send((fd), (buf), (len), (flags));             \
-		if (__ret == -1)                                               \
-			FAIL_ERRNO("send");                                    \
-		__ret;                                                         \
-	})
-
-#define xrecv_nonblock(fd, buf, len, flags)                                    \
-	({                                                                     \
-		ssize_t __ret = recv_timeout((fd), (buf), (len), (flags),      \
-					     IO_TIMEOUT_SEC);                  \
-		if (__ret == -1)                                               \
-			FAIL_ERRNO("recv");                                    \
-		__ret;                                                         \
-	})
-
-#define xsocket(family, sotype, flags)                                         \
-	({                                                                     \
-		int __ret = socket(family, sotype, flags);                     \
-		if (__ret == -1)                                               \
-			FAIL_ERRNO("socket");                                  \
-		__ret;                                                         \
-	})
-
-#define xbpf_map_delete_elem(fd, key)                                          \
-	({                                                                     \
-		int __ret = bpf_map_delete_elem((fd), (key));                  \
-		if (__ret < 0)                                               \
-			FAIL_ERRNO("map_delete");                              \
-		__ret;                                                         \
-	})
-
-#define xbpf_map_lookup_elem(fd, key, val)                                     \
-	({                                                                     \
-		int __ret = bpf_map_lookup_elem((fd), (key), (val));           \
-		if (__ret < 0)                                               \
-			FAIL_ERRNO("map_lookup");                              \
-		__ret;                                                         \
-	})
-
-#define xbpf_map_update_elem(fd, key, val, flags)                              \
-	({                                                                     \
-		int __ret = bpf_map_update_elem((fd), (key), (val), (flags));  \
-		if (__ret < 0)                                               \
-			FAIL_ERRNO("map_update");                              \
-		__ret;                                                         \
-	})
-
-#define xbpf_prog_attach(prog, target, type, flags)                            \
-	({                                                                     \
-		int __ret =                                                    \
-			bpf_prog_attach((prog), (target), (type), (flags));    \
-		if (__ret < 0)                                               \
-			FAIL_ERRNO("prog_attach(" #type ")");                  \
-		__ret;                                                         \
-	})
-
-#define xbpf_prog_detach2(prog, target, type)                                  \
-	({                                                                     \
-		int __ret = bpf_prog_detach2((prog), (target), (type));        \
-		if (__ret < 0)                                               \
-			FAIL_ERRNO("prog_detach2(" #type ")");                 \
-		__ret;                                                         \
-	})
-
-#define xpthread_create(thread, attr, func, arg)                               \
-	({                                                                     \
-		int __ret = pthread_create((thread), (attr), (func), (arg));   \
-		errno = __ret;                                                 \
-		if (__ret)                                                     \
-			FAIL_ERRNO("pthread_create");                          \
-		__ret;                                                         \
-	})
-
-#define xpthread_join(thread, retval)                                          \
-	({                                                                     \
-		int __ret = pthread_join((thread), (retval));                  \
-		errno = __ret;                                                 \
-		if (__ret)                                                     \
-			FAIL_ERRNO("pthread_join");                            \
-		__ret;                                                         \
-	})
-
-static int poll_read(int fd, unsigned int timeout_sec)
-{
-	struct timeval timeout = { .tv_sec = timeout_sec };
-	fd_set rfds;
-	int r;
-
-	FD_ZERO(&rfds);
-	FD_SET(fd, &rfds);
-
-	r = select(fd + 1, &rfds, NULL, NULL, &timeout);
-	if (r == 0)
-		errno = ETIME;
-
-	return r == 1 ? 0 : -1;
-}
-
-static int accept_timeout(int fd, struct sockaddr *addr, socklen_t *len,
-			  unsigned int timeout_sec)
-{
-	if (poll_read(fd, timeout_sec))
-		return -1;
-
-	return accept(fd, addr, len);
-}
-
-static int recv_timeout(int fd, void *buf, size_t len, int flags,
-			unsigned int timeout_sec)
-{
-	if (poll_read(fd, timeout_sec))
-		return -1;
-
-	return recv(fd, buf, len, flags);
-}
-
-static void init_addr_loopback4(struct sockaddr_storage *ss, socklen_t *len)
-{
-	struct sockaddr_in *addr4 = memset(ss, 0, sizeof(*ss));
-
-	addr4->sin_family = AF_INET;
-	addr4->sin_port = 0;
-	addr4->sin_addr.s_addr = htonl(INADDR_LOOPBACK);
-	*len = sizeof(*addr4);
-}
-
-static void init_addr_loopback6(struct sockaddr_storage *ss, socklen_t *len)
-{
-	struct sockaddr_in6 *addr6 = memset(ss, 0, sizeof(*ss));
-
-	addr6->sin6_family = AF_INET6;
-	addr6->sin6_port = 0;
-	addr6->sin6_addr = in6addr_loopback;
-	*len = sizeof(*addr6);
-}
-
-static void init_addr_loopback(int family, struct sockaddr_storage *ss,
-			       socklen_t *len)
-{
-	switch (family) {
-	case AF_INET:
-		init_addr_loopback4(ss, len);
-		return;
-	case AF_INET6:
-		init_addr_loopback6(ss, len);
-		return;
-	default:
-		FAIL("unsupported address family %d", family);
-	}
-}
-
-static inline struct sockaddr *sockaddr(struct sockaddr_storage *ss)
-{
-	return (struct sockaddr *)ss;
-}
+#include "sockmap_helpers.h"
 
 static int enable_reuseport(int s, int progfd)
 {
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH bpf v2 09/12] bpf: sockmap, build helper to create connected socket pair
  2023-03-27 17:54 [PATCH bpf v2 00/11] bpf sockmap fixes John Fastabend
                   ` (7 preceding siblings ...)
  2023-03-27 17:54 ` [PATCH bpf v2 08/12] bpf: sockmap, pull socket helpers out of listen test for general use John Fastabend
@ 2023-03-27 17:54 ` John Fastabend
  2023-03-27 17:54 ` [PATCH bpf v2 10/12] bpf: sockmap, test shutdown() correctly exits epoll and recv()=0 John Fastabend
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 24+ messages in thread
From: John Fastabend @ 2023-03-27 17:54 UTC (permalink / raw)
  To: cong.wang, jakub, daniel, lmb, edumazet
  Cc: john.fastabend, bpf, netdev, ast, andrii, will

A common operation for testing is to spin up a pair of connected
sockets. We can then use these to run specific tests that need to send
data, check BPF programs, and so on.

The sockmap_listen programs already have this logic; let's move it into
the new sockmap_helpers header file for general use.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
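For illustration, the helpers combine roughly as follows. This is only a
hypothetical sketch (example_echo_pair() and the sock_mapfd parameter
are made up for illustration); it assumes socket_loopback() plus the
xsend()/xclose() wrappers from the same header:

static void example_echo_pair(int sock_mapfd)
{
	int s, c0, c1, p0, p1, err;
	char b = 'a';

	s = socket_loopback(AF_INET, SOCK_STREAM);
	if (s < 0)
		return;

	err = create_socket_pairs(s, AF_INET, SOCK_STREAM, &c0, &c1, &p0, &p1);
	if (err)
		goto close_srv;

	/* Install the accepted peers and push a byte through the map. */
	err = add_to_sockmap(sock_mapfd, p0, p1);
	if (!err)
		xsend(c0, &b, 1, 0);

	xclose(p1);
	xclose(c1);
	xclose(p0);
	xclose(c0);
close_srv:
	xclose(s);
}
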
 .../bpf/prog_tests/sockmap_helpers.h          | 125 ++++++++++++++++++
 .../selftests/bpf/prog_tests/sockmap_listen.c | 107 +--------------
 2 files changed, 130 insertions(+), 102 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h b/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h
index bff56844e745..54e3a019ba72 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h
@@ -246,4 +246,129 @@ static inline struct sockaddr *sockaddr(struct sockaddr_storage *ss)
 	return (struct sockaddr *)ss;
 }
 
+static inline int add_to_sockmap(int sock_mapfd, int fd1, int fd2)
+{
+	u64 value;
+	u32 key;
+	int err;
+
+	key = 0;
+	value = fd1;
+	err = xbpf_map_update_elem(sock_mapfd, &key, &value, BPF_NOEXIST);
+	if (err)
+		return err;
+
+	key = 1;
+	value = fd2;
+	return xbpf_map_update_elem(sock_mapfd, &key, &value, BPF_NOEXIST);
+}
+
+static inline int create_socket_pairs(int s, int family, int sotype,
+				      int *c0, int *c1, int *p0, int *p1)
+{
+	struct sockaddr_storage addr;
+	socklen_t len;
+	int err = 0;
+
+	len = sizeof(addr);
+	err = xgetsockname(s, sockaddr(&addr), &len);
+	if (err)
+		return err;
+
+	*c0 = xsocket(family, sotype, 0);
+	if (*c0 < 0)
+		return errno;
+	err = xconnect(*c0, sockaddr(&addr), len);
+	if (err) {
+		err = errno;
+		goto close_cli0;
+	}
+
+	*p0 = xaccept_nonblock(s, NULL, NULL);
+	if (*p0 < 0) {
+		err = errno;
+		goto close_cli0;
+	}
+
+	*c1 = xsocket(family, sotype, 0);
+	if (*c1 < 0) {
+		err = errno;
+		goto close_peer0;
+	}
+	err = xconnect(*c1, sockaddr(&addr), len);
+	if (err) {
+		err = errno;
+		goto close_cli1;
+	}
+
+	*p1 = xaccept_nonblock(s, NULL, NULL);
+	if (*p1 < 0) {
+		err = errno;
+		goto close_peer1;
+	}
+	return err;
+close_peer1:
+	close(*p1);
+close_cli1:
+	close(*c1);
+close_peer0:
+	close(*p0);
+close_cli0:
+	close(*c0);
+	return err;
+}
+
+static inline int enable_reuseport(int s, int progfd)
+{
+	int err, one = 1;
+
+	err = xsetsockopt(s, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
+	if (err)
+		return -1;
+	err = xsetsockopt(s, SOL_SOCKET, SO_ATTACH_REUSEPORT_EBPF, &progfd,
+			  sizeof(progfd));
+	if (err)
+		return -1;
+
+	return 0;
+}
+
+static inline int socket_loopback_reuseport(int family, int sotype, int progfd)
+{
+	struct sockaddr_storage addr;
+	socklen_t len;
+	int err, s;
+
+	init_addr_loopback(family, &addr, &len);
+
+	s = xsocket(family, sotype, 0);
+	if (s == -1)
+		return -1;
+
+	if (progfd >= 0)
+		enable_reuseport(s, progfd);
+
+	err = xbind(s, sockaddr(&addr), len);
+	if (err)
+		goto close;
+
+	if (sotype & SOCK_DGRAM)
+		return s;
+
+	err = xlisten(s, SOMAXCONN);
+	if (err)
+		goto close;
+
+	return s;
+close:
+	xclose(s);
+	return -1;
+}
+
+static inline int socket_loopback(int family, int sotype)
+{
+	return socket_loopback_reuseport(family, sotype, -1);
+}
+
+
 #endif // __SOCKMAP_HELPERS__
diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
index 0f0cddd4e15e..f3913ba9e899 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
@@ -28,58 +28,6 @@
 
 #include "sockmap_helpers.h"
 
-static int enable_reuseport(int s, int progfd)
-{
-	int err, one = 1;
-
-	err = xsetsockopt(s, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
-	if (err)
-		return -1;
-	err = xsetsockopt(s, SOL_SOCKET, SO_ATTACH_REUSEPORT_EBPF, &progfd,
-			  sizeof(progfd));
-	if (err)
-		return -1;
-
-	return 0;
-}
-
-static int socket_loopback_reuseport(int family, int sotype, int progfd)
-{
-	struct sockaddr_storage addr;
-	socklen_t len;
-	int err, s;
-
-	init_addr_loopback(family, &addr, &len);
-
-	s = xsocket(family, sotype, 0);
-	if (s == -1)
-		return -1;
-
-	if (progfd >= 0)
-		enable_reuseport(s, progfd);
-
-	err = xbind(s, sockaddr(&addr), len);
-	if (err)
-		goto close;
-
-	if (sotype & SOCK_DGRAM)
-		return s;
-
-	err = xlisten(s, SOMAXCONN);
-	if (err)
-		goto close;
-
-	return s;
-close:
-	xclose(s);
-	return -1;
-}
-
-static int socket_loopback(int family, int sotype)
-{
-	return socket_loopback_reuseport(family, sotype, -1);
-}
-
 static void test_insert_invalid(struct test_sockmap_listen *skel __always_unused,
 				int family, int sotype, int mapfd)
 {
@@ -722,31 +670,12 @@ static const char *redir_mode_str(enum redir_mode mode)
 	}
 }
 
-static int add_to_sockmap(int sock_mapfd, int fd1, int fd2)
-{
-	u64 value;
-	u32 key;
-	int err;
-
-	key = 0;
-	value = fd1;
-	err = xbpf_map_update_elem(sock_mapfd, &key, &value, BPF_NOEXIST);
-	if (err)
-		return err;
-
-	key = 1;
-	value = fd2;
-	return xbpf_map_update_elem(sock_mapfd, &key, &value, BPF_NOEXIST);
-}
-
 static void redir_to_connected(int family, int sotype, int sock_mapfd,
 			       int verd_mapfd, enum redir_mode mode)
 {
 	const char *log_prefix = redir_mode_str(mode);
-	struct sockaddr_storage addr;
 	int s, c0, c1, p0, p1;
 	unsigned int pass;
-	socklen_t len;
 	int err, n;
 	u32 key;
 	char b;
@@ -757,36 +686,13 @@ static void redir_to_connected(int family, int sotype, int sock_mapfd,
 	if (s < 0)
 		return;
 
-	len = sizeof(addr);
-	err = xgetsockname(s, sockaddr(&addr), &len);
+	err = create_socket_pairs(s, family, sotype, &c0, &c1, &p0, &p1);
 	if (err)
 		goto close_srv;
 
-	c0 = xsocket(family, sotype, 0);
-	if (c0 < 0)
-		goto close_srv;
-	err = xconnect(c0, sockaddr(&addr), len);
-	if (err)
-		goto close_cli0;
-
-	p0 = xaccept_nonblock(s, NULL, NULL);
-	if (p0 < 0)
-		goto close_cli0;
-
-	c1 = xsocket(family, sotype, 0);
-	if (c1 < 0)
-		goto close_peer0;
-	err = xconnect(c1, sockaddr(&addr), len);
-	if (err)
-		goto close_cli1;
-
-	p1 = xaccept_nonblock(s, NULL, NULL);
-	if (p1 < 0)
-		goto close_cli1;
-
 	err = add_to_sockmap(sock_mapfd, p0, p1);
 	if (err)
-		goto close_peer1;
+		goto close;
 
 	n = write(mode == REDIR_INGRESS ? c1 : p1, "a", 1);
 	if (n < 0)
@@ -794,12 +700,12 @@ static void redir_to_connected(int family, int sotype, int sock_mapfd,
 	if (n == 0)
 		FAIL("%s: incomplete write", log_prefix);
 	if (n < 1)
-		goto close_peer1;
+		goto close;
 
 	key = SK_PASS;
 	err = xbpf_map_lookup_elem(verd_mapfd, &key, &pass);
 	if (err)
-		goto close_peer1;
+		goto close;
 	if (pass != 1)
 		FAIL("%s: want pass count 1, have %d", log_prefix, pass);
 	n = recv_timeout(c0, &b, 1, 0, IO_TIMEOUT_SEC);
@@ -808,13 +714,10 @@ static void redir_to_connected(int family, int sotype, int sock_mapfd,
 	if (n == 0)
 		FAIL("%s: incomplete recv", log_prefix);
 
-close_peer1:
+close:
 	xclose(p1);
-close_cli1:
 	xclose(c1);
-close_peer0:
 	xclose(p0);
-close_cli0:
 	xclose(c0);
 close_srv:
 	xclose(s);
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH bpf v2 10/12] bpf: sockmap, test shutdown() correctly exits epoll and recv()=0
  2023-03-27 17:54 [PATCH bpf v2 00/11] bpf sockmap fixes John Fastabend
                   ` (8 preceding siblings ...)
  2023-03-27 17:54 ` [PATCH bpf v2 09/12] bpf: sockmap, build helper to create connected socket pair John Fastabend
@ 2023-03-27 17:54 ` John Fastabend
  2023-03-27 17:54 ` [PATCH bpf v2 11/12] bpf: sockmap, test FIONREAD returns correct bytes in rx buffer John Fastabend
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 24+ messages in thread
From: John Fastabend @ 2023-03-27 17:54 UTC (permalink / raw)
  To: cong.wang, jakub, daniel, lmb, edumazet
  Cc: john.fastabend, bpf, netdev, ast, andrii, will

When a session gracefully shuts down, epoll needs to wake up and any
recv() readers should return 0, not the -EAGAIN they previously
returned.

Note we use epoll instead of select so that the wake-up on the shutdown
event is exercised through epoll as well.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
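The expectation under test boils down to the following sketch
(illustrative only, not the selftest itself; c1 is assumed to be the
sockmap socket whose peer already called shutdown()):

	struct epoll_event ev = { .events = EPOLLIN }, out;
	int epollfd = epoll_create1(0);
	char b;

	epoll_ctl(epollfd, EPOLL_CTL_ADD, c1, &ev);
	/* Must wake on the FIN rather than block forever... */
	epoll_wait(epollfd, &out, 1, -1);
	/* ...and the reader must then see EOF (0), not -EAGAIN. */
	recv(c1, &b, 1, MSG_DONTWAIT);
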
 .../selftests/bpf/prog_tests/sockmap_basic.c  | 68 +++++++++++++++++++
 .../bpf/progs/test_sockmap_pass_prog.c        | 32 +++++++++
 2 files changed, 100 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/test_sockmap_pass_prog.c

diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
index 0aa088900699..8f0d60f5c847 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
@@ -2,6 +2,7 @@
 // Copyright (c) 2020 Cloudflare
 #include <error.h>
 #include <netinet/tcp.h>
+#include <sys/epoll.h>
 
 #include "test_progs.h"
 #include "test_skmsg_load_helpers.skel.h"
@@ -9,8 +10,11 @@
 #include "test_sockmap_invalid_update.skel.h"
 #include "test_sockmap_skb_verdict_attach.skel.h"
 #include "test_sockmap_progs_query.skel.h"
+#include "test_sockmap_pass_prog.skel.h"
 #include "bpf_iter_sockmap.skel.h"
 
+#include "sockmap_helpers.h"
+
 #define TCP_REPAIR		19	/* TCP sock is under repair right now */
 
 #define TCP_REPAIR_ON		1
@@ -350,6 +354,68 @@ static void test_sockmap_progs_query(enum bpf_attach_type attach_type)
 	test_sockmap_progs_query__destroy(skel);
 }
 
+#define MAX_EVENTS 10
+static void test_sockmap_skb_verdict_shutdown(void)
+{
+	int n, err, map, verdict, s, c0, c1, p0, p1;
+	struct epoll_event ev, events[MAX_EVENTS];
+	struct test_sockmap_pass_prog *skel;
+	int epollfd;
+	int zero = 0;
+	char b;
+
+	skel = test_sockmap_pass_prog__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "open_and_load"))
+		return;
+
+	verdict = bpf_program__fd(skel->progs.prog_skb_verdict);
+	map = bpf_map__fd(skel->maps.sock_map_rx);
+
+	err = bpf_prog_attach(verdict, map, BPF_SK_SKB_STREAM_VERDICT, 0);
+	if (!ASSERT_OK(err, "bpf_prog_attach"))
+		goto out;
+
+	s = socket_loopback(AF_INET, SOCK_STREAM);
+	if (s < 0)
+		goto out;
+	err = create_socket_pairs(s, AF_INET, SOCK_STREAM, &c0, &c1, &p0, &p1);
+	if (err < 0)
+		goto out;
+
+	err = bpf_map_update_elem(map, &zero, &c1, BPF_NOEXIST);
+	if (err < 0)
+		goto out_close;
+
+	shutdown(c0, SHUT_RDWR);
+	shutdown(p1, SHUT_WR);
+
+	ev.events = EPOLLIN;
+	ev.data.fd = c1;
+
+	epollfd = epoll_create1(0);
+	if (!ASSERT_GT(epollfd, -1, "epoll_create(0)"))
+		goto out_close;
+	err = epoll_ctl(epollfd, EPOLL_CTL_ADD, c1, &ev);
+	if (!ASSERT_OK(err, "epoll_ctl(EPOLL_CTL_ADD)"))
+		goto out_close;
+	err = epoll_wait(epollfd, events, MAX_EVENTS, -1);
+	if (!ASSERT_EQ(err, 1, "epoll_wait(fd)"))
+		goto out_close;
+
+	n = recv(c1, &b, 1, SOCK_NONBLOCK);
+	ASSERT_EQ(n, 0, "recv_timeout(fin)");
+	n = recv(p0, &b, 1, SOCK_NONBLOCK);
+	ASSERT_EQ(n, 0, "recv_timeout(fin)");
+
+out_close:
+	close(c0);
+	close(p0);
+	close(c1);
+	close(p1);
+out:
+	test_sockmap_pass_prog__destroy(skel);
+}
+
 void test_sockmap_basic(void)
 {
 	if (test__start_subtest("sockmap create_update_free"))
@@ -384,4 +450,6 @@ void test_sockmap_basic(void)
 		test_sockmap_progs_query(BPF_SK_SKB_STREAM_VERDICT);
 	if (test__start_subtest("sockmap skb_verdict progs query"))
 		test_sockmap_progs_query(BPF_SK_SKB_VERDICT);
+	if (test__start_subtest("sockmap skb_verdict shutdown"))
+		test_sockmap_skb_verdict_shutdown();
 }
diff --git a/tools/testing/selftests/bpf/progs/test_sockmap_pass_prog.c b/tools/testing/selftests/bpf/progs/test_sockmap_pass_prog.c
new file mode 100644
index 000000000000..1d86a717a290
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_sockmap_pass_prog.c
@@ -0,0 +1,32 @@
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_endian.h>
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SOCKMAP);
+	__uint(max_entries, 20);
+	__type(key, int);
+	__type(value, int);
+} sock_map_rx SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SOCKMAP);
+	__uint(max_entries, 20);
+	__type(key, int);
+	__type(value, int);
+} sock_map_tx SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SOCKMAP);
+	__uint(max_entries, 20);
+	__type(key, int);
+	__type(value, int);
+} sock_map_msg SEC(".maps");
+
+SEC("sk_skb")
+int prog_skb_verdict(struct __sk_buff *skb)
+{
+	return SK_PASS;
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH bpf v2 11/12] bpf: sockmap, test FIONREAD returns correct bytes in rx buffer
  2023-03-27 17:54 [PATCH bpf v2 00/11] bpf sockmap fixes John Fastabend
                   ` (9 preceding siblings ...)
  2023-03-27 17:54 ` [PATCH bpf v2 10/12] bpf: sockmap, test shutdown() correctly exits epoll and recv()=0 John Fastabend
@ 2023-03-27 17:54 ` John Fastabend
  2023-03-27 17:54 ` [PATCH bpf v2 12/12] bpf: sockmap, test FIONREAD returns correct bytes in rx buffer with drops John Fastabend
  2023-03-27 18:17 ` [PATCH bpf v2 00/11] bpf sockmap fixes John Fastabend
  12 siblings, 0 replies; 24+ messages in thread
From: John Fastabend @ 2023-03-27 17:54 UTC (permalink / raw)
  To: cong.wang, jakub, daniel, lmb, edumazet
  Cc: john.fastabend, bpf, netdev, ast, andrii, will

A bug was reported where ioctl(FIONREAD) returned zero even though a
socket with an SK_SKB verdict program attached had bytes in its msg
queue. The result is that programs may hang or, more likely, try to
recover but use suboptimal buffer sizes.

Add a test to check that ioctl(FIONREAD) returns the correct number of
bytes.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
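The property being checked, in isolation (a sketch, assuming the same
socket pair setup as the test below, with p1 sending to the
sockmap-attached receiver c1):

	char buf[256] = "0123456789";
	int avail = 0;

	xsend(p1, buf, sizeof(buf), 0);
	/* FIONREAD must report the bytes queued on c1; the reported bug
	 * returned 0 here, leaving callers no way to size their reads. */
	ioctl(c1, FIONREAD, &avail);
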
 .../selftests/bpf/prog_tests/sockmap_basic.c  | 48 +++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
index 8f0d60f5c847..16d76ec1ea1c 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
@@ -416,6 +416,52 @@ static void test_sockmap_skb_verdict_shutdown(void)
 	test_sockmap_pass_prog__destroy(skel);
 }
 
+static void test_sockmap_skb_verdict_fionread(void)
+{
+	int err, map, verdict, s, c0, c1, p0, p1;
+	struct test_sockmap_pass_prog *skel;
+	int zero = 0, sent, recvd, avail;
+	char buf[256] = "0123456789";
+
+	skel = test_sockmap_pass_prog__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "open_and_load"))
+		return;
+
+	verdict = bpf_program__fd(skel->progs.prog_skb_verdict);
+	map = bpf_map__fd(skel->maps.sock_map_rx);
+
+	err = bpf_prog_attach(verdict, map, BPF_SK_SKB_STREAM_VERDICT, 0);
+	if (!ASSERT_OK(err, "bpf_prog_attach"))
+		goto out;
+
+	s = socket_loopback(AF_INET, SOCK_STREAM);
+	if (!ASSERT_GT(s, -1, "socket_loopback(s)"))
+		goto out;
+	err = create_socket_pairs(s, AF_INET, SOCK_STREAM, &c0, &c1, &p0, &p1);
+	if (!ASSERT_OK(err, "create_socket_pairs(s)"))
+		goto out;
+
+	err = bpf_map_update_elem(map, &zero, &c1, BPF_NOEXIST);
+	if (!ASSERT_OK(err, "bpf_map_update_elem(c1)"))
+		goto out_close;
+
+	sent = xsend(p1, &buf, sizeof(buf), 0);
+	ASSERT_EQ(sent, sizeof(buf), "xsend(p0)");
+	err = ioctl(c1, FIONREAD, &avail);
+	ASSERT_OK(err, "ioctl(FIONREAD) error");
+	ASSERT_EQ(avail, sizeof(buf), "ioctl(FIONREAD)");
+	recvd = recv_timeout(c1, &buf, sizeof(buf), SOCK_NONBLOCK, IO_TIMEOUT_SEC);
+	ASSERT_EQ(recvd, sizeof(buf), "recv_timeout(c0)");
+
+out_close:
+	close(c0);
+	close(p0);
+	close(c1);
+	close(p1);
+out:
+	test_sockmap_pass_prog__destroy(skel);
+}
+
 void test_sockmap_basic(void)
 {
 	if (test__start_subtest("sockmap create_update_free"))
@@ -452,4 +498,6 @@ void test_sockmap_basic(void)
 		test_sockmap_progs_query(BPF_SK_SKB_VERDICT);
 	if (test__start_subtest("sockmap skb_verdict shutdown"))
 		test_sockmap_skb_verdict_shutdown();
+	if (test__start_subtest("sockmap skb_verdict fionread"))
+		test_sockmap_skb_verdict_fionread();
 }
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH bpf v2 12/12] bpf: sockmap, test FIONREAD returns correct bytes in rx buffer with drops
  2023-03-27 17:54 [PATCH bpf v2 00/11] bpf sockmap fixes John Fastabend
                   ` (10 preceding siblings ...)
  2023-03-27 17:54 ` [PATCH bpf v2 11/12] bpf: sockmap, test FIONREAD returns correct bytes in rx buffer John Fastabend
@ 2023-03-27 17:54 ` John Fastabend
  2023-03-27 18:17 ` [PATCH bpf v2 00/11] bpf sockmap fixes John Fastabend
  12 siblings, 0 replies; 24+ messages in thread
From: John Fastabend @ 2023-03-27 17:54 UTC (permalink / raw)
  To: cong.wang, jakub, daniel, lmb, edumazet
  Cc: john.fastabend, bpf, netdev, ast, andrii, will

When a BPF program drops pkts, the sockmap logic 'eats' the packet and
updates copied_seq immediately. In the PASS case, where the sk_buff is
accepted, copied_seq is instead updated from the recvmsg path, so we
need a new test to cover the drop case.

The original patch series broke this, resulting in

test_sockmap_skb_verdict_fionread:PASS:ioctl(FIONREAD) error 0 nsec
test_sockmap_skb_verdict_fionread:FAIL:ioctl(FIONREAD) unexpected ioctl(FIONREAD): actual 1503041772 != expected 256
#176/17  sockmap_basic/sockmap skb_verdict fionread on drop:FAIL

After the updated patch with the fix:

#176/16  sockmap_basic/sockmap skb_verdict fionread:OK
#176/17  sockmap_basic/sockmap skb_verdict fionread on drop:OK

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
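In short, the expected FIONREAD result now depends on the verdict
(sketch, same setup as the PASS test above):

	/* SK_PASS: data waits in the receive queue for recvmsg(),
	 * so FIONREAD reports sizeof(buf). */
	ioctl(c1, FIONREAD, &avail);	/* avail == sizeof(buf) */

	/* SK_DROP: the skb is consumed and copied_seq advanced
	 * immediately, so nothing is reported as readable. */
	ioctl(c1, FIONREAD, &avail);	/* avail == 0 */
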
 .../selftests/bpf/prog_tests/sockmap_basic.c  | 47 ++++++++++++++-----
 .../bpf/progs/test_sockmap_drop_prog.c        | 32 +++++++++++++
 2 files changed, 66 insertions(+), 13 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/test_sockmap_drop_prog.c

diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
index 16d76ec1ea1c..22cbe947de2f 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
@@ -11,6 +11,7 @@
 #include "test_sockmap_skb_verdict_attach.skel.h"
 #include "test_sockmap_progs_query.skel.h"
 #include "test_sockmap_pass_prog.skel.h"
+#include "test_sockmap_drop_prog.skel.h"
 #include "bpf_iter_sockmap.skel.h"
 
 #include "sockmap_helpers.h"
@@ -416,19 +417,31 @@ static void test_sockmap_skb_verdict_shutdown(void)
 	test_sockmap_pass_prog__destroy(skel);
 }
 
-static void test_sockmap_skb_verdict_fionread(void)
+static void test_sockmap_skb_verdict_fionread(bool pass_prog)
 {
+	int expected, zero = 0, sent, recvd, avail;
 	int err, map, verdict, s, c0, c1, p0, p1;
-	struct test_sockmap_pass_prog *skel;
-	int zero = 0, sent, recvd, avail;
+	struct test_sockmap_pass_prog *pass;
+	struct test_sockmap_drop_prog *drop;
 	char buf[256] = "0123456789";
 
-	skel = test_sockmap_pass_prog__open_and_load();
-	if (!ASSERT_OK_PTR(skel, "open_and_load"))
-		return;
+	if (pass_prog) {
+		pass = test_sockmap_pass_prog__open_and_load();
+		if (!ASSERT_OK_PTR(pass, "open_and_load"))
+			return;
+		verdict = bpf_program__fd(pass->progs.prog_skb_verdict);
+		map = bpf_map__fd(pass->maps.sock_map_rx);
+		expected = sizeof(buf);
+	} else {
+		drop = test_sockmap_drop_prog__open_and_load();
+		if (!ASSERT_OK_PTR(drop, "open_and_load"))
+			return;
+		verdict = bpf_program__fd(drop->progs.prog_skb_verdict);
+		map = bpf_map__fd(drop->maps.sock_map_rx);
+		/* On drop data is consumed immediately and copied_seq inc'd */
+		expected = 0;
+	}
 
-	verdict = bpf_program__fd(skel->progs.prog_skb_verdict);
-	map = bpf_map__fd(skel->maps.sock_map_rx);
 
 	err = bpf_prog_attach(verdict, map, BPF_SK_SKB_STREAM_VERDICT, 0);
 	if (!ASSERT_OK(err, "bpf_prog_attach"))
@@ -449,9 +462,12 @@ static void test_sockmap_skb_verdict_fionread(void)
 	ASSERT_EQ(sent, sizeof(buf), "xsend(p0)");
 	err = ioctl(c1, FIONREAD, &avail);
 	ASSERT_OK(err, "ioctl(FIONREAD) error");
-	ASSERT_EQ(avail, sizeof(buf), "ioctl(FIONREAD)");
-	recvd = recv_timeout(c1, &buf, sizeof(buf), SOCK_NONBLOCK, IO_TIMEOUT_SEC);
-	ASSERT_EQ(recvd, sizeof(buf), "recv_timeout(c0)");
+	ASSERT_EQ(avail, expected, "ioctl(FIONREAD)");
+	/* On DROP test there will be no data to read */
+	if (pass_prog) {
+		recvd = recv_timeout(c1, &buf, sizeof(buf), SOCK_NONBLOCK, IO_TIMEOUT_SEC);
+		ASSERT_EQ(recvd, sizeof(buf), "recv_timeout(c0)");
+	}
 
 out_close:
 	close(c0);
@@ -459,7 +475,10 @@ static void test_sockmap_skb_verdict_fionread(void)
 	close(c1);
 	close(p1);
 out:
-	test_sockmap_pass_prog__destroy(skel);
+	if (pass_prog)
+		test_sockmap_pass_prog__destroy(pass);
+	else
+		test_sockmap_drop_prog__destroy(drop);
 }
 
 void test_sockmap_basic(void)
@@ -499,5 +518,7 @@ void test_sockmap_basic(void)
 	if (test__start_subtest("sockmap skb_verdict shutdown"))
 		test_sockmap_skb_verdict_shutdown();
 	if (test__start_subtest("sockmap skb_verdict fionread"))
-		test_sockmap_skb_verdict_fionread();
+		test_sockmap_skb_verdict_fionread(true);
+	if (test__start_subtest("sockmap skb_verdict fionread on drop"))
+		test_sockmap_skb_verdict_fionread(false);
 }
diff --git a/tools/testing/selftests/bpf/progs/test_sockmap_drop_prog.c b/tools/testing/selftests/bpf/progs/test_sockmap_drop_prog.c
new file mode 100644
index 000000000000..29314805ce42
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_sockmap_drop_prog.c
@@ -0,0 +1,32 @@
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_endian.h>
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SOCKMAP);
+	__uint(max_entries, 20);
+	__type(key, int);
+	__type(value, int);
+} sock_map_rx SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SOCKMAP);
+	__uint(max_entries, 20);
+	__type(key, int);
+	__type(value, int);
+} sock_map_tx SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SOCKMAP);
+	__uint(max_entries, 20);
+	__type(key, int);
+	__type(value, int);
+} sock_map_msg SEC(".maps");
+
+SEC("sk_skb")
+int prog_skb_verdict(struct __sk_buff *skb)
+{
+	return SK_DROP;
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* RE: [PATCH bpf v2 00/11] bpf sockmap fixes
  2023-03-27 17:54 [PATCH bpf v2 00/11] bpf sockmap fixes John Fastabend
                   ` (11 preceding siblings ...)
  2023-03-27 17:54 ` [PATCH bpf v2 12/12] bpf: sockmap, test FIONREAD returns correct bytes in rx buffer with drops John Fastabend
@ 2023-03-27 18:17 ` John Fastabend
  12 siblings, 0 replies; 24+ messages in thread
From: John Fastabend @ 2023-03-27 18:17 UTC (permalink / raw)
  To: John Fastabend, cong.wang, jakub, daniel, lmb, edumazet
  Cc: john.fastabend, bpf, netdev, ast, andrii, will

John Fastabend wrote:
> Fixes for sockmap running against NGINX TCP tests and also on an
> underprovisioned VM so that we hit error (ENOMEM) cases regularly.
> 
> The first 3 patches fix cases related to ENOMEM that were either
> causing splats or data hangs.
> 
> Then 4-7 resolved cases found when running NGINX with its sockets
> assigned to sockmap. These mostly have to do with handling fin/shutdown
> incorrectly and ensuring epoll_wait works as expected.
> 
> Patches 8 and 9 extract some of the logic used for sockmap_listen tests
> so that we can use it in other tests because it didn't make much
> sense to me to add tests to the sockmap_listen cases when here we
> are testing send/recv *basic* cases.
> 
> Finally patches 10 and 11 add the new tests to ensure we handle
> ioctl(FIONREAD) and shutdown correctly.
> 
> To test the series I ran the NGINX compliance tests and the sockmap
> selftests. For now our compliance test just runs with SK_PASS.
> 
> There are some more things to be done here, but these 11 patches
> stand on their own in my opionion and fix issues we are having in
> CI now. For bpf-next we can fixup/improve selftests to use the
> ASSERT_* in sockmap_helpers, streamline some of the testing, and
> add more tests. We also still are debugging a few additional flakes
> patches coming soon.
> 
> v2: use skb_queue_empty instead of *_empty_lockless (Eric)
>     oops incorrectly updated copied_seq on DROP case (Eric)
>     added test for drop case copied_seq update
> 

Sorry folks, about the To: line there. I resent with the Cc list
included here; I had suppressed the Cc list in the first batch.

.John

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH bpf v2 01/12] bpf: sockmap, pass skb ownership through read_skb
  2023-03-27 17:54 ` [PATCH bpf v2 01/12] bpf: sockmap, pass skb ownership through read_skb John Fastabend
@ 2023-03-28 10:42   ` Jakub Sitnicki
  0 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2023-03-28 10:42 UTC (permalink / raw)
  To: John Fastabend
  Cc: cong.wang, daniel, lmb, edumazet, bpf, netdev, ast, andrii, will

On Mon, Mar 27, 2023 at 10:54 AM -07, John Fastabend wrote:
> The read_skb hook calls consume_skb() now, but this means that if the
> recv_actor program wants to use the skb it needs to inc the ref cnt
> so that the consume_skb() doesn't kfree the sk_buff.
>
> This is problematic because in some error cases under memory pressure
> we may need to linearize the sk_buff from sk_psock_skb_ingress_enqueue().
> Then we get this,
>
>  skb_linearize()
>    __pskb_pull_tail()
>      pskb_expand_head()
>        BUG_ON(skb_shared(skb))
>
> Because we incremented the users refcnt from sk_psock_verdict_recv(), we
> hit the BUG_ON() with refcnt > 1 and trip it.
>
> To fix, let's simply pass ownership of the sk_buff through the skb_read
> call. Then we can drop the consume from read_skb handlers and assume
> the verdict recv does any required kfree.
>
> Bug found while testing in our CI which runs in VMs that hit memory
> constraints rather regularly. William tested TCP read_skb handlers.
>
> [  106.536188] ------------[ cut here ]------------
> [  106.536197] kernel BUG at net/core/skbuff.c:1693!
> [  106.536479] invalid opcode: 0000 [#1] PREEMPT SMP PTI
> [  106.536726] CPU: 3 PID: 1495 Comm: curl Not tainted 5.19.0-rc5 #1
> [  106.537023] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ArchLinux 1.16.0-1 04/01/2014
> [  106.537467] RIP: 0010:pskb_expand_head+0x269/0x330
> [  106.538585] RSP: 0018:ffffc90000138b68 EFLAGS: 00010202
> [  106.538839] RAX: 000000000000003f RBX: ffff8881048940e8 RCX: 0000000000000a20
> [  106.539186] RDX: 0000000000000002 RSI: 0000000000000000 RDI: ffff8881048940e8
> [  106.539529] RBP: ffffc90000138be8 R08: 00000000e161fd1a R09: 0000000000000000
> [  106.539877] R10: 0000000000000018 R11: 0000000000000000 R12: ffff8881048940e8
> [  106.540222] R13: 0000000000000003 R14: 0000000000000000 R15: ffff8881048940e8
> [  106.540568] FS:  00007f277dde9f00(0000) GS:ffff88813bd80000(0000) knlGS:0000000000000000
> [  106.540954] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [  106.541227] CR2: 00007f277eeede64 CR3: 000000000ad3e000 CR4: 00000000000006e0
> [  106.541569] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [  106.541915] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> [  106.542255] Call Trace:
> [  106.542383]  <IRQ>
> [  106.542487]  __pskb_pull_tail+0x4b/0x3e0
> [  106.542681]  skb_ensure_writable+0x85/0xa0
> [  106.542882]  sk_skb_pull_data+0x18/0x20
> [  106.543084]  bpf_prog_b517a65a242018b0_bpf_skskb_http_verdict+0x3a9/0x4aa9
> [  106.543536]  ? migrate_disable+0x66/0x80
> [  106.543871]  sk_psock_verdict_recv+0xe2/0x310
> [  106.544258]  ? sk_psock_write_space+0x1f0/0x1f0
> [  106.544561]  tcp_read_skb+0x7b/0x120
> [  106.544740]  tcp_data_queue+0x904/0xee0
> [  106.544931]  tcp_rcv_established+0x212/0x7c0
> [  106.545142]  tcp_v4_do_rcv+0x174/0x2a0
> [  106.545326]  tcp_v4_rcv+0xe70/0xf60
> [  106.545500]  ip_protocol_deliver_rcu+0x48/0x290
> [  106.545744]  ip_local_deliver_finish+0xa7/0x150
>
> Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
> Reported-by: William Findlay <will@isovalent.com>
> Tested-by: William Findlay <will@isovalent.com>
> Signed-off-by: John Fastabend <john.fastabend@gmail.com>
> ---

Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH bpf v2 02/12] bpf: sockmap, convert schedule_work into delayed_work
  2023-03-27 17:54 ` [PATCH bpf v2 02/12] bpf: sockmap, convert schedule_work into delayed_work John Fastabend
@ 2023-03-28 12:09   ` Jakub Sitnicki
  2023-03-28 21:56     ` John Fastabend
  0 siblings, 1 reply; 24+ messages in thread
From: Jakub Sitnicki @ 2023-03-28 12:09 UTC (permalink / raw)
  To: John Fastabend
  Cc: cong.wang, daniel, lmb, edumazet, bpf, netdev, ast, andrii, will

On Mon, Mar 27, 2023 at 10:54 AM -07, John Fastabend wrote:
> Sk_buffs are fed into sockmap verdict programs either from a strparser
> (when the user might want to decide how framing of skb is done by attaching
> another parser program) or directly through tcp_read_sock. The
> tcp_read_sock is the preferred method for performance when the BPF logic is
> a stream parser.
>
> The flow for Cilium's common use case with a stream parser is,
>
>  tcp_read_sock()
>   sk_psock_verdict_recv
>     ret = bpf_prog_run_pin_on_cpu()
>     sk_psock_verdict_apply(sock, skb, ret)
>      // if system is under memory pressure or app is slow we may
>      // need to queue skb. Do this queuing through ingress_skb and
>      // then kick timer to wake up handler
>      skb_queue_tail(ingress_skb, skb)
>      schedule_work(work);
>
>
> The work queue is wired up to sk_psock_backlog(). This will then walk the
> ingress_skb skb list that holds our sk_buffs that could not be handled,
> but should be OK to run at some later point. However, its possible that
> the workqueue doing this work still hits an error when sending the skb.
> When this happens the skbuff is requeued on a temporary 'state' struct
> kept with the workqueue. This is necessary because its possible to
> partially send an skbuff before hitting an error and we need to know how
> and where to restart when the workqueue runs next.
>
> Now for the trouble, we don't rekick the workqueue. This can cause a
> stall where the skbuff we just cached on the state variable might never
> be sent. This happens when its the last packet in a flow and no further
> packets come along that would cause the system to kick the workqueue from
> that side.
>
> To fix we could do a simple schedule_work(), but while under memory
> pressure it makes sense to back off some instead of continuing to retry
> repeatedly. So instead, convert schedule_work to schedule_delayed_work
> and add backoff logic to reschedule from the backlog queue on errors.
> It's not obvious what a good backoff is, so use '1'.
>
> To test, we observed some flakes while running the NGINX compliance
> test with sockmap; we attributed these failed tests to this bug and the
> subsequent issue.
>
> Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
> Tested-by: William Findlay <will@isovalent.com>
> Signed-off-by: John Fastabend <john.fastabend@gmail.com>
> ---
>  include/linux/skmsg.h |  2 +-
>  net/core/skmsg.c      | 19 ++++++++++++-------
>  net/core/sock_map.c   |  3 ++-
>  3 files changed, 15 insertions(+), 9 deletions(-)
>
> diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
> index 84f787416a54..904ff9a32ad6 100644
> --- a/include/linux/skmsg.h
> +++ b/include/linux/skmsg.h
> @@ -105,7 +105,7 @@ struct sk_psock {
>  	struct proto			*sk_proto;
>  	struct mutex			work_mutex;
>  	struct sk_psock_work_state	work_state;
> -	struct work_struct		work;
> +	struct delayed_work		work;
>  	struct rcu_work			rwork;
>  };
>  
> diff --git a/net/core/skmsg.c b/net/core/skmsg.c
> index 2b6d9519ff29..96a6a3a74a67 100644
> --- a/net/core/skmsg.c
> +++ b/net/core/skmsg.c
> @@ -481,7 +481,7 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
>  	}
>  out:
>  	if (psock->work_state.skb && copied > 0)
> -		schedule_work(&psock->work);
> +		schedule_delayed_work(&psock->work, 0);
>  	return copied;
>  }
>  EXPORT_SYMBOL_GPL(sk_msg_recvmsg);
> @@ -639,7 +639,8 @@ static void sk_psock_skb_state(struct sk_psock *psock,
>  
>  static void sk_psock_backlog(struct work_struct *work)
>  {
> -	struct sk_psock *psock = container_of(work, struct sk_psock, work);
> +	struct delayed_work *dwork = to_delayed_work(work);
> +	struct sk_psock *psock = container_of(dwork, struct sk_psock, work);
>  	struct sk_psock_work_state *state = &psock->work_state;
>  	struct sk_buff *skb = NULL;
>  	bool ingress;
> @@ -679,6 +680,10 @@ static void sk_psock_backlog(struct work_struct *work)
>  				if (ret == -EAGAIN) {
>  					sk_psock_skb_state(psock, state, skb,
>  							   len, off);
> +
> +					// Delay slightly to prioritize any
> +					// other work that might be here.
> +					schedule_delayed_work(&psock->work, 1);

Do IIUC that this means we can back out changes from commit bec217197b41
("skmsg: Schedule psock work if the cached skb exists on the psock")?

Nit: Comment syntax.

>  					goto end;
>  				}
>  				/* Hard errors break pipe and stop xmit. */
> @@ -733,7 +738,7 @@ struct sk_psock *sk_psock_init(struct sock *sk, int node)
>  	INIT_LIST_HEAD(&psock->link);
>  	spin_lock_init(&psock->link_lock);
>  
> -	INIT_WORK(&psock->work, sk_psock_backlog);
> +	INIT_DELAYED_WORK(&psock->work, sk_psock_backlog);
>  	mutex_init(&psock->work_mutex);
>  	INIT_LIST_HEAD(&psock->ingress_msg);
>  	spin_lock_init(&psock->ingress_lock);
> @@ -822,7 +827,7 @@ static void sk_psock_destroy(struct work_struct *work)
>  
>  	sk_psock_done_strp(psock);
>  
> -	cancel_work_sync(&psock->work);
> +	cancel_delayed_work_sync(&psock->work);
>  	mutex_destroy(&psock->work_mutex);
>  
>  	psock_progs_drop(&psock->progs);
> @@ -937,7 +942,7 @@ static int sk_psock_skb_redirect(struct sk_psock *from, struct sk_buff *skb)
>  	}
>  
>  	skb_queue_tail(&psock_other->ingress_skb, skb);
> -	schedule_work(&psock_other->work);
> +	schedule_delayed_work(&psock_other->work, 0);
>  	spin_unlock_bh(&psock_other->ingress_lock);
>  	return 0;
>  }
> @@ -1017,7 +1022,7 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
>  			spin_lock_bh(&psock->ingress_lock);
>  			if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
>  				skb_queue_tail(&psock->ingress_skb, skb);
> -				schedule_work(&psock->work);
> +				schedule_delayed_work(&psock->work, 0);
>  				err = 0;
>  			}
>  			spin_unlock_bh(&psock->ingress_lock);
> @@ -1048,7 +1053,7 @@ static void sk_psock_write_space(struct sock *sk)
>  	psock = sk_psock(sk);
>  	if (likely(psock)) {
>  		if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
> -			schedule_work(&psock->work);
> +			schedule_delayed_work(&psock->work, 0);
>  		write_space = psock->saved_write_space;
>  	}
>  	rcu_read_unlock();
> diff --git a/net/core/sock_map.c b/net/core/sock_map.c
> index a68a7290a3b2..d38267201892 100644
> --- a/net/core/sock_map.c
> +++ b/net/core/sock_map.c
> @@ -1624,9 +1624,10 @@ void sock_map_close(struct sock *sk, long timeout)
>  		rcu_read_unlock();
>  		sk_psock_stop(psock);
>  		release_sock(sk);
> -		cancel_work_sync(&psock->work);
> +		cancel_delayed_work_sync(&psock->work);
>  		sk_psock_put(sk, psock);
>  	}
> +
>  	/* Make sure we do not recurse. This is a bug.
>  	 * Leak the socket instead of crashing on a stack overflow.
>  	 */


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH bpf v2 02/12] bpf: sockmap, convert schedule_work into delayed_work
  2023-03-28 12:09   ` Jakub Sitnicki
@ 2023-03-28 21:56     ` John Fastabend
  2023-03-29 11:09       ` Jakub Sitnicki
  0 siblings, 1 reply; 24+ messages in thread
From: John Fastabend @ 2023-03-28 21:56 UTC (permalink / raw)
  To: Jakub Sitnicki, John Fastabend
  Cc: cong.wang, daniel, lmb, edumazet, bpf, netdev, ast, andrii, will

Jakub Sitnicki wrote:
> On Mon, Mar 27, 2023 at 10:54 AM -07, John Fastabend wrote:
> > Sk_buffs are fed into sockmap verdict programs either from a strparser
> > (when the user might want to decide how framing of skb is done by attaching
> > another parser program) or directly through tcp_read_sock. The
> > tcp_read_sock is the preferred method for performance when the BPF logic is
> > a stream parser.
> >
> > The flow for Cilium's common use case with a stream parser is,
> >
> >  tcp_read_sock()
> >   sk_psock_verdict_recv
> >     ret = bpf_prog_run_pin_on_cpu()
> >     sk_psock_verdict_apply(sock, skb, ret)
> >      // if system is under memory pressure or app is slow we may
> >      // need to queue skb. Do this queuing through ingress_skb and
> >      // then kick timer to wake up handler
> >      skb_queue_tail(ingress_skb, skb)
> >      schedule_work(work);
> >
> >
> > The work queue is wired up to sk_psock_backlog(). This will then walk the
> > ingress_skb skb list that holds our sk_buffs that could not be handled,
> > but should be OK to run at some later point. However, its possible that
> > the workqueue doing this work still hits an error when sending the skb.
> > When this happens the skbuff is requeued on a temporary 'state' struct
> > kept with the workqueue. This is necessary because its possible to
> > partially send an skbuff before hitting an error and we need to know how
> > and where to restart when the workqueue runs next.
> >
> > Now for the trouble, we don't rekick the workqueue. This can cause a
> > stall where the skbuff we just cached on the state variable might never
> > be sent. This happens when its the last packet in a flow and no further
> > packets come along that would cause the system to kick the workqueue from
> > that side.
> >
> > To fix we could do a simple schedule_work(), but while under memory
> > pressure it makes sense to back off some instead of continuing to retry
> > repeatedly. So instead, convert schedule_work to schedule_delayed_work
> > and add backoff logic to reschedule from the backlog queue on errors.
> > It's not obvious what a good backoff is, so use '1'.
> >
> > To test, we observed some flakes while running the NGINX compliance
> > test with sockmap; we attributed these failed tests to this bug and the
> > subsequent issue.
> >
> > Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
> > Tested-by: William Findlay <will@isovalent.com>
> > Signed-off-by: John Fastabend <john.fastabend@gmail.com>
> > ---

[...]

> > --- a/net/core/skmsg.c
> > +++ b/net/core/skmsg.c
> > @@ -481,7 +481,7 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
> >  	}
> >  out:
> >  	if (psock->work_state.skb && copied > 0)
> > -		schedule_work(&psock->work);
> > +		schedule_delayed_work(&psock->work, 0);
> >  	return copied;
> >  }
> >  EXPORT_SYMBOL_GPL(sk_msg_recvmsg);
> > @@ -639,7 +639,8 @@ static void sk_psock_skb_state(struct sk_psock *psock,
> >  
> >  static void sk_psock_backlog(struct work_struct *work)
> >  {
> > -	struct sk_psock *psock = container_of(work, struct sk_psock, work);
> > +	struct delayed_work *dwork = to_delayed_work(work);
> > +	struct sk_psock *psock = container_of(dwork, struct sk_psock, work);
> >  	struct sk_psock_work_state *state = &psock->work_state;
> >  	struct sk_buff *skb = NULL;
> >  	bool ingress;
> > @@ -679,6 +680,10 @@ static void sk_psock_backlog(struct work_struct *work)
> >  				if (ret == -EAGAIN) {
> >  					sk_psock_skb_state(psock, state, skb,
> >  							   len, off);
> > +
> > +					// Delay slightly to prioritize any
> > +					// other work that might be here.
> > +					schedule_delayed_work(&psock->work, 1);
> 
> Do IIUC that this means we can back out changes from commit bec217197b41
> ("skmsg: Schedule psock work if the cached skb exists on the psock")?

Yeah, I think so; this is a more direct way to get the same result. I'm
also thinking this check,

       if (psock->work_state.skb && copied > 0)
               schedule_work(&psock->work)

is not correct: copied=0, which could happen on an empty queue, could be
the result of an skb stuck in the backlog from this EAGAIN error.

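Something like this instead, perhaps (untested sketch), so a stuck skb
gets rescheduled even when nothing was copied:

       if (psock->work_state.skb)
               schedule_delayed_work(&psock->work, 0);
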
I think it's OK to revert that patch in a separate patch. And ideally we
could find some way to load up the stack to hit these corner cases
without running long stress tests.

WDYT?

> 
> Nit: Comment syntax.

Yep happy to fix.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH bpf v2 02/12] bpf: sockmap, convert schedule_work into delayed_work
  2023-03-28 21:56     ` John Fastabend
@ 2023-03-29 11:09       ` Jakub Sitnicki
  0 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2023-03-29 11:09 UTC (permalink / raw)
  To: John Fastabend
  Cc: cong.wang, daniel, lmb, edumazet, bpf, netdev, ast, andrii, will

On Tue, Mar 28, 2023 at 02:56 PM -07, John Fastabend wrote:
> Jakub Sitnicki wrote:
>> On Mon, Mar 27, 2023 at 10:54 AM -07, John Fastabend wrote:
>> > Sk_buffs are fed into sockmap verdict programs either from a strparser
>> > (when the user might want to decide how framing of skb is done by attaching
>> > another parser program) or directly through tcp_read_sock. The
>> > tcp_read_sock is the preferred method for performance when the BPF logic is
>> > a stream parser.
>> >
>> > The flow for Cilium's common use case with a stream parser is,
>> >
>> >  tcp_read_sock()
>> >   sk_psock_verdict_recv
>> >     ret = bpf_prog_run_pin_on_cpu()
>> >     sk_psock_verdict_apply(sock, skb, ret)
>> >      // if system is under memory pressure or app is slow we may
>> >      // need to queue skb. Do this queuing through ingress_skb and
>> >      // then kick timer to wake up handler
>> >      skb_queue_tail(ingress_skb, skb)
>> >      schedule_work(work);
>> >
>> >
>> > The work queue is wired up to sk_psock_backlog(). This will then walk the
>> > ingress_skb skb list that holds our sk_buffs that could not be handled,
>> > but should be OK to run at some later point. However, its possible that
>> > the workqueue doing this work still hits an error when sending the skb.
>> > When this happens the skbuff is requeued on a temporary 'state' struct
>> > kept with the workqueue. This is necessary because its possible to
>> > partially send an skbuff before hitting an error and we need to know how
>> > and where to restart when the workqueue runs next.
>> >
>> > Now for the trouble, we don't rekick the workqueue. This can cause a
>> > stall where the skbuff we just cached on the state variable might never
>> > be sent. This happens when its the last packet in a flow and no further
>> > packets come along that would cause the system to kick the workqueue from
>> > that side.
>> >
>> > To fix we could do a simple schedule_work(), but while under memory
>> > pressure it makes sense to back off some instead of continuing to retry
>> > repeatedly. So instead, convert schedule_work to schedule_delayed_work
>> > and add backoff logic to reschedule from the backlog queue on errors.
>> > It's not obvious what a good backoff is, so use '1'.
>> >
>> > To test, we observed some flakes while running the NGINX compliance
>> > test with sockmap; we attributed these failed tests to this bug and the
>> > subsequent issue.
>> >
>> > Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
>> > Tested-by: William Findlay <will@isovalent.com>
>> > Signed-off-by: John Fastabend <john.fastabend@gmail.com>
>> > ---
>
> [...]
>
>> > --- a/net/core/skmsg.c
>> > +++ b/net/core/skmsg.c
>> > @@ -481,7 +481,7 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
>> >  	}
>> >  out:
>> >  	if (psock->work_state.skb && copied > 0)
>> > -		schedule_work(&psock->work);
>> > +		schedule_delayed_work(&psock->work, 0);
>> >  	return copied;
>> >  }
>> >  EXPORT_SYMBOL_GPL(sk_msg_recvmsg);
>> > @@ -639,7 +639,8 @@ static void sk_psock_skb_state(struct sk_psock *psock,
>> >  
>> >  static void sk_psock_backlog(struct work_struct *work)
>> >  {
>> > -	struct sk_psock *psock = container_of(work, struct sk_psock, work);
>> > +	struct delayed_work *dwork = to_delayed_work(work);
>> > +	struct sk_psock *psock = container_of(dwork, struct sk_psock, work);
>> >  	struct sk_psock_work_state *state = &psock->work_state;
>> >  	struct sk_buff *skb = NULL;
>> >  	bool ingress;
>> > @@ -679,6 +680,10 @@ static void sk_psock_backlog(struct work_struct *work)
>> >  				if (ret == -EAGAIN) {
>> >  					sk_psock_skb_state(psock, state, skb,
>> >  							   len, off);
>> > +
>> > +					// Delay slightly to prioritize any
>> > +					// other work that might be here.
>> > +					schedule_delayed_work(&psock->work, 1);
>> 
>> Do IIUC that this means we can back out changes from commit bec217197b41
>> ("skmsg: Schedule psock work if the cached skb exists on the psock")?
>
> Yeah I think so this is a more direct way to get the same result. I'm also
> thinking this check,
>
>        if (psock->work_state.skb && copied > 0)
>                schedule_work(&psock->work)
>
> is not correct copied=0 which could happen on empty queue could be the
> result of a skb stuck from this eagain error in backlog.

I suspect the 'copied > 0' check is there to handle the 0-length read
scenario. But I think you're right that the empty queue scenario is not
being handled properly.

> I think its OK to revert that patch in a separate patch. And ideally we
> could get some way to load up the stack to hit these corner cases without
> running long stress tests.
>
> WDYT?

Yeah, the revert can wait. I was just curious if my thinking was
right. There is plenty of material in this series as is :-)

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH bpf v2 03/12] bpf: sockmap, improved check for empty queue
  2023-03-27 17:54 ` [PATCH bpf v2 03/12] bpf: sockmap, improved check for empty queue John Fastabend
@ 2023-03-29 12:24   ` Jakub Sitnicki
  2023-04-01  0:59     ` John Fastabend
  0 siblings, 1 reply; 24+ messages in thread
From: Jakub Sitnicki @ 2023-03-29 12:24 UTC (permalink / raw)
  To: John Fastabend
  Cc: cong.wang, daniel, lmb, edumazet, bpf, netdev, ast, andrii, will

On Mon, Mar 27, 2023 at 10:54 AM -07, John Fastabend wrote:
> We noticed some rare sk_buffs were stepping past the queue when system was
> under memory pressure. The general theory is to skip enqueueing
> sk_buffs when its not necessary which is the normal case with a system
> that is properly provisioned for the task, no memory pressure and enough
> cpu assigned.
>
> But, if we can't allocate memory due to an ENOMEM error when enqueueing
> the sk_buff into the sockmap receive queue we push it onto a delayed
> workqueue to retry later. When a new sk_buff is received we then check
> if that queue is empty. However, there is a problem with simply checking
> the queue length. When a sk_buff is being processed from the ingress queue
> but not yet on the sockmap msg receive queue its possible to also recv
> a sk_buff through normal path. It will check the ingress queue which is
> zero and then skip ahead of the pkt being processed.
>
> Previously we used sock lock from both contexts which made the problem
> harder to hit, but not impossible.
>
> To fix also check the 'state' variable where we would cache partially
> processed sk_buff. This catches the majority of cases. But, we also
> need to use the mutex lock around this check because we can't have both
> codes running and check sensibly. We could perhaps do this with atomic
> bit checks, but we are already here due to memory pressure so slowing
> things down a bit seems OK and simpler to just grab a lock.
>
> To reproduce issue we run NGINX compliance test with sockmap running and
> observe some flakes in our testing that we attributed to this issue.
>
> Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
> Tested-by: William Findlay <will@isovalent.com>
> Signed-off-by: John Fastabend <john.fastabend@gmail.com>
> ---

I've got an idea to try, but it'd be a bigger change.

skb_dequeue is lock, skb_peek, skb_unlink, unlock, right?

What if we split up the skb_dequeue in sk_psock_backlog to publish the
change to the ingress_skb queue only once an skb has been processed?

static void sk_psock_backlog(struct work_struct *work)
{
        ...
        while ((skb = skb_peek_locked(&psock->ingress_skb))) {
                ...
                skb_unlink(skb, &psock->ingress_skb);
        }
        ...
}

Even more - if we hold off the unlinking until an skb has been fully
processed, that perhaps opens up the way to get rid of keeping state in
sk_psock_work_state. We could just skb_pull the processed data instead.
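
Fleshed out a little it might look like this (still just a sketch of the
idea, assuming the work_mutex keeps backlog runs serialized as it does
today):

static void sk_psock_backlog(struct work_struct *work)
{
        ...
        while ((skb = skb_peek(&psock->ingress_skb))) {
                ... /* try to send; on -EAGAIN, skb_pull() the bytes
                     * already consumed and reschedule. The skb stays
                     * linked, so the queue never looks empty while a
                     * partially sent skb is in flight. */
                skb_unlink(skb, &psock->ingress_skb);
                kfree_skb(skb);
        }
        ...
}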

It's just an idea and I don't want to block a tested fix that LGTM so
consider this:

Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH bpf v2 03/12] bpf: sockmap, improved check for empty queue
  2023-03-29 12:24   ` Jakub Sitnicki
@ 2023-04-01  0:59     ` John Fastabend
  2023-04-03  8:42       ` Jakub Sitnicki
  0 siblings, 1 reply; 24+ messages in thread
From: John Fastabend @ 2023-04-01  0:59 UTC (permalink / raw)
  To: Jakub Sitnicki, John Fastabend
  Cc: cong.wang, daniel, lmb, edumazet, bpf, netdev, ast, andrii, will

Jakub Sitnicki wrote:
> On Mon, Mar 27, 2023 at 10:54 AM -07, John Fastabend wrote:
> > We noticed some rare sk_buffs were stepping past the queue when system was
> > under memory pressure. The general theory is to skip enqueueing
> > sk_buffs when its not necessary which is the normal case with a system
> > that is properly provisioned for the task, no memory pressure and enough
> > cpu assigned.
> >
> > But, if we can't allocate memory due to an ENOMEM error when enqueueing
> > the sk_buff into the sockmap receive queue we push it onto a delayed
> > workqueue to retry later. When a new sk_buff is received we then check
> > if that queue is empty. However, there is a problem with simply checking
> > the queue length. When a sk_buff is being processed from the ingress queue
> > but not yet on the sockmap msg receive queue its possible to also recv
> > a sk_buff through normal path. It will check the ingress queue which is
> > zero and then skip ahead of the pkt being processed.
> >
> > Previously we used sock lock from both contexts which made the problem
> > harder to hit, but not impossible.
> >
> > To fix also check the 'state' variable where we would cache partially
> > processed sk_buff. This catches the majority of cases. But, we also
> > need to use the mutex lock around this check because we can't have both
> > codes running and check sensibly. We could perhaps do this with atomic
> > bit checks, but we are already here due to memory pressure so slowing
> > things down a bit seems OK and simpler to just grab a lock.
> >
> > To reproduce issue we run NGINX compliance test with sockmap running and
> > observe some flakes in our testing that we attributed to this issue.
> >
> > Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
> > Tested-by: William Findlay <will@isovalent.com>
> > Signed-off-by: John Fastabend <john.fastabend@gmail.com>
> > ---
> 
> I've got an idea to try, but it'd be a bigger change.
> 
> skb_dequeue is lock, skb_peek, skb_unlink, unlock, right?
> 
> What if we split up the skb_dequeue in sk_psock_backlog to publish the
> change to the ingress_skb queue only once an skb has been processed?

I think this works now. Early on we tried to run the backlog without
locking, but given it is now serialized by the mutex, I think it works
out.

We have a few places that work on ingress_skb, but those all enqueue at
the tail, so it should be fine to peek here.

> 
> static void sk_psock_backlog(struct work_struct *work)
> {
>         ...
>         while ((skb = skb_peek_locked(&psock->ingress_skb))) {
>                 ...
>                 skb_unlink(skb, &psock->ingress_skb);
>         }
>         ...
> }
> 
> Even more - if we hold off the unlinking until an skb has been fully
> processed, that perhaps opens up the way to get rid of keeping state in
> sk_psock_work_state. We could just skb_pull the processed data instead.

Yep.

> 
> It's just an idea and I don't want to block a tested fix that LGTM so
> consider this:

Did you get a chance to try this? Otherwise I can also give the idea
a try next week.

> 
> Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH bpf v2 03/12] bpf: sockmap, improved check for empty queue
  2023-04-01  0:59     ` John Fastabend
@ 2023-04-03  8:42       ` Jakub Sitnicki
  0 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2023-04-03  8:42 UTC (permalink / raw)
  To: John Fastabend
  Cc: cong.wang, daniel, lmb, edumazet, bpf, netdev, ast, andrii, will

On Fri, Mar 31, 2023 at 05:59 PM -07, John Fastabend wrote:
> Jakub Sitnicki wrote:

[...]

>> It's just an idea and I don't want to block a tested fix that LGTM so
>> consider this:
>
> Did you get a chance to try this? Otherwise I can also give the idea
> a try next week.

No. By all means, feel free. You caught me at a busy time. That's also
why the review has been going so slowly.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH bpf v2 04/12] bpf: sockmap, handle fin correctly
  2023-03-27 17:54 ` [PATCH bpf v2 04/12] bpf: sockmap, handle fin correctly John Fastabend
@ 2023-04-03 11:11   ` Jakub Sitnicki
  2023-04-03 21:05     ` John Fastabend
  0 siblings, 1 reply; 24+ messages in thread
From: Jakub Sitnicki @ 2023-04-03 11:11 UTC (permalink / raw)
  To: John Fastabend
  Cc: cong.wang, daniel, lmb, edumazet, bpf, netdev, ast, andrii, will

On Mon, Mar 27, 2023 at 10:54 AM -07, John Fastabend wrote:
> The sockmap code is returning EAGAIN after a FIN packet is received and
> no more data is on the receive queue. The correct behavior is to return
> 0 to the user, who can then close the socket. The EAGAIN causes many
> apps to retry, which masks the problem. Eventually the socket is evicted
> from the sockmap because it is released from the sockmap sock free
> handling. The issue creates a delay and can cause some errors on the
> application side.
>
> To fix this, check on the sk_msg_recvmsg side: if the length is zero and
> the FIN flag is set, then set the return value to zero. A selftest will
> be added to check this condition.
>
> Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
> Tested-by: William Findlay <will@isovalent.com>
> Signed-off-by: John Fastabend <john.fastabend@gmail.com>
> ---
>  net/ipv4/tcp_bpf.c | 31 +++++++++++++++++++++++++++++++
>  1 file changed, 31 insertions(+)
>
> diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
> index cf26d65ca389..3a0f43f3afd8 100644
> --- a/net/ipv4/tcp_bpf.c
> +++ b/net/ipv4/tcp_bpf.c

[...]

> @@ -193,6 +211,19 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
>  	lock_sock(sk);
>  msg_bytes_ready:
>  	copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
> +	/* The typical case for EFAULT is that the socket was gracefully
> +	 * shut down with a FIN pkt. The other case checked here is some
> +	 * error on copy_page_to_iter, which would be unexpected. On FIN,
> +	 * correct the return code to zero.
> +	 */
> +	if (copied == -EFAULT) {
> +		bool is_fin = is_next_msg_fin(psock);
> +
> +		if (is_fin) {
> +			copied = 0;
> +			goto out;
> +		}
> +	}
>  	if (!copied) {
>  		long timeo;
>  		int data;

tcp_bpf_recvmsg needs a similar fix, no?
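
For context, the is_next_msg_fin() helper elided above peeks at the
next queued msg and reports whether it is an empty FIN. A rough
reconstruction (details assumed; see the full patch):

static bool is_next_msg_fin(struct sk_psock *psock)
{
	struct sk_msg *msg_rx = sk_psock_peek_msg(psock);
	struct scatterlist *sge;

	if (!msg_rx)
		return false;
	sge = sk_msg_elem(msg_rx, msg_rx->sg.start);
	if (!sge->length) {
		struct sk_buff *skb = msg_rx->skb;

		/* An empty msg whose skb carries TCPHDR_FIN indicates a
		 * graceful shutdown rather than a copy error.
		 */
		if (skb && TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
			return true;
	}
	return false;
}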

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH bpf v2 04/12] bpf: sockmap, handle fin correctly
  2023-04-03 11:11   ` Jakub Sitnicki
@ 2023-04-03 21:05     ` John Fastabend
  2023-04-04 10:11       ` Jakub Sitnicki
  0 siblings, 1 reply; 24+ messages in thread
From: John Fastabend @ 2023-04-03 21:05 UTC (permalink / raw)
  To: Jakub Sitnicki, John Fastabend
  Cc: cong.wang, daniel, lmb, edumazet, bpf, netdev, ast, andrii, will

Jakub Sitnicki wrote:
> On Mon, Mar 27, 2023 at 10:54 AM -07, John Fastabend wrote:
> > The sockmap code is returning EAGAIN after a FIN packet is received and
> > no more data is on the receive queue. The correct behavior is to return
> > 0 to the user, who can then close the socket. The EAGAIN causes many
> > apps to retry, which masks the problem. Eventually the socket is evicted
> > from the sockmap because it is released from the sockmap sock free
> > handling. The issue creates a delay and can cause some errors on the
> > application side.
> >
> > To fix this, check on the sk_msg_recvmsg side: if the length is zero and
> > the FIN flag is set, then set the return value to zero. A selftest will
> > be added to check this condition.
> >
> > Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
> > Tested-by: William Findlay <will@isovalent.com>
> > Signed-off-by: John Fastabend <john.fastabend@gmail.com>
> > ---
> >  net/ipv4/tcp_bpf.c | 31 +++++++++++++++++++++++++++++++
> >  1 file changed, 31 insertions(+)
> >
> > diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
> > index cf26d65ca389..3a0f43f3afd8 100644
> > --- a/net/ipv4/tcp_bpf.c
> > +++ b/net/ipv4/tcp_bpf.c
> 
> [...]
> 
> > @@ -193,6 +211,19 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
> >  	lock_sock(sk);
> >  msg_bytes_ready:
> >  	copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
> > +	/* The typical case for EFAULT is that the socket was gracefully
> > +	 * shut down with a FIN pkt. The other case checked here is some
> > +	 * error on copy_page_to_iter, which would be unexpected. On FIN,
> > +	 * correct the return code to zero.
> > +	 */
> > +	if (copied == -EFAULT) {
> > +		bool is_fin = is_next_msg_fin(psock);
> > +
> > +		if (is_fin) {
> > +			copied = 0;
> > +			goto out;
> > +		}
> > +	}
> >  	if (!copied) {
> >  		long timeo;
> >  		int data;
> 
> tcp_bpf_recvmsg needs a similar fix, no?

Yes, I had lumped it in with the follow-up fixes needed for the
stream parser case, but you're right, it's not related.

Mind if I do it in a follow-up? Or if I need to do a v4 I'll
roll it in there.
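
The change there would mirror the hunk above, something like this
untested sketch (not part of this series):

	copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
	/* Same idea as in tcp_bpf_recvmsg_parser(): an -EFAULT caused
	 * by an empty FIN msg means EOF, not a copy error.
	 */
	if (copied == -EFAULT && is_next_msg_fin(psock)) {
		copied = 0;
		goto out;
	}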

Thanks!
John

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH bpf v2 04/12] bpf: sockmap, handle fin correctly
  2023-04-03 21:05     ` John Fastabend
@ 2023-04-04 10:11       ` Jakub Sitnicki
  0 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2023-04-04 10:11 UTC (permalink / raw)
  To: John Fastabend
  Cc: cong.wang, daniel, lmb, edumazet, bpf, netdev, ast, andrii, will

On Mon, Apr 03, 2023 at 02:05 PM -07, John Fastabend wrote:
> Jakub Sitnicki wrote:

[...]

>> tcp_bpf_recvmsg needs a similar fix, no?
>
> Yes, I had lumped it in with the follow-up fixes needed for the
> stream parser case, but you're right, it's not related.
>
> Mind if I do it in a follow-up? Or if I need to do a v4 I'll
> roll it in there.

That works. Totally your call.

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2023-04-04 10:13 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-03-27 17:54 [PATCH bpf v2 00/11] bpf sockmap fixes John Fastabend
2023-03-27 17:54 ` [PATCH bpf v2 01/12] bpf: sockmap, pass skb ownership through read_skb John Fastabend
2023-03-28 10:42   ` Jakub Sitnicki
2023-03-27 17:54 ` [PATCH bpf v2 02/12] bpf: sockmap, convert schedule_work into delayed_work John Fastabend
2023-03-28 12:09   ` Jakub Sitnicki
2023-03-28 21:56     ` John Fastabend
2023-03-29 11:09       ` Jakub Sitnicki
2023-03-27 17:54 ` [PATCH bpf v2 03/12] bpf: sockmap, improved check for empty queue John Fastabend
2023-03-29 12:24   ` Jakub Sitnicki
2023-04-01  0:59     ` John Fastabend
2023-04-03  8:42       ` Jakub Sitnicki
2023-03-27 17:54 ` [PATCH bpf v2 04/12] bpf: sockmap, handle fin correctly John Fastabend
2023-04-03 11:11   ` Jakub Sitnicki
2023-04-03 21:05     ` John Fastabend
2023-04-04 10:11       ` Jakub Sitnicki
2023-03-27 17:54 ` [PATCH bpf v2 05/12] bpf: sockmap, TCP data stall on recv before accept John Fastabend
2023-03-27 17:54 ` [PATCH bpf v2 06/12] bpf: sockmap, wake up polling after data copy John Fastabend
2023-03-27 17:54 ` [PATCH bpf v2 07/12] bpf: sockmap incorrectly handling copied_seq John Fastabend
2023-03-27 17:54 ` [PATCH bpf v2 08/12] bpf: sockmap, pull socket helpers out of listen test for general use John Fastabend
2023-03-27 17:54 ` [PATCH bpf v2 09/12] bpf: sockmap, build helper to create connected socket pair John Fastabend
2023-03-27 17:54 ` [PATCH bpf v2 10/12] bpf: sockmap, test shutdown() correctly exits epoll and recv()=0 John Fastabend
2023-03-27 17:54 ` [PATCH bpf v2 11/12] bpf: sockmap, test FIONREAD returns correct bytes in rx buffer John Fastabend
2023-03-27 17:54 ` [PATCH bpf v2 12/12] bpf: sockmap, test FIONREAD returns correct bytes in rx buffer with drops John Fastabend
2023-03-27 18:17 ` [PATCH bpf v2 00/11] bpf sockmap fixes John Fastabend
