netdev.vger.kernel.org archive mirror
* [PATCH net-next 0/4] tls: rx: follow ups to rx work
@ 2022-07-27  3:15 Jakub Kicinski
  2022-07-27  3:15 ` [PATCH net-next 1/4] selftests: tls: handful of memrnd() and length checks Jakub Kicinski
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Jakub Kicinski @ 2022-07-27  3:15 UTC (permalink / raw)
  To: davem
  Cc: netdev, edumazet, pabeni, borisp, john.fastabend, maximmi,
	tariqt, vfedorenko, Jakub Kicinski

A selection of unrelated changes. First some selftest polishing.
Next a change to rcvtimeo handling for locking, based on an exchange
with Eric. Then a follow-up to Paolo's comments from yesterday. Last
but not least a fix for a false-positive warning; turns out I've been
testing with DEBUG_NET=n this whole time.

Jakub Kicinski (4):
  selftests: tls: handful of memrnd() and length checks
  tls: rx: don't consider sock_rcvtimeo() cumulative
  tls: strp: rename and multithread the workqueue
  tls: rx: fix the false positive warning

 net/tls/tls_strp.c                |  2 +-
 net/tls/tls_sw.c                  | 39 ++++++++++++++++---------------
 tools/testing/selftests/net/tls.c | 26 ++++++++++++++-------
 3 files changed, 38 insertions(+), 29 deletions(-)

-- 
2.37.1



* [PATCH net-next 1/4] selftests: tls: handful of memrnd() and length checks
  2022-07-27  3:15 [PATCH net-next 0/4] tls: rx: follow ups to rx work Jakub Kicinski
@ 2022-07-27  3:15 ` Jakub Kicinski
  2022-07-27  3:15 ` [PATCH net-next 2/4] tls: rx: don't consider sock_rcvtimeo() cumulative Jakub Kicinski
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Jakub Kicinski @ 2022-07-27  3:15 UTC (permalink / raw)
  To: davem
  Cc: netdev, edumazet, pabeni, borisp, john.fastabend, maximmi,
	tariqt, vfedorenko, Jakub Kicinski, shuah, linux-kselftest

Add a handful of memory randomizations and precise length checks.
Nothing is really broken here; I did this to increase confidence
when debugging. It does fix a GCC warning, though. Apparently GCC
recognizes that memory needs to be initialized for send() but
does not recognize that for write().
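
For reference, memrnd() is a small helper in tls.c that fills the
buffer with pseudo-random bytes, so the memcmp() checks cannot pass by
accident on zero-initialized memory. A minimal sketch of such a helper
(the real one in the selftest may differ in detail):

  static void memrnd(void *s, size_t n)
  {
          int *dword = s;
          char *byte;

          /* fill four bytes at a time, then finish byte by byte */
          for (; n >= 4; n -= 4)
                  *dword++ = rand();
          byte = (void *)dword;
          while (n--)
                  *byte++ = rand();
  }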

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
CC: shuah@kernel.org
CC: linux-kselftest@vger.kernel.org
---
 tools/testing/selftests/net/tls.c | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/tools/testing/selftests/net/tls.c b/tools/testing/selftests/net/tls.c
index 4ecbac197c46..2cbb12736596 100644
--- a/tools/testing/selftests/net/tls.c
+++ b/tools/testing/selftests/net/tls.c
@@ -644,12 +644,14 @@ TEST_F(tls, splice_from_pipe2)
 	int p2[2];
 	int p[2];
 
+	memrnd(mem_send, sizeof(mem_send));
+
 	ASSERT_GE(pipe(p), 0);
 	ASSERT_GE(pipe(p2), 0);
-	EXPECT_GE(write(p[1], mem_send, 8000), 0);
-	EXPECT_GE(splice(p[0], NULL, self->fd, NULL, 8000, 0), 0);
-	EXPECT_GE(write(p2[1], mem_send + 8000, 8000), 0);
-	EXPECT_GE(splice(p2[0], NULL, self->fd, NULL, 8000, 0), 0);
+	EXPECT_EQ(write(p[1], mem_send, 8000), 8000);
+	EXPECT_EQ(splice(p[0], NULL, self->fd, NULL, 8000, 0), 8000);
+	EXPECT_EQ(write(p2[1], mem_send + 8000, 8000), 8000);
+	EXPECT_EQ(splice(p2[0], NULL, self->fd, NULL, 8000, 0), 8000);
 	EXPECT_EQ(recv(self->cfd, mem_recv, send_len, MSG_WAITALL), send_len);
 	EXPECT_EQ(memcmp(mem_send, mem_recv, send_len), 0);
 }
@@ -683,10 +685,12 @@ TEST_F(tls, splice_to_pipe)
 	char mem_recv[TLS_PAYLOAD_MAX_LEN];
 	int p[2];
 
+	memrnd(mem_send, sizeof(mem_send));
+
 	ASSERT_GE(pipe(p), 0);
-	EXPECT_GE(send(self->fd, mem_send, send_len, 0), 0);
-	EXPECT_GE(splice(self->cfd, NULL, p[1], NULL, send_len, 0), 0);
-	EXPECT_GE(read(p[0], mem_recv, send_len), 0);
+	EXPECT_EQ(send(self->fd, mem_send, send_len, 0), send_len);
+	EXPECT_EQ(splice(self->cfd, NULL, p[1], NULL, send_len, 0), send_len);
+	EXPECT_EQ(read(p[0], mem_recv, send_len), send_len);
 	EXPECT_EQ(memcmp(mem_send, mem_recv, send_len), 0);
 }
 
@@ -875,6 +879,8 @@ TEST_F(tls, multiple_send_single_recv)
 	char recv_mem[2 * 10];
 	char send_mem[10];
 
+	memrnd(send_mem, sizeof(send_mem));
+
 	EXPECT_GE(send(self->fd, send_mem, send_len, 0), 0);
 	EXPECT_GE(send(self->fd, send_mem, send_len, 0), 0);
 	memset(recv_mem, 0, total_len);
@@ -891,6 +897,8 @@ TEST_F(tls, single_send_multiple_recv_non_align)
 	char recv_mem[recv_len * 2];
 	char send_mem[total_len];
 
+	memrnd(send_mem, sizeof(send_mem));
+
 	EXPECT_GE(send(self->fd, send_mem, total_len, 0), 0);
 	memset(recv_mem, 0, total_len);
 
@@ -936,10 +944,10 @@ TEST_F(tls, recv_peek)
 	char buf[15];
 
 	EXPECT_EQ(send(self->fd, test_str, send_len, 0), send_len);
-	EXPECT_NE(recv(self->cfd, buf, send_len, MSG_PEEK), -1);
+	EXPECT_EQ(recv(self->cfd, buf, send_len, MSG_PEEK), send_len);
 	EXPECT_EQ(memcmp(test_str, buf, send_len), 0);
 	memset(buf, 0, sizeof(buf));
-	EXPECT_NE(recv(self->cfd, buf, send_len, 0), -1);
+	EXPECT_EQ(recv(self->cfd, buf, send_len, 0), send_len);
 	EXPECT_EQ(memcmp(test_str, buf, send_len), 0);
 }
 
-- 
2.37.1



* [PATCH net-next 2/4] tls: rx: don't consider sock_rcvtimeo() cumulative
  2022-07-27  3:15 [PATCH net-next 0/4] tls: rx: follow ups to rx work Jakub Kicinski
  2022-07-27  3:15 ` [PATCH net-next 1/4] selftests: tls: handful of memrnd() and length checks Jakub Kicinski
@ 2022-07-27  3:15 ` Jakub Kicinski
  2022-07-28 13:50   ` Paolo Abeni
  2022-07-27  3:15 ` [PATCH net-next 3/4] tls: strp: rename and multithread the workqueue Jakub Kicinski
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: Jakub Kicinski @ 2022-07-27  3:15 UTC (permalink / raw)
  To: davem
  Cc: netdev, edumazet, pabeni, borisp, john.fastabend, maximmi,
	tariqt, vfedorenko, Jakub Kicinski

Eric indicates that restarting rcvtimeo on every wait may be fine.
I thought that we should consider it cumulative, and made
tls_rx_reader_lock() return the remaining timeo after acquiring
the reader lock.

tls_rx_rec_wait() gets its timeout passed in by value so it
does not keep track of time previously spent.

Make the lock waiting consistent with tls_rx_rec_wait() - don't
keep track of time spent.

Read the timeo fresh in tls_rx_rec_wait().
It's unclear to me why callers are supposed to cache the value.
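
To make the two models concrete, here is a minimal sketch; the wait
helpers below are hypothetical stand-ins for tls_rx_reader_lock() and
tls_rx_rec_wait():

  /* cumulative: one budget threaded through both waits, so time
   * spent waiting for the lock shortens the wait for each record
   */
  timeo = sock_rcvtimeo(sk, nonblock);
  timeo = wait_reader_lock(sk, timeo);  /* returns the remainder */
  err = wait_for_record(sk, timeo);

  /* re-armed (after this patch): each wait reads the full
   * rcvtimeo for itself, no remainder is carried over
   */
  err = wait_reader_lock(sk, sock_rcvtimeo(sk, nonblock));
  err = wait_for_record(sk, sock_rcvtimeo(sk, nonblock));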

Link: https://lore.kernel.org/all/CANn89iKcmSfWgvZjzNGbsrndmCch2HC_EPZ7qmGboDNaWoviNQ@mail.gmail.com/
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
 net/tls/tls_sw.c | 37 +++++++++++++++++++------------------
 1 file changed, 19 insertions(+), 18 deletions(-)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 0fc24a5ce208..8bac7ea2c264 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -1283,11 +1283,14 @@ int tls_sw_sendpage(struct sock *sk, struct page *page,
 
 static int
 tls_rx_rec_wait(struct sock *sk, struct sk_psock *psock, bool nonblock,
-		bool released, long timeo)
+		bool released)
 {
 	struct tls_context *tls_ctx = tls_get_ctx(sk);
 	struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
 	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+	long timeo;
+
+	timeo = sock_rcvtimeo(sk, nonblock);
 
 	while (!tls_strp_msg_ready(ctx)) {
 		if (!sk_psock_queue_empty(psock))
@@ -1308,7 +1311,7 @@ tls_rx_rec_wait(struct sock *sk, struct sk_psock *psock, bool nonblock,
 		if (sock_flag(sk, SOCK_DONE))
 			return 0;
 
-		if (nonblock || !timeo)
+		if (!timeo)
 			return -EAGAIN;
 
 		released = true;
@@ -1842,8 +1845,8 @@ tls_read_flush_backlog(struct sock *sk, struct tls_prot_info *prot,
 	return sk_flush_backlog(sk);
 }
 
-static long tls_rx_reader_lock(struct sock *sk, struct tls_sw_context_rx *ctx,
-			       bool nonblock)
+static int tls_rx_reader_lock(struct sock *sk, struct tls_sw_context_rx *ctx,
+			      bool nonblock)
 {
 	long timeo;
 	int err;
@@ -1874,7 +1877,7 @@ static long tls_rx_reader_lock(struct sock *sk, struct tls_sw_context_rx *ctx,
 
 	WRITE_ONCE(ctx->reader_present, 1);
 
-	return timeo;
+	return 0;
 
 err_unlock:
 	release_sock(sk);
@@ -1913,8 +1916,7 @@ int tls_sw_recvmsg(struct sock *sk,
 	struct tls_msg *tlm;
 	ssize_t copied = 0;
 	bool async = false;
-	int target, err = 0;
-	long timeo;
+	int target, err;
 	bool is_kvec = iov_iter_is_kvec(&msg->msg_iter);
 	bool is_peek = flags & MSG_PEEK;
 	bool released = true;
@@ -1925,9 +1927,9 @@ int tls_sw_recvmsg(struct sock *sk,
 		return sock_recv_errqueue(sk, msg, len, SOL_IP, IP_RECVERR);
 
 	psock = sk_psock_get(sk);
-	timeo = tls_rx_reader_lock(sk, ctx, flags & MSG_DONTWAIT);
-	if (timeo < 0)
-		return timeo;
+	err = tls_rx_reader_lock(sk, ctx, flags & MSG_DONTWAIT);
+	if (err < 0)
+		return err;
 	bpf_strp_enabled = sk_psock_strp_enabled(psock);
 
 	/* If crypto failed the connection is broken */
@@ -1954,8 +1956,8 @@ int tls_sw_recvmsg(struct sock *sk,
 		struct tls_decrypt_arg darg;
 		int to_decrypt, chunk;
 
-		err = tls_rx_rec_wait(sk, psock, flags & MSG_DONTWAIT, released,
-				      timeo);
+		err = tls_rx_rec_wait(sk, psock, flags & MSG_DONTWAIT,
+				      released);
 		if (err <= 0) {
 			if (psock) {
 				chunk = sk_msg_recvmsg(sk, psock, msg, len,
@@ -2131,13 +2133,12 @@ ssize_t tls_sw_splice_read(struct socket *sock,  loff_t *ppos,
 	struct tls_msg *tlm;
 	struct sk_buff *skb;
 	ssize_t copied = 0;
-	int err = 0;
-	long timeo;
 	int chunk;
+	int err;
 
-	timeo = tls_rx_reader_lock(sk, ctx, flags & SPLICE_F_NONBLOCK);
-	if (timeo < 0)
-		return timeo;
+	err = tls_rx_reader_lock(sk, ctx, flags & SPLICE_F_NONBLOCK);
+	if (err < 0)
+		return err;
 
 	if (!skb_queue_empty(&ctx->rx_list)) {
 		skb = __skb_dequeue(&ctx->rx_list);
@@ -2145,7 +2146,7 @@ ssize_t tls_sw_splice_read(struct socket *sock,  loff_t *ppos,
 		struct tls_decrypt_arg darg;
 
 		err = tls_rx_rec_wait(sk, NULL, flags & SPLICE_F_NONBLOCK,
-				      true, timeo);
+				      true);
 		if (err <= 0)
 			goto splice_read_end;
 
-- 
2.37.1



* [PATCH net-next 3/4] tls: strp: rename and multithread the workqueue
  2022-07-27  3:15 [PATCH net-next 0/4] tls: rx: follow ups to rx work Jakub Kicinski
  2022-07-27  3:15 ` [PATCH net-next 1/4] selftests: tls: handful of memrnd() and length checks Jakub Kicinski
  2022-07-27  3:15 ` [PATCH net-next 2/4] tls: rx: don't consider sock_rcvtimeo() cumulative Jakub Kicinski
@ 2022-07-27  3:15 ` Jakub Kicinski
  2022-07-27  3:15 ` [PATCH net-next 4/4] tls: rx: fix the false positive warning Jakub Kicinski
  2022-07-29  5:00 ` [PATCH net-next 0/4] tls: rx: follow ups to rx work patchwork-bot+netdevbpf
  4 siblings, 0 replies; 10+ messages in thread
From: Jakub Kicinski @ 2022-07-27  3:15 UTC (permalink / raw)
  To: davem
  Cc: netdev, edumazet, pabeni, borisp, john.fastabend, maximmi,
	tariqt, vfedorenko, Jakub Kicinski

Paolo points out that there seems to be no strong reason why strparser
uses a single-threaded workqueue. Perhaps there were some performance
or pinning considerations? Since we don't know (and it's the slow path)
let's default to the most natural, multi-threaded choice.

Also rename the workqueue from "kstrp" to "tls-strp".
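
For the curious, both helpers are thin wrappers around alloc_workqueue();
quoting their definitions roughly from memory of include/linux/workqueue.h
around this time (the exact flags may differ):

  /* ordered: at most one work item executing at any given time */
  #define create_singlethread_workqueue(name) \
          alloc_ordered_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, name)

  /* regular concurrency-managed workqueue; max_active of 1 applies
   * per CPU, so work items can still run in parallel across CPUs
   */
  #define create_workqueue(name) \
          alloc_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, 1, (name))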

Suggested-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
 net/tls/tls_strp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
index b945288c312e..3f1ec42a5923 100644
--- a/net/tls/tls_strp.c
+++ b/net/tls/tls_strp.c
@@ -480,7 +480,7 @@ void tls_strp_done(struct tls_strparser *strp)
 
 int __init tls_strp_dev_init(void)
 {
-	tls_strp_wq = create_singlethread_workqueue("kstrp");
+	tls_strp_wq = create_workqueue("tls-strp");
 	if (unlikely(!tls_strp_wq))
 		return -ENOMEM;
 
-- 
2.37.1



* [PATCH net-next 4/4] tls: rx: fix the false positive warning
  2022-07-27  3:15 [PATCH net-next 0/4] tls: rx: follow ups to rx work Jakub Kicinski
                   ` (2 preceding siblings ...)
  2022-07-27  3:15 ` [PATCH net-next 3/4] tls: strp: rename and multithread the workqueue Jakub Kicinski
@ 2022-07-27  3:15 ` Jakub Kicinski
  2022-07-29  5:00 ` [PATCH net-next 0/4] tls: rx: follow ups to rx work patchwork-bot+netdevbpf
  4 siblings, 0 replies; 10+ messages in thread
From: Jakub Kicinski @ 2022-07-27  3:15 UTC (permalink / raw)
  To: davem
  Cc: netdev, edumazet, pabeni, borisp, john.fastabend, maximmi,
	tariqt, vfedorenko, Jakub Kicinski

I went too far in the accessor conversion; we can't use tls_strp_msg()
after decryption because the message may not be ready. What we care
about on this path is that the output skb is detached, i.e. that we
didn't somehow just turn around and use the input skb with its TCP
data still attached. So look at the anchor directly.
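
For context, the accessor being avoided looks roughly like this
(reconstructed from the tree around this time, so treat the details as
approximate); note that it warns all by itself when the message is not
ready, which is exactly the state we can be in after decryption:

  static inline struct sk_buff *tls_strp_msg(struct tls_sw_context_rx *ctx)
  {
          DEBUG_NET_WARN_ON_ONCE(!ctx->strp.msg_ready);
          return ctx->strp.anchor;
  }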

Fixes: 84c61fe1a75b ("tls: rx: do not use the standard strparser")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
 net/tls/tls_sw.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 8bac7ea2c264..17db8c8811fa 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -2026,7 +2026,7 @@ int tls_sw_recvmsg(struct sock *sk,
 			bool partially_consumed = chunk > len;
 			struct sk_buff *skb = darg.skb;
 
-			DEBUG_NET_WARN_ON_ONCE(darg.skb == tls_strp_msg(ctx));
+			DEBUG_NET_WARN_ON_ONCE(darg.skb == ctx->strp.anchor);
 
 			if (async) {
 				/* TLS 1.2-only, to_decrypt must be text len */
-- 
2.37.1



* Re: [PATCH net-next 2/4] tls: rx: don't consider sock_rcvtimeo() cumulative
  2022-07-27  3:15 ` [PATCH net-next 2/4] tls: rx: don't consider sock_rcvtimeo() cumulative Jakub Kicinski
@ 2022-07-28 13:50   ` Paolo Abeni
  2022-07-28 15:42     ` Jakub Kicinski
  0 siblings, 1 reply; 10+ messages in thread
From: Paolo Abeni @ 2022-07-28 13:50 UTC (permalink / raw)
  To: Jakub Kicinski, davem
  Cc: netdev, edumazet, borisp, john.fastabend, maximmi, tariqt, vfedorenko

On Tue, 2022-07-26 at 20:15 -0700, Jakub Kicinski wrote:
> Eric indicates that restarting rcvtimeo on every wait may be fine.
> I thought that we should consider it cumulative, and made
> tls_rx_reader_lock() return the remaining timeo after acquiring
> the reader lock.
> 
> tls_rx_rec_wait() gets its timeout passed in by value so it
> does not keep track of time previously spent.
> 
> Make the lock waiting consistent with tls_rx_rec_wait() - don't
> keep track of time spent.
> 
> Read the timeo fresh in tls_rx_rec_wait().
> It's unclear to me why callers are supposed to cache the value.
> 
> Link: https://lore.kernel.org/all/CANn89iKcmSfWgvZjzNGbsrndmCch2HC_EPZ7qmGboDNaWoviNQ@mail.gmail.com/
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>

I have a possibly dumb question: this patch seems to introduce a change
of behavior (re-arming the timeo after every bit of progress vs a
cumulative one), yet re-reading the thread linked above I
(mis?)understood that re-arming the timeo is the current behavior?

Could you please clarify/help me understand this better?

Thanks!

Paolo



* Re: [PATCH net-next 2/4] tls: rx: don't consider sock_rcvtimeo() cumulative
  2022-07-28 13:50   ` Paolo Abeni
@ 2022-07-28 15:42     ` Jakub Kicinski
  2022-07-28 15:47       ` Eric Dumazet
  2022-07-28 20:04       ` Paolo Abeni
  0 siblings, 2 replies; 10+ messages in thread
From: Jakub Kicinski @ 2022-07-28 15:42 UTC (permalink / raw)
  To: Paolo Abeni
  Cc: davem, netdev, edumazet, borisp, john.fastabend, maximmi, tariqt,
	vfedorenko

On Thu, 28 Jul 2022 15:50:03 +0200 Paolo Abeni wrote:
> I have a possibly dumb question: this patch seems to introduce a change
> of behavior (re-arming the timeo after every bit of progress vs a
> cumulative one), yet re-reading the thread linked above I
> (mis?)understood that re-arming the timeo is the current behavior?
> 
> Could you please clarify/help me understand this better?

There're two places we use timeo - waiting for the exclusive reader 
lock and waiting for data. Currently (net-next as of now) we behave
cumulatively in the former and re-arm in the latter.

That's to say if we have a timeo of 50ms, and spend 10ms on the lock,
the wait for each new data record must be shorter than 40ms.

Does that make more sense?


* Re: [PATCH net-next 2/4] tls: rx: don't consider sock_rcvtimeo() cumulative
  2022-07-28 15:42     ` Jakub Kicinski
@ 2022-07-28 15:47       ` Eric Dumazet
  2022-07-28 20:04       ` Paolo Abeni
  1 sibling, 0 replies; 10+ messages in thread
From: Eric Dumazet @ 2022-07-28 15:47 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Paolo Abeni, David Miller, netdev, Boris Pismenny,
	John Fastabend, Maxim Mikityanskiy, Tariq Toukan, vfedorenko

On Thu, Jul 28, 2022 at 5:42 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Thu, 28 Jul 2022 15:50:03 +0200 Paolo Abeni wrote:
> > I have a possibly dumb question: this patch seems to introduce a change
> > of behavior (re-arming the timeo after every bit of progress vs a
> > cumulative one), yet re-reading the thread linked above I
> > (mis?)understood that re-arming the timeo is the current behavior?
> >
> > Could you please clarify/help me understand this better?
>
> There're two places we use timeo - waiting for the exclusive reader
> lock and waiting for data. Currently (net-next as of now) we behave
> cumulatively in the former and re-arm in the latter.
>
> That's to say if we have a timeo of 50ms, and spend 10ms on the lock,
> the wait for each new data record must be shorter than 40ms.

s/must/could/    because timers can expire later than expected.

>
> Does that make more sense?


* Re: [PATCH net-next 2/4] tls: rx: don't consider sock_rcvtimeo() cumulative
  2022-07-28 15:42     ` Jakub Kicinski
  2022-07-28 15:47       ` Eric Dumazet
@ 2022-07-28 20:04       ` Paolo Abeni
  1 sibling, 0 replies; 10+ messages in thread
From: Paolo Abeni @ 2022-07-28 20:04 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: davem, netdev, edumazet, borisp, john.fastabend, maximmi, tariqt,
	vfedorenko

On Thu, 2022-07-28 at 08:42 -0700, Jakub Kicinski wrote:
> On Thu, 28 Jul 2022 15:50:03 +0200 Paolo Abeni wrote:
> > I have a possibly dumb question: this patch seems to introduce a change
> > of behavior (re-arming the timeo after every bit of progress vs a
> > cumulative one), yet re-reading the thread linked above I
> > (mis?)understood that re-arming the timeo is the current behavior?
> > 
> > Could you please clarify/help me understand this better?
> 
> There're two places we use timeo - waiting for the exclusive reader 
> lock and waiting for data. Currently (net-next as of now) we behave
> cumulatively in the former and re-arm in the latter.

I see it now, thanks for the pointers.
> 
> That's to say if we have a timeo of 50ms, and spend 10ms on the lock,
> the wait for each new data record must be shorter than 40ms.
> 
> Does that make more sense?

Yes.

For the record, I feared a change of behavior that could break
existing user-space applications expecting/depending on blocking
recvmsg() completing in ~timeo (yep, modulo timer precision, which is
reasonably good for "short" timers), but it looks like there is no
actual overall behavior change.

So I'm fine with this patch.

Thanks!

Paolo



* Re: [PATCH net-next 0/4] tls: rx: follow ups to rx work
  2022-07-27  3:15 [PATCH net-next 0/4] tls: rx: follow ups to rx work Jakub Kicinski
                   ` (3 preceding siblings ...)
  2022-07-27  3:15 ` [PATCH net-next 4/4] tls: rx: fix the false positive warning Jakub Kicinski
@ 2022-07-29  5:00 ` patchwork-bot+netdevbpf
  4 siblings, 0 replies; 10+ messages in thread
From: patchwork-bot+netdevbpf @ 2022-07-29  5:00 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: davem, netdev, edumazet, pabeni, borisp, john.fastabend, maximmi,
	tariqt, vfedorenko

Hello:

This series was applied to netdev/net-next.git (master)
by Jakub Kicinski <kuba@kernel.org>:

On Tue, 26 Jul 2022 20:15:20 -0700 you wrote:
> A selection of unrelated changes. First some selftest polishing.
> Next a change to rcvtimeo handling for locking, based on an exchange
> with Eric. Then a follow-up to Paolo's comments from yesterday. Last
> but not least a fix for a false-positive warning; turns out I've been
> testing with DEBUG_NET=n this whole time.
> 
> Jakub Kicinski (4):
>   selftests: tls: handful of memrnd() and length checks
>   tls: rx: don't consider sock_rcvtimeo() cumulative
>   tls: strp: rename and multithread the workqueue
>   tls: rx: fix the false positive warning
> 
> [...]

Here is the summary with links:
  - [net-next,1/4] selftests: tls: handful of memrnd() and length checks
    https://git.kernel.org/netdev/net-next/c/86c591fb9142
  - [net-next,2/4] tls: rx: don't consider sock_rcvtimeo() cumulative
    https://git.kernel.org/netdev/net-next/c/70f03fc2fc14
  - [net-next,3/4] tls: strp: rename and multithread the workqueue
    https://git.kernel.org/netdev/net-next/c/d11ef9cc5a67
  - [net-next,4/4] tls: rx: fix the false positive warning
    https://git.kernel.org/netdev/net-next/c/e20691fa36c4

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




