netdev.vger.kernel.org archive mirror
* [PATCH net-next] net/tls: fix race condition causing kernel panic
From: Vinay Kumar Yadav @ 2020-05-19  7:43 UTC
  To: netdev, davem; +Cc: kuba, secdev, Vinay Kumar Yadav

tls_sw_recvmsg() and tls_decrypt_done() can run concurrently.
Consider the scenario where tls_decrypt_done() is about to run
complete(), tls_sw_recvmsg() completes reinit_completion(), and only
then tls_decrypt_done() runs complete(). This sequence of execution
leaves the completion in the wrong state
(ctx->async_wait.completion.done is 1 but it should be 0).
Consequently, the next decrypt request will not wait for completion,
and once the connection is closed and the crypto resources are freed
(crypto_free_aead()), there is no way to handle the pending decrypt
response.
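
The problematic interleaving looks roughly like this:

	CPU0 (tls_sw_recvmsg)           CPU1 (tls_decrypt_done)
	---------------------           -----------------------
	                                atomic_dec_return()  /* pending = 0 */
	reinit_completion()             /* done = 0 */
	                                complete()           /* done = 1 */
	/* the next decrypt request sees done == 1 and does not wait;
	 * its response may arrive after crypto_free_aead() */
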
This race can be avoided by making the atomic_read() in
tls_sw_recvmsg() mutually exclusive with the atomic_dec_return() and
complete() pair in tls_decrypt_done(). Introduce a spin lock to
ensure this mutual exclusion.

Address the same problem in the tx direction.

Signed-off-by: Vinay Kumar Yadav <vinay.yadav@chelsio.com>
---
 include/net/tls.h |  4 ++++
 net/tls/tls_sw.c  | 18 ++++++++++++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/include/net/tls.h b/include/net/tls.h
index bf9eb4823933..18cd4f418464 100644
--- a/include/net/tls.h
+++ b/include/net/tls.h
@@ -135,6 +135,8 @@ struct tls_sw_context_tx {
 	struct tls_rec *open_rec;
 	struct list_head tx_list;
 	atomic_t encrypt_pending;
+	/* protect crypto_wait with encrypt_pending */
+	spinlock_t encrypt_compl_lock;
 	int async_notify;
 	u8 async_capable:1;
 
@@ -155,6 +157,8 @@ struct tls_sw_context_rx {
 	u8 async_capable:1;
 	u8 decrypted:1;
 	atomic_t decrypt_pending;
+	/* protect crypto_wait with decrypt_pending */
+	spinlock_t decrypt_compl_lock;
 	bool async_notify;
 };
 
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index c98e602a1a2d..3f0446d38a16 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -206,10 +206,12 @@ static void tls_decrypt_done(struct crypto_async_request *req, int err)
 
 	kfree(aead_req);
 
+	spin_lock_bh(&ctx->decrypt_compl_lock);
 	pending = atomic_dec_return(&ctx->decrypt_pending);
 
 	if (!pending && READ_ONCE(ctx->async_notify))
 		complete(&ctx->async_wait.completion);
+	spin_unlock_bh(&ctx->decrypt_compl_lock);
 }
 
 static int tls_do_decryption(struct sock *sk,
@@ -467,10 +469,12 @@ static void tls_encrypt_done(struct crypto_async_request *req, int err)
 			ready = true;
 	}
 
+	spin_lock_bh(&ctx->encrypt_compl_lock);
 	pending = atomic_dec_return(&ctx->encrypt_pending);
 
 	if (!pending && READ_ONCE(ctx->async_notify))
 		complete(&ctx->async_wait.completion);
+	spin_unlock_bh(&ctx->encrypt_compl_lock);
 
 	if (!ready)
 		return;
@@ -924,6 +928,7 @@ int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 	int num_zc = 0;
 	int orig_size;
 	int ret = 0;
+	int pending;
 
 	if (msg->msg_flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL))
 		return -EOPNOTSUPP;
@@ -1092,7 +1097,10 @@ int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 		/* Wait for pending encryptions to get completed */
 		smp_store_mb(ctx->async_notify, true);
 
-		if (atomic_read(&ctx->encrypt_pending))
+		spin_lock_bh(&ctx->encrypt_compl_lock);
+		pending = atomic_read(&ctx->encrypt_pending);
+		spin_unlock_bh(&ctx->encrypt_compl_lock);
+		if (pending)
 			crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
 		else
 			reinit_completion(&ctx->async_wait.completion);
@@ -1727,6 +1735,7 @@ int tls_sw_recvmsg(struct sock *sk,
 	bool is_kvec = iov_iter_is_kvec(&msg->msg_iter);
 	bool is_peek = flags & MSG_PEEK;
 	int num_async = 0;
+	int pending;
 
 	flags |= nonblock;
 
@@ -1890,7 +1899,10 @@ int tls_sw_recvmsg(struct sock *sk,
 	if (num_async) {
 		/* Wait for all previously submitted records to be decrypted */
 		smp_store_mb(ctx->async_notify, true);
-		if (atomic_read(&ctx->decrypt_pending)) {
+		spin_lock_bh(&ctx->decrypt_compl_lock);
+		pending = atomic_read(&ctx->decrypt_pending);
+		spin_unlock_bh(&ctx->decrypt_compl_lock);
+		if (pending) {
 			err = crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
 			if (err) {
 				/* one of async decrypt failed */
@@ -2287,6 +2299,7 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
 
 	if (tx) {
 		crypto_init_wait(&sw_ctx_tx->async_wait);
+		spin_lock_init(&sw_ctx_tx->encrypt_compl_lock);
 		crypto_info = &ctx->crypto_send.info;
 		cctx = &ctx->tx;
 		aead = &sw_ctx_tx->aead_send;
@@ -2295,6 +2308,7 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
 		sw_ctx_tx->tx_work.sk = sk;
 	} else {
 		crypto_init_wait(&sw_ctx_rx->async_wait);
+		spin_lock_init(&sw_ctx_rx->decrypt_compl_lock);
 		crypto_info = &ctx->crypto_recv.info;
 		cctx = &ctx->rx;
 		skb_queue_head_init(&sw_ctx_rx->rx_list);
-- 
2.18.1



* Re: [PATCH net-next] net/tls: fix race condition causing kernel panic
From: David Miller @ 2020-05-19 19:16 UTC
  To: vinay.yadav; +Cc: netdev, kuba, secdev

From: Vinay Kumar Yadav <vinay.yadav@chelsio.com>
Date: Tue, 19 May 2020 13:13:27 +0530

> +		spin_lock_bh(&ctx->encrypt_compl_lock);
> +		pending = atomic_read(&ctx->encrypt_pending);
> +		spin_unlock_bh(&ctx->encrypt_compl_lock);

The sequence:

	lock();
	x = p->y;
	unlock();

Does not fix anything, and is superfluous locking.

The value of p->y can change right after the unlock() call, so you
aren't protecting the atomic'ness of the read and test sequence
because the test is outside of the lock.


* Re: [PATCH net-next] net/tls: fix race condition causing kernel panic
From: Vinay Kumar Yadav @ 2020-05-20 17:09 UTC
  To: David Miller; +Cc: netdev, kuba, secdev

David

On 5/20/2020 12:46 AM, David Miller wrote:
> From: Vinay Kumar Yadav <vinay.yadav@chelsio.com>
> Date: Tue, 19 May 2020 13:13:27 +0530
>
>> +		spin_lock_bh(&ctx->encrypt_compl_lock);
>> +		pending = atomic_read(&ctx->encrypt_pending);
>> +		spin_unlock_bh(&ctx->encrypt_compl_lock);
> The sequence:
>
> 	lock();
> 	x = p->y;
> 	unlock();
>
> Does not fix anything, and is superfluous locking.
>
> The value of p->y can change right after the unlock() call, so you
> aren't protecting the atomic'ness of the read and test sequence
> because the test is outside of the lock.

Here, by using the lock I want to make the following statements atomic:

	pending = atomic_dec_return(&ctx->decrypt_pending);
	if (!pending && READ_ONCE(ctx->async_notify))
		complete(&ctx->async_wait.completion);

That is, I don't want atomic_read(&ctx->decrypt_pending) to run in
the middle of the two statements

	atomic_dec_return(&ctx->decrypt_pending);
and
	complete(&ctx->async_wait.completion);

Why am I protecting only the read, not the test?

complete() is called only if pending == 0.
If we read atomic_read(&ctx->decrypt_pending) == 0, that means
complete() has already been called and it is okay to reinitialize the
completion (reinit_completion(&ctx->async_wait.completion)).

If we read atomic_read(&ctx->decrypt_pending) as non-zero, that means
either:
1. complete() is going to be called, or
2. complete() has just been called (if we read
   atomic_read(&ctx->decrypt_pending) == 1, then complete() is called
   just after the unlock()).
In both scenarios it is okay to go into the wait
(crypto_wait_req(-EINPROGRESS, &ctx->async_wait)).


Thanks,
Vinay


* Re: [PATCH net-next] net/tls: fix race condition causing kernel panic
From: Jakub Kicinski @ 2020-05-20 19:58 UTC
  To: Vinay Kumar Yadav; +Cc: David Miller, netdev, secdev

On Wed, 20 May 2020 22:39:11 +0530 Vinay Kumar Yadav wrote:
> On 5/20/2020 12:46 AM, David Miller wrote:
> > From: Vinay Kumar Yadav <vinay.yadav@chelsio.com>
> > Date: Tue, 19 May 2020 13:13:27 +0530
> >  
> >> +		spin_lock_bh(&ctx->encrypt_compl_lock);
> >> +		pending = atomic_read(&ctx->encrypt_pending);
> >> +		spin_unlock_bh(&ctx->encrypt_compl_lock);  
> > The sequence:
> >
> > 	lock();
> > 	x = p->y;
> > 	unlock();
> >
> > Does not fix anything, and is superfluous locking.
> >
> > The value of p->y can change right after the unlock() call, so you
> > aren't protecting the atomic'ness of the read and test sequence
> > because the test is outside of the lock.  
> 
> Here, by using lock I want to achieve atomicity of following statements.
> 
> pending = atomic_dec_return(&ctx->decrypt_pending);
>        if (!pending && READ_ONCE(ctx->async_notify))
>             complete(&ctx->async_wait.completion);
> 
> means, don't want to read (atomic_read(&ctx->decrypt_pending))
> in middle of two statements
> 
> atomic_dec_return(&ctx->decrypt_pending);
> and
> complete(&ctx->async_wait.completion);
> 
> Why am I protecting only read, not test ?

Protecting code, not data, is rarely correct, though.

> complete() is called only if pending == 0
> if we read atomic_read(&ctx->decrypt_pending) = 0
> that means complete() is already called and its okay to
> initialize completion (reinit_completion(&ctx->async_wait.completion))
> 
> if we read atomic_read(&ctx->decrypt_pending) as non zero that means:
> 1- complete() is going to be called or
> 2- complete() already called (if we read atomic_read(&ctx->decrypt_pending) == 1, then complete() is called just after unlock())
> for both scenario its okay to go into wait (crypto_wait_req(-EINPROGRESS, &ctx->async_wait))

First of all, thanks for the fix; this completion code is unnecessarily
complex and brittle if you ask me.

That said, I don't think your fix is 100%.

Consider this scenario:

# 1. writer queues first record on CPU0
# 2. encrypt completes on CPU1

 	pending = atomic_dec_return(&ctx->encrypt_pending);
	# pending is 0
 
# IRQ comes and CPU1 goes off to do something else with spin lock held
# writer proceeds to encrypt next record on CPU0
# writer is done, enters wait 

	smp_store_mb(ctx->async_notify, true);

# Now CPU1 is back from the interrupt, does the check

 	if (!pending && READ_ONCE(ctx->async_notify))
 		complete(&ctx->async_wait.completion);

# and it completes the wait, even though the atomic encrypt_pending was
#   bumped back to 1

You need to hold the lock around the async_notify false -> true
transition as well. The store no longer needs to have a barrier.

For async_notify true -> false transitions please add a comment 
saying that there can be no concurrent accesses, since we have no
pending crypt operations.
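
I.e. the waiter side would look roughly like this (a sketch of the
suggestion for the tx path; the rx path is analogous):

	spin_lock_bh(&ctx->encrypt_compl_lock);
	/* false -> true transition now happens under the same lock
	 * that the completion path takes before its check */
	ctx->async_notify = true;
	pending = atomic_read(&ctx->encrypt_pending);
	spin_unlock_bh(&ctx->encrypt_compl_lock);
	if (pending)
		crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
	else
		reinit_completion(&ctx->async_wait.completion);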


Another way to solve this would be to add a large value to the pending
counter to indicate that there is a waiter:

	if (atomic_add_and_fetch(&decrypt_pending, 1000) > 1000)
		wait();
	else
		reinit();
	atomic_sub(decrypt_pending, 1000)

completion:

	if (atomic_dec_return(&decrypt_pending) == 1000)
		complete()
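
For reference, a self-contained sketch of this counter-bias idea using
the kernel's actual atomic API (atomic_add_return(), atomic_sub(),
atomic_dec_return()); the TLS_WAITER_BIAS name and the value 1000 are
illustrative only:

	#define TLS_WAITER_BIAS	1000

	/* waiter side, e.g. the tail of tls_sw_recvmsg() */
	if (atomic_add_return(TLS_WAITER_BIAS, &ctx->decrypt_pending) >
	    TLS_WAITER_BIAS)
		/* real decrypt ops still in flight, wait for them */
		crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
	else
		/* nothing pending, just rearm the completion */
		reinit_completion(&ctx->async_wait.completion);
	atomic_sub(TLS_WAITER_BIAS, &ctx->decrypt_pending);

	/* completion side, e.g. tls_decrypt_done() */
	if (atomic_dec_return(&ctx->decrypt_pending) == TLS_WAITER_BIAS)
		/* last real op done and a waiter is present */
		complete(&ctx->async_wait.completion);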


* Re: [PATCH net-next] net/tls: fix race condition causing kernel panic
From: Vinay Kumar Yadav @ 2020-05-21  8:58 UTC
  To: Jakub Kicinski; +Cc: David Miller, netdev, secdev

Jakub

On 5/21/2020 1:28 AM, Jakub Kicinski wrote:
> On Wed, 20 May 2020 22:39:11 +0530 Vinay Kumar Yadav wrote:
>> On 5/20/2020 12:46 AM, David Miller wrote:
>>> From: Vinay Kumar Yadav <vinay.yadav@chelsio.com>
>>> Date: Tue, 19 May 2020 13:13:27 +0530
>>>   
>>>> +		spin_lock_bh(&ctx->encrypt_compl_lock);
>>>> +		pending = atomic_read(&ctx->encrypt_pending);
>>>> +		spin_unlock_bh(&ctx->encrypt_compl_lock);
>>> The sequence:
>>>
>>> 	lock();
>>> 	x = p->y;
>>> 	unlock();
>>>
>>> Does not fix anything, and is superfluous locking.
>>>
>>> The value of p->y can change right after the unlock() call, so you
>>> aren't protecting the atomic'ness of the read and test sequence
>>> because the test is outside of the lock.
>> Here, by using lock I want to achieve atomicity of following statements.
>>
>> pending = atomic_dec_return(&ctx->decrypt_pending);
>>         if (!pending && READ_ONCE(ctx->async_notify))
>>              complete(&ctx->async_wait.completion);
>>
>> means, don't want to read (atomic_read(&ctx->decrypt_pending))
>> in middle of two statements
>>
>> atomic_dec_return(&ctx->decrypt_pending);
>> and
>> complete(&ctx->async_wait.completion);
>>
>> Why am I protecting only read, not test ?
> Protecting code, not data, is rarely correct, though.
>
>> complete() is called only if pending == 0
>> if we read atomic_read(&ctx->decrypt_pending) = 0
>> that means complete() is already called and its okay to
>> initialize completion (reinit_completion(&ctx->async_wait.completion))
>>
>> if we read atomic_read(&ctx->decrypt_pending) as non zero that means:
>> 1- complete() is going to be called or
>> 2- complete() already called (if we read atomic_read(&ctx->decrypt_pending) == 1, then complete() is called just after unlock())
>> for both scenario its okay to go into wait (crypto_wait_req(-EINPROGRESS, &ctx->async_wait))
> First of all thanks for the fix, this completion code is unnecessarily
> complex and brittle if you ask me.
>
> That said I don't think your fix is 100%.
>
> Consider this scenario:
>
> # 1. writer queues first record on CPU0
> # 2. encrypt completes on CPU1
>
>   	pending = atomic_dec_return(&ctx->decrypt_pending);
> 	# pending is 0
>   
> # IRQ comes and CPU1 goes off to do something else with spin lock held
> # writer proceeds to encrypt next record on CPU0
> # writer is done, enters wait

Considering the lock in the fix ("pending" is a local variable): when
the writer reads pending == 0 [pending =
atomic_read(&ctx->encrypt_pending); in tls_sw_sendmsg()], that means
the encrypt-side complete() [in tls_encrypt_done()] has already been
called.

And if the writer reads pending == 1, that means the writer is going
to wait for atomic_dec_return(&ctx->encrypt_pending) and complete()
[in tls_encrypt_done()], which are executed atomically under the lock.

This way, the writer is not going to proceed to encrypt the next
record on CPU0 without complete().

>
> 	smp_store_mb(ctx->async_notify, true);
>
> # Now CPU1 is back from the interrupt, does the check
>
>   	if (!pending && READ_ONCE(ctx->async_notify))
>   		complete(&ctx->async_wait.completion);
>
> # and it completes the wait, even though the atomic decrypt_pending was
> #   bumped back to 1
>
> You need to hold the lock around the async_notify false -> true
> transition as well. The store no longer needs to have a barrier.
>
> For async_notify true -> false transitions please add a comment
> saying that there can be no concurrent accesses, since we have no
> pending crypt operations.
>
>
> Another way to solve this would be to add a large value to the pending
> counter to indicate that there is a waiter:
>
> 	if (atomic_add_and_fetch(&decrypt_pending, 1000) > 1000)
> 		wait();
> 	else
> 		reinit();
> 	atomic_sub(decrypt_pending, 1000)
>
> completion:
>
> 	if (atomic_dec_return(&decrypt_pending) == 1000)
> 		complete()

I will consider the suggested solutions if this patch doesn't solve the problem.



* Re: [PATCH net-next] net/tls: fix race condition causing kernel panic
From: Jakub Kicinski @ 2020-05-21 18:56 UTC
  To: Vinay Kumar Yadav; +Cc: David Miller, netdev, secdev

On Thu, 21 May 2020 14:28:27 +0530 Vinay Kumar Yadav wrote:
> Considering the lock in fix ("pending" is local variable), when writer reads
> pending == 0 [pending = atomic_read(&ctx->encrypt_pending); --> from tls_sw_sendmsg()],
> that means encrypt complete() [from tls_encrypt_done()] is already called.

Please indulge me with full sentences. I can't parse this.

> and if pending == 1 [pending = atomic_read(&ctx->encrypt_pending); --> from tls_sw_sendmsg()],
> that means writer is going to wait for atomic_dec_return(&ctx->decrypt_pending) and
> complete() [from tls_encrypt_done()]  to be called atomically.
> 
> This way, writer is not going to proceed to encrypt next record on CPU0 without complete().



* Re: [PATCH net-next] net/tls: fix race condition causing kernel panic
From: Vinay Kumar Yadav @ 2020-05-21 20:32 UTC
  To: Jakub Kicinski; +Cc: David Miller, netdev, secdev

Jakub

On 5/22/2020 12:26 AM, Jakub Kicinski wrote:
> On Thu, 21 May 2020 14:28:27 +0530 Vinay Kumar Yadav wrote:
>> Considering the lock in fix ("pending" is local variable), when writer reads
>> pending == 0 [pending = atomic_read(&ctx->encrypt_pending); --> from tls_sw_sendmsg()],
>> that means encrypt complete() [from tls_encrypt_done()] is already called.
> Please indulge me with full sentences. I can't parse this.

Here I am explaining how your scenario is covered by this fix.

# Writer code ("pending" is a local variable):

	spin_lock_bh(&ctx->encrypt_compl_lock);
	pending = atomic_read(&ctx->encrypt_pending);
	spin_unlock_bh(&ctx->encrypt_compl_lock);
	if (pending)
		crypto_wait_req(-EINPROGRESS, &ctx->async_wait);

# Completion code:

	spin_lock_bh(&ctx->encrypt_compl_lock);
	pending = atomic_dec_return(&ctx->encrypt_pending);

	if (!pending && READ_ONCE(ctx->async_notify))
		complete(&ctx->async_wait.completion);
	spin_unlock_bh(&ctx->encrypt_compl_lock);
  
Your scenario:

     # 1. writer queues first record on CPU0
     # 2. encrypt completes on CPU1

  	    pending = atomic_dec_return(&ctx->encrypt_pending);
	    # pending is 0
  
     # IRQ comes and CPU1 goes off to do something else with spin lock held
     # writer proceeds to encrypt next record on CPU0
     # writer is done, enters wait

	    smp_store_mb(ctx->async_notify, true);

     # Now CPU1 is back from the interrupt, does the check

  	    if (!pending && READ_ONCE(ctx->async_notify))
  		   complete(&ctx->async_wait.completion);

     # and it completes the wait, even though the atomic encrypt_pending was
     #   bumped back to 1

Explanation:

When the writer reads pending == 0, that means the completion handler
has already called complete(), so it is okay for the writer to
reinitialize the completion. When the writer reads pending == 1, that
means the writer is going to wait for the completion.

This way, the writer is not going to proceed to encrypt the next record
on CPU0 without complete().

>
>> and if pending == 1 [pending = atomic_read(&ctx->encrypt_pending); --> from tls_sw_sendmsg()],
>> that means writer is going to wait for atomic_dec_return(&ctx->decrypt_pending) and
>> complete() [from tls_encrypt_done()]  to be called atomically.
>>
>> This way, writer is not going to proceed to encrypt next record on CPU0 without complete().


* Re: [PATCH net-next] net/tls: fix race condition causing kernel panic
From: David Miller @ 2020-05-21 21:38 UTC
  To: vinay.yadav; +Cc: kuba, netdev, secdev

From: Vinay Kumar Yadav <vinay.yadav@chelsio.com>
Date: Fri, 22 May 2020 01:54:25 +0530

> I am explaining that your scenario is covered in this fix.

Please change your commit message to be more readable, like this
explanation. Thank you.


* Re: [PATCH net-next] net/tls: fix race condition causing kernel panic
From: Jakub Kicinski @ 2020-05-21 21:40 UTC
  To: Vinay Kumar Yadav; +Cc: David Miller, netdev, secdev

On Fri, 22 May 2020 02:02:10 +0530 Vinay Kumar Yadav wrote:
> When writer reads pending == 0,
> that means completion is already called complete().
> its okay writer to  initialize completion. When writer reads pending == 1,
> that means writer is going to wait for completion.
> 
> This way, writer is not going to proceed to encrypt next record on CPU0 without complete().

I assume by writer you mean the CPU queuing up the records.

The writer does not wait between records, only before returning to user
space. The writer can queue multiple records and then wait.
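
Illustrative interleaving (two records queued by one sendmsg() call):

	CPU0 (writer)                   CPU1 (completion)
	-------------                   -----------------
	queue record 1
	                                atomic_dec_return()  /* pending = 0 */
	                                /* IRQ delays the check */
	queue record 2
	set async_notify, then wait
	                                sees !pending && async_notify
	                                complete()  /* record 2 in flight */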


