* [PATCH 0/2 v3] a stateid race and a cleanup
From: Benjamin Coddington @ 2020-09-23 17:37 UTC (permalink / raw)
  To: trond.myklebust, anna.schumaker; +Cc: linux-nfs

v3: pull out unnecessary state match checks.  The v2 cover letter follows:

To explain the v2: Anna helped me find that the first version's stable fix
was wrong on its own, though the refactor patch that followed corrected the
problem.

After adding the logic to fix the stable version, the result was messy
enough that squashing the two patches together made more sense.  So, this
time the first patch rewrites nfs_need_update_open_stateid() a bit more in
order to handle both cases:
	- where two OPENs race to NFS_OPEN_STATE and the second wins
	- where an OPEN and CLOSE+1 race to update nfs4_state and CLOSE+1 wins

The end result is that these two patches are code-equivalent to the first
three.  (The series is still getting one final run through my testing, but
I haven't delayed posting for that.)



Benjamin Coddington (2):
  NFSv4: Fix a livelock when CLOSE pre-emptively bumps state sequence
  NFSv4: cleanup unused zero_stateid copy

 fs/nfs/nfs4proc.c  | 16 +++++++++-------
 fs/nfs/nfs4state.c |  8 ++------
 2 files changed, 11 insertions(+), 13 deletions(-)

-- 
2.20.1



* [PATCH 1/2 v3] NFSv4: Fix a livelock when CLOSE pre-emptively bumps state sequence
From: Benjamin Coddington @ 2020-09-23 17:37 UTC (permalink / raw)
  To: trond.myklebust, anna.schumaker; +Cc: linux-nfs

Since commit 0e0cb35b417f ("NFSv4: Handle NFS4ERR_OLD_STATEID in
CLOSE/OPEN_DOWNGRADE") the following livelock may occur if a CLOSE races
with the update of the nfs_state:

Process 1           Process 2           Server
=========           =========           ========
 OPEN file
                    OPEN file
                                        Reply OPEN (1)
                                        Reply OPEN (2)
 Update state (1)
 CLOSE file (1)
                                        Reply OLD_STATEID (1)
 CLOSE file (2)
                                        Reply CLOSE (-1)
                    Update state (2)
                    wait for state change
 OPEN file
                    wake
 CLOSE file
 OPEN file
                    wake
 CLOSE file
 ...
                    ...

As long as the first process continues updating state, the second process
will fail to exit the loop in nfs_set_open_stateid_locked().  This livelock
has been observed in generic/168.
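
To see where the second process spins, here is a simplified sketch of
that wait loop (illustrative only, not the verbatim kernel code; the
helper and field names follow fs/nfs, but details are elided):

	DEFINE_WAIT(wait);

	/*
	 * Each pass re-checks whether the incoming stateid is now
	 * sequential with the current open stateid; as long as another
	 * process keeps bumping state->open_stateid, this waiter never
	 * passes the check and loops indefinitely.
	 */
	for (;;) {
		if (nfs_stateid_is_sequential(state, stateid))
			break;	/* our update is next in order */
		/* Another update must land first: sleep until it does. */
		prepare_to_wait(&state->waitq, &wait, TASK_KILLABLE);
		spin_unlock(&state->owner->so_lock);
		schedule();
		spin_lock(&state->owner->so_lock);
		finish_wait(&state->waitq, &wait);
	}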

Fix this by detecting the case in nfs_need_update_open_stateid() and
exiting the loop if:
 - the state is NFS_OPEN_STATE, and
 - the stateid sequence is > 1, and
 - the stateid doesn't match the current open stateid

Fixes: 0e0cb35b417f ("NFSv4: Handle NFS4ERR_OLD_STATEID in CLOSE/OPEN_DOWNGRADE")
Cc: stable@vger.kernel.org # v5.4+
Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
---
 fs/nfs/nfs4proc.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 6e95c85fe395..8c2bb91127ee 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -1588,19 +1588,21 @@ static void nfs_test_and_clear_all_open_stateid(struct nfs4_state *state)
 static bool nfs_need_update_open_stateid(struct nfs4_state *state,
 		const nfs4_stateid *stateid)
 {
-	if (test_bit(NFS_OPEN_STATE, &state->flags) == 0 ||
-	    !nfs4_stateid_match_other(stateid, &state->open_stateid)) {
+	if (test_bit(NFS_OPEN_STATE, &state->flags)) {
+		/* The common case - we're updating to a new sequence number */
+		if (nfs4_stateid_match_other(stateid, &state->open_stateid) &&
+			nfs4_stateid_is_newer(stateid, &state->open_stateid)) {
+			nfs_state_log_out_of_order_open_stateid(state, stateid);
+			return true;
+		}
+	} else {
+		/* This is the first OPEN */
 		if (stateid->seqid == cpu_to_be32(1))
 			nfs_state_log_update_open_stateid(state);
 		else
 			set_bit(NFS_STATE_CHANGE_WAIT, &state->flags);
 		return true;
 	}
-
-	if (nfs4_stateid_is_newer(stateid, &state->open_stateid)) {
-		nfs_state_log_out_of_order_open_stateid(state, stateid);
-		return true;
-	}
 	return false;
 }
 
-- 
2.20.1



* [PATCH 2/2 v3] NFSv4: cleanup unused zero_stateid copy
From: Benjamin Coddington @ 2020-09-23 17:37 UTC (permalink / raw)
  To: trond.myklebust, anna.schumaker; +Cc: linux-nfs

Since commit d9aba2b40de6 ("NFSv4: Don't use the zero stateid with
layoutget"), the zero stateid copied by nfs4_copy_open_stateid() will
never be used, so drop it.

Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
---
 fs/nfs/nfs4state.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 4bf10792cb5b..06bbe19c8b2c 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -1018,18 +1018,14 @@ static int nfs4_copy_lock_stateid(nfs4_stateid *dst,
 bool nfs4_copy_open_stateid(nfs4_stateid *dst, struct nfs4_state *state)
 {
 	bool ret;
-	const nfs4_stateid *src;
 	int seq;
 
 	do {
 		ret = false;
-		src = &zero_stateid;
 		seq = read_seqbegin(&state->seqlock);
-		if (test_bit(NFS_OPEN_STATE, &state->flags)) {
-			src = &state->open_stateid;
+		if (test_bit(NFS_OPEN_STATE, &state->flags))
 			ret = true;
-		}
-		nfs4_stateid_copy(dst, src);
+		nfs4_stateid_copy(dst, &state->open_stateid);
 	} while (read_seqretry(&state->seqlock, seq));
 	return ret;
 }
-- 
2.20.1



* Re: [PATCH 1/2 v3] NFSv4: Fix a livelock when CLOSE pre-emptively bumps state sequence
From: Trond Myklebust @ 2020-09-23 18:53 UTC (permalink / raw)
  To: bcodding, anna.schumaker; +Cc: linux-nfs

On Wed, 2020-09-23 at 13:37 -0400, Benjamin Coddington wrote:
> Since commit 0e0cb35b417f ("NFSv4: Handle NFS4ERR_OLD_STATEID in
> CLOSE/OPEN_DOWNGRADE") the following livelock may occur if a CLOSE
> races with the update of the nfs_state:
> 
> Process 1           Process 2           Server
> =========           =========           ========
>  OPEN file
>                     OPEN file
>                                         Reply OPEN (1)
>                                         Reply OPEN (2)
>  Update state (1)
>  CLOSE file (1)
>                                         Reply OLD_STATEID (1)
>  CLOSE file (2)
>                                         Reply CLOSE (-1)
>                     Update state (2)
>                     wait for state change
>  OPEN file
>                     wake
>  CLOSE file
>  OPEN file
>                     wake
>  CLOSE file
>  ...
>                     ...
> 
> As long as the first process continues updating state, the second
> process will fail to exit the loop in nfs_set_open_stateid_locked().
> This livelock has been observed in generic/168.
> 
> Fix this by detecting the case in nfs_need_update_open_stateid() and
> exiting the loop if:
>  - the state is NFS_OPEN_STATE, and
>  - the stateid sequence is > 1, and
>  - the stateid doesn't match the current open stateid
> 
> Fixes: 0e0cb35b417f ("NFSv4: Handle NFS4ERR_OLD_STATEID in
> CLOSE/OPEN_DOWNGRADE")
> Cc: stable@vger.kernel.org # v5.4+
> Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
> ---
>  fs/nfs/nfs4proc.c | 16 +++++++++-------
>  1 file changed, 9 insertions(+), 7 deletions(-)
> 
> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
> index 6e95c85fe395..8c2bb91127ee 100644
> --- a/fs/nfs/nfs4proc.c
> +++ b/fs/nfs/nfs4proc.c
> @@ -1588,19 +1588,21 @@ static void nfs_test_and_clear_all_open_stateid(struct nfs4_state *state)
>  static bool nfs_need_update_open_stateid(struct nfs4_state *state,
>  		const nfs4_stateid *stateid)
>  {
> -	if (test_bit(NFS_OPEN_STATE, &state->flags) == 0 ||
> -	    !nfs4_stateid_match_other(stateid, &state->open_stateid)) {
> +	if (test_bit(NFS_OPEN_STATE, &state->flags)) {
> +		/* The common case - we're updating to a new sequence number */
> +		if (nfs4_stateid_match_other(stateid, &state->open_stateid) &&
> +			nfs4_stateid_is_newer(stateid, &state->open_stateid)) {
> +			nfs_state_log_out_of_order_open_stateid(state, stateid);
> +			return true;
> +		}
> +	} else {
> +		/* This is the first OPEN */
>  		if (stateid->seqid == cpu_to_be32(1))
>  			nfs_state_log_update_open_stateid(state);
>  		else
>  			set_bit(NFS_STATE_CHANGE_WAIT, &state->flags);

Isn't this going to cause a reopen of the file on the client if it ends
up processing the reply to the second OPEN after it processes the
successful CLOSE?

Isn't the real problem here rather that the reply to CLOSE needs to be
processed in order too?

>  		return true;
>  	}
> -
> -	if (nfs4_stateid_is_newer(stateid, &state->open_stateid)) {
> -		nfs_state_log_out_of_order_open_stateid(state, stateid);
> -		return true;
> -	}
>  	return false;
>  }
>  
-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@hammerspace.com




* Re: [PATCH 1/2 v3] NFSv4: Fix a livelock when CLOSE pre-emptively bumps state sequence
From: Benjamin Coddington @ 2020-09-23 19:29 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: anna.schumaker, linux-nfs

On 23 Sep 2020, at 14:53, Trond Myklebust wrote:

> On Wed, 2020-09-23 at 13:37 -0400, Benjamin Coddington wrote:
>> Since commit 0e0cb35b417f ("NFSv4: Handle NFS4ERR_OLD_STATEID in
>> CLOSE/OPEN_DOWNGRADE") the following livelock may occur if a CLOSE
>> races with the update of the nfs_state:
>>
>> Process 1           Process 2           Server
>> =========           =========           ========
>>  OPEN file
>>                     OPEN file
>>                                         Reply OPEN (1)
>>                                         Reply OPEN (2)
>>  Update state (1)
>>  CLOSE file (1)
>>                                         Reply OLD_STATEID (1)
>>  CLOSE file (2)
>>                                         Reply CLOSE (-1)
>>                     Update state (2)
>>                     wait for state change
>>  OPEN file
>>                     wake
>>  CLOSE file
>>  OPEN file
>>                     wake
>>  CLOSE file
>>  ...
>>                     ...
>>
>> As long as the first process continues updating state, the second
>> process will fail to exit the loop in nfs_set_open_stateid_locked().
>> This livelock has been observed in generic/168.
>>
>> Fix this by detecting the case in nfs_need_update_open_stateid() and
>> exiting the loop if:
>>  - the state is NFS_OPEN_STATE, and
>>  - the stateid sequence is > 1, and
>>  - the stateid doesn't match the current open stateid
>>
>> Fixes: 0e0cb35b417f ("NFSv4: Handle NFS4ERR_OLD_STATEID in
>> CLOSE/OPEN_DOWNGRADE")
>> Cc: stable@vger.kernel.org # v5.4+
>> Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
>> ---
>>  fs/nfs/nfs4proc.c | 16 +++++++++-------
>>  1 file changed, 9 insertions(+), 7 deletions(-)
>>
>> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
>> index 6e95c85fe395..8c2bb91127ee 100644
>> --- a/fs/nfs/nfs4proc.c
>> +++ b/fs/nfs/nfs4proc.c
>> @@ -1588,19 +1588,21 @@ static void nfs_test_and_clear_all_open_stateid(struct nfs4_state *state)
>>  static bool nfs_need_update_open_stateid(struct nfs4_state *state,
>>  		const nfs4_stateid *stateid)
>>  {
>> -	if (test_bit(NFS_OPEN_STATE, &state->flags) == 0 ||
>> -	    !nfs4_stateid_match_other(stateid, &state->open_stateid)) {
>> +	if (test_bit(NFS_OPEN_STATE, &state->flags)) {
>> +		/* The common case - we're updating to a new sequence number */
>> +		if (nfs4_stateid_match_other(stateid, &state->open_stateid) &&
>> +			nfs4_stateid_is_newer(stateid, &state->open_stateid)) {
>> +			nfs_state_log_out_of_order_open_stateid(state, stateid);
>> +			return true;
>> +		}
>> +	} else {
>> +		/* This is the first OPEN */
>>  		if (stateid->seqid == cpu_to_be32(1))
>>  			nfs_state_log_update_open_stateid(state);
>>  		else
>>  			set_bit(NFS_STATE_CHANGE_WAIT, &state->flags);
>
> Isn't this going to cause a reopen of the file on the client if it ends
> up processing the reply to the second OPEN after it processes the
> successful CLOSE?

Yes, that's true - but that's a different bug that I haven't noticed or
considered.  This patch isn't introducing it.

> Isn't the real problem here rather that the reply to CLOSE needs to be
> processed in order too?

Not just the reply, the actual request as well.  If we have a way to
properly serialize processing of CLOSE responses, we could just not send
the CLOSE in the first place.

I'd rather not send the CLOSE if there's another OPEN in play, and if that's
the barrier to getting this particular bug fixed, I'll work on that.  What
mechanism can be used?  What if the client kept a separate "pending" stateid
that could be updated before each operation that would attempt to predict
what the server's resulting change would be?

Maybe better would be two counters: one incremented on each transition
to/from NFS_OPEN_STATE, so we can check whether a stateid belongs to the
current generation, and another for outstanding operations that are
expected to bump the sequence.
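
A very rough sketch of that idea (entirely hypothetical -- neither the
fields nor the helper below exist today):

	/* Hypothetical bookkeeping hung off nfs4_state. */
	struct nfs4_state_seq_hints {
		u32	open_generation;	/* bumped on each transition
						 * to/from NFS_OPEN_STATE */
		atomic_t pending_bumps;		/* in-flight ops expected to
						 * bump the seqid */
	};

	/* Keep waiting only while the stateid belongs to the current
	 * generation and an outstanding op can still bump the sequence. */
	static bool nfs4_worth_waiting(const struct nfs4_state_seq_hints *h,
				       u32 gen)
	{
		return gen == READ_ONCE(h->open_generation) &&
		       atomic_read(&h->pending_bumps) > 0;
	}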



* Re: [PATCH 1/2 v3] NFSv4: Fix a livelock when CLOSE pre-emptively bumps state sequence
From: Trond Myklebust @ 2020-09-23 19:39 UTC (permalink / raw)
  To: bcodding; +Cc: linux-nfs, anna.schumaker

On Wed, 2020-09-23 at 15:29 -0400, Benjamin Coddington wrote:
> On 23 Sep 2020, at 14:53, Trond Myklebust wrote:
> 
> > On Wed, 2020-09-23 at 13:37 -0400, Benjamin Coddington wrote:
> > > Since commit 0e0cb35b417f ("NFSv4: Handle NFS4ERR_OLD_STATEID in
> > > CLOSE/OPEN_DOWNGRADE") the following livelock may occur if a
> > > CLOSE
> > > races
> > > with the update of the nfs_state:
> > > 
> > > Process 1           Process 2           Server
> > > =========           =========           ========
> > >  OPEN file
> > >                     OPEN file
> > >                                         Reply OPEN (1)
> > >                                         Reply OPEN (2)
> > >  Update state (1)
> > >  CLOSE file (1)
> > >                                         Reply OLD_STATEID (1)
> > >  CLOSE file (2)
> > >                                         Reply CLOSE (-1)
> > >                     Update state (2)
> > >                     wait for state change
> > >  OPEN file
> > >                     wake
> > >  CLOSE file
> > >  OPEN file
> > >                     wake
> > >  CLOSE file
> > >  ...
> > >                     ...
> > > 
> > > As long as the first process continues updating state, the second
> > > process will fail to exit the loop in nfs_set_open_stateid_locked().
> > > This livelock has been observed in generic/168.
> > > 
> > > Fix this by detecting the case in nfs_need_update_open_stateid()
> > > and exiting the loop if:
> > >  - the state is NFS_OPEN_STATE, and
> > >  - the stateid sequence is > 1, and
> > >  - the stateid doesn't match the current open stateid
> > > 
> > > Fixes: 0e0cb35b417f ("NFSv4: Handle NFS4ERR_OLD_STATEID in
> > > CLOSE/OPEN_DOWNGRADE")
> > > Cc: stable@vger.kernel.org # v5.4+
> > > Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
> > > ---
> > >  fs/nfs/nfs4proc.c | 16 +++++++++-------
> > >  1 file changed, 9 insertions(+), 7 deletions(-)
> > > 
> > > diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
> > > index 6e95c85fe395..8c2bb91127ee 100644
> > > --- a/fs/nfs/nfs4proc.c
> > > +++ b/fs/nfs/nfs4proc.c
> > > @@ -1588,19 +1588,21 @@ static void nfs_test_and_clear_all_open_stateid(struct nfs4_state *state)
> > >  static bool nfs_need_update_open_stateid(struct nfs4_state *state,
> > >  		const nfs4_stateid *stateid)
> > >  {
> > > -	if (test_bit(NFS_OPEN_STATE, &state->flags) == 0 ||
> > > -	    !nfs4_stateid_match_other(stateid, &state->open_stateid)) {
> > > +	if (test_bit(NFS_OPEN_STATE, &state->flags)) {
> > > +		/* The common case - we're updating to a new sequence number */
> > > +		if (nfs4_stateid_match_other(stateid, &state->open_stateid) &&
> > > +			nfs4_stateid_is_newer(stateid, &state->open_stateid)) {
> > > +			nfs_state_log_out_of_order_open_stateid(state, stateid);
> > > +			return true;
> > > +		}
> > > +	} else {
> > > +		/* This is the first OPEN */
> > >  		if (stateid->seqid == cpu_to_be32(1))
> > >  			nfs_state_log_update_open_stateid(state);
> > >  		else
> > >  			set_bit(NFS_STATE_CHANGE_WAIT, &state->flags);
> > 
> > Isn't this going to cause a reopen of the file on the client if it ends
> > up processing the reply to the second OPEN after it processes the
> > successful CLOSE?
> 
> Yes, that's true - but that's a different bug that I haven't noticed
> or considered.  This patch isn't introducing it.
> 
> > Isn't the real problem here rather that the reply to CLOSE needs to
> > be processed in order too?
> 
> Not just the reply, the actual request as well.  If we have a way to
> properly serialize processing of CLOSE responses, we could just not send
> the CLOSE in the first place.
> 
> I'd rather not send the CLOSE if there's another OPEN in play, and if
> that's the barrier to getting this particular bug fixed, I'll work on
> that.  What mechanism can be used?  What if the client kept a separate
> "pending" stateid that could be updated before each operation that
> would attempt to predict what the server's resulting change would be?
> 
> Maybe better would be two counters: one incremented on each transition
> to/from NFS_OPEN_STATE, so we can check whether a stateid belongs to
> the current generation, and another for outstanding operations that
> are expected to bump the sequence.
> 

The client can't predict what is going to happen w.r.t. an OPEN call.
If it does an open by name, it doesn't even know which file is going to
get opened. That's why we have the wait loop
in nfs_set_open_stateid_locked(). Why should we not do the same in
CLOSE and OPEN_DOWNGRADE? It's the same problem.
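
Something along these lines (entirely hypothetical sketch --
nfs4_wait_for_stateid_update() does not exist, it stands in for the
same kind of wait loop nfs_set_open_stateid_locked() already runs for
OPEN replies):

	static void nfs_close_apply_stateid_ordered(struct nfs4_state *state,
					const nfs4_stateid *res_stateid)
	{
		spin_lock(&state->owner->so_lock);
		/* Don't apply this CLOSE result until its seqid is
		 * sequential with the current open stateid. */
		while (!nfs_stateid_is_sequential(state, res_stateid))
			/* hypothetical: sleeps, dropping so_lock */
			nfs4_wait_for_stateid_update(state);
		/* ... then clear/downgrade the open state as today ... */
		spin_unlock(&state->owner->so_lock);
	}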

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@hammerspace.com




* Re: [PATCH 1/2 v3] NFSv4: Fix a livelock when CLOSE pre-emptively bumps state sequence
From: Benjamin Coddington @ 2020-09-23 19:46 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: linux-nfs, anna.schumaker

On 23 Sep 2020, at 15:39, Trond Myklebust wrote:
> The client can't predict what is going to happen w.r.t. an OPEN call.
> If it does an open by name, it doesn't even know which file is going to
> get opened. That's why we have the wait loop
> in nfs_set_open_stateid_locked(). Why should we not do the same in
> CLOSE and OPEN_DOWNGRADE? It's the same problem.

I will give it a shot.  In the meantime, please consider adding this patch
which fixes a real bug today.  Thank you for your excellent advice and time.

Ben



* Re: [PATCH 1/2 v3] NFSv4: Fix a livelock when CLOSE pre-emptively bumps state sequence
From: Trond Myklebust @ 2020-09-23 19:55 UTC (permalink / raw)
  To: bcodding; +Cc: linux-nfs, anna.schumaker

On Wed, 2020-09-23 at 15:46 -0400, Benjamin Coddington wrote:
> On 23 Sep 2020, at 15:39, Trond Myklebust wrote:
> > The client can't predict what is going to happen w.r.t. an OPEN call.
> > If it does an open by name, it doesn't even know which file is going
> > to get opened. That's why we have the wait loop
> > in nfs_set_open_stateid_locked(). Why should we not do the same in
> > CLOSE and OPEN_DOWNGRADE? It's the same problem.
> 
> I will give it a shot.  In the meantime, please consider adding this
> patch which fixes a real bug today.  Thank you for your excellent
> advice and time.
> 

I don't think we should take that patch, and certainly not as a stable
patch. I'd prefer to wait for the real fix.

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@hammerspace.com



