* xfs_buftarg_isolate(): "Correctly invert xfs_buftarg LRU isolation logic"
@ 2019-10-20 14:54 Alex Lyakas
  2019-10-21 12:47 ` Brian Foster
  0 siblings, 1 reply; 5+ messages in thread
From: Alex Lyakas @ 2019-10-20 14:54 UTC (permalink / raw)
  To: vbendel, bfoster; +Cc: linux-xfs

Hello Vratislav, Brian,

This is with regards to commit "xfs: Correctly invert xfs_buftarg LRU 
isolation logic" [1].

I am hitting this issue in kernel 4.14. However, after some debugging, I do 
not fully agree with the commit message's description of the effect of this 
defect.

If b_lru_ref > 1, then indeed this xfs_buf will be taken off the LRU 
list and immediately added back to it, with b_lru_ref now decremented by 1.

If b_lru_ref==1, then this xfs_buf will similarly be isolated (due to the 
bug), and xfs_buf_rele() will be called on it. But by then its b_lru_ref==0, 
so xfs_buf_rele() will free the buffer rather than re-adding it to the LRU. 
This is a problem, because we intended for this buffer to have another trip 
around the LRU. Only when b_lru_ref==0 upon entry to xfs_buftarg_isolate() 
do we want to free the buffer. So in this case we are freeing the buffer one 
trip too early.

If b_lru_ref==0 (somehow), then due to the bug this xfs_buf will not be 
removed from the LRU. It will remain sitting on the LRU with b_lru_ref==0. 
On the next shrinker call, this xfs_buf will again remain on the LRU, due to 
the same bug. So this xfs_buf will be freed only on unmount or if 
xfs_buf_stale() is called on it.
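
For reference, here is roughly what the pre-fix isolate callback looks like 
in 4.14. This is a hand-simplified sketch from my reading of 
fs/xfs/xfs_buf.c, not a verbatim quote, so details may differ slightly 
between kernel versions:

static enum lru_status
xfs_buftarg_isolate(
        struct list_head        *item,
        struct list_lru_one     *lru,
        spinlock_t              *lru_lock,
        void                    *arg)
{
        struct xfs_buf          *bp = container_of(item, struct xfs_buf, b_lru);
        struct list_head        *dispose = arg;

        if (!spin_trylock(&bp->b_lock))
                return LRU_SKIP;

        /*
         * Pre-fix check with the inverted '!': a buffer with non-zero
         * b_lru_ref is decremented and then falls through to be isolated,
         * while a buffer with b_lru_ref == 0, which should be reclaimed,
         * is rotated and stays on the LRU.
         */
        if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
                spin_unlock(&bp->b_lock);
                return LRU_ROTATE;
        }

        /* isolated: queued for disposal; xfs_buf_rele() is called on it later */
        bp->b_state |= XFS_BSTATE_DISPOSE;
        list_lru_isolate_move(lru, item, dispose);
        spin_unlock(&bp->b_lock);
        return LRU_REMOVED;
}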

Do you agree with the above?

If so, I think this fix should be backported to stable kernels.

Thanks,
Alex.

[1]
commit 19957a181608d25c8f4136652d0ea00b3738972d
Author: Vratislav Bendel <vbendel@redhat.com>
Date:   Tue Mar 6 17:07:44 2018 -0800

    xfs: Correctly invert xfs_buftarg LRU isolation logic

    Due to an inverted logic mistake in xfs_buftarg_isolate()
    the xfs_buffers with zero b_lru_ref will take another trip
    around LRU, while isolating buffers with non-zero b_lru_ref.

    Additionally those isolated buffers end up right back on the LRU
    once they are released, because b_lru_ref remains elevated.

    Fix that circuitous route by leaving them on the LRU
    as originally intended.

    Signed-off-by: Vratislav Bendel <vbendel@redhat.com>
    Reviewed-by: Brian Foster <bfoster@redhat.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> 
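
For completeness, my reading is that the fix amounts to the one-line 
condition change below in xfs_buftarg_isolate(). This hunk is reconstructed 
from my understanding of the change, not copied verbatim from the upstream 
patch:

-       if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
+       if (atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
                spin_unlock(&bp->b_lock);
                return LRU_ROTATE;
        }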



* Re: xfs_buftarg_isolate(): "Correctly invert xfs_buftarg LRU isolation logic"
  2019-10-20 14:54 xfs_buftarg_isolate(): "Correctly invert xfs_buftarg LRU isolation logic" Alex Lyakas
@ 2019-10-21 12:47 ` Brian Foster
  2019-10-22  6:49   ` Alex Lyakas
  0 siblings, 1 reply; 5+ messages in thread
From: Brian Foster @ 2019-10-21 12:47 UTC (permalink / raw)
  To: Alex Lyakas; +Cc: vbendel, linux-xfs

On Sun, Oct 20, 2019 at 05:54:03PM +0300, Alex Lyakas wrote:
> Hello Vratislav, Brian,
> 
> This is with regards to commit "xfs: Correctly invert xfs_buftarg LRU
> isolation logic" [1].
> 
> I am hitting this issue in kernel 4.14. However, after some debugging, I do
> not fully agree with the commit message, describing the effect of this
> defect.
> 
> In case b_lru_ref > 1, then indeed this xfs_buf will be taken off the LRU
> list, and immediately added back to it, with b_lru_ref being lesser by 1
> now.
> 
> In case b_lru_ref==1, then this xfs_buf will be similarly isolated (due to a
> bug), and xfs_buf_rele() will be called on it. But now its b_lru_ref==0. In
> this case, xfs_buf_rele() will free the buffer, rather than re-adding it
> back to the LRU. This is a problem, because we intended for this buffer to
> have another trip on the LRU. Only when b_lru_ref==0 upon entry to
> xfs_buftarg_isolate(), we want to free the buffer. So we are freeing the
> buffer one trip too early in this case.
> 
> In case b_lru_ref==0 (somehow), then due to a bug, this xfs_buf will not be
> removed off the LRU. It will remain sitting in the LRU with b_lru_ref==0. On
> next shrinker call, this xfs_buff will also remain on the LRU, due to the
> same bug. So this xfs_buf will be freed only on unmount or if
> xfs_buf_stale() is called on it.
> 
> Do you agree with the above?
> 

I'm not really sure how you're inferring which cases free the buffer and
which don't based on ->b_lru_ref. A buffer is freed when its reference
count (->b_hold) drops to zero unless ->b_lru_ref is non-zero, at which
point the buffer goes on the LRU and the LRU itself takes a ->b_hold
reference to keep the buffer from being freed. This reference is not
associated with how many cycles through the LRU a buffer is allowed
before it is dropped from the LRU, which is what ->b_lru_ref tracks.

Since the LRU holds a (single) reference to the buffer just like any
other user of the buffer, the LRU itself makes no direct decision about
when to free a buffer. In other words, the bug fixed by this patch is
related to when we decide to drop the buffer from the LRU based on the
LRU ref count. If the LRU ->b_hold reference happens to be the last on
the buffer when it is dropped from the LRU, then the buffer is freed.
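
For illustration, the release path looks roughly like this (a heavily 
simplified sketch from memory, not the literal code; stale buffer handling, 
uncached buffers and the b_lock details are omitted):

        /* xfs_buf_rele(), simplified */
        if (!atomic_dec_and_lock(&bp->b_hold, &pag->pag_buf_lock))
                return;         /* other ->b_hold references remain */

        if (atomic_read(&bp->b_lru_ref)) {
                /*
                 * Last "user" reference dropped but ->b_lru_ref is still
                 * non-zero: (re)add the buffer to the LRU and let the LRU
                 * take its own ->b_hold reference so the buffer survives.
                 */
                if (list_lru_add(&bp->b_target->bt_lru, &bp->b_lru)) {
                        bp->b_state &= ~XFS_BSTATE_DISPOSE;
                        atomic_inc(&bp->b_hold);
                }
                spin_unlock(&pag->pag_buf_lock);
        } else {
                /* not wanted on the LRU: drop it from the LRU and free it */
                if (!(bp->b_state & XFS_BSTATE_DISPOSE))
                        list_lru_del(&bp->b_target->bt_lru, &bp->b_lru);
                spin_unlock(&pag->pag_buf_lock);
                xfs_buf_free(bp);
        }

So whether a buffer is ultimately freed depends on ->b_hold; ->b_lru_ref
only controls whether the LRU keeps its hold on the buffer.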

> If so, I think this fix should be backported to stable kernels.
> 

Seems reasonable to me. Feel free to post a patch. :)

Brian

> Thanks,
> Alex.
> 
> [1]
> commit 19957a181608d25c8f4136652d0ea00b3738972d
> Author: Vratislav Bendel <vbendel@redhat.com>
> Date:   Tue Mar 6 17:07:44 2018 -0800
> 
>    xfs: Correctly invert xfs_buftarg LRU isolation logic
> 
>    Due to an inverted logic mistake in xfs_buftarg_isolate()
>    the xfs_buffers with zero b_lru_ref will take another trip
>    around LRU, while isolating buffers with non-zero b_lru_ref.
> 
>    Additionally those isolated buffers end up right back on the LRU
>    once they are released, because b_lru_ref remains elevated.
> 
>    Fix that circuitous route by leaving them on the LRU
>    as originally intended.
> 
>    Signed-off-by: Vratislav Bendel <vbendel@redhat.com>
>    Reviewed-by: Brian Foster <bfoster@redhat.com>
>    Reviewed-by: Christoph Hellwig <hch@lst.de>
>    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
>    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> 



* Re: xfs_buftarg_isolate(): "Correctly invert xfs_buftarg LRU isolation logic"
  2019-10-21 12:47 ` Brian Foster
@ 2019-10-22  6:49   ` Alex Lyakas
  2019-10-22 11:06     ` Brian Foster
  0 siblings, 1 reply; 5+ messages in thread
From: Alex Lyakas @ 2019-10-22  6:49 UTC (permalink / raw)
  To: Brian Foster; +Cc: vbendel, linux-xfs

Hi Brian,

Thank you for your response.

On Mon, Oct 21, 2019 at 3:47 PM Brian Foster <bfoster@redhat.com> wrote:
>
> On Sun, Oct 20, 2019 at 05:54:03PM +0300, Alex Lyakas wrote:
> > Hello Vratislav, Brian,
> >
> > This is with regards to commit "xfs: Correctly invert xfs_buftarg LRU
> > isolation logic" [1].
> >
> > I am hitting this issue in kernel 4.14. However, after some debugging, I do
> > not fully agree with the commit message, describing the effect of this
> > defect.
> >
> > In case b_lru_ref > 1, then indeed this xfs_buf will be taken off the LRU
> > list, and immediately added back to it, with b_lru_ref being lesser by 1
> > now.
> >
> > In case b_lru_ref==1, then this xfs_buf will be similarly isolated (due to a
> > bug), and xfs_buf_rele() will be called on it. But now its b_lru_ref==0. In
> > this case, xfs_buf_rele() will free the buffer, rather than re-adding it
> > back to the LRU. This is a problem, because we intended for this buffer to
> > have another trip on the LRU. Only when b_lru_ref==0 upon entry to
> > xfs_buftarg_isolate(), we want to free the buffer. So we are freeing the
> > buffer one trip too early in this case.
> >
> > In case b_lru_ref==0 (somehow), then due to a bug, this xfs_buf will not be
> > removed off the LRU. It will remain sitting in the LRU with b_lru_ref==0. On
> > next shrinker call, this xfs_buff will also remain on the LRU, due to the
> > same bug. So this xfs_buf will be freed only on unmount or if
> > xfs_buf_stale() is called on it.
> >
> > Do you agree with the above?
> >
>
> I'm not really sure how you're inferring what cases free the buffer vs.
> what don't based on ->b_lru_ref. A buffer is freed when its reference
> count (->b_hold) drops to zero unless ->b_lru_ref is non-zero, at which
> point the buffer goes on the LRU and the LRU itself takes a ->b_hold
> reference to keep the buffer from being freed. This reference is not
> associated with how many cycles through the LRU a buffer is allowed
> before it is dropped from the LRU, which is what ->b_lru_ref tracks.
>
> Since the LRU holds a (single) reference to the buffer just like any
> other user of the buffer, it doesn't make any direct decision on when to
> free a buffer or not. In other words, the bug fixed by this patch is
> related to when we decide to drop the buffer from the LRU based on the
> LRU ref count. If the LRU ->b_hold reference happens to be the last on
> the buffer when it is dropped from the LRU, then the buffer is freed.

I apologize for the confusion. By "freed" I really meant "taken off
the LRU". I am aware that b_hold controls whether the buffer is
actually freed.

What I meant is that the commit description does not describe the issue
accurately. From the description one could conclude that the only
problem is the additional trip through the LRU.
But in fact, due to this issue, buffers spend one cycle less on the LRU
than intended. If we initialize b_lru_ref to X, we intend the buffer to
survive X shrinker calls and to be taken off the LRU (and maybe freed)
on the (X+1)th call. With this issue, the buffer is instead taken off
the LRU and immediately re-added, but that happens only X-1 times,
because on the Xth pass b_lru_ref reaches 0 and the buffer is not
re-added to the LRU. So the buffer survives X-1 shrinker calls rather
than X as intended.
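
To make this concrete, take X = 2, i.e., b_lru_ref initialized to 2. With
the buggy logic the sequence is:

    shrinker pass 1: b_lru_ref 2 -> 1, buffer isolated, then re-added to
                     the LRU by xfs_buf_rele()
    shrinker pass 2: b_lru_ref 1 -> 0, buffer isolated and NOT re-added,
                     so the LRU's hold on it is dropped

whereas the intended behavior is:

    shrinker pass 1: b_lru_ref 2 -> 1, buffer rotated (stays on the LRU)
    shrinker pass 2: b_lru_ref 1 -> 0, buffer rotated (stays on the LRU)
    shrinker pass 3: b_lru_ref == 0, buffer isolated and disposed

So the buffer survives only one shrinker pass instead of two.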

Furthermore, if somehow we end up with a buffer sitting on the LRU with
b_lru_ref==0, that buffer will never be taken off the LRU, due to the
bug. I am not sure this can happen, because by default b_lru_ref is set
to 1.


>
> > If so, I think this fix should be backported to stable kernels.
> >
>
> Seems reasonable to me. Feel free to post a patch. :)
Will do.

Thanks,
Alex.


>
> Brian
>
> > Thanks,
> > Alex.
> >
> > [1]
> > commit 19957a181608d25c8f4136652d0ea00b3738972d
> > Author: Vratislav Bendel <vbendel@redhat.com>
> > Date:   Tue Mar 6 17:07:44 2018 -0800
> >
> >    xfs: Correctly invert xfs_buftarg LRU isolation logic
> >
> >    Due to an inverted logic mistake in xfs_buftarg_isolate()
> >    the xfs_buffers with zero b_lru_ref will take another trip
> >    around LRU, while isolating buffers with non-zero b_lru_ref.
> >
> >    Additionally those isolated buffers end up right back on the LRU
> >    once they are released, because b_lru_ref remains elevated.
> >
> >    Fix that circuitous route by leaving them on the LRU
> >    as originally intended.
> >
> >    Signed-off-by: Vratislav Bendel <vbendel@redhat.com>
> >    Reviewed-by: Brian Foster <bfoster@redhat.com>
> >    Reviewed-by: Christoph Hellwig <hch@lst.de>
> >    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
> >    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> >
>


* Re: xfs_buftarg_isolate(): "Correctly invert xfs_buftarg LRU isolation logic"
  2019-10-22  6:49   ` Alex Lyakas
@ 2019-10-22 11:06     ` Brian Foster
  2019-10-31 15:36       ` Alex Lyakas
  0 siblings, 1 reply; 5+ messages in thread
From: Brian Foster @ 2019-10-22 11:06 UTC (permalink / raw)
  To: Alex Lyakas; +Cc: vbendel, linux-xfs

On Tue, Oct 22, 2019 at 09:49:56AM +0300, Alex Lyakas wrote:
> Hi Brian,
> 
> Thank you for your response.
> 
> On Mon, Oct 21, 2019 at 3:47 PM Brian Foster <bfoster@redhat.com> wrote:
> >
> > On Sun, Oct 20, 2019 at 05:54:03PM +0300, Alex Lyakas wrote:
> > > Hello Vratislav, Brian,
> > >
> > > This is with regards to commit "xfs: Correctly invert xfs_buftarg LRU
> > > isolation logic" [1].
> > >
> > > I am hitting this issue in kernel 4.14. However, after some debugging, I do
> > > not fully agree with the commit message, describing the effect of this
> > > defect.
> > >
> > > In case b_lru_ref > 1, then indeed this xfs_buf will be taken off the LRU
> > > list, and immediately added back to it, with b_lru_ref being lesser by 1
> > > now.
> > >
> > > In case b_lru_ref==1, then this xfs_buf will be similarly isolated (due to a
> > > bug), and xfs_buf_rele() will be called on it. But now its b_lru_ref==0. In
> > > this case, xfs_buf_rele() will free the buffer, rather than re-adding it
> > > back to the LRU. This is a problem, because we intended for this buffer to
> > > have another trip on the LRU. Only when b_lru_ref==0 upon entry to
> > > xfs_buftarg_isolate(), we want to free the buffer. So we are freeing the
> > > buffer one trip too early in this case.
> > >
> > > In case b_lru_ref==0 (somehow), then due to a bug, this xfs_buf will not be
> > > removed off the LRU. It will remain sitting in the LRU with b_lru_ref==0. On
> > > next shrinker call, this xfs_buff will also remain on the LRU, due to the
> > > same bug. So this xfs_buf will be freed only on unmount or if
> > > xfs_buf_stale() is called on it.
> > >
> > > Do you agree with the above?
> > >
> >
> > I'm not really sure how you're inferring what cases free the buffer vs.
> > what don't based on ->b_lru_ref. A buffer is freed when its reference
> > count (->b_hold) drops to zero unless ->b_lru_ref is non-zero, at which
> > point the buffer goes on the LRU and the LRU itself takes a ->b_hold
> > reference to keep the buffer from being freed. This reference is not
> > associated with how many cycles through the LRU a buffer is allowed
> > before it is dropped from the LRU, which is what ->b_lru_ref tracks.
> >
> > Since the LRU holds a (single) reference to the buffer just like any
> > other user of the buffer, it doesn't make any direct decision on when to
> > free a buffer or not. In other words, the bug fixed by this patch is
> > related to when we decide to drop the buffer from the LRU based on the
> > LRU ref count. If the LRU ->b_hold reference happens to be the last on
> > the buffer when it is dropped from the LRU, then the buffer is freed.
> 
> I apologize for the confusion. By "freed" I really meant "taken off
> the LRU". I am aware of the fact that b_hold is controlling whether
> the buffer will be freed or not.
> 

Ok.

> What I meant is that the commit description does not address the issue
> accurately. From the description one can understand that the only
> problem is the additional trip through the LRU.
> But in fact, due to this issue, buffers will spend one cycle less in
> the LRU than intended. If we initialize b_lru_ref to X, we intend the
> buffer to survive X shrinker calls, and on the X+1'th call to be taken
> off the LRU (and maybe freed). But with this issue, the buffer will be
> taken off the LRU and immediately re-added back. But this will happen
> X-1 times, because on the X'th time the b_lru_ref will be 0, and the
> buffer will not be readded to the LRU. So the buffer will survive X-1
> shrinker calls and not X as intended.
> 

That sounds like a reasonable description of behavior to me. Personally,
I also think the commit log description is fine because I read it
generically as just pointing out the logic is backwards. Your
description above is more detailed and probably more technically
accurate, but I'm not sure if/why it matters at this point for anything
beyond just attempting to understand a historical bug (for stable
backport purposes?).

Brian

> Furthermore, if somehow we end up with the buffer sitting on the LRU
> and having b_lru_ref==0, this buffer will never be taken off the LRU,
> due to the bug. I am not sure this can happen, because by default
> b_lru_ref is set to 1.
> 
> 
> >
> > > If so, I think this fix should be backported to stable kernels.
> > >
> >
> > Seems reasonable to me. Feel free to post a patch. :)
> Will do.
> 
> Thanks,
> Alex.
> 
> 
> >
> > Brian
> >
> > > Thanks,
> > > Alex.
> > >
> > > [1]
> > > commit 19957a181608d25c8f4136652d0ea00b3738972d
> > > Author: Vratislav Bendel <vbendel@redhat.com>
> > > Date:   Tue Mar 6 17:07:44 2018 -0800
> > >
> > >    xfs: Correctly invert xfs_buftarg LRU isolation logic
> > >
> > >    Due to an inverted logic mistake in xfs_buftarg_isolate()
> > >    the xfs_buffers with zero b_lru_ref will take another trip
> > >    around LRU, while isolating buffers with non-zero b_lru_ref.
> > >
> > >    Additionally those isolated buffers end up right back on the LRU
> > >    once they are released, because b_lru_ref remains elevated.
> > >
> > >    Fix that circuitous route by leaving them on the LRU
> > >    as originally intended.
> > >
> > >    Signed-off-by: Vratislav Bendel <vbendel@redhat.com>
> > >    Reviewed-by: Brian Foster <bfoster@redhat.com>
> > >    Reviewed-by: Christoph Hellwig <hch@lst.de>
> > >    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
> > >    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> > >
> >



* Re: xfs_buftarg_isolate(): "Correctly invert xfs_buftarg LRU isolation logic"
  2019-10-22 11:06     ` Brian Foster
@ 2019-10-31 15:36       ` Alex Lyakas
  0 siblings, 0 replies; 5+ messages in thread
From: Alex Lyakas @ 2019-10-31 15:36 UTC (permalink / raw)
  To: Brian Foster; +Cc: vbendel, linux-xfs

Hi Brian,

On Tue, Oct 22, 2019 at 2:06 PM Brian Foster <bfoster@redhat.com> wrote:
>
> On Tue, Oct 22, 2019 at 09:49:56AM +0300, Alex Lyakas wrote:
> > Hi Brian,
> >
> > Thank you for your response.
> >
> > On Mon, Oct 21, 2019 at 3:47 PM Brian Foster <bfoster@redhat.com> wrote:
> > >
> > > On Sun, Oct 20, 2019 at 05:54:03PM +0300, Alex Lyakas wrote:
> > > > Hello Vratislav, Brian,
> > > >
> > > > This is with regards to commit "xfs: Correctly invert xfs_buftarg LRU
> > > > isolation logic" [1].
> > > >
> > > > I am hitting this issue in kernel 4.14. However, after some debugging, I do
> > > > not fully agree with the commit message, describing the effect of this
> > > > defect.
> > > >
> > > > In case b_lru_ref > 1, then indeed this xfs_buf will be taken off the LRU
> > > > list, and immediately added back to it, with b_lru_ref being lesser by 1
> > > > now.
> > > >
> > > > In case b_lru_ref==1, then this xfs_buf will be similarly isolated (due to a
> > > > bug), and xfs_buf_rele() will be called on it. But now its b_lru_ref==0. In
> > > > this case, xfs_buf_rele() will free the buffer, rather than re-adding it
> > > > back to the LRU. This is a problem, because we intended for this buffer to
> > > > have another trip on the LRU. Only when b_lru_ref==0 upon entry to
> > > > xfs_buftarg_isolate(), we want to free the buffer. So we are freeing the
> > > > buffer one trip too early in this case.
> > > >
> > > > In case b_lru_ref==0 (somehow), then due to a bug, this xfs_buf will not be
> > > > removed off the LRU. It will remain sitting in the LRU with b_lru_ref==0. On
> > > > next shrinker call, this xfs_buff will also remain on the LRU, due to the
> > > > same bug. So this xfs_buf will be freed only on unmount or if
> > > > xfs_buf_stale() is called on it.
> > > >
> > > > Do you agree with the above?
> > > >
> > >
> > > I'm not really sure how you're inferring what cases free the buffer vs.
> > > what don't based on ->b_lru_ref. A buffer is freed when its reference
> > > count (->b_hold) drops to zero unless ->b_lru_ref is non-zero, at which
> > > point the buffer goes on the LRU and the LRU itself takes a ->b_hold
> > > reference to keep the buffer from being freed. This reference is not
> > > associated with how many cycles through the LRU a buffer is allowed
> > > before it is dropped from the LRU, which is what ->b_lru_ref tracks.
> > >
> > > Since the LRU holds a (single) reference to the buffer just like any
> > > other user of the buffer, it doesn't make any direct decision on when to
> > > free a buffer or not. In other words, the bug fixed by this patch is
> > > related to when we decide to drop the buffer from the LRU based on the
> > > LRU ref count. If the LRU ->b_hold reference happens to be the last on
> > > the buffer when it is dropped from the LRU, then the buffer is freed.
> >
> > I apologize for the confusion. By "freed" I really meant "taken off
> > the LRU". I am aware of the fact that b_hold is controlling whether
> > the buffer will be freed or not.
> >
>
> Ok.
>
> > What I meant is that the commit description does not address the issue
> > accurately. From the description one can understand that the only
> > problem is the additional trip through the LRU.
> > But in fact, due to this issue, buffers will spend one cycle less in
> > the LRU than intended. If we initialize b_lru_ref to X, we intend the
> > buffer to survive X shrinker calls, and on the X+1'th call to be taken
> > off the LRU (and maybe freed). But with this issue, the buffer will be
> > taken off the LRU and immediately re-added back. But this will happen
> > X-1 times, because on the X'th time the b_lru_ref will be 0, and the
> > buffer will not be readded to the LRU. So the buffer will survive X-1
> > shrinker calls and not X as intended.
> >
>
> That sounds like a reasonable description of behavior to me. Personally,
> I also think the commit log description is fine because I read it
> generically as just pointing out the logic is backwards. Your
> description above is more detailed and probably more technically
> accurate, but I'm not sure if/why it matters at this point for anything
> beyond just attempting to understand a historical bug (for stable
> backport purposes?).
Mostly I wanted to confirm my understanding. I have sent the stable
patch with this fix.

Thanks,
Alex.



>
> Brian
>
> > Furthermore, if somehow we end up with the buffer sitting on the LRU
> > and having b_lru_ref==0, this buffer will never be taken off the LRU,
> > due to the bug. I am not sure this can happen, because by default
> > b_lru_ref is set to 1.
> >
> >
> > >
> > > > If so, I think this fix should be backported to stable kernels.
> > > >
> > >
> > > Seems reasonable to me. Feel free to post a patch. :)
> > Will do.
> >
> > Thanks,
> > Alex.
> >
> >
> > >
> > > Brian
> > >
> > > > Thanks,
> > > > Alex.
> > > >
> > > > [1]
> > > > commit 19957a181608d25c8f4136652d0ea00b3738972d
> > > > Author: Vratislav Bendel <vbendel@redhat.com>
> > > > Date:   Tue Mar 6 17:07:44 2018 -0800
> > > >
> > > >    xfs: Correctly invert xfs_buftarg LRU isolation logic
> > > >
> > > >    Due to an inverted logic mistake in xfs_buftarg_isolate()
> > > >    the xfs_buffers with zero b_lru_ref will take another trip
> > > >    around LRU, while isolating buffers with non-zero b_lru_ref.
> > > >
> > > >    Additionally those isolated buffers end up right back on the LRU
> > > >    once they are released, because b_lru_ref remains elevated.
> > > >
> > > >    Fix that circuitous route by leaving them on the LRU
> > > >    as originally intended.
> > > >
> > > >    Signed-off-by: Vratislav Bendel <vbendel@redhat.com>
> > > >    Reviewed-by: Brian Foster <bfoster@redhat.com>
> > > >    Reviewed-by: Christoph Hellwig <hch@lst.de>
> > > >    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
> > > >    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> > > >
> > >
>

