* [PATCH] fs/mbcache: make count_objects more robust.
@ 2017-11-27 3:30 Jiang Biao
2018-01-04 15:51 ` Jan Kara
0 siblings, 1 reply; 3+ messages in thread
From: Jiang Biao @ 2017-11-27 3:30 UTC (permalink / raw)
To: viro; +Cc: linux-fsdevel, linux-kernel, jiang.biao2, zhong.weidong
When running the LTP stress test for 7*24 hours, vmscan occasionally
complains with the following warning, repeated continuously:
mb_cache_scan+0x0/0x3f0 negative objects to delete
nr=-9232265467809300450
...
Tracing shows that the freeable count (the value mb_cache_count()
returns) is -1, which causes total_scan to accumulate continuously
and overflow.
This patch makes sure mb_cache_count() does not return a negative
value, which makes the mbcache shrinker more robust.
Signed-off-by: Jiang Biao <jiang.biao2@zte.com.cn>
---
fs/mbcache.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/fs/mbcache.c b/fs/mbcache.c
index d818fd2..b8b8b9c 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -269,6 +269,9 @@ static unsigned long mb_cache_count(struct shrinker *shrink,
 	struct mb_cache *cache = container_of(shrink, struct mb_cache,
 					      c_shrink);
 
+	/* Unlikely, but not impossible */
+	if (unlikely(cache->c_entry_count < 0))
+		return 0;
 	return cache->c_entry_count;
}
--
2.7.4
* Re: [PATCH] fs/mbcache: make count_objects more robust.
2017-11-27 3:30 [PATCH] fs/mbcache: make count_objects more robust Jiang Biao
@ 2018-01-04 15:51 ` Jan Kara
[not found] ` <201801050854566236393@zte.com.cn>
0 siblings, 1 reply; 3+ messages in thread
From: Jan Kara @ 2018-01-04 15:51 UTC (permalink / raw)
To: Jiang Biao; +Cc: viro, linux-fsdevel, linux-kernel, zhong.weidong
On Mon 27-11-17 11:30:19, Jiang Biao wrote:
> When running the LTP stress test for 7*24 hours, vmscan occasionally
> complains with the following warning, repeated continuously:
>
> mb_cache_scan+0x0/0x3f0 negative objects to delete
> nr=-9232265467809300450
> ...
>
> Tracing shows that the freeable count (the value mb_cache_count()
> returns) is -1, which causes total_scan to accumulate continuously
> and overflow.
>
> This patch makes sure mb_cache_count() does not return a negative
> value, which makes the mbcache shrinker more robust.
>
> Signed-off-by: Jiang Biao <jiang.biao2@zte.com.cn>
Going through some old email...
a) c_entry_count is unsigned so your patch is a nop as Coverity properly
noticed.
b) c_entry_count being outside 0..2*cache->c_max_entries is a plain bug. I
went through the logic and could not figure out how that could happen, though.
But in either case your patch just does not make sense.
Honza
> ---
> fs/mbcache.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/fs/mbcache.c b/fs/mbcache.c
> index d818fd2..b8b8b9c 100644
> --- a/fs/mbcache.c
> +++ b/fs/mbcache.c
> @@ -269,6 +269,9 @@ static unsigned long mb_cache_count(struct shrinker *shrink,
> struct mb_cache *cache = container_of(shrink, struct mb_cache,
> c_shrink);
>
> + /* Unlikely, but not impossible */
> + if (unlikely(cache->c_entry_count < 0))
> + return 0;
> return cache->c_entry_count;
> }
>
> --
> 2.7.4
>
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
* Re: [PATCH] fs/mbcache: make count_objects more robust.
[not found] ` <201801050854566236393@zte.com.cn>
@ 2018-01-08 9:21 ` Jan Kara
0 siblings, 0 replies; 3+ messages in thread
From: Jan Kara @ 2018-01-08 9:21 UTC (permalink / raw)
To: jiang.biao2; +Cc: jack, viro, linux-fsdevel, linux-kernel, zhong.weidong
On Fri 05-01-18 08:54:56, jiang.biao2@zte.com.cn wrote:
> > On Mon 27-11-17 11:30:19, Jiang Biao wrote:
> > > When running the LTP stress test for 7*24 hours, vmscan occasionally
> > > complains with the following warning, repeated continuously:
> > >
> > > mb_cache_scan+0x0/0x3f0 negative objects to delete
> > > nr=-9232265467809300450
> > > ...
> > >
> > > Tracing shows that the freeable count (the value mb_cache_count()
> > > returns) is -1, which causes total_scan to accumulate continuously
> > > and overflow.
> > >
> > > This patch makes sure mb_cache_count() does not return a negative
> > > value, which makes the mbcache shrinker more robust.
> > >
> > > Signed-off-by: Jiang Biao <jiang.biao2@zte.com.cn>
> >
> > Going through some old email...
> > a) c_entry_count is unsigned so your patch is a nop as Coverity properly
> > noticed.
> Indeed, would the following cast be good?
> + if (unlikely((int)(cache->c_entry_count) < 0))
> + return 0;
That check would at least have a chance of triggering, but it is still
just hiding the real problem.
> > b) c_entry_count being outside 0..2*cache->c_max_entries is a plain bug. I
> > went through the logic and cannot find out how that could happen though.
> Is there any possibility of c_entry_count being decreased from 0 to -1
> in mb_cache_entry_delete?
If we think we have -1 entries in a list, we have a larger problem than
just the wrong behavior of the shrinker. This is just a plain counter of
entries protected by a spinlock so there isn't space for accounting errors
or anything like that. If you can reproduce the problem on some reasonably
recent kernel, I'd be interested in debugging this.
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR