On Fri, 2018-08-31 at 13:34 -0700, Roman Gushchin wrote:
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index fa2c150ab7b9..c910cf6bf606 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -476,6 +476,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  	delta = freeable >> priority;
>  	delta *= 4;
>  	do_div(delta, shrinker->seeks);
> +
> +	if (delta == 0 && freeable > 0)
> +		delta = min(freeable, batch_size);
> +
>  	total_scan += delta;
>  	if (total_scan < 0) {
>  		pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",

I agree that we need to shrink slabs with fewer than 4096 objects, but
do we want to put more pressure on a slab the moment it drops below
4096 than we applied when it had just over 4096 objects on it?

With this patch, a slab with 5000 objects on it will get 1 item
scanned, while a slab with 4000 objects on it will see shrinker->batch
or SHRINK_BATCH objects scanned every time.

I don't know whether this would cause any issues; just something to
ponder.

If nobody thinks this is a problem, you can give the patch my:

Acked-by: Rik van Riel

-- 
All Rights Reversed.
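
[Editor's note: the discontinuity Rik describes is easy to see with a
minimal userspace sketch of the patched delta calculation. The
constants below (DEF_PRIORITY = 12, DEFAULT_SEEKS = 2,
SHRINK_BATCH = 128) are the kernel defaults and are assumed here for
illustration; a shrinker with a different ->seeks or ->batch, or
reclaim running at a different priority, shifts the exact numbers but
not the shape of the jump at the freeable == 4096 boundary.]

#include <stdio.h>

#define DEF_PRIORITY	12
#define DEFAULT_SEEKS	2
#define SHRINK_BATCH	128UL

/* Mirrors the patched delta computation in do_shrink_slab(). */
static unsigned long scan_delta(unsigned long freeable, int priority,
				int seeks, unsigned long batch_size)
{
	unsigned long delta = freeable >> priority;

	delta *= 4;
	delta /= seeks;		/* stands in for do_div() */

	/* the hunk under discussion */
	if (delta == 0 && freeable > 0)
		delta = freeable < batch_size ? freeable : batch_size;

	return delta;
}

int main(void)
{
	unsigned long sizes[] = { 5000, 4096, 4095, 4000, 100 };
	unsigned int i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("freeable = %4lu -> delta = %lu\n", sizes[i],
		       scan_delta(sizes[i], DEF_PRIORITY, DEFAULT_SEEKS,
				  SHRINK_BATCH));
	return 0;
}

[With these defaults the sketch prints delta = 2 for freeable = 5000
and 4096, then jumps to delta = 128 as soon as freeable drops to
4095 — the step in reclaim pressure questioned above.]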