On Mon, Aug 22, 2022 at 3:18 AM Michal Hocko wrote:
>
> On Mon 22-08-22 11:55:33, Michal Hocko wrote:
> > On Mon 22-08-22 00:17:35, Shakeel Butt wrote:
> [...]
> > > diff --git a/mm/page_counter.c b/mm/page_counter.c
> > > index eb156ff5d603..47711aa28161 100644
> > > --- a/mm/page_counter.c
> > > +++ b/mm/page_counter.c
> > > @@ -17,24 +17,23 @@ static void propagate_protected_usage(struct page_counter *c,
> > >  					      unsigned long usage)
> > >  {
> > >  	unsigned long protected, old_protected;
> > > -	unsigned long low, min;
> > >  	long delta;
> > >
> > >  	if (!c->parent)
> > >  		return;
> > >
> > > -	min = READ_ONCE(c->min);
> > > -	if (min || atomic_long_read(&c->min_usage)) {
> > > -		protected = min(usage, min);
> > > +	protected = min(usage, READ_ONCE(c->min));
> > > +	old_protected = atomic_long_read(&c->min_usage);
> > > +	if (protected != old_protected) {
> >
> > I have to cache that code back into my brain. It is a really subtle
> > thing and it is not really obvious why this is still correct. I will
> > think about that some more, but the changelog could help with that a
> > lot.
>
> OK, so this patch will be most useful when min > 0 && min < usage,
> because then the protection doesn't really change since the last call.
> In other words, the usage grows above the protection, and your workload
> benefits from this change because that happens a lot as only a part of
> the workload is protected. Correct?

Yes, that is correct. I hope the experiment setup is clear now.

> Unless I have missed anything this shouldn't break the correctness, but
> I still have to think about the proportional distribution of the
> protection because that adds to the complexity here.

The patch does not change any semantics. It just removes an unnecessary
atomic xchg() for a specific scenario (min > 0 && min < usage). I don't
think there will be any change in the proportional distribution of the
protection.
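
To make the fast path concrete, below is a minimal userspace model of the
new logic. This is a hypothetical sketch, not the kernel code: the struct,
the propagate() helper, and the C11 stdatomic calls stand in for the
kernel's page_counter and atomic_long_* primitives. Once usage is already
above a non-zero min, protected == min on every call, so the
protected != old_protected check fails and both the atomic exchange and
the parent update are skipped:

/*
 * Hypothetical userspace model of the patched
 * propagate_protected_usage(); not the kernel implementation.
 */
#include <stdatomic.h>
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

struct counter {
	struct counter *parent;
	unsigned long min;		/* configured protection (c->min) */
	atomic_long min_usage;		/* last propagated protection */
	atomic_long children_min_usage;	/* aggregate kept at the parent */
};

static void propagate(struct counter *c, unsigned long usage)
{
	unsigned long protected, old_protected;
	long delta;

	if (!c->parent)
		return;

	protected = MIN(usage, c->min);
	old_protected = atomic_load(&c->min_usage);
	/* The new check: skip the exchange when nothing changed. */
	if (protected != old_protected) {
		old_protected = atomic_exchange(&c->min_usage, protected);
		delta = protected - old_protected;
		if (delta)
			atomic_fetch_add(&c->parent->children_min_usage,
					 delta);
	}
}

int main(void)
{
	struct counter root = { 0 };
	struct counter child = { .parent = &root, .min = 100 };

	propagate(&child, 150);	/* usage crosses min: xchg runs, min_usage = 100 */
	propagate(&child, 200);	/* usage grows, protected still 100: no xchg */
	printf("children_min_usage = %ld\n",
	       atomic_load(&root.children_min_usage));	/* prints 100 */
	return 0;
}

The second call is the scenario above: usage keeps moving, but as long as
it stays above a stable min, protected stays equal to min_usage and the
counter no longer performs the atomic exchange on every charge.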