* vfs-keep-inodes-with-page-cache-off-the-inode-shrinker-lru.patch
@ 2021-11-08 15:57 Johannes Weiner
  2021-11-08 17:42 ` vfs-keep-inodes-with-page-cache-off-the-inode-shrinker-lru.patch Andrew Morton
  0 siblings, 1 reply; 2+ messages in thread
From: Johannes Weiner @ 2021-11-08 15:57 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-mm, linux-kernel, kernel-team

Hi Andrew,

I promised to give this patch some more testing exposure while it sits
in -mm. We've been steadily rolling this version of the change to our
fleet over the last few months and it's currently on 20% of FB servers. We
have not noticed crashes or performance regressions because of it.
(The other 80% is running a previous version of the patch.)

The comment in 'series' says "extra cycle" but that was 5.15 :-) Do
you think we can get it merged into 5.16?

Just to reiterate, without the patch, there is very broad production
breakage for FB beyond reduced cache effectiveness. Yes, we lose cache
pages prematurely. But a bigger problem is that we lose nonresident
info we store in the inodes. This defeats thrash detection, which in
turn defeats psi and central reclaim decision making. The downstream
effects of this are quite severe and widespread:

- memory priority inversion between containers
- failure to offload cold memory to swap with proactive reclaim
- breakdown of container health monitoring and userspace OOM killing
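
For reference, the gist of the change is roughly the sketch below. It is
heavily simplified: inode_cache_shrinkable() is a placeholder name for
illustration only, and the real check in fs/inode.c and
include/linux/pagemap.h is more careful (e.g. about mappings that hold
nothing but shadow entries, and about highmem configs).

	/*
	 * Only put an unused inode on the shrinker LRU once its page
	 * cache is empty. Inodes that still hold pages (and the shadow
	 * entries that record nonresident history for refault/thrash
	 * detection) stay off the list, so the inode shrinker can't
	 * drop that state out from under the workingset code.
	 */
	static bool inode_cache_shrinkable(struct inode *inode)
	{
		/* true if i_pages holds neither pages nor shadow entries */
		return mapping_empty(inode->i_mapping);
	}

	static void inode_add_lru(struct inode *inode)
	{
		if (!inode_cache_shrinkable(inode))
			return;	/* populated cache: keep inode off the LRU */
		inode_lru_list_add(inode);
	}

The actual patch also puts the inode (back) on the LRU once its cache
does empty out, so nothing becomes unreclaimable.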

I'm not exaggerating when I say we can't reliably operate our fleet
without this patch. We've had to carry variants of it for two years
now. It'd be great to get this fixed upstream.

Thanks,
Johannes


* Re: vfs-keep-inodes-with-page-cache-off-the-inode-shrinker-lru.patch
  2021-11-08 15:57 vfs-keep-inodes-with-page-cache-off-the-inode-shrinker-lru.patch Johannes Weiner
@ 2021-11-08 17:42 ` Andrew Morton
  0 siblings, 0 replies; 2+ messages in thread
From: Andrew Morton @ 2021-11-08 17:42 UTC (permalink / raw)
  To: Johannes Weiner; +Cc: linux-mm, linux-kernel, kernel-team

On Mon, 8 Nov 2021 10:57:23 -0500 Johannes Weiner <hannes@cmpxchg.org> wrote:

> Hi Andrew,
> 
> I promised to give this patch some more testing exposure while it sits
> in -mm. We've been steadily rolling this version of the change to our
> fleet over the last few months and it's currently on 20% of FB servers. We
> have not noticed crashes or performance regressions because of it.
> (The other 80% is running a previous version of the patch.)
> 
> The comment in 'series' says "extra cycle" but that was 5.15 :-) Do
> you think we can get it merged into 5.16?
> 
> Just to reiterate, without the patch, there is very broad production
> breakage for FB beyond reduced cache effectiveness. Yes, we lose cache
> pages prematurely. But a bigger problem is that we lose nonresident
> info we store in the inodes. This defeats thrash detection, which in
> turn defeats psi and central reclaim decision making. The downstream
> effects of this are quite severe and widespread:
> 
> - memory priority inversion between containers
> - failure to offload cold memory to swap with proactive reclaim
> - breakdown of container health monitoring and userspace OOM killing
> 
> I'm not exaggerating when I say we can't reliably operate our fleet
> without this patch. We've had to carry variants of it for two years
> now. It'd be great to get this fixed upstream.

Cool, thanks for the update.  I'll send it Linuswards this week.

