Date: Tue, 11 Feb 2020 15:44:38 -0800
From: Andrew Morton
To: Johannes Weiner
Cc: Rik van Riel, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Dave Chinner, Yafang Shao, Michal Hocko,
 Roman Gushchin, Linus Torvalds, Al Viro, kernel-team@fb.com
Subject: Re: [PATCH] vfs: keep inodes with page cache off the inode shrinker LRU
Message-Id: <20200211154438.14ef129db412574c5576facf@linux-foundation.org>
In-Reply-To: <20200211193101.GA178975@cmpxchg.org>
References: <20200211175507.178100-1-hannes@cmpxchg.org>
 <29b6e848ff4ad69b55201751c9880921266ec7f4.camel@surriel.com>
 <20200211193101.GA178975@cmpxchg.org>

On Tue, 11 Feb 2020 14:31:01 -0500 Johannes Weiner wrote:

> On Tue, Feb 11, 2020 at 02:05:38PM -0500, Rik van Riel wrote:
> > On Tue, 2020-02-11 at 12:55 -0500, Johannes Weiner wrote:
> > > The VFS inode shrinker is currently allowed to reclaim inodes with
> > > populated page cache. As a result it can drop gigabytes of hot and
> > > active page cache on the floor without consulting the VM (recorded as
> > > "inodesteal" events in /proc/vmstat).
> > >
> > > This causes real problems in practice.
> > > Consider for example how the VM
> > > would cache a source tree, such as the Linux git tree. As large parts
> > > of the checked out files and the object database are accessed
> > > repeatedly, the page cache holding this data gets moved to the active
> > > list, where it's fully (and indefinitely) insulated from one-off cache
> > > moving through the inactive list.
> > >
> > > This behavior of invalidating page cache from the inode shrinker goes
> > > back to even before the git import of the kernel tree. It may have
> > > been less noticeable when the VM itself didn't have real workingset
> > > protection, and floods of one-off cache would push out any active
> > > cache over time anyway. But the VM has come a long way since then and
> > > the inode shrinker is now actively subverting its caching strategy.
> >
> > Two things come to mind when looking at this:
> > - highmem
> > - NUMA
> >
> > IIRC one of the reasons reclaim is done in this way is
> > because a page cache page in one area of memory (highmem,
> > or a NUMA node) can end up pinning inode slab memory in
> > another memory area (normal zone, other NUMA node).
>
> That's a good point, highmem does ring a bell now that you mention it.

Yup, that's why this mechanism exists. Here:

https://marc.info/?l=git-commits-head&m=103646757213266&w=2

> If we still care, I think this could be solved by doing something
> similar to what we do with buffer_heads_over_limit: allow a lowmem
> allocation to reclaim page cache inside the highmem zone if the bhs
> (or inodes in this case) have accumulated excessively.

Well, reclaiming highmem pagecache at random would be a painful way to
reclaim lowmem inodes. Better to pick an inode then shoot down all its
pagecache. Perhaps we could take its pagecache's aging into account.

Testing this will be a challenge, but the issue was real - a 7GB highmem
machine isn't crazy and I expect the inode has become larger since those
days.

> AFAICS, we haven't done anything similar for NUMA, so it might not be
> much of a problem there. I could imagine this is in part because NUMA
> nodes tend to be more balanced in size, and the ratio between cache
> memory and inode/bh memory means that these objects won't turn into a
> significant externality. Whereas with extreme highmem:lowmem ratios,
> they can.
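
Aside, not part of the original exchange: the "inodesteal" events referenced
in the patch description are plain counters in /proc/vmstat (named
pginodesteal and kswapd_inodesteal on recent kernels, though the exact set is
kernel-dependent), so the cache being dropped through inode reclaim is easy
to watch while a workload runs. A minimal C sketch that dumps them:

#include <stdio.h>
#include <string.h>

/*
 * Minimal sketch: print every /proc/vmstat counter whose name contains
 * "inodesteal".  On recent kernels that is pginodesteal (direct reclaim)
 * and kswapd_inodesteal (kswapd), counting page cache pages freed as a
 * side effect of inode reclaim.
 */
int main(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char line[256];

	if (!f) {
		perror("/proc/vmstat");
		return 1;
	}

	while (fgets(line, sizeof(line), f))
		if (strstr(line, "inodesteal"))
			fputs(line, stdout);

	fclose(f);
	return 0;
}

Sampling this before and after a metadata-heavy job (or simply grepping
/proc/vmstat by hand) shows how much page cache was torn down via inode
reclaim rather than through the VM's own page aging.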
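
And to make "pick an inode then shoot down all its pagecache" concrete from
the outside, here is a rough userspace illustration, not the kernel-side
mechanism being discussed: it counts how many pages of one file are resident
via mincore(), then asks the kernel to drop that file's clean cache with
posix_fadvise(POSIX_FADV_DONTNEED), which is roughly what losing an entire
inode's page cache looks like to the workload.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Count resident page cache pages of an open file via mincore(). */
static long resident_pages(int fd, size_t len)
{
	long pagesize = sysconf(_SC_PAGESIZE);
	size_t pages = (len + pagesize - 1) / pagesize;
	unsigned char *vec = malloc(pages);
	long n = 0;

	if (!vec)
		return -1;

	void *map = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		free(vec);
		return -1;
	}

	if (mincore(map, len, vec) == 0)
		for (size_t i = 0; i < pages; i++)
			n += vec[i] & 1;

	munmap(map, len);
	free(vec);
	return n;
}

int main(int argc, char **argv)
{
	struct stat st;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0) {
		perror(argv[1]);
		return 1;
	}

	printf("resident before: %ld pages\n", resident_pages(fd, st.st_size));

	/* Drop this file's clean page cache, hot or not. */
	posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

	printf("resident after:  %ld pages\n", resident_pages(fd, st.st_size));
	close(fd);
	return 0;
}

Run against a file that was just read (a large pack file in a git tree, say),
the before/after counts show the whole mapping going cold at once, regardless
of how hot the VM considered those pages - the same externality the inode
shrinker currently imposes behind the VM's back.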