From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 31 Oct 2019 08:43:35 +1100
From: Dave Chinner
To: "Darrick J. Wong"
Cc: linux-xfs@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH 04/26] xfs: Improve metadata buffer reclaim accountability
Message-ID: <20191030214335.GQ4614@dread.disaster.area>
References: <20191009032124.10541-1-david@fromorbit.com>
 <20191009032124.10541-5-david@fromorbit.com>
 <20191030172517.GO15222@magnolia>
In-Reply-To: <20191030172517.GO15222@magnolia>

On Wed, Oct 30, 2019 at 10:25:17AM -0700, Darrick J. Wong wrote:
> On Wed, Oct 09, 2019 at 02:21:02PM +1100, Dave Chinner wrote:
> > From: Dave Chinner
> > 
> > The buffer cache shrinker frees more than just the xfs_buf slab
> > objects - it also frees the pages attached to the buffers. Make sure
> > the memory reclaim code accounts for this memory being freed
> > correctly, similar to how the inode shrinker accounts for pages
> > freed from the page cache due to mapping invalidation.
> > 
> > We also need to make sure that the mm subsystem knows these are
> > reclaimable objects. We provide the memory reclaim subsystem with a
> > shrinker to reclaim xfs_bufs, so we should really mark the slab
> > that way.
> > 
> > We also have a lot of xfs_bufs in a busy system, so spread them
> > around like we do inodes.
> > 
> > Signed-off-by: Dave Chinner
> > ---
> >  fs/xfs/xfs_buf.c | 6 +++++-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> > index e484f6bead53..45b470f55ad7 100644
> > --- a/fs/xfs/xfs_buf.c
> > +++ b/fs/xfs/xfs_buf.c
> > @@ -324,6 +324,9 @@ xfs_buf_free(
> > 
> >  			__free_page(page);
> >  		}
> > +		if (current->reclaim_state)
> > +			current->reclaim_state->reclaimed_slab +=
> > +						bp->b_page_count;
> 
> Hmm, ok, I see how ZONE_RECLAIM and reclaimed_slab fit together.
> 
> >  	} else if (bp->b_flags & _XBF_KMEM)
> >  		kmem_free(bp->b_addr);
> >  	_xfs_buf_free_pages(bp);
> > @@ -2064,7 +2067,8 @@ int __init
> >  xfs_buf_init(void)
> >  {
> >  	xfs_buf_zone = kmem_zone_init_flags(sizeof(xfs_buf_t), "xfs_buf",
> > -						KM_ZONE_HWALIGN, NULL);
> > +			KM_ZONE_HWALIGN | KM_ZONE_SPREAD | KM_ZONE_RECLAIM,
> 
> I guess I'm fine with ZONE_SPREAD too, insofar as it only seems to apply
> to a particular "use another node" memory policy when slab is in use.
> Was that your intent?

It's more documentation than anything - that we shouldn't be piling
these structures all onto one node, because that can cause severe
problems for the NUMA memory reclaim algorithms. i.e. the xfs-buf
shrinker sets SHRINKER_NUMA_AWARE, so memory pressure on a single
node can reclaim all the xfs-bufs on that node without touching any
other node.

That means, for example, if we instantiate all the AG header buffers
on a single node (e.g. like we do at mount time), then memory
pressure on that one node will generate IO stalls across the entire
filesystem as other nodes doing work have to repopulate the buffer
cache for any allocation or freeing of space/inodes.

IOWs, for large NUMA systems using cpusets, this cache should be
spread across all of memory, especially as it has NUMA-aware
reclaim. For everyone else, it's just documentation that improper
cgroup or NUMA memory policy could cause you all sorts of problems
with this cache.

It's worth noting that SLAB_MEM_SPREAD is used almost exclusively in
filesystems for inode caches, largely because, at the time (~2006),
the inode cache was the only reclaimable cache that could grow large
enough to cause problems. It's been cargo-culted ever since, whether
it is needed or not (e.g. ceph).

In the case of the xfs_bufs, I've been running workloads recently
that cache several million xfs_bufs and only a handful of inodes
rather than the other way around. If we spread inodes because
caching millions on a single node can cause problems on large NUMA
machines, then we also need to spread xfs_bufs...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
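
[Illustrative sketch of the mechanics discussed above: a slab cache
created with the reclaim/spread flags and a shrinker registered with
SHRINKER_NUMA_AWARE. The example_* names are placeholders, not XFS
code, and the flag mapping assumes the usual KM_ZONE_* wrappers in
fs/xfs/kmem.h of that era (KM_ZONE_HWALIGN -> SLAB_HWCACHE_ALIGN,
KM_ZONE_RECLAIM -> SLAB_RECLAIM_ACCOUNT, KM_ZONE_SPREAD ->
SLAB_MEM_SPREAD); register_shrinker() is the single-argument form
current around 5.4.]

#include <linux/slab.h>
#include <linux/shrinker.h>
#include <linux/list_lru.h>
#include <linux/module.h>

struct example_buf {
	struct list_head	b_lru;		/* linkage for the per-node LRU */
	unsigned int		b_page_count;
};

static struct kmem_cache	*example_buf_cache;
static struct list_lru		example_buf_lru;

/*
 * With SHRINKER_NUMA_AWARE set, reclaim invokes this per node and
 * sc->nid selects which node's LRU to count, so pressure on one node
 * only drains that node's buffers.
 */
static unsigned long
example_buf_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
{
	return list_lru_shrink_count(&example_buf_lru, sc);
}

static unsigned long
example_buf_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
{
	/* Real code would walk the node's LRU and free buffers here. */
	return SHRINK_STOP;
}

static struct shrinker example_buf_shrinker = {
	.count_objects	= example_buf_shrink_count,
	.scan_objects	= example_buf_shrink_scan,
	.seeks		= DEFAULT_SEEKS,
	.flags		= SHRINKER_NUMA_AWARE,
};

static int __init example_buf_init(void)
{
	int error;

	/*
	 * SLAB_HWCACHE_ALIGN:   cacheline-align the objects
	 * SLAB_RECLAIM_ACCOUNT: account the slab as reclaimable memory
	 * SLAB_MEM_SPREAD:      spread allocations across cpuset nodes
	 */
	example_buf_cache = kmem_cache_create("example_buf",
			sizeof(struct example_buf), 0,
			SLAB_HWCACHE_ALIGN | SLAB_RECLAIM_ACCOUNT |
			SLAB_MEM_SPREAD, NULL);
	if (!example_buf_cache)
		return -ENOMEM;

	error = list_lru_init(&example_buf_lru);
	if (error)
		goto out_destroy_cache;

	error = register_shrinker(&example_buf_shrinker);
	if (error)
		goto out_destroy_lru;
	return 0;

out_destroy_lru:
	list_lru_destroy(&example_buf_lru);
out_destroy_cache:
	kmem_cache_destroy(example_buf_cache);
	return error;
}
module_init(example_buf_init);
MODULE_LICENSE("GPL");

This is the interaction the reply describes: reclaim is driven per
node, so where the cache's pages land determines which node's memory
pressure bears the cost of rebuilding it.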