Date: Wed, 24 Mar 2021 09:21:52 +1100
From: Dave Chinner <david@fromorbit.com>
To: "Darrick J. Wong"
Cc: linux-xfs@vger.kernel.org
Subject: Re: [PATCH 10/11] xfs: parallelize inode inactivation
Message-ID: <20210323222152.GH63242@dread.disaster.area>
In-Reply-To: <161543199635.1947934.2885924822578773349.stgit@magnolia>
References: <161543194009.1947934.9910987247994410125.stgit@magnolia>
 <161543199635.1947934.2885924822578773349.stgit@magnolia>

On Wed, Mar 10, 2021 at 07:06:36PM -0800, Darrick J. Wong wrote:
> From: Darrick J. Wong
>
> Split the inode inactivation work into per-AG work items so that we can
> take advantage of parallelization.

How does this scale out when we have thousands of AGs?

I'm guessing that the gc_workqueue has the default "unbound"
parallelism, which means it will run up to 4 kworkers per CPU at a
time? Which means we could have hundreds of AGs trying to hammer on
inactivations at the same time, and so bash hard on the log and
completely starve the syscall front end of log space?

It seems to me that this needs to bound the amount of concurrent work
to quite low numbers - even though it is per-AG, we do not want this
to swamp the system in kworkers blocked on log reservations when such
concurrency is not necessary.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
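
[Editor's illustration of the bounding Dave is asking for: a minimal
sketch, not taken from the patch series, of capping a workqueue's
concurrency by passing an explicit max_active to alloc_workqueue().
The struct, function, queue and field names here are invented for the
example.]

/*
 * Minimal sketch, not from the patch series: cap the number of
 * concurrent inactivation workers by passing an explicit max_active
 * to alloc_workqueue() instead of 0.  With max_active == 0 an
 * unbound workqueue falls back to the kernel's default limit, which
 * is the "4 kworkers per CPU" worth of parallel work Dave refers to
 * above.  All names below are illustrative assumptions.
 */
#include <linux/errno.h>
#include <linux/workqueue.h>

struct inodegc_ctx {
	struct workqueue_struct	*gc_workqueue;	/* hypothetical field */
};

static int inodegc_wq_init(struct inodegc_ctx *ctx, const char *fsname)
{
	/* max_active = 4: at most four per-AG work items run at once */
	ctx->gc_workqueue = alloc_workqueue("xfs-gc/%s",
			WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_FREEZABLE,
			4, fsname);
	if (!ctx->gc_workqueue)
		return -ENOMEM;
	return 0;
}

A small fs-wide max_active like this keeps per-AG inactivation work
queued rather than letting hundreds of kworkers pile up on log
reservations at once.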