Date: Thu, 16 Sep 2021 15:15:29 -0400
From: "Theodore Ts'o"
To: James Bottomley
Cc: Chris Mason, Johannes Weiner, Kent Overstreet, Matthew Wilcox,
	Linus Torvalds, linux-mm@kvack.org, linux-fsdevel,
	linux-kernel@vger.kernel.org, Andrew Morton, "Darrick J. Wong",
	Christoph Hellwig, David Howells, ksummit@lists.linux.dev
Subject: Re: [MAINTAINER SUMMIT] Folios as a potential Kernel/Maintainers Summit topic?
In-Reply-To: <33a2000f56d51284e2df0cfcd704e93977684b59.camel@HansenPartnership.com>

On Thu, Sep 16, 2021 at 01:11:21PM -0400, James Bottomley wrote:
>
> Actually, I don't see who should ack being an unknown.  The MAINTAINERS
> file covers most of the kernel and a set of scripts will tell you based
> on your code who the maintainers are ... that would seem to be the
> definitive ack list.

It's *really* not that simple.  It is *not* the case that if a change
touches a single line of fs/ext4 (as well as 60+ other filesystems),
for example:

-	ei = kmem_cache_alloc(ext4_inode_cachep, GFP_NOFS);
+	ei = alloc_inode_sb(sb, ext4_inode_cachep, GFP_NOFS);

that the submitter *must* get an ACK from me --- or that I am entitled
to NACK the entire 79-patch series for any reason I feel like, or to
hold my ACK hostage until the submitter does some development work
that I want.
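For anyone who hasn't been following that series, here is a minimal
sketch of what such a helper might look like, assuming its only job is
to thread the superblock through to the slab allocator so the new
inode gets charged to the right per-superblock LRU / memcg.  The
kmem_cache_alloc_lru() call and the s_inode_lru field below are
illustrative assumptions, not something quoted from the actual patches:

	#include <linux/fs.h>	/* struct super_block, s_inode_lru */
	#include <linux/slab.h>	/* kmem_cache_alloc_lru() */

	/*
	 * Sketch only, not the actual patch: allocate an inode object
	 * from @cache and charge it against @sb's inode LRU so the
	 * memcg accounting ends up in the right place.
	 */
	static inline void *alloc_inode_sb(struct super_block *sb,
					   struct kmem_cache *cache,
					   gfp_t gfp)
	{
		return kmem_cache_alloc_lru(cache, &sb->s_inode_lru, gfp);
	}

Either way, the per-filesystem hunks like the one above are completely
mechanical; the substance of the change is on the mm side.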
What typically happens is that if someone tries to play games like
this inside, say, the Networking subsystem, past a certain point David
Miller will just take the patch series, ignoring NACKs that can't be
justified.  The difference is that Andrew Morton, the titular
maintainer for all of Memory Management per the MAINTAINERS file,
seems to have a much lighter touch on how the mm subsystem is run.

> I think the problem is the ack list for features covering large areas
> is large and the problems come when the ackers don't agree ... some
> like it, some don't.  The only deadlock-breaking mechanism we have for
> this is either Linus yelling at everyone or something happening to get
> everyone into alignment (like an MM summit meeting).  Our current model
> seems to be every acker has a foot on the brake, which means a single
> nack can derail the process.  It gets even worse if you get a couple of
> nacks each requesting mutually conflicting things.
>
> We also have this other problem of subsystems not being entirely
> collaborative.  If one subsystem really likes it and another doesn't,
> there's a fear in the maintainers of simply being overridden by the
> pull request going through the liking subsystem's tree.  This could be
> seen as a deadlock-breaking mechanism, but fear of it happening drives
> overreactions.
>
> We could definitely do with a clear definition of who is allowed to
> nack and when that can be overridden.

Well, yes.  And this is why I think there is a process issue here that
*is* within the Maintainers Summit purview.  If we need a technical
BOF to settle the specific question of what needs to happen, whether
that is at LPC or after LPC, then let's have it happen.  I'd be really
disappointed if we have to wait until December 2022 for the next
LSF/MM, and then, if we don't get consensus there (a la DAX), have to
wait until late 2023, etc.  As others have said, this is holding up
some work that file system developers would really like to see.

					- Ted