From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sun, 5 Mar 2023 05:02:43 +0000
From: Matthew Wilcox
To: Luis Chamberlain
Cc: James Bottomley, Keith Busch, Theodore Ts'o,
	lsf-pc@lists.linux-foundation.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-block@vger.kernel.org
Subject: Re: [LSF/MM/BPF TOPIC] Cloud storage optimizations
On Sat, Mar 04, 2023 at 08:15:50PM -0800, Luis Chamberlain wrote:
> On Sat, Mar 04, 2023 at 04:39:02PM +0000, Matthew Wilcox wrote:
> > I'm getting more and more comfortable with the idea that "Linux
> > doesn't support block sizes > PAGE_SIZE on 32-bit machines" is an
> > acceptable answer.
> 
> First of all, filesystems would need to add support for block sizes
> larger than PAGE_SIZE, and that takes effort.  It is also a support
> question too.
> 
> I think garnering consensus from filesystem developers that we don't
> want to support block sizes > PAGE_SIZE on 32-bit systems would be a
> good thing to review at LSFMM, or even on this list.  I highly doubt
> anyone is interested in that support.

Agreed.

> > XFS already works with arbitrary-order folios.
> 
> But block sizes > PAGE_SIZE is work which is still not merged.  It
> *can* be with time.  That would allow one to use block sizes larger
> than 4k on x86-64, for instance.  Without this, you can't play ball.

Do you mean that XFS is checking that fs block size <= PAGE_SIZE and
that check needs to be dropped?  If so, I don't see where that happens.

Or do you mean that the blockdev "filesystem" needs to be enhanced to
support large folios?  That's going to be kind of a pain because it
uses buffer_heads.  And ext4 depends on it using buffer_heads.  So,
yup, more work needed than I remembered (but as I said, it's FS side,
not block layer or driver work).

Or were you referring to the NVMe PAGE_SIZE sanity check that Keith
mentioned upthread?

> > The only needed piece is specifying to the VFS that there's a
> > minimum order for this particular inode, and having the VFS honour
> > that everywhere.
> Other than the above too, don't we still also need to figure out
> which fs APIs would incur larger-order folios?  And then what about
> corner cases with the page cache?
> 
> I was hoping some of these nooks and crannies could be explored with
> tmpfs.

I think we're exploring all those with XFS.  Or at least, many of
them.  A lot of the folio conversion patches you see flowing past are
pure efficiency gains -- no need to convert between pages and folios
implicitly; do the explicit conversions and save instructions.  Most
of the correctness issues were found & fixed a long time ago when PMD
support was added to tmpfs.  One notable exception is the writeback
path, since tmpfs doesn't do writeback; it has that special thing it
does with swap instead.

tmpfs is a rather special case as far as its use of the filesystem
APIs goes, but I suspect I've done most of the work needed to have it
work with arbitrary-order folios instead of just PTE and PMD sizes.
There are probably some left-over assumptions that I didn't find yet.
Maybe in the swap path, for example ;-)