From mboxrd@z Thu Jan  1 00:00:00 1970
From: Qu Wenruo <wqu@suse.com>
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 02/17] btrfs: calculate inline extent buffer page size based on page size
Date: Tue, 8 Sep 2020 15:52:15 +0800
Message-Id: <20200908075230.86856-3-wqu@suse.com>
In-Reply-To: <20200908075230.86856-1-wqu@suse.com>
References: <20200908075230.86856-1-wqu@suse.com>
X-Mailing-List: linux-btrfs@vger.kernel.org

Btrfs only supports 64K as the maximum node size, so on a 4K page system
we have at most 16 pages for one extent buffer, while on a 64K page
system we only ever need a single page per extent buffer.
This stays true even for future subpage sector size support (as long as
the extent buffer doesn't cross a 64K boundary).

So this patch changes how INLINE_EXTENT_BUFFER_PAGES is calculated:
instead of a fixed 16 pages, use (64K / PAGE_SIZE) as the result.

This should save some bytes in the extent buffer structure on 64K
systems.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/extent_io.h | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index 00a88f2eb5ab..e16c5449ba48 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -86,8 +86,8 @@ struct extent_io_ops {
 };
 
-#define INLINE_EXTENT_BUFFER_PAGES 16
-#define MAX_INLINE_EXTENT_BUFFER_SIZE (INLINE_EXTENT_BUFFER_PAGES * PAGE_SIZE)
+#define MAX_INLINE_EXTENT_BUFFER_SIZE SZ_64K
+#define INLINE_EXTENT_BUFFER_PAGES (MAX_INLINE_EXTENT_BUFFER_SIZE / PAGE_SIZE)
 
 struct extent_buffer {
 	u64 start;
 	unsigned long len;
@@ -227,8 +227,15 @@ void wait_on_extent_buffer_writeback(struct extent_buffer *eb);
 
 static inline int num_extent_pages(const struct extent_buffer *eb)
 {
-	return (round_up(eb->start + eb->len, PAGE_SIZE) >> PAGE_SHIFT) -
-	       (eb->start >> PAGE_SHIFT);
+	/*
+	 * For the sectorsize == PAGE_SIZE case, since eb is always aligned
+	 * to sectorsize, this is just eb->len >> PAGE_SHIFT.
+	 *
+	 * For the sectorsize < PAGE_SIZE case, we only want to support 64K
+	 * PAGE_SIZE, and ensure all tree blocks won't cross page boundaries,
+	 * so in that case we always get 1 page.
+	 */
+	return (round_up(eb->len, PAGE_SIZE) >> PAGE_SHIFT);
 }
 
 static inline int extent_buffer_uptodate(const struct extent_buffer *eb)
-- 
2.28.0