From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Johannes Thumshirn, Naohiro Aota, David Sterba
Subject: [PATCH 5.15 011/136] btrfs: zoned: revive max_zone_append_bytes
Date: Mon, 29 Aug 2022 12:57:58 +0200
Message-Id: <20220829105805.113201981@linuxfoundation.org>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20220829105804.609007228@linuxfoundation.org>
References: <20220829105804.609007228@linuxfoundation.org>
User-Agent: quilt/0.67
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: stable@vger.kernel.org

From: Naohiro Aota

commit c2ae7b772ef4e86c5ddf3fd47bf59045ae96a414 upstream.

This patch is basically a revert of commit 5a80d1c6a270 ("btrfs: zoned:
remove max_zone_append_size logic"), but without the unnecessary ASSERT
and check. The max_zone_append_size will be used as a hint to estimate
the number of extents needed to cover a delalloc/writeback region in
later commits.

The size of a ZONE APPEND bio is also limited by queue_max_segments(),
so this commit takes it into account when calculating
max_zone_append_size. Technically, a bio can be larger than
queue_max_segments() * PAGE_SIZE if the pages are contiguous. But it is
safe to treat "queue_max_segments() * PAGE_SIZE" as an upper limit on
extent size when calculating the number of extents needed to write data.
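To make the clamp described above concrete, here is a minimal
standalone sketch in plain userspace C. The device limit values,
SECTOR_SHIFT, and PAGE_SHIFT below are hypothetical examples chosen
for illustration; the actual patch further down derives the limits in
the kernel via bdev_max_zone_append_sectors() and bdev_max_segments():

#include <stdint.h>
#include <stdio.h>

#define SECTOR_SHIFT 9  /* 512-byte sectors, as in the kernel */
#define PAGE_SHIFT   12 /* assumes 4 KiB pages for this example */

int main(void)
{
	/* Hypothetical per-device limits, for illustration only. */
	uint64_t max_zone_append_sectors = 1024; /* 512 KiB per ZONE APPEND */
	uint64_t max_segments = 64;              /* 64 segments * 4 KiB = 256 KiB */

	uint64_t by_append   = max_zone_append_sectors << SECTOR_SHIFT;
	uint64_t by_segments = max_segments << PAGE_SHIFT;

	/* The same min_t(u64, ...) clamp the patch applies per device. */
	uint64_t max_zone_append_size =
		by_append < by_segments ? by_append : by_segments;

	/* Prints 262144: the segment-count bound wins here. */
	printf("max_zone_append_size = %llu bytes\n",
	       (unsigned long long)max_zone_append_size);
	return 0;
}

With these example values the segment limit (256 KiB) is the smaller
bound, illustrating the commit's point that queue_max_segments() can be
the effective cap even when the device allows larger zone appends.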
Reviewed-by: Johannes Thumshirn
Signed-off-by: Naohiro Aota
Signed-off-by: David Sterba
Signed-off-by: Greg Kroah-Hartman
---
 fs/btrfs/ctree.h |    2 ++
 fs/btrfs/zoned.c |   17 +++++++++++++++++
 fs/btrfs/zoned.h |    1 +
 3 files changed, 20 insertions(+)

--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -1017,6 +1017,8 @@ struct btrfs_fs_info {
 		u64 zoned;
 	};
 
+	/* Max size to emit ZONE_APPEND write command */
+	u64 max_zone_append_size;
 	struct mutex zoned_meta_io_lock;
 	spinlock_t treelog_bg_lock;
 	u64 treelog_bg;
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -386,6 +386,16 @@ int btrfs_get_dev_zone_info(struct btrfs
 	nr_sectors = bdev_nr_sectors(bdev);
 	zone_info->zone_size_shift = ilog2(zone_info->zone_size);
 	zone_info->nr_zones = nr_sectors >> ilog2(zone_sectors);
+	/*
+	 * We limit max_zone_append_size also by max_segments *
+	 * PAGE_SIZE. Technically, we can have multiple pages per segment. But,
+	 * since btrfs adds the pages one by one to a bio, and btrfs cannot
+	 * increase the metadata reservation even if it increases the number of
+	 * extents, it is safe to stick with the limit.
+	 */
+	zone_info->max_zone_append_size =
+		min_t(u64, (u64)bdev_max_zone_append_sectors(bdev) << SECTOR_SHIFT,
+		      (u64)bdev_max_segments(bdev) << PAGE_SHIFT);
 	if (!IS_ALIGNED(nr_sectors, zone_sectors))
 		zone_info->nr_zones++;
 
@@ -570,6 +580,7 @@ int btrfs_check_zoned_mode(struct btrfs_
 	u64 zoned_devices = 0;
 	u64 nr_devices = 0;
 	u64 zone_size = 0;
+	u64 max_zone_append_size = 0;
 	const bool incompat_zoned = btrfs_fs_incompat(fs_info, ZONED);
 	int ret = 0;
 
@@ -605,6 +616,11 @@ int btrfs_check_zoned_mode(struct btrfs_
 				ret = -EINVAL;
 				goto out;
 			}
+			if (!max_zone_append_size ||
+			    (zone_info->max_zone_append_size &&
+			     zone_info->max_zone_append_size < max_zone_append_size))
+				max_zone_append_size =
+					zone_info->max_zone_append_size;
 		}
 		nr_devices++;
 	}
@@ -654,6 +670,7 @@ int btrfs_check_zoned_mode(struct btrfs_
 	}
 
 	fs_info->zone_size = zone_size;
+	fs_info->max_zone_append_size = max_zone_append_size;
 	fs_info->fs_devices->chunk_alloc_policy = BTRFS_CHUNK_ALLOC_ZONED;
 
 	/*
--- a/fs/btrfs/zoned.h
+++ b/fs/btrfs/zoned.h
@@ -23,6 +23,7 @@ struct btrfs_zoned_device_info {
 	 */
 	u64 zone_size;
 	u8 zone_size_shift;
+	u64 max_zone_append_size;
 	u32 nr_zones;
 	unsigned long *seq_zones;
 	unsigned long *empty_zones;
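For reference, the hunk in btrfs_check_zoned_mode() above folds the
per-device values into one filesystem-wide limit by keeping the
smallest non-zero value. Here is a standalone sketch of that fold in
plain C, with a hypothetical array of per-device limits standing in
for the kernel's device iteration (the helper name min_nonzero_limit()
is invented for this example):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Mirror the per-device fold: a device reporting 0 (limit unknown) is
 * skipped, otherwise the filesystem-wide value becomes the smallest
 * per-device value seen so far.
 */
static uint64_t min_nonzero_limit(const uint64_t *limits, size_t n)
{
	uint64_t fs_limit = 0;

	for (size_t i = 0; i < n; i++) {
		if (!fs_limit || (limits[i] && limits[i] < fs_limit))
			fs_limit = limits[i];
	}
	return fs_limit;
}

int main(void)
{
	/* Hypothetical per-device max_zone_append_size values, in bytes. */
	uint64_t limits[] = { 524288, 0, 262144 };

	/* Prints 262144: the zero entry is ignored, the minimum wins. */
	printf("fs-wide limit = %llu bytes\n",
	       (unsigned long long)min_nonzero_limit(limits, 3));
	return 0;
}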