From: Josef Bacik <josef@toxicpanda.com>
To: Naohiro Aota <naohiro.aota@wdc.com>,
linux-btrfs@vger.kernel.org, dsterba@suse.com
Cc: hare@suse.com, linux-fsdevel@vger.kernel.org,
Jens Axboe <axboe@kernel.dk>,
Christoph Hellwig <hch@infradead.org>,
"Darrick J. Wong" <darrick.wong@oracle.com>
Subject: Re: [PATCH v13 12/42] btrfs: calculate allocation offset for conventional zones
Date: Fri, 22 Jan 2021 10:07:19 -0500 [thread overview]
Message-ID: <8f7fbbfa-e100-14a4-fe56-ad2b017ba9d3@toxicpanda.com> (raw)
In-Reply-To: <617bb7d3a62aa5702bbf31f47ec67fbc56576b30.1611295439.git.naohiro.aota@wdc.com>
On 1/22/21 1:21 AM, Naohiro Aota wrote:
> Conventional zones do not have a write pointer, so we cannot use it to
> determine the allocation offset if a block group contains a conventional
> zone.
>
> Instead, we can use the end of the last allocated extent in the
> block group as the allocation offset.
>
> For a new block group, we cannot calculate the allocation offset by
> consulting the extent tree, because doing so can deadlock: it takes an
> extent buffer lock after the chunk mutex, which is already held in
> btrfs_make_block_group(). Since the block group is new, we can simply
> set the allocation offset to 0.
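
For a concrete model of that calculation, here is a standalone userspace
sketch: the allocation offset is the end of the last extent that lies
inside the block group, relative to the block group start. The struct and
function names below are made up for the sketch, not the kernel's.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for an allocated extent: logical start and byte length. */
struct extent {
	uint64_t objectid;	/* logical start */
	uint64_t length;	/* byte length */
};

/*
 * Return the allocation offset for a block group covering
 * [bg_start, bg_start + bg_length): the end of the last extent fully
 * inside the block group, relative to bg_start, or 0 if none exists.
 */
static uint64_t calc_alloc_offset(const struct extent *extents, size_t n,
				  uint64_t bg_start, uint64_t bg_length)
{
	uint64_t best = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		uint64_t start = extents[i].objectid;
		uint64_t end = start + extents[i].length;

		if (start < bg_start || end > bg_start + bg_length)
			continue;	/* not fully inside this block group */
		if (end - bg_start > best)
			best = end - bg_start;
	}
	return best;
}
```

An empty extent set yields offset 0, which matches the "new block group"
shortcut the commit message describes.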
>
> Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
> ---
> fs/btrfs/block-group.c | 4 +-
> fs/btrfs/zoned.c | 99 +++++++++++++++++++++++++++++++++++++++---
> fs/btrfs/zoned.h | 4 +-
> 3 files changed, 98 insertions(+), 9 deletions(-)
>
> diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
> index 1c5ed46d376c..7c210aa5f25f 100644
> --- a/fs/btrfs/block-group.c
> +++ b/fs/btrfs/block-group.c
> @@ -1843,7 +1843,7 @@ static int read_one_block_group(struct btrfs_fs_info *info,
> goto error;
> }
>
> - ret = btrfs_load_block_group_zone_info(cache);
> + ret = btrfs_load_block_group_zone_info(cache, false);
> if (ret) {
> btrfs_err(info, "zoned: failed to load zone info of bg %llu",
> cache->start);
> @@ -2138,7 +2138,7 @@ int btrfs_make_block_group(struct btrfs_trans_handle *trans, u64 bytes_used,
> if (btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE))
> cache->needs_free_space = 1;
>
> - ret = btrfs_load_block_group_zone_info(cache);
> + ret = btrfs_load_block_group_zone_info(cache, true);
> if (ret) {
> btrfs_put_block_group(cache);
> return ret;
> diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
> index 22c0665ee816..1b85a18d8573 100644
> --- a/fs/btrfs/zoned.c
> +++ b/fs/btrfs/zoned.c
> @@ -930,7 +930,68 @@ int btrfs_ensure_empty_zones(struct btrfs_device *device, u64 start, u64 size)
> return 0;
> }
>
> -int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache)
> +/*
> + * Calculate an allocation pointer from the extent allocation
> + * information for a block group consisting of conventional zones.
> + * The pointer is set to the end of the last allocated extent in the
> + * block group, expressed as an offset from the block group start.
> + */
> +static int calculate_alloc_pointer(struct btrfs_block_group *cache,
> + u64 *offset_ret)
> +{
> + struct btrfs_fs_info *fs_info = cache->fs_info;
> + struct btrfs_root *root = fs_info->extent_root;
> + struct btrfs_path *path;
> + struct btrfs_key key;
> + struct btrfs_key found_key;
> + int ret;
> + u64 length;
> +
> + path = btrfs_alloc_path();
> + if (!path)
> + return -ENOMEM;
> +
> + key.objectid = cache->start + cache->length;
> + key.type = 0;
> + key.offset = 0;
> +
> + ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
> + /* We should not find the exact match */
> + if (!ret)
> + ret = -EUCLEAN;
> + if (ret < 0)
> + goto out;
> +
> + ret = btrfs_previous_extent_item(root, path, cache->start);
> + if (ret) {
> + if (ret == 1) {
> + ret = 0;
> + *offset_ret = 0;
> + }
> + goto out;
> + }
> +
> + btrfs_item_key_to_cpu(path->nodes[0], &found_key, path->slots[0]);
> +
> + if (found_key.type == BTRFS_EXTENT_ITEM_KEY)
> + length = found_key.offset;
> + else
> + length = fs_info->nodesize;
> +
> + if (!(found_key.objectid >= cache->start &&
> + found_key.objectid + length <= cache->start + cache->length)) {
> + ret = -EUCLEAN;
> + goto out;
> + }
> + *offset_ret = found_key.objectid + length - cache->start;
> + ret = 0;
> +
> +out:
> + btrfs_free_path(path);
> + return ret;
> +}
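
One subtlety in the length handling above: for a BTRFS_EXTENT_ITEM_KEY the
key offset is the extent's byte length, while for a metadata item the
offset encodes the tree level and the extent always spans exactly one
node. A standalone sketch of that decoding (the key type values are copied
from the btrfs on-disk format; everything else is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Key type values as defined in the btrfs on-disk format. */
#define EXTENT_ITEM_KEY		168
#define METADATA_ITEM_KEY	169

/*
 * Decode the byte length of the extent described by a key: a data
 * extent stores its length in the key offset, a metadata extent
 * stores the tree level there and always spans one node.
 */
static uint64_t extent_length(uint8_t key_type, uint64_t key_offset,
			      uint64_t nodesize)
{
	if (key_type == EXTENT_ITEM_KEY)
		return key_offset;
	return nodesize;
}
```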
> +
> +int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
> {
> struct btrfs_fs_info *fs_info = cache->fs_info;
> struct extent_map_tree *em_tree = &fs_info->mapping_tree;
> @@ -944,6 +1005,7 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache)
> int i;
> unsigned int nofs_flag;
> u64 *alloc_offsets = NULL;
> + u64 last_alloc = 0;
> u32 num_sequential = 0, num_conventional = 0;
>
> if (!btrfs_is_zoned(fs_info))
> @@ -1042,11 +1104,30 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache)
>
> if (num_conventional > 0) {
> /*
> - * Since conventional zones do not have a write pointer, we
> - * cannot determine alloc_offset from the pointer
> + * Avoid calling calculate_alloc_pointer() for a new block
> + * group: the allocation offset of a new block group must
> + * always be 0, so there is nothing to calculate.
> + *
> + * Also, the established lock order is extent buffer lock ->
> + * chunk mutex. For a new block group, this function is called
> + * from btrfs_make_block_group(), which already holds the
> + * chunk mutex, so calling calculate_alloc_pointer() here,
> + * which takes extent buffer locks, could deadlock.
> */
> - ret = -EINVAL;
> - goto out;
> + if (new) {
> + cache->alloc_offset = 0;
> + goto out;
> + }
> + ret = calculate_alloc_pointer(cache, &last_alloc);
> + if (ret || map->num_stripes == num_conventional) {
> + if (!ret)
> + cache->alloc_offset = last_alloc;
> + else
> + btrfs_err(fs_info,
> + "zoned: failed to determine allocation offset of bg %llu",
> + cache->start);
> + goto out;
> + }
> }
>
> switch (map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK) {
> @@ -1068,6 +1149,14 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache)
> }
>
> out:
> + /* An extent is allocated after the write pointer */
> + if (num_conventional && last_alloc > cache->alloc_offset) {
> + btrfs_err(fs_info,
> + "zoned: got wrong write pointer in BG %llu: %llu > %llu",
> + logical, last_alloc, cache->alloc_offset);
> + ret = -EIO;
> + }
> +
Sorry I didn't notice this on the last go around, but this could
conceivably eat an earlier ret value here. It should probably be

if (!ret && num_conventional && last_alloc > cache->alloc_offset) {
}
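
To make the point concrete, here is a standalone (non-kernel) model of the
tail of the function. With the !ret guard, an earlier error such as
-EUCLEAN from calculate_alloc_pointer() survives instead of being
overwritten by -EIO; the names and error macros are illustrative.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative error values (matching errno numbers on Linux). */
#define MY_EUCLEAN	117
#define MY_EIO		5

/*
 * Model of the end of btrfs_load_block_group_zone_info(): the guarded
 * check reports an extent allocated past the write pointer only when
 * no earlier step has already failed.
 */
static int finish_zone_info(int ret, int num_conventional,
			    uint64_t last_alloc, uint64_t alloc_offset)
{
	if (!ret && num_conventional && last_alloc > alloc_offset)
		ret = -MY_EIO;	/* write pointer behind last allocation */
	return ret;
}
```

Without the !ret guard, the first assertion below would fail: the earlier
-EUCLEAN would be clobbered by -EIO.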
Thanks,
Josef