From: Damien Le Moal <Damien.LeMoal@wdc.com>
To: Anand Jain <anand.jain@oracle.com>,
	Naohiro Aota <Naohiro.Aota@wdc.com>,
	"linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>,
	"dsterba@suse.com" <dsterba@suse.com>
Cc: "hare@suse.com" <hare@suse.com>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	Jens Axboe <axboe@kernel.dk>,
	"hch@infradead.org" <hch@infradead.org>,
	"Darrick J. Wong" <darrick.wong@oracle.com>
Subject: Re: [PATCH v14 12/42] btrfs: calculate allocation offset for conventional zones
Date: Wed, 3 Feb 2021 07:10:09 +0000	[thread overview]
Message-ID: <BL0PR04MB65140A1D843F9A652EF39D2FE7B49@BL0PR04MB6514.namprd04.prod.outlook.com> (raw)
In-Reply-To: <a8f1aeb9-70c4-2d4f-50f3-d4902c3e4173@oracle.com>

On 2021/02/03 15:58, Anand Jain wrote:
> 
> 
> On 2/3/2021 2:10 PM, Damien Le Moal wrote:
>> On 2021/02/03 14:22, Anand Jain wrote:
>>> On 1/26/2021 10:24 AM, Naohiro Aota wrote:
>>>> Conventional zones do not have a write pointer, so we cannot use it to
>>>> determine the allocation offset if a block group contains a conventional
>>>> zone.
>>>>
>>>> But instead, we can consider the end of the last allocated extent in the
>>>> block group as an allocation offset.
>>>>
>>>> For new block group, we cannot calculate the allocation offset by
>>>> consulting the extent tree, because it can cause deadlock by taking extent
>>>> buffer lock after chunk mutex (which is already taken in
>>>> btrfs_make_block_group()). Since it is a new block group, we can simply set
>>>> the allocation offset to 0, anyway.
>>>>
>>>
>>> Information about how the WP of conventional zones is used is missing here.
>>
>> Conventional zones do not have valid write pointers because they can be written
>> randomly. This is per ZBC/ZAC specifications. So the wp info is not used, as
>> stated at the beginning of the commit message.
> 
> I was looking for the information on why the "end of the last allocated 
> extent in the block group" is still assigned to it.

We wanted to keep sequential allocation even for conventional zones, to have a
coherent allocation policy for all block groups instead of different policies for
different zone types. Hence the end of the "last allocated extent" is used as a
replacement for the non-existent write pointer of conventional zones. We could
revisit this, but I do like the single allocation policy approach, as it somewhat
isolates the zone type from the block group to zone mapping. There is probably
room for improvement in this area though.
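
To make that concrete, here is a stand-alone user-space sketch of the idea
(made-up struct, helper name and extent data, not the actual btrfs code;
in the patch, calculate_alloc_pointer() does the real work via an extent
tree search):

/*
 * Stand-alone sketch: derive the allocation offset of a block group
 * backed by conventional zones from the end of the last allocated
 * extent. Not kernel code; extent data below is made up.
 */
#include <stdio.h>
#include <stdint.h>

typedef uint64_t u64;

struct extent {
	u64 start;	/* logical start of the allocated extent */
	u64 len;	/* length of the extent */
};

/*
 * Return the allocation offset: 0 for a brand-new block group (nothing
 * allocated yet), otherwise the end of the last allocated extent,
 * relative to the block group start.
 */
static u64 conv_zone_alloc_offset(u64 bg_start, u64 bg_len,
				  const struct extent *extents, int nr,
				  int new_bg)
{
	u64 last_end = bg_start;
	int i;

	if (new_bg || !nr)
		return 0;

	for (i = 0; i < nr; i++) {
		u64 end = extents[i].start + extents[i].len;

		/* Only consider extents fully inside the block group */
		if (extents[i].start >= bg_start &&
		    end <= bg_start + bg_len && end > last_end)
			last_end = end;
	}

	return last_end - bg_start;
}

int main(void)
{
	/* 256 MiB block group at logical 1 GiB with two extents */
	const u64 bg_start = 1ULL << 30;
	const u64 bg_len = 256ULL << 20;
	const struct extent extents[] = {
		{ bg_start,                 16ULL << 20 },
		{ bg_start + (16ULL << 20),  4ULL << 20 },
	};

	printf("alloc_offset = %llu\n",
	       (unsigned long long)conv_zone_alloc_offset(bg_start, bg_len,
							   extents, 2, 0));
	return 0;
}

Since the zoned allocator only ever appends within a block group, the end of
the last extent is exactly where the write pointer of a sequential zone would
be, which is what keeps the policy uniform across zone types.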

> 
> Thanks.
> 
>>> Reviewed-by: Anand Jain <anand.jain@oracle.com>
>>> Thanks.
>>>
>>>> Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
>>>> ---
>>>>    fs/btrfs/block-group.c |  4 +-
>>>>    fs/btrfs/zoned.c       | 99 +++++++++++++++++++++++++++++++++++++++---
>>>>    fs/btrfs/zoned.h       |  4 +-
>>>>    3 files changed, 98 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
>>>> index 0140fafedb6a..349b2a09bdf1 100644
>>>> --- a/fs/btrfs/block-group.c
>>>> +++ b/fs/btrfs/block-group.c
>>>> @@ -1851,7 +1851,7 @@ static int read_one_block_group(struct btrfs_fs_info *info,
>>>>    			goto error;
>>>>    	}
>>>>    
>>>> -	ret = btrfs_load_block_group_zone_info(cache);
>>>> +	ret = btrfs_load_block_group_zone_info(cache, false);
>>>>    	if (ret) {
>>>>    		btrfs_err(info, "zoned: failed to load zone info of bg %llu",
>>>>    			  cache->start);
>>>> @@ -2146,7 +2146,7 @@ int btrfs_make_block_group(struct btrfs_trans_handle *trans, u64 bytes_used,
>>>>    	if (btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE))
>>>>    		cache->needs_free_space = 1;
>>>>    
>>>> -	ret = btrfs_load_block_group_zone_info(cache);
>>>> +	ret = btrfs_load_block_group_zone_info(cache, true);
>>>>    	if (ret) {
>>>>    		btrfs_put_block_group(cache);
>>>>    		return ret;
>>>> diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
>>>> index 22c0665ee816..ca7aef252d33 100644
>>>> --- a/fs/btrfs/zoned.c
>>>> +++ b/fs/btrfs/zoned.c
>>>> @@ -930,7 +930,68 @@ int btrfs_ensure_empty_zones(struct btrfs_device *device, u64 start, u64 size)
>>>>    	return 0;
>>>>    }
>>>>    
>>>> -int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache)
>>>> +/*
>>>> + * Calculate an allocation pointer from the extent allocation information
>>>> + * for a block group consist of conventional zones. It is pointed to the
>>>> + * end of the last allocated extent in the block group as an allocation
>>>> + * offset.
>>>> + */
>>>> +static int calculate_alloc_pointer(struct btrfs_block_group *cache,
>>>> +				   u64 *offset_ret)
>>>> +{
>>>> +	struct btrfs_fs_info *fs_info = cache->fs_info;
>>>> +	struct btrfs_root *root = fs_info->extent_root;
>>>> +	struct btrfs_path *path;
>>>> +	struct btrfs_key key;
>>>> +	struct btrfs_key found_key;
>>>> +	int ret;
>>>> +	u64 length;
>>>> +
>>>> +	path = btrfs_alloc_path();
>>>> +	if (!path)
>>>> +		return -ENOMEM;
>>>> +
>>>> +	key.objectid = cache->start + cache->length;
>>>> +	key.type = 0;
>>>> +	key.offset = 0;
>>>> +
>>>> +	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
>>>> +	/* We should not find the exact match */
>>>> +	if (!ret)
>>>> +		ret = -EUCLEAN;
>>>> +	if (ret < 0)
>>>> +		goto out;
>>>> +
>>>> +	ret = btrfs_previous_extent_item(root, path, cache->start);
>>>> +	if (ret) {
>>>> +		if (ret == 1) {
>>>> +			ret = 0;
>>>> +			*offset_ret = 0;
>>>> +		}
>>>> +		goto out;
>>>> +	}
>>>> +
>>>> +	btrfs_item_key_to_cpu(path->nodes[0], &found_key, path->slots[0]);
>>>> +
>>>> +	if (found_key.type == BTRFS_EXTENT_ITEM_KEY)
>>>> +		length = found_key.offset;
>>>> +	else
>>>> +		length = fs_info->nodesize;
>>>> +
>>>> +	if (!(found_key.objectid >= cache->start &&
>>>> +	       found_key.objectid + length <= cache->start + cache->length)) {
>>>> +		ret = -EUCLEAN;
>>>> +		goto out;
>>>> +	}
>>>> +	*offset_ret = found_key.objectid + length - cache->start;
>>>> +	ret = 0;
>>>> +
>>>> +out:
>>>> +	btrfs_free_path(path);
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
>>>>    {
>>>>    	struct btrfs_fs_info *fs_info = cache->fs_info;
>>>>    	struct extent_map_tree *em_tree = &fs_info->mapping_tree;
>>>> @@ -944,6 +1005,7 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache)
>>>>    	int i;
>>>>    	unsigned int nofs_flag;
>>>>    	u64 *alloc_offsets = NULL;
>>>> +	u64 last_alloc = 0;
>>>>    	u32 num_sequential = 0, num_conventional = 0;
>>>>    
>>>>    	if (!btrfs_is_zoned(fs_info))
>>>> @@ -1042,11 +1104,30 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache)
>>>>    
>>>>    	if (num_conventional > 0) {
>>>>    		/*
>>>> -		 * Since conventional zones do not have a write pointer, we
>>>> -		 * cannot determine alloc_offset from the pointer
>>>> +		 * Avoid calling calculate_alloc_pointer() for new BG. It
>>>> +		 * is no use for new BG. It must be always 0.
>>>> +		 *
>>>> +		 * Also, we have a lock chain of extent buffer lock ->
>>>> +		 * chunk mutex.  For new BG, this function is called from
>>>> +		 * btrfs_make_block_group() which is already taking the
>>>> +		 * chunk mutex. Thus, we cannot call
>>>> +		 * calculate_alloc_pointer() which takes extent buffer
>>>> +		 * locks to avoid deadlock.
>>>>    		 */
>>>> -		ret = -EINVAL;
>>>> -		goto out;
>>>> +		if (new) {
>>>> +			cache->alloc_offset = 0;
>>>> +			goto out;
>>>> +		}
>>>> +		ret = calculate_alloc_pointer(cache, &last_alloc);
>>>> +		if (ret || map->num_stripes == num_conventional) {
>>>> +			if (!ret)
>>>> +				cache->alloc_offset = last_alloc;
>>>> +			else
>>>> +				btrfs_err(fs_info,
>>>> +			"zoned: failed to determine allocation offset of bg %llu",
>>>> +					  cache->start);
>>>> +			goto out;
>>>> +		}
>>>>    	}
>>>>    
>>>>    	switch (map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK) {
>>>> @@ -1068,6 +1149,14 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache)
>>>>    	}
>>>>    
>>>>    out:
>>>> +	/* An extent is allocated after the write pointer */
>>>> +	if (!ret && num_conventional && last_alloc > cache->alloc_offset) {
>>>> +		btrfs_err(fs_info,
>>>> +			  "zoned: got wrong write pointer in BG %llu: %llu > %llu",
>>>> +			  logical, last_alloc, cache->alloc_offset);
>>>> +		ret = -EIO;
>>>> +	}
>>>> +
>>>>    	kfree(alloc_offsets);
>>>>    	free_extent_map(em);
>>>>    
>>>> diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
>>>> index 491b98c97f48..b53403ba0b10 100644
>>>> --- a/fs/btrfs/zoned.h
>>>> +++ b/fs/btrfs/zoned.h
>>>> @@ -41,7 +41,7 @@ u64 btrfs_find_allocatable_zones(struct btrfs_device *device, u64 hole_start,
>>>>    int btrfs_reset_device_zone(struct btrfs_device *device, u64 physical,
>>>>    			    u64 length, u64 *bytes);
>>>>    int btrfs_ensure_empty_zones(struct btrfs_device *device, u64 start, u64 size);
>>>> -int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache);
>>>> +int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new);
>>>>    #else /* CONFIG_BLK_DEV_ZONED */
>>>>    static inline int btrfs_get_dev_zone(struct btrfs_device *device, u64 pos,
>>>>    				     struct blk_zone *zone)
>>>> @@ -119,7 +119,7 @@ static inline int btrfs_ensure_empty_zones(struct btrfs_device *device,
>>>>    }
>>>>    
>>>>    static inline int btrfs_load_block_group_zone_info(
>>>> -	struct btrfs_block_group *cache)
>>>> +	struct btrfs_block_group *cache, bool new)
>>>>    {
>>>>    	return 0;
>>>>    }
>>>>
>>>
>>>
>>
>>
> 


-- 
Damien Le Moal
Western Digital Research

Thread overview: 72+ messages
2021-01-26  2:24 [PATCH v14 00/42] btrfs: zoned block device support Naohiro Aota
2021-01-26  2:24 ` [PATCH v14 01/42] block: add bio_add_zone_append_page Naohiro Aota
2021-01-26 16:08   ` Jens Axboe
2021-01-26  2:24 ` [PATCH v14 02/42] iomap: support REQ_OP_ZONE_APPEND Naohiro Aota
2021-01-26  2:24 ` [PATCH v14 03/42] btrfs: defer loading zone info after opening trees Naohiro Aota
2021-01-30 22:09   ` Anand Jain
2021-01-26  2:24 ` [PATCH v14 04/42] btrfs: use regular SB location on emulated zoned mode Naohiro Aota
2021-01-30 22:28   ` Anand Jain
2021-01-26  2:24 ` [PATCH v14 05/42] btrfs: release path before calling into btrfs_load_block_group_zone_info Naohiro Aota
2021-01-30 23:21   ` Anand Jain
2021-01-26  2:24 ` [PATCH v14 06/42] btrfs: do not load fs_info->zoned from incompat flag Naohiro Aota
2021-01-30 23:40   ` Anand Jain
2021-01-26  2:24 ` [PATCH v14 07/42] btrfs: disallow fitrim in ZONED mode Naohiro Aota
2021-01-30 23:44   ` Anand Jain
2021-01-26  2:24 ` [PATCH v14 08/42] btrfs: allow zoned mode on non-zoned block devices Naohiro Aota
2021-01-31  1:17   ` Anand Jain
2021-02-01 11:06     ` Johannes Thumshirn
2021-02-02  1:49   ` Anand Jain
2021-01-26  2:24 ` [PATCH v14 09/42] btrfs: implement zoned chunk allocator Naohiro Aota
2021-01-26  2:24 ` [PATCH v14 10/42] btrfs: verify device extent is aligned to zone Naohiro Aota
2021-01-26  2:24 ` [PATCH v14 11/42] btrfs: load zone's allocation offset Naohiro Aota
2021-01-26  2:24 ` [PATCH v14 12/42] btrfs: calculate allocation offset for conventional zones Naohiro Aota
2021-01-27 18:03   ` Josef Bacik
2021-02-03  5:19   ` Anand Jain
2021-02-03  6:10     ` Damien Le Moal
2021-02-03  6:56       ` Anand Jain
2021-02-03  7:10         ` Damien Le Moal [this message]
2021-01-26  2:24 ` [PATCH v14 13/42] btrfs: track unusable bytes for zones Naohiro Aota
2021-01-27 18:06   ` Josef Bacik
2021-01-26  2:24 ` [PATCH v14 14/42] btrfs: do sequential extent allocation in ZONED mode Naohiro Aota
2021-01-26  2:24 ` [PATCH v14 15/42] btrfs: redirty released extent buffers " Naohiro Aota
2021-01-26  2:24 ` [PATCH v14 16/42] btrfs: advance allocation pointer after tree log node Naohiro Aota
2021-01-26  2:24 ` [PATCH v14 17/42] btrfs: enable to mount ZONED incompat flag Naohiro Aota
2021-01-31 12:21   ` Anand Jain
2021-01-26  2:24 ` [PATCH v14 18/42] btrfs: reset zones of unused block groups Naohiro Aota
2021-01-26  2:24 ` [PATCH v14 19/42] btrfs: extract page adding function Naohiro Aota
2021-01-26  2:24 ` [PATCH v14 20/42] btrfs: use bio_add_zone_append_page for zoned btrfs Naohiro Aota
2021-01-26  2:24 ` [PATCH v14 21/42] btrfs: handle REQ_OP_ZONE_APPEND as writing Naohiro Aota
2021-01-26  2:25 ` [PATCH v14 22/42] btrfs: split ordered extent when bio is sent Naohiro Aota
2021-01-27 19:00   ` Josef Bacik
2021-01-26  2:25 ` [PATCH v14 23/42] btrfs: check if bio spans across an ordered extent Naohiro Aota
2021-01-26  2:25 ` [PATCH v14 24/42] btrfs: extend btrfs_rmap_block for specifying a device Naohiro Aota
2021-01-26  2:25 ` [PATCH v14 25/42] btrfs: cache if block-group is on a sequential zone Naohiro Aota
2021-01-26  2:25 ` [PATCH v14 26/42] btrfs: save irq flags when looking up an ordered extent Naohiro Aota
2021-01-26  2:25 ` [PATCH v14 27/42] btrfs: use ZONE_APPEND write for ZONED btrfs Naohiro Aota
2021-01-26  2:25 ` [PATCH v14 28/42] btrfs: enable zone append writing for direct IO Naohiro Aota
2021-01-26  2:25 ` [PATCH v14 29/42] btrfs: introduce dedicated data write path for ZONED mode Naohiro Aota
2021-02-02 15:00   ` David Sterba
2021-02-04  8:25     ` Naohiro Aota
2021-01-26  2:25 ` [PATCH v14 30/42] btrfs: serialize meta IOs on " Naohiro Aota
2021-01-26  2:25 ` [PATCH v14 31/42] btrfs: wait existing extents before truncating Naohiro Aota
2021-01-26  2:25 ` [PATCH v14 32/42] btrfs: avoid async metadata checksum on ZONED mode Naohiro Aota
2021-02-02 14:54   ` David Sterba
2021-02-02 16:50     ` Johannes Thumshirn
2021-02-02 19:28       ` David Sterba
2021-01-26  2:25 ` [PATCH v14 33/42] btrfs: mark block groups to copy for device-replace Naohiro Aota
2021-01-26  2:25 ` [PATCH v14 34/42] btrfs: implement cloning for ZONED device-replace Naohiro Aota
2021-01-26  2:25 ` [PATCH v14 35/42] btrfs: implement copying " Naohiro Aota
2021-01-26  2:25 ` [PATCH v14 36/42] btrfs: support dev-replace in ZONED mode Naohiro Aota
2021-01-26  2:25 ` [PATCH v14 37/42] btrfs: enable relocation " Naohiro Aota
2021-01-26  2:25 ` [PATCH v14 38/42] btrfs: relocate block group to repair IO failure in ZONED Naohiro Aota
2021-01-26  2:25 ` [PATCH v14 39/42] btrfs: split alloc_log_tree() Naohiro Aota
2021-01-26  2:25 ` [PATCH v14 40/42] btrfs: extend zoned allocator to use dedicated tree-log block group Naohiro Aota
2021-01-26  2:25 ` [PATCH v14 41/42] btrfs: serialize log transaction on ZONED mode Naohiro Aota
2021-01-27 19:01   ` Josef Bacik
2021-02-01 15:48   ` Filipe Manana
2021-01-26  2:25 ` [PATCH v14 42/42] btrfs: reorder log node allocation Naohiro Aota
2021-02-01 15:48   ` Filipe Manana
2021-02-01 15:54     ` Johannes Thumshirn
2021-01-29  7:56 ` [PATCH v14 00/42] btrfs: zoned block device support Johannes Thumshirn
2021-01-29 20:44   ` David Sterba
2021-01-30 11:30     ` Johannes Thumshirn
