From: Anand Jain <anand.jain@oracle.com>
To: Naohiro Aota <naohiro.aota@wdc.com>,
linux-btrfs@vger.kernel.org, dsterba@suse.com
Cc: hare@suse.com, linux-fsdevel@vger.kernel.org,
Jens Axboe <axboe@kernel.dk>,
Christoph Hellwig <hch@infradead.org>,
"Darrick J. Wong" <darrick.wong@oracle.com>,
Josef Bacik <josef@toxicpanda.com>
Subject: Re: [PATCH v10 14/41] btrfs: load zone's allocation offset
Date: Tue, 8 Dec 2020 17:54:57 +0800 [thread overview]
Message-ID: <de8efe1e-859e-07b7-9128-1749725ce0e7@oracle.com> (raw)
In-Reply-To: <e05710f61375174d7a64e2c14575555c0b89a431.1605007036.git.naohiro.aota@wdc.com>
On 10/11/20 7:26 pm, Naohiro Aota wrote:
> Zoned btrfs must allocate blocks at the zones' write pointer. The device's
> write pointer position can be mapped to a logical address within a block
> group. This commit adds "alloc_offset" to track the logical address.
>
> This logical address is populated in btrfs_load_block_group_zone_info()
> from write pointers of corresponding zones.
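
As an aside for readers new to zoned devices: the write-pointer-to-offset
mapping can be sketched as below (illustrative Python, not btrfs code; the
sector values are made up):

```python
SECTOR_SHIFT = 9  # 512-byte sectors, as in the kernel

def alloc_offset(zone_start_sector, wp_sector):
    """Byte offset already written within a zone, from its write pointer."""
    return (wp_sector - zone_start_sector) << SECTOR_SHIFT

# A zone starting at sector 0x80000 whose write pointer sits at
# sector 0x80800 has 0x800 sectors = 1 MiB already written:
print(alloc_offset(0x80000, 0x80800))  # 1048576
```

The block group's alloc_offset is then this in-zone byte offset, so the next
allocation lands exactly at the device's write pointer.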
>
> For now, zoned btrfs only supports the SINGLE profile. Supporting non-SINGLE
> profiles with zone append writing is not trivial. For example, in the DUP
> profile, we send a zone append write IO to two zones on a device. The
> device replies with the written LBA for each IO. If the offsets of the
> returned addresses from the beginning of their zones differ, the two
> copies end up at different logical addresses.
>
> We would need a fine-grained logical-to-physical mapping to handle such
> diverging physical addresses. Since that would require an additional
> metadata type, disable non-SINGLE profiles for now.
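
The DUP divergence described above can be shown with a tiny sketch
(hypothetical numbers, not btrfs code):

```python
def logical_addr(bg_logical, zone_start, written_lba):
    """Logical address implied by the LBA a zone append completion reports."""
    return bg_logical + (written_lba - zone_start)

bg = 1 << 30  # made-up block group logical start
# Copy 1 lands at offset 8 within its zone, copy 2 at offset 16,
# because the device chooses the write location for zone append:
a1 = logical_addr(bg, 1000, 1008)
a2 = logical_addr(bg, 5000, 5016)
assert a1 != a2  # the two copies of one extent disagree on logical address
```

With a plain write the filesystem picks the LBA, so both copies stay in
lockstep; with zone append the device picks it, which is what breaks DUP here.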
>
> This commit handles the case where all the zones in a block group are
> sequential. The next patch will handle block groups containing a
> conventional zone.
>
> Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
> Reviewed-by: Josef Bacik <josef@toxicpanda.com>
> ---
> fs/btrfs/block-group.c | 15 ++++
> fs/btrfs/block-group.h | 6 ++
> fs/btrfs/zoned.c | 154 +++++++++++++++++++++++++++++++++++++++++
> fs/btrfs/zoned.h | 7 ++
> 4 files changed, 182 insertions(+)
>
> diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
> index 6b4831824f51..ffc64dfbe09e 100644
> --- a/fs/btrfs/block-group.c
> +++ b/fs/btrfs/block-group.c
> @@ -15,6 +15,7 @@
> #include "delalloc-space.h"
> #include "discard.h"
> #include "raid56.h"
> +#include "zoned.h"
>
> /*
> * Return target flags in extended format or 0 if restripe for this chunk_type
> @@ -1935,6 +1936,13 @@ static int read_one_block_group(struct btrfs_fs_info *info,
> goto error;
> }
>
> + ret = btrfs_load_block_group_zone_info(cache);
> + if (ret) {
> + btrfs_err(info, "zoned: failed to load zone info of bg %llu",
> + cache->start);
> + goto error;
> + }
> +
> /*
> * We need to exclude the super stripes now so that the space info has
> * super bytes accounted for, otherwise we'll think we have more space
> @@ -2161,6 +2169,13 @@ int btrfs_make_block_group(struct btrfs_trans_handle *trans, u64 bytes_used,
> cache->last_byte_to_unpin = (u64)-1;
> cache->cached = BTRFS_CACHE_FINISHED;
> cache->needs_free_space = 1;
> +
> + ret = btrfs_load_block_group_zone_info(cache);
> + if (ret) {
> + btrfs_put_block_group(cache);
> + return ret;
> + }
> +
> ret = exclude_super_stripes(cache);
> if (ret) {
> /* We may have excluded something, so call this just in case */
> diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
> index adfd7583a17b..14e3043c9ce7 100644
> --- a/fs/btrfs/block-group.h
> +++ b/fs/btrfs/block-group.h
> @@ -183,6 +183,12 @@ struct btrfs_block_group {
>
> /* Record locked full stripes for RAID5/6 block group */
> struct btrfs_full_stripe_locks_tree full_stripe_locks_root;
> +
> + /*
> + * Allocation offset for the block group to implement sequential
> + * allocation. This is used only with ZONED mode enabled.
> + */
> + u64 alloc_offset;
> };
>
> static inline u64 btrfs_block_group_end(struct btrfs_block_group *block_group)
> diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
> index ed5de1c138d7..69d3412c4fef 100644
> --- a/fs/btrfs/zoned.c
> +++ b/fs/btrfs/zoned.c
> @@ -3,14 +3,20 @@
> #include <linux/bitops.h>
> #include <linux/slab.h>
> #include <linux/blkdev.h>
> +#include <linux/sched/mm.h>
> #include "ctree.h"
> #include "volumes.h"
> #include "zoned.h"
> #include "rcu-string.h"
> #include "disk-io.h"
> +#include "block-group.h"
>
> /* Maximum number of zones to report per blkdev_report_zones() call */
> #define BTRFS_REPORT_NR_ZONES 4096
> +/* Invalid allocation pointer value for missing devices */
> +#define WP_MISSING_DEV ((u64)-1)
> +/* Pseudo write pointer value for conventional zone */
> +#define WP_CONVENTIONAL ((u64)-2)
>
> /* Number of superblock log zones */
> #define BTRFS_NR_SB_LOG_ZONES 2
> @@ -777,3 +783,151 @@ int btrfs_ensure_empty_zones(struct btrfs_device *device, u64 start, u64 size)
>
> return 0;
> }
> +
> +int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache)
> +{
> + struct btrfs_fs_info *fs_info = cache->fs_info;
> + struct extent_map_tree *em_tree = &fs_info->mapping_tree;
> + struct extent_map *em;
> + struct map_lookup *map;
> + struct btrfs_device *device;
> + u64 logical = cache->start;
> + u64 length = cache->length;
> + u64 physical = 0;
> + int ret;
> + int i;
> + unsigned int nofs_flag;
> + u64 *alloc_offsets = NULL;
> + u32 num_sequential = 0, num_conventional = 0;
> +
> + if (!btrfs_is_zoned(fs_info))
> + return 0;
> +
> + /* Sanity check */
> + if (!IS_ALIGNED(length, fs_info->zone_size)) {
> + btrfs_err(fs_info, "zoned: block group %llu len %llu unaligned to zone size %llu",
> + logical, length, fs_info->zone_size);
> + return -EIO;
> + }
> +
> + /* Get the chunk mapping */
> + read_lock(&em_tree->lock);
> + em = lookup_extent_mapping(em_tree, logical, length);
> + read_unlock(&em_tree->lock);
> +
> + if (!em)
> + return -EINVAL;
> +
> + map = em->map_lookup;
> +
> + /*
> + * Get the zone type: if the group is mapped to a non-sequential zone,
> + * there is no need for the allocation offset (fit allocation is OK).
> + */
> + alloc_offsets = kcalloc(map->num_stripes, sizeof(*alloc_offsets),
> + GFP_NOFS);
> + if (!alloc_offsets) {
> + free_extent_map(em);
> + return -ENOMEM;
> + }
> +
> + for (i = 0; i < map->num_stripes; i++) {
> + bool is_sequential;
> + struct blk_zone zone;
> +
> + device = map->stripes[i].dev;
> + physical = map->stripes[i].physical;
> +
> + if (device->bdev == NULL) {
> + alloc_offsets[i] = WP_MISSING_DEV;
> + continue;
> + }
> +
> + is_sequential = btrfs_dev_is_sequential(device, physical);
> + if (is_sequential)
> + num_sequential++;
> + else
> + num_conventional++;
> +
> + if (!is_sequential) {
> + alloc_offsets[i] = WP_CONVENTIONAL;
> + continue;
> + }
> +
> + /*
> + * This zone will be used for allocation, so mark this
> + * zone non-empty.
> + */
> + btrfs_dev_clear_zone_empty(device, physical);
> +
> + /*
> + * The group is mapped to a sequential zone. Get the zone write
> + * pointer to determine the allocation offset within the zone.
> + */
> + WARN_ON(!IS_ALIGNED(physical, fs_info->zone_size));
> + nofs_flag = memalloc_nofs_save();
> + ret = btrfs_get_dev_zone(device, physical, &zone);
> + memalloc_nofs_restore(nofs_flag);
> + if (ret == -EIO || ret == -EOPNOTSUPP) {
> + ret = 0;
> + alloc_offsets[i] = WP_MISSING_DEV;
> + continue;
> + } else if (ret) {
> + goto out;
> + }
> +
> + switch (zone.cond) {
> + case BLK_ZONE_COND_OFFLINE:
> + case BLK_ZONE_COND_READONLY:
> + btrfs_err(fs_info, "zoned: offline/readonly zone %llu on device %s (devid %llu)",
> + physical >> device->zone_info->zone_size_shift,
> + rcu_str_deref(device->name), device->devid);
> + alloc_offsets[i] = WP_MISSING_DEV;
> + break;
> + case BLK_ZONE_COND_EMPTY:
> + alloc_offsets[i] = 0;
> + break;
> + case BLK_ZONE_COND_FULL:
> + alloc_offsets[i] = fs_info->zone_size;
> + break;
> + default:
> + /* Partially used zone */
> + alloc_offsets[i] =
> + ((zone.wp - zone.start) << SECTOR_SHIFT);
> + break;
> + }
> + }
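
The per-condition handling above, condensed as a sketch for reference
(illustrative Python; WP_MISSING_DEV is modeled as None):

```python
SECTOR_SHIFT = 9  # 512-byte sectors

def zone_alloc_offset(cond, zone_start, wp, zone_size):
    """Byte offset already allocated in a zone, by zone condition."""
    if cond == "EMPTY":
        return 0                     # nothing written yet
    if cond == "FULL":
        return zone_size             # zone exhausted
    if cond in ("OFFLINE", "READONLY"):
        return None                  # treated like a missing device
    # Partially used zone: distance from zone start to write pointer
    return (wp - zone_start) << SECTOR_SHIFT
```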
> +
> + if (num_conventional > 0) {
> + /*
> + * Since conventional zones do not have a write pointer, we
> + * cannot determine alloc_offset from the pointer
> + */
> + ret = -EINVAL;
> + goto out;
> + }
> +
> + switch (map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK) {
> + case 0: /* single */
> + cache->alloc_offset = alloc_offsets[0];
> + break;
> + case BTRFS_BLOCK_GROUP_DUP:
> + case BTRFS_BLOCK_GROUP_RAID1:
> + case BTRFS_BLOCK_GROUP_RAID0:
> + case BTRFS_BLOCK_GROUP_RAID10:
> + case BTRFS_BLOCK_GROUP_RAID5:
> + case BTRFS_BLOCK_GROUP_RAID6:
> + /* non-SINGLE profiles are not supported yet */
> + default:
> + btrfs_err(fs_info, "zoned: profile %s not supported",
> + btrfs_bg_type_to_raid_name(map->type));
> + ret = -EINVAL;
> + goto out;
> + }
> +
> +out:
> + kfree(alloc_offsets);
> + free_extent_map(em);
> +
> + return ret;
> +}
> diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
> index ec2391c52d8b..e3338a2f1be9 100644
> --- a/fs/btrfs/zoned.h
> +++ b/fs/btrfs/zoned.h
> @@ -40,6 +40,7 @@ u64 btrfs_find_allocatable_zones(struct btrfs_device *device, u64 hole_start,
> int btrfs_reset_device_zone(struct btrfs_device *device, u64 physical,
> u64 length, u64 *bytes);
> int btrfs_ensure_empty_zones(struct btrfs_device *device, u64 start, u64 size);
> +int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache);
> #else /* CONFIG_BLK_DEV_ZONED */
> static inline int btrfs_get_dev_zone(struct btrfs_device *device, u64 pos,
> struct blk_zone *zone)
> @@ -112,6 +113,12 @@ static inline int btrfs_ensure_empty_zones(struct btrfs_device *device,
> return 0;
> }
>
> +static inline int btrfs_load_block_group_zone_info(
> + struct btrfs_block_group *cache)
> +{
> + return 0;
> +}
> +
> #endif
>
> static inline bool btrfs_dev_is_sequential(struct btrfs_device *device, u64 pos)
>
Looks good.

Reviewed-by: Anand Jain <anand.jain@oracle.com>