From: Naohiro Aota <naohiro.aota@wdc.com>
To: linux-btrfs@vger.kernel.org, dsterba@suse.com
Cc: hare@suse.com, linux-fsdevel@vger.kernel.org, Naohiro Aota,
	Josef Bacik
Subject: [PATCH v9 36/41] btrfs: relocate block group to repair IO failure in ZONED
Date: Fri, 30 Oct 2020 22:51:43 +0900
Message-Id: <1a4cf83e2685980c0958e0425b2804bcbc1642a9.1604065695.git.naohiro.aota@wdc.com>
X-Mailing-List: linux-btrfs@vger.kernel.org

When btrfs finds a checksum error and the file system has a mirror of the
damaged data, btrfs reads the correct data from the mirror and writes it
back over the damaged blocks. This repair, however, violates the
sequential write rule required on zoned devices.

We can consider three methods to repair an IO failure in ZONED mode:

(1) Reset and rewrite the damaged zone
(2) Allocate a new device extent and replace the damaged device extent
    with the new one
(3) Relocate the corresponding block group

Method (1) is the closest to the behavior on regular devices. However, it
also wipes non-damaged data in the same device extent, and so it
unnecessarily degrades data that is still intact.

Method (2) is much like device replace, but done within the same device.
It is safe because it keeps the damaged device extent until the replace
finishes. However, extending device replace for this is non-trivial: the
current code assumes "src_dev->physical == dst_dev->physical", and the
extent mapping replace code would also have to learn to move a device
extent to a different position on the same device.

Method (3) invokes relocation of the damaged block group, so it is
straightforward to implement. It relocates all the mirrored device
extents, so it is potentially more costly than method (1) or (2). But it
relocates only the extents that are actually in use, which reduces the
total IO size.

Let's apply method (3) for now. In the future, we can extend device
replace and apply method (2).

To prevent a block group from being relocated multiple times when several
IO errors hit it, this commit introduces a "relocating_repair" bit that
indicates the block group is already being relocated to repair IO
failures. It also uses a new kthread, "btrfs-relocating-repair", so the
relocation does not block the IO path. This commit also supports
repairing from the scrub process.
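Illustration only (not part of the diff below): the core of the change is a
test-and-set guard on the new relocating_repair bit in the error path, which
then hands the actual relocation off to a kthread so the error path never
blocks on it. The standalone userspace sketch below models just that pattern
with pthreads; all names in it (repair_ctx, start_repair,
relocating_repair_worker) are hypothetical stand-ins, not btrfs code.

/*
 * Standalone sketch (hypothetical names, not kernel code): a per-block-group
 * flag is test-and-set under a lock by the error path, and the long-running
 * relocation is delegated to a detached worker thread, mirroring how
 * btrfs_repair_one_zone() hands off to the btrfs-relocating-repair kthread.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

struct repair_ctx {
	pthread_mutex_t lock;
	bool relocating_repair;		/* models the new block group bit */
	unsigned long long target;	/* models the block group start offset */
};

static void *relocating_repair_worker(void *data)
{
	struct repair_ctx *ctx = data;

	/* stand-in for the actual relocation work */
	printf("relocating block group %llu to repair IO failure\n",
	       ctx->target);
	return NULL;
}

/* Called from the (simulated) IO error path; must not block on the repair. */
static int start_repair(struct repair_ctx *ctx)
{
	pthread_t worker;

	pthread_mutex_lock(&ctx->lock);
	if (ctx->relocating_repair) {
		/* a repair of this block group is already in flight */
		pthread_mutex_unlock(&ctx->lock);
		return 0;
	}
	ctx->relocating_repair = true;
	pthread_mutex_unlock(&ctx->lock);

	/* hand the costly relocation off so this path returns immediately */
	pthread_create(&worker, NULL, relocating_repair_worker, ctx);
	pthread_detach(worker);
	return 0;
}

int main(void)
{
	struct repair_ctx ctx = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.target = 1073741824ULL,
	};

	start_repair(&ctx);	/* first IO error kicks off the repair */
	start_repair(&ctx);	/* further errors on the same group are no-ops */
	sleep(1);		/* give the detached worker time to run */
	return 0;
}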
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
Reviewed-by: Josef Bacik
---
 fs/btrfs/block-group.h |  1 +
 fs/btrfs/extent_io.c   |  3 ++
 fs/btrfs/scrub.c       |  3 ++
 fs/btrfs/volumes.c     | 71 ++++++++++++++++++++++++++++++++++++++++++
 fs/btrfs/volumes.h     |  1 +
 5 files changed, 79 insertions(+)

diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
index e91123495d68..50e5ddb0a19b 100644
--- a/fs/btrfs/block-group.h
+++ b/fs/btrfs/block-group.h
@@ -96,6 +96,7 @@ struct btrfs_block_group {
 	unsigned int has_caching_ctl:1;
 	unsigned int removed:1;
 	unsigned int to_copy:1;
+	unsigned int relocating_repair:1;
 
 	int disk_cache_state;
 
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 3cce444d5dbb..8ab5161a68b4 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2268,6 +2268,9 @@ int repair_io_failure(struct btrfs_fs_info *fs_info, u64 ino, u64 start,
 	ASSERT(!(fs_info->sb->s_flags & SB_RDONLY));
 	BUG_ON(!mirror_num);
 
+	if (btrfs_is_zoned(fs_info))
+		return btrfs_repair_one_zone(fs_info, logical);
+
 	bio = btrfs_io_bio_alloc(1);
 	bio->bi_iter.bi_size = 0;
 	map_length = length;
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 0e2211b9c810..e6a8df8a8f4f 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -861,6 +861,9 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 	have_csum = sblock_to_check->pagev[0]->have_csum;
 	dev = sblock_to_check->pagev[0]->dev;
 
+	if (btrfs_is_zoned(fs_info) && !sctx->is_dev_replace)
+		return btrfs_repair_one_zone(fs_info, logical);
+
 	/*
 	 * We must use GFP_NOFS because the scrub task might be waiting for a
 	 * worker task executing this function and in turn a transaction commit
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 920292d0fca7..10e678f88f2a 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -7973,3 +7973,74 @@ bool btrfs_pinned_by_swapfile(struct btrfs_fs_info *fs_info, void *ptr)
 	spin_unlock(&fs_info->swapfile_pins_lock);
 	return node != NULL;
 }
+
+static int relocating_repair_kthread(void *data)
+{
+	struct btrfs_block_group *cache = (struct btrfs_block_group *) data;
+	struct btrfs_fs_info *fs_info = cache->fs_info;
+	u64 target;
+	int ret = 0;
+
+	target = cache->start;
+	btrfs_put_block_group(cache);
+
+	if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_BALANCE)) {
+		btrfs_info(fs_info,
+			   "skip relocating block group %llu to repair: EBUSY",
+			   target);
+		return -EBUSY;
+	}
+
+	mutex_lock(&fs_info->delete_unused_bgs_mutex);
+
+	/* ensure Block Group still exists */
+	cache = btrfs_lookup_block_group(fs_info, target);
+	if (!cache)
+		goto out;
+
+	if (!cache->relocating_repair)
+		goto out;
+
+	ret = btrfs_may_alloc_data_chunk(fs_info, target);
+	if (ret < 0)
+		goto out;
+
+	btrfs_info(fs_info, "relocating block group %llu to repair IO failure",
+		   target);
+	ret = btrfs_relocate_chunk(fs_info, target);
+
+out:
+	if (cache)
+		btrfs_put_block_group(cache);
+	mutex_unlock(&fs_info->delete_unused_bgs_mutex);
+	btrfs_exclop_finish(fs_info);
+
+	return ret;
+}
+
+int btrfs_repair_one_zone(struct btrfs_fs_info *fs_info, u64 logical)
+{
+	struct btrfs_block_group *cache;
+
+	/* do not attempt to repair in degraded state */
+	if (btrfs_test_opt(fs_info, DEGRADED))
+		return 0;
+
+	cache = btrfs_lookup_block_group(fs_info, logical);
+	if (!cache)
+		return 0;
+
+	spin_lock(&cache->lock);
+	if (cache->relocating_repair) {
+		spin_unlock(&cache->lock);
+		btrfs_put_block_group(cache);
+		return 0;
+	}
+	cache->relocating_repair = 1;
+	spin_unlock(&cache->lock);
+
+	kthread_run(relocating_repair_kthread, cache,
+		    "btrfs-relocating-repair");
+
+	return 0;
+}
diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
index cff1f7689eac..7c1ad6901791 100644
--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@ -584,5 +584,6 @@ void btrfs_scratch_superblocks(struct btrfs_fs_info *fs_info,
 int btrfs_bg_type_to_factor(u64 flags);
 const char *btrfs_bg_type_to_raid_name(u64 flags);
 int btrfs_verify_dev_extents(struct btrfs_fs_info *fs_info);
+int btrfs_repair_one_zone(struct btrfs_fs_info *fs_info, u64 logical);
 
 #endif
-- 
2.27.0