From mboxrd@z Thu Jan 1 00:00:00 1970
From: Naohiro Aota <naohiro.aota@wdc.com>
To: linux-btrfs@vger.kernel.org, dsterba@suse.com
Cc: hare@suse.com, linux-fsdevel@vger.kernel.org, Jens Axboe, Christoph Hellwig,
 "Darrick J. Wong", Naohiro Aota, Josef Bacik
Subject: [PATCH v12 37/41] btrfs: relocate block group to repair IO failure in ZONED
Date: Fri, 15 Jan 2021 15:53:41 +0900
Message-Id: <7daa3aa0dc8a454a49b81380fd6b8a9bb19237a9.1610693037.git.naohiro.aota@wdc.com>
X-Mailer: git-send-email 2.27.0
X-Mailing-List: linux-btrfs@vger.kernel.org

When btrfs finds a checksum error and the file system has a mirror of the
damaged data, btrfs reads the correct data from the mirror and writes it
back to the damaged blocks. This repair, however, violates the sequential
write rule required in ZONED mode.

We can consider three methods to repair an IO failure in ZONED mode:

(1) Reset and rewrite the damaged zone
(2) Allocate a new device extent and replace the damaged device extent
    with the new one
(3) Relocate the corresponding block group

Method (1) is the closest to the behavior on regular devices. However, it
also wipes non-damaged data in the same device extent, so it unnecessarily
degrades data that is still intact.

Method (2) is much like device replacing, but done within the same device.
It is safe because it keeps the device extent until the replacement
finishes. However, extending device replace is non-trivial: it assumes
"src_dev->physical == dst_dev->physical", and the extent mapping replacing
function would need to be extended to support moving a device extent's
position within one device.

Method (3) invokes relocation of the damaged block group, so it is
straightforward to implement. It relocates all the mirrored device
extents, so it is potentially a more costly operation than method (1) or
(2). But it relocates only the used extents, which reduces the total IO
size. Let's apply method (3) for now. In the future, we can extend
device-replace and apply method (2).
To protect a block group from being relocated multiple times due to
multiple IO errors, this commit introduces a "relocating_repair" bit to
indicate that the block group is already being relocated for repair. The
relocation runs from a new kthread, "btrfs-relocating-repair", so that the
relocation process does not block the IO path.

This commit also supports repairing in the scrub process.

Signed-off-by: Naohiro Aota
Reviewed-by: Josef Bacik
---
 fs/btrfs/block-group.h |  1 +
 fs/btrfs/extent_io.c   |  3 ++
 fs/btrfs/scrub.c       |  3 ++
 fs/btrfs/volumes.c     | 71 ++++++++++++++++++++++++++++++++++++++++++
 fs/btrfs/volumes.h     |  1 +
 5 files changed, 79 insertions(+)

diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
index 3dec66ed36cb..36654bcd2a83 100644
--- a/fs/btrfs/block-group.h
+++ b/fs/btrfs/block-group.h
@@ -96,6 +96,7 @@ struct btrfs_block_group {
 	unsigned int has_caching_ctl:1;
 	unsigned int removed:1;
 	unsigned int to_copy:1;
+	unsigned int relocating_repair:1;
 
 	int disk_cache_state;
 
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 3d004bae2fa2..c4453cfcbf14 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2260,6 +2260,9 @@ int repair_io_failure(struct btrfs_fs_info *fs_info, u64 ino, u64 start,
 	ASSERT(!(fs_info->sb->s_flags & SB_RDONLY));
 	BUG_ON(!mirror_num);
 
+	if (btrfs_is_zoned(fs_info))
+		return btrfs_repair_one_zone(fs_info, logical);
+
 	bio = btrfs_io_bio_alloc(1);
 	bio->bi_iter.bi_size = 0;
 	map_length = length;
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 2f577f3b1c31..d0c47ef72d46 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -857,6 +857,9 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 	have_csum = sblock_to_check->pagev[0]->have_csum;
 	dev = sblock_to_check->pagev[0]->dev;
 
+	if (btrfs_is_zoned(fs_info) && !sctx->is_dev_replace)
+		return btrfs_repair_one_zone(fs_info, logical);
+
 	/*
 	 * We must use GFP_NOFS because the scrub task might be waiting for a
 	 * worker task executing this function and in turn a transaction commit
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index f3ab7ff0769f..dbcc4b66972d 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -7990,3 +7990,74 @@ bool btrfs_pinned_by_swapfile(struct btrfs_fs_info *fs_info, void *ptr)
 	spin_unlock(&fs_info->swapfile_pins_lock);
 	return node != NULL;
 }
+
+static int relocating_repair_kthread(void *data)
+{
+	struct btrfs_block_group *cache = (struct btrfs_block_group *) data;
+	struct btrfs_fs_info *fs_info = cache->fs_info;
+	u64 target;
+	int ret = 0;
+
+	target = cache->start;
+	btrfs_put_block_group(cache);
+
+	if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_BALANCE)) {
+		btrfs_info(fs_info,
+			   "zoned: skip relocating block group %llu to repair: EBUSY",
+			   target);
+		return -EBUSY;
+	}
+
+	mutex_lock(&fs_info->delete_unused_bgs_mutex);
+
+	/* Ensure Block Group still exists */
+	cache = btrfs_lookup_block_group(fs_info, target);
+	if (!cache)
+		goto out;
+
+	if (!cache->relocating_repair)
+		goto out;
+
+	ret = btrfs_may_alloc_data_chunk(fs_info, target);
+	if (ret < 0)
+		goto out;
+
+	btrfs_info(fs_info, "zoned: relocating block group %llu to repair IO failure",
+		   target);
+	ret = btrfs_relocate_chunk(fs_info, target);
+
+out:
+	if (cache)
+		btrfs_put_block_group(cache);
+	mutex_unlock(&fs_info->delete_unused_bgs_mutex);
+	btrfs_exclop_finish(fs_info);
+
+	return ret;
+}
+
+int btrfs_repair_one_zone(struct btrfs_fs_info *fs_info, u64 logical)
+{
+	struct btrfs_block_group *cache;
+
+	/* Do not attempt to repair in degraded state */
+	if (btrfs_test_opt(fs_info, DEGRADED))
+		return 0;
+
+	cache = btrfs_lookup_block_group(fs_info, logical);
+	if (!cache)
+		return 0;
+
+	spin_lock(&cache->lock);
+	if (cache->relocating_repair) {
+		spin_unlock(&cache->lock);
+		btrfs_put_block_group(cache);
+		return 0;
+	}
+	cache->relocating_repair = 1;
+	spin_unlock(&cache->lock);
+
+	kthread_run(relocating_repair_kthread, cache,
+		    "btrfs-relocating-repair");
+
+	return 0;
+}
diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
index 0bcf87a9e594..54f475e0c702 100644
--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@ -597,5 +597,6 @@ void btrfs_scratch_superblocks(struct btrfs_fs_info *fs_info,
 int btrfs_bg_type_to_factor(u64 flags);
 const char *btrfs_bg_type_to_raid_name(u64 flags);
 int btrfs_verify_dev_extents(struct btrfs_fs_info *fs_info);
+int btrfs_repair_one_zone(struct btrfs_fs_info *fs_info, u64 logical);
 
 #endif
-- 
2.27.0