From: Logan Gunthorpe <logang@deltatee.com>
To: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org,
	Song Liu <song@kernel.org>
Cc: Christoph Hellwig <hch@infradead.org>,
	Guoqing Jiang <guoqing.jiang@linux.dev>,
	Xiao Ni <xni@redhat.com>,
	Stephen Bates <sbates@raithlin.com>,
	Martin Oliveira <Martin.Oliveira@eideticom.com>,
	David Sloan <David.Sloan@eideticom.com>,
	Logan Gunthorpe <logang@deltatee.com>
Date: Thu, 19 May 2022 13:13:10 -0600
Message-Id: <20220519191311.17119-15-logang@deltatee.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220519191311.17119-1-logang@deltatee.com>
References: <20220519191311.17119-1-logang@deltatee.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [PATCH v1 14/15] md: Ensure resync is reported after it starts

The 07layouts test in mdadm fails on some systems. The failure
presents itself as the backup file not being removed before the next
layout is grown into:

  mdadm: /dev/md0: cannot create backup file /tmp/md-test-backup:
  File exists

This is because the background mdadm process, which is responsible for
cleaning up this backup file, gets into an infinite loop waiting for
the reshape to start. mdadm checks the mdstat file to see whether a
reshape is running and, if it is not, waits for an event on the file
or times out after 5 seconds.
On faster machines, the reshape may complete before the 5-second
timeout expires, and thus the background mdadm process loops waiting
for a reshape to start that has already occurred.

mdadm reads the mdstat file when it starts, but mdstat does not report
that the reshape has begun, even though it has indeed begun. So the
mdstat_wait() call (in mdadm), which polls on the mdstat file, never
returns before timing out.

The reason mdstat does not report that the reshape has started is an
issue in status_resync(): recovery_active is subtracted from
curr_resync, which results in a value of zero for the first chunk of
reshaped data, so the resulting read reports no reshape in progress.

To fix this, if "resync - recovery_active" is zero, force the value
to 4 so that the code still reports a resync in progress.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/md/md.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 8273ac5eef06..dbac63c8e35c 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -8022,10 +8022,18 @@ static int status_resync(struct seq_file *seq, struct mddev *mddev)
 		if (test_bit(MD_RECOVERY_DONE, &mddev->recovery))
 			/* Still cleaning up */
 			resync = max_sectors;
-	} else if (resync > max_sectors)
+	} else if (resync > max_sectors) {
 		resync = max_sectors;
-	else
+	} else {
 		resync -= atomic_read(&mddev->recovery_active);
+		if (!resync) {
+			/*
+			 * Resync has started, but if it's zero, ensure
+			 * it is still reported, by forcing it to be 4
+			 */
+			resync = 4;
+		}
+	}
 
 	if (resync == 0) {
 		if (test_bit(MD_RESYNCING_REMOTE, &mddev->recovery)) {
-- 
2.30.2
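
A minimal sketch of the userspace polling pattern described in the
commit message, for illustration only. This is not mdadm's
implementation: the helper name mdstat_shows_reshape(), the
strstr()-based check and the select() usage are assumptions; only the
/proc/mdstat path and the 5-second timeout come from the description
above.

/*
 * Illustrative sketch of "check mdstat, then wait for an event or a
 * 5-second timeout".  Not mdadm's code.
 */
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <unistd.h>

/* Hypothetical helper: does mdstat currently show a reshape? */
static bool mdstat_shows_reshape(const char *path)
{
	char buf[4096];
	ssize_t n;
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return false;
	n = read(fd, buf, sizeof(buf) - 1);
	close(fd);
	if (n <= 0)
		return false;
	buf[n] = '\0';
	return strstr(buf, "reshape") != NULL;
}

int main(void)
{
	const char *path = "/proc/mdstat";

	while (!mdstat_shows_reshape(path)) {
		struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };
		fd_set exceptfds;
		int fd = open(path, O_RDONLY);

		if (fd < 0)
			return 1;

		/*
		 * Wait for a change notification on mdstat or give up
		 * after 5 seconds, then re-check.
		 */
		FD_ZERO(&exceptfds);
		FD_SET(fd, &exceptfds);
		if (select(fd + 1, NULL, NULL, &exceptfds, &tv) < 0) {
			close(fd);
			return 1;
		}
		close(fd);
	}

	printf("reshape observed in %s\n", path);
	return 0;
}

With the pre-patch kernel, mdstat may show nothing for the first chunk
of the reshape, and on a fast machine the reshape can finish before the
next 5-second wakeup, so a loop like this keeps waiting for a reshape
that has already completed.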