linux-kernel.vger.kernel.org archive mirror
From: Carlos Carvalho <carlos@fisica.ufpr.br>
To: Song Liu <song@kernel.org>
Cc: Bagas Sanjaya <bagasdotme@gmail.com>,
	Christoph Hellwig <hch@lst.de>, AceLan Kao <acelan@gmail.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Linux Regressions <regressions@lists.linux.dev>,
	Linux RAID <linux-raid@vger.kernel.org>
Subject: Re: Infinite systemd loop when powering off the machine with multiple MD RAIDs
Date: Tue, 22 Aug 2023 16:13:59 -0300
Message-ID: <ZOUI9yDzjxuFP68E@fisica.ufpr.br>
In-Reply-To: <CAPhsuW5Od9tczboEBxC8gn+2XLkEbirfCUm7WuJBey5MKQjwDA@mail.gmail.com>

Song Liu (song@kernel.org) wrote on Tue, Aug 22, 2023 at 03:56:04PM -03:
> From systemd code, i.e. function delete_md(), this error:
> 
> [ 205.957004] systemd-shutdown[1]: Stopping MD /dev/md124p1 (259:6).
> [ 205.964177] systemd-shutdown[1]: Could not stop MD /dev/md124p1:
> Device or resource busy
> 
> is most likely triggered by ioctl(STOP_ARRAY).
> 
> And based on the code, I think the ioctl fails here:
> 
>         if (cmd == STOP_ARRAY || cmd == STOP_ARRAY_RO) {
>                 /* Need to flush page cache, and ensure no-one else opens
>                  * and writes
>                  */
>                 mutex_lock(&mddev->open_mutex);
>                 if (mddev->pers && atomic_read(&mddev->openers) > 1) {
>                         mutex_unlock(&mddev->open_mutex);
>                         err = -EBUSY;
>                         goto out;        ////////////////////// HERE
>                 }
>                 if (test_and_set_bit(MD_CLOSING, &mddev->flags)) {
>                         mutex_unlock(&mddev->open_mutex);
>                         err = -EBUSY;
>                         goto out;
>                 }
>                 did_set_md_closing = true;
>                 mutex_unlock(&mddev->open_mutex);
>                 sync_blockdev(bdev);
>         }
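
For reference, the failing call boils down to a STOP_ARRAY ioctl on the array
node. A rough userspace equivalent looks like this (just a sketch, not
systemd's actual delete_md() code; the device path is simply the one from the
log above):

/* Minimal sketch: open the array node and issue STOP_ARRAY, printing the
 * errno the kernel hands back (the EBUSY path marked above). */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/major.h>        /* MD_MAJOR */
#include <linux/raid/md_u.h>    /* STOP_ARRAY */

int main(int argc, char **argv)
{
        const char *dev = argc > 1 ? argv[1] : "/dev/md124p1";
        /* O_EXCL on a block device: the open itself fails with EBUSY if
         * something (e.g. a mounted filesystem) holds an exclusive claim. */
        int fd = open(dev, O_RDONLY | O_EXCL);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (ioctl(fd, STOP_ARRAY, NULL) < 0)
                fprintf(stderr, "Could not stop MD %s: %s\n",
                        dev, strerror(errno));
        close(fd);
        return 0;
}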

Probably. The real question is why it doesn't manage to flush the page cache.
I find it strange that the problem would appear only when trying to stop the
array; I also hit it when trying to umount the filesystem, which hangs for the
same reason. The kworker thread runs continuously, using 100% of a single CPU
core.
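
To see where that kworker is spinning, its kernel stack can be sampled from
procfs (a quick sketch; pass the PID that top reports, and note that reading
/proc/<pid>/stack needs root and CONFIG_STACKTRACE):

/* Dump the kernel stack of the given task a few times, one second apart,
 * to see which writeback/raid function it keeps sitting in. */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        char path[64], line[256];
        int i;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <kworker-pid>\n", argv[0]);
                return 1;
        }
        snprintf(path, sizeof(path), "/proc/%s/stack", argv[1]);

        for (i = 0; i < 5; i++) {
                FILE *f = fopen(path, "r");

                if (!f) {
                        perror(path);
                        return 1;
                }
                printf("--- sample %d ---\n", i);
                while (fgets(line, sizeof(line), f))
                        fputs(line, stdout);
                fclose(f);
                sleep(1);
        }
        return 0;
}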

These are all symptoms of the same underlying problem I complained about a few
days ago: the system doesn't manage to write to the disks, which stay nearly
idle. If you wait long enough without issuing new writes, which can take
several hours, it eventually flushes the page cache and proceeds to a "normal"
umount or reboot.
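
One way to watch whether that flush makes any progress is to sample the dirty
and writeback counters (a small sketch that just reads /proc/meminfo; the
one-second interval is arbitrary, stop it with ^C):

/* Print the Dirty: and Writeback: lines from /proc/meminfo once per second.
 * If writeback were progressing, Dirty should trend toward zero while the
 * disks show write traffic. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        char line[128];

        for (;;) {
                FILE *f = fopen("/proc/meminfo", "r");

                if (!f) {
                        perror("/proc/meminfo");
                        return 1;
                }
                while (fgets(line, sizeof(line), f)) {
                        if (!strncmp(line, "Dirty:", 6) ||
                            !strncmp(line, "Writeback:", 10))
                                fputs(line, stdout);
                }
                fclose(f);
                putchar('\n');
                sleep(1);
        }
        return 0;
}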

The bug depends on the rate of writes and also on uptime; it rarely appears
soon after boot and becomes more frequent as time passes.

Thread overview: 32+ messages
2023-08-16  9:37 Fwd: Infinite systemd loop when powering off the machine with multiple MD RAIDs Bagas Sanjaya
2023-08-18  8:16 ` Mariusz Tkaczyk
2023-08-18  9:21   ` Hannes Reinecke
2023-08-21  3:23     ` AceLan Kao
2023-08-22  3:51   ` Guoqing Jiang
2023-08-22  6:17     ` Song Liu
2023-08-22  6:39       ` Mariusz Tkaczyk
2023-08-22  8:13         ` AceLan Kao
2023-08-22 12:41           ` Guoqing Jiang
2023-08-23  8:02             ` AceLan Kao
2023-08-23 13:25               ` Song Liu
2023-08-26  4:31                 ` AceLan Kao
2023-08-28  5:20                   ` Song Liu
2023-08-28 10:48                     ` AceLan Kao
2023-08-29  3:12                       ` AceLan Kao
2023-08-28 13:50                     ` Yu Kuai
2023-08-31  2:28                       ` Yu Kuai
2023-08-31  6:50                         ` Mariusz Tkaczyk
2023-09-06  6:26                           ` AceLan Kao
2023-09-06 10:27                             ` Mariusz Tkaczyk
2023-09-07  2:04                               ` Yu Kuai
2023-09-07 10:18                                 ` Mariusz Tkaczyk
     [not found]                                   ` <cffca94f-5729-622d-9327-632b3ff2891a@huaweicloud.com>
     [not found]                                     ` <3e7edf0c-cadd-59b0-4e10-dffdb86b93b7@huaweicloud.com>
2023-09-07 12:41                                       ` Mariusz Tkaczyk
2023-09-07 12:53                                         ` Yu Kuai
2023-09-07 15:09                                           ` Mariusz Tkaczyk
2023-09-08 20:25                                             ` Song Liu
2023-08-21 13:18 ` Fwd: " Yu Kuai
2023-08-22  1:39   ` AceLan Kao
2023-08-22 18:56 ` Song Liu
2023-08-22 19:13   ` Carlos Carvalho [this message]
2023-08-23  1:28     ` Yu Kuai
2023-08-23  6:04       ` Hannes Reinecke
