From: Reindl Harald <h.reindl@thelounge.net>
To: Wols Lists <antlists@youngman.org.uk>, linux-raid@vger.kernel.org
Subject: Re: “root account locked” after removing one RAID1 hard disc
Date: Mon, 30 Nov 2020 13:13:47 +0100
Message-ID: <832e1194-cc76-b8f8-cb59-2e6bedaeb4dc@thelounge.net>
In-Reply-To: <5FC4DEED.9030802@youngman.org.uk>



On 30.11.20 at 13:00, Wols Lists wrote:
> On 30/11/20 10:31, Reindl Harald wrote:
>> since when is it broken that way?
>>
>> where should that command line come from when the operating system
>> itself is on the RAID that, for no valid reason, is not assembling?
>>
>> luckily no disks have died the past few years, but on the office
>> server 300 kilometers from here, with /boot, the OS and /data on
>> RAID1, that was not true for at least 10 years
>>
>> * disk died
>> * boss replaced it and made sure the remaining disk is on the
>>    first SATA port
>> * power on
>> * machine booted
>> * I partitioned and added the new drive
>>
>> hell, it's an ordinary situation for a RAID that a disk disappears
>> without warning, because disks tend to die from one moment to the next
>>
>> hell, it's expected behavior to boot from the remaining disks, no
>> matter whether RAID1, RAID10 or RAID5, as long as enough of them are
>> present for the whole dataset
>>
>> the only thing I expect in that case is that booting takes a little
>> longer while something waits for a timeout on the missing device /
>> component
>>
> So what happened? The disk failed, you shut down the server, the boss
> replaced it, and you rebooted?

in most cases smartd shouts a warning and the machine is powered down 
*without* removing the partitions from the RAID devices

the disk with SMART alerts is replaced by a blank, unpartitioned one

the remaining disk is deliberately put on the first SATA port so that 
the first disk found by the BIOS is not the new blank one
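
for reference, a minimal sketch of that replacement procedure on a 
two-disk RAID1, assuming the surviving disk is /dev/sda, the new blank 
disk is /dev/sdb and the arrays are /dev/md0 and /dev/md1 (all names 
are placeholders):

   # copy the partition table from the surviving disk to the new one
   sfdisk -d /dev/sda | sfdisk /dev/sdb

   # add the new partitions to the degraded arrays; md rebuilds the
   # mirrors in the background
   mdadm /dev/md0 --add /dev/sdb1
   mdadm /dev/md1 --add /dev/sdb2

   # watch the resync progress
   cat /proc/mdstat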

> In that case I would EXPECT the system to come back - the superblock
> matches the disks, the system says "everything is as it was", and your
> degraded array boots fine.

correct, RAID comes up degraded
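
a degraded-but-running RAID1 is easy to verify; with an assumed array 
name /dev/md0 the output looks roughly like:

   cat /proc/mdstat
   # md0 : active raid1 sda1[0]
   #       1048512 blocks super 1.2 [2/1] [U_]

   mdadm --detail /dev/md0 | grep -i state
   #      State : clean, degraded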

> EXCEPT THAT'S NOT WHAT IS HAPPENING HERE.
> 
> The - fully functional - array is shut down.
> 
> A disk is removed.
> 
> On boot, reality and the superblock DISAGREE. In which case the system
> takes the only sensible route, screams "help!", and waits for MANUAL
> INTERVENTION.

but I fail to see the difference and to understand why reality and 
the superblock disagree; it shouldn't matter how and when a disk is 
removed. it's not there, so what, as long as there are enough disks 
to bring the array up

in my case the fully functional array is shut down too, by shutting 
down the machine; after that one disk is replaced, and when the RAID 
comes up a disk is logically missing because in its place is a blank 
one without any partitions
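
one way to see what each side believes is to compare a member 
superblock against what the kernel actually assembled, e.g. (device 
names assumed, output abridged):

   # what the superblock on the surviving member recorded at shutdown
   mdadm --examine /dev/sda1
   #    Raid Devices : 2
   #     Array State : AA ('A' == active, '.' == missing)

   # what is actually there now
   mdadm --detail /dev/md0

a superblock saying "Array State : AA" meeting only one physical 
member is exactly the reality/superblock disagreement described above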

> That's why you only have to force a degraded array to boot once - once
> the disks and superblock are back in sync, the system assumes the ops
> know about it.
I still don't get how that happens, and why
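
for completeness, the manual intervention being demanded here is 
roughly the following, typed in the initramfs emergency shell (array 
and member names assumed):

   # force the partially-assembled array to start despite the
   # missing member
   mdadm --run /dev/md0

   # or stop it and re-assemble, allowing a degraded start
   mdadm --stop /dev/md0
   mdadm --assemble --run /dev/md0 /dev/sda1

once the array runs degraded, its superblock records the missing 
member, so subsequent boots assemble it degraded without stopping, 
which is presumably the "back in sync" effect described above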


Thread overview: 31+ messages
2020-11-30  8:44 “root account locked” after removing one RAID1 hard disc c.buhtz
2020-11-30  9:27 ` antlists
2020-11-30 10:29   ` c.buhtz
2020-11-30 11:40     ` Wols Lists
2020-11-30 10:31   ` Reindl Harald
2020-11-30 11:10     ` Rudy Zijlstra
2020-11-30 11:18       ` Reindl Harald
2020-11-30 20:06         ` “root account locked” " David T-G
2020-11-30 21:57           ` Reindl Harald
2020-11-30 22:06             ` RAID repair script (was "Re: “root account locked” after removing one RAID1 hard disc" David T-G
2020-11-30 12:00     ` “root account locked” after removing one RAID1 hard disc Wols Lists
2020-11-30 12:13       ` Reindl Harald [this message]
2020-11-30 13:11         ` antlists
2020-11-30 13:16           ` Reindl Harald
2020-11-30 13:47             ` antlists
2020-11-30 13:53               ` Reindl Harald
2020-11-30 14:46                 ` Rudy Zijlstra
2020-11-30 20:05 ` partitions & filesystems (was "Re: “root account locked” after removing one RAID1 hard disc") David T-G
2020-11-30 20:51   ` antlists
2020-11-30 21:03     ` Rudy Zijlstra
2020-11-30 21:49     ` Reindl Harald
2020-11-30 22:31       ` antlists
2020-11-30 23:21         ` Reindl Harald
2020-11-30 23:59           ` antlists
2020-11-30 22:04     ` partitions & filesystems David T-G
2020-12-01  8:45     ` partitions & filesystems (was "Re: “root account locked” after removing one RAID1 hard disc") c.buhtz
2020-12-01  9:18       ` Rudy Zijlstra
2020-12-01 10:00       ` Wols Lists
2020-12-01  8:41   ` buhtz
2020-12-01  9:13     ` Reindl Harald
2020-12-01  8:42   ` c.buhtz
