* Raid working but stuck at 99.9%
@ 2021-01-02 12:37 Teejay
  2021-01-02 19:32 ` antlists
  0 siblings, 1 reply; 4+ messages in thread
From: Teejay @ 2021-01-02 12:37 UTC (permalink / raw)
  To: linux-raid

Hi,

Firstly I should say I am very new to RAID and this is my first post, so 
please forgive naive questions and woolly thinking!

I have been running a RAID 0 array on three 4TB drives for some time 
with no issue. However, I have always been a little concerned that I had 
no redundancy so I purchased another two drives, same type and size, 
with the plan of growing the array from 12TB to 16TB and introducing 
RAID5. What seemed like a good idea turned out to be a bit of a nightmare!

To upgrade the array I used the following command:

sudo mdadm --grow /dev/md0 --level=5 --raid-devices=5 --add /dev/sde 
/dev/sdf --backup-file=/tmp/grow_md0.bak

To my surprise it returned almost instantly with no errors. So I had a 
look at the status:

less /proc/mdstat

and it showed a RAID 5 array with reshape = 0.01% and an estimated 
several million minutes to complete! Somewhat concerned, I left it for 
half an hour and tried again, only to find that the number of completed 
blocks was the same and the estimate had grown to an even crazier 
number. It was clear the process had stalled.
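
For reference, the reshape counters can also be read straight from 
sysfs, which makes a stall obvious regardless of the ETA shown in 
/proc/mdstat - a minimal sketch, assuming the array is /dev/md0:

cat /sys/block/md0/md/sync_action       # should read "reshape" while reshaping
cat /sys/block/md0/md/sync_completed    # "sectors done / sectors total"; if this stops moving, it has stalled
cat /sys/block/md0/md/sync_speed        # reported speed in KiB/s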

I tried to stop the array and the command just hung. After some time I 
forced a power down and rebooted (I could not figure out what else to do!)

When the machine came back up, it did not assemble (as I had not updated 
the mdadm.conf), so I ran the following:

lounge@lounge:~$ sudo blkid
[sudo] password for lounge:
/dev/sda1: UUID="314a0be6-e180-4dc3-8439-b5a84ee7f4a5" TYPE="ext4" 
PARTUUID="e01c24f7-01"
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/loop2: TYPE="squashfs"
/dev/loop3: TYPE="squashfs"
/dev/loop4: TYPE="squashfs"
/dev/loop5: TYPE="squashfs"
/dev/loop6: TYPE="squashfs"
/dev/loop7: TYPE="squashfs"
/dev/sdc: UUID="5e89b9c4-dbdd-62a9-6bcc-20e02f58cdd2" 
UUID_SUB="7b17180b-6db1-675b-d208-84d43a3eb154" LABEL="lounge:0" 
TYPE="linux_raid_member"
/dev/sdb: UUID="5e89b9c4-dbdd-62a9-6bcc-20e02f58cdd2" 
UUID_SUB="c36a1dcd-e199-0ca5-3a89-7e6b298c9240" LABEL="lounge:0" 
TYPE="linux_raid_member"
/dev/sdd: UUID="5e89b9c4-dbdd-62a9-6bcc-20e02f58cdd2" 
UUID_SUB="2dc33f33-e2de-4c7d-d5e2-6006640b4e38" LABEL="lounge:0" 
TYPE="linux_raid_member"
/dev/sde: UUID="5e89b9c4-dbdd-62a9-6bcc-20e02f58cdd2" 
UUID_SUB="a41557e2-f35b-8acb-0203-281cafd5c18e" LABEL="lounge:0" 
TYPE="linux_raid_member"
/dev/sdf: UUID="5e89b9c4-dbdd-62a9-6bcc-20e02f58cdd2" 
UUID_SUB="7951f3fc-d111-65ef-dc28-11befcdc4769" LABEL="lounge:0" 
TYPE="linux_raid_member"

I then modified my config file as follows:

ARRAY /dev/md/0  level=raid5 metadata=1.2 num-devices=5: 
UUID=5e89b9c4:dbdd62a9:6bcc20e0:2f58cdd2 name=lounge:0
    devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf
MAILADDR root
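
As an aside, once the array is assembled mdadm can generate that ARRAY 
stanza itself, which avoids hand-typing errors - a sketch, assuming the 
usual Debian/Ubuntu config path:

sudo mdadm --detail --scan                              # prints an ARRAY line per assembled array
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u                                # so the initramfs picks up the new config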

and then ran the assemble command:

sudo mdadm --assemble --verbose /dev/md0

It assembled fine, but the same thing happened: the reshape hung, so I 
hit Google. Anyway, to cut a long story short (after a lot of googling), 
I have determined that the drives I am using are the dreaded SMR type, 
and that is probably why it all locked up (I wish I had found the wiki 
first!!!)

so I figured I needed to abort the reshape and following some advice 
posted on some forum I tried:

sudo mdadm --assemble --update=revert-reshape /dev/md0

It told me it had done it but needed a backup file name so I tried:

sudo mdadm --assemble --verbose --backup-file=/tmp/reshape.backup /dev/md0

The situation I am in now is that the array will assemble, but with one 
drive missing (not sure why), and the reshape is stuck at 99.9%:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] 
[raid4] [raid10]
md0 : active raid5 sdf[4] sdb[2] sdd[1] sdc[0]
       11720658432 blocks super 1.2 level 5, 512k chunk, algorithm 2 
[4/3] [UUU_]
       [===================>.]  reshape = 99.9% (3906632192/3906886144) 
finish=1480857.6min speed=0K/sec

unused devices: <none>
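
Each member's superblock also records how far the reshape got, which 
helps when one device looks out of step - a sketch, using the device 
names from the blkid output above:

for d in /dev/sd{b,c,d,e,f}; do
    echo "== $d =="
    sudo mdadm --examine "$d" | grep -E 'Raid Level|Raid Devices|Reshape|Delta Devices|Device Role|Array State'
done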

The good news is that if I mount the array (read only) it seems to be 
intact; I tried watching a movie and it seemed fine. Also, fsck reports 
that the filesystem is clean. I have not had the courage to force a full 
check, as I do not wish to write anything to it while it is in this state.
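
For the record, an ext4 filesystem can be mounted strictly read-only, 
without even replaying the journal, which is the safest way to poke 
around while the array is in this state - a sketch, with the mount 
point made up:

sudo mount -o ro,noload /dev/md0 /mnt/md0
ls /mnt/md0
sudo umount /mnt/md0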

This is the output of  sudo mdadm -D /dev/md0
/dev/md0:
            Version : 1.2
      Creation Time : Sun Aug 16 16:43:19 2020
         Raid Level : raid5
         Array Size : 11720658432 (11177.69 GiB 12001.95 GB)
      Used Dev Size : 3906886144 (3725.90 GiB 4000.65 GB)
       Raid Devices : 4
      Total Devices : 4
        Persistence : Superblock is persistent

        Update Time : Sat Jan  2 11:48:45 2021
              State : clean, degraded, reshaping
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 0

             Layout : left-symmetric
         Chunk Size : 512K

Consistency Policy : resync

     Reshape Status : 99% complete
      Delta Devices : -1, (5->4)
         New Layout : parity-last

               Name : lounge:0  (local to host lounge)
               UUID : 5e89b9c4:dbdd62a9:6bcc20e0:2f58cdd2
             Events : 876

     Number   Major   Minor   RaidDevice State
        0       8       32        0      active sync /dev/sdc
        1       8       48        1      active sync /dev/sdd
        2       8       16        2      active sync /dev/sdb
        -       0        0        3      removed

        4       8       80        4      active sync /dev/sdf


So it looks like it is attempting to become a RAID 4 array (parity-last?) 
and has got stuck. It is not hung, just in this state of limbo.

I am not sure what I can do. It would seem my drives are not any good 
for RAID, although everything seemed fine at level 0 from the outset 
(Maybe I got lucky). I need to somehow get back to a useful state. If I 
could get back to level 0 with three drives, that would be great. I 
could then delete some junk and back up the data to the other two 
drives using rsync or something.

So I guess my questions are

1 - Can I safely get back to a three-drive level 0 RAID, thereby freeing 
the two drives I added to allow me to make a backup of the data?
2 - Even if I can revert, should I move my data off and no longer use 
RAID 0 at all until I can get some decent hard drives?
3 - Any other cunning ideas? At the moment I think my only option, if I 
can't revert, is to buy many TBs of storage to back up the read-only 
file system, which I can ill afford to do!

Any advice would be great. Where I am now, I think my data is safe-ish 
(barring a drive failure!). I just need to make it safer in the best way 
possible; while not valuable, the data would take many hundreds of hours 
to rebuild from scratch. I also need to end up with a system I can write 
to and trust.

Regards


TeeJay






* Re: Raid working but stuck at 99.9%
  2021-01-02 12:37 Raid working but stuck at 99.9% Teejay
@ 2021-01-02 19:32 ` antlists
       [not found]   ` <33f592e7-3408-4f9b-7146-11af526b1af8@gizzy.co.uk>
  0 siblings, 1 reply; 4+ messages in thread
From: antlists @ 2021-01-02 19:32 UTC (permalink / raw)
  To: Teejay, linux-raid

On 02/01/2021 12:37, Teejay wrote:
> Hi,
> 
> Firstly I should say I am very new to RAID and this is my first post, so 
> please forgive naive questions and woolly thinking!
> 
> I have been running a RAID 0 array on three 4TB drives for some time 
> with no issue. However, I have always been a little concerned that I had 
> no redundancy so I purchased another two drives, same type and size, 
> with the plan of growing the array from 12TB to 16TB and introducing 
> RAID5. What seemed like a good idea turned out to be a bit of a nightmare!

Raid 0? Not that good an idea ... bear in mind in this case 
probabilities add up - you've just TREBLED your chances of losing your 
data to a disk failure ...
> 
> To upgrade the array I used the following command:
> 
> sudo mdadm --grow /dev/md0 --level=5 --raid-devices=5 --add /dev/sde 
> /dev/sdf --backup-file=/tmp/grow_md0.bak
> 
> To my surprise it returned almost instantly with no errors. So I had a 
> look at the status:
> 
> less /proc/mdstat
> 
> and it showed a RAID 5 array with reshape = 0.01% and an estimated 
> several million minutes to complete! Somewhat concerned, I left it for 
> half an hour and tried again, only to find that the number of completed 
> blocks was the same and the estimate had grown to an even crazier 
> number. It was clear the process had stalled.

uname -a ?
mdadm --version ?

This sounds similar to a problem we've regularly seen with raid-5. And 
it's noticeable with older Ubuntus.
> 
> I tried to stop the array and the command just hung. After some time I 
> forced a power down and rebooted (I could not figure out what else to do!)
> 
> When the machine came back up, it did not assemble (as I had not updated 
> the mdadm.conf),

This *shouldn't* make any difference ...

> so I ran the following:
> 
> lounge@lounge:~$ sudo blkid
> [sudo] password for lounge:
> /dev/sda1: UUID="314a0be6-e180-4dc3-8439-b5a84ee7f4a5" TYPE="ext4" 
> PARTUUID="e01c24f7-01"
> /dev/loop0: TYPE="squashfs"
> /dev/loop1: TYPE="squashfs"
> /dev/loop2: TYPE="squashfs"
> /dev/loop3: TYPE="squashfs"
> /dev/loop4: TYPE="squashfs"
> /dev/loop5: TYPE="squashfs"
> /dev/loop6: TYPE="squashfs"
> /dev/loop7: TYPE="squashfs"
> /dev/sdc: UUID="5e89b9c4-dbdd-62a9-6bcc-20e02f58cdd2" 
> UUID_SUB="7b17180b-6db1-675b-d208-84d43a3eb154" LABEL="lounge:0" 
> TYPE="linux_raid_member"
> /dev/sdb: UUID="5e89b9c4-dbdd-62a9-6bcc-20e02f58cdd2" 
> UUID_SUB="c36a1dcd-e199-0ca5-3a89-7e6b298c9240" LABEL="lounge:0" 
> TYPE="linux_raid_member"
> /dev/sdd: UUID="5e89b9c4-dbdd-62a9-6bcc-20e02f58cdd2" 
> UUID_SUB="2dc33f33-e2de-4c7d-d5e2-6006640b4e38" LABEL="lounge:0" 
> TYPE="linux_raid_member"
> /dev/sde: UUID="5e89b9c4-dbdd-62a9-6bcc-20e02f58cdd2" 
> UUID_SUB="a41557e2-f35b-8acb-0203-281cafd5c18e" LABEL="lounge:0" 
> TYPE="linux_raid_member"
> /dev/sdf: UUID="5e89b9c4-dbdd-62a9-6bcc-20e02f58cdd2" 
> UUID_SUB="7951f3fc-d111-65ef-dc28-11befcdc4769" LABEL="lounge:0" 
> TYPE="linux_raid_member"
> 
> I then modified my config file as follows:
> 
> ARRAY /dev/md/0  level=raid5 metadata=1.2 num-devices=5: 
> UUID=5e89b9c4:dbdd62a9:6bcc20e0:2f58cdd2 name=lounge:0
>     devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf
> MAILADDR root
> 
> and then ran the assemble command:
> 
> sudo mdadm --assemble --verbose /dev/md0
> 
> It assembled fine, but the same thing happened: the reshape hung, so I 
> hit Google. Anyway, to cut a long story short (after a lot of googling), 
> I have determined that the drives I am using are the dreaded SMR type, 
> and that is probably why it all locked up (I wish I had found the wiki 
> first!!!)

Ahhhhh ... you MAY be able to RMA them. What are they? If they're WD 
Reds I'd RMA them as a matter of course as "unfit for purpose". If 
they're BarraCudas, well, tough luck but you might get away with it.
> 
> so I figured I needed to abort the reshape and following some advice 
> posted on some forum I tried:
> 
> sudo mdadm --assemble --update=revert-reshape /dev/md0
> 
> It told me it had done it but needed a backup file name so I tried:
> 
> sudo mdadm --assemble --verbose --backup-file=/tmp/reshape.backup /dev/md0

DON'T use a backup file unless it tells you it needs it. Usually it 
doesn't these days; if you're adding space, I believe it dumps the backup 
in the new space that will become available.
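
If assembly really does insist on a backup file that no longer exists 
(/tmp gets cleared on reboot, after all), mdadm has an escape hatch for 
exactly that case - a sketch, to be read alongside the man page rather 
than run blindly:

sudo mdadm --assemble --verbose /dev/md0 \
     --backup-file=/tmp/reshape.backup --invalid-backup   # accept a missing/corrupt backup file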
> 
> The situation I am in now is that the array will assemble, but with one 
> drive missing (not sure why), and the reshape is stuck at 99.9%:
> 
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] 
> [raid4] [raid10]
> md0 : active raid5 sdf[4] sdb[2] sdd[1] sdc[0]
>        11720658432 blocks super 1.2 level 5, 512k chunk, algorithm 2 
> [4/3] [UUU_]
>        [===================>.]  reshape = 99.9% (3906632192/3906886144) 
> finish=1480857.6min speed=0K/sec
> 
> unused devices: <none>

So we have a four-drive raid 5. Not bad. Does that include one of the 
SMR drives?!?
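
A quick way to see which physical model sits behind each sdX letter, 
without opening the case - a sketch:

lsblk -o NAME,MODEL,SERIAL,SIZE    # the model string is what you google for SMR vs CMR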
> 
> The good news is that if I mount the array (read only) it seems to be 
> intact; I tried watching a movie and it seemed fine. Also, fsck reports 
> that the filesystem is clean. I have not had the courage to force a full 
> check, as I do not wish to write anything to it while it is in this state.
> 
> This is the output of  sudo mdadm -D /dev/md0
> /dev/md0:
>             Version : 1.2
>       Creation Time : Sun Aug 16 16:43:19 2020
>          Raid Level : raid5
>          Array Size : 11720658432 (11177.69 GiB 12001.95 GB)
>       Used Dev Size : 3906886144 (3725.90 GiB 4000.65 GB)
>        Raid Devices : 4
>       Total Devices : 4
>         Persistence : Superblock is persistent
> 
>         Update Time : Sat Jan  2 11:48:45 2021
>               State : clean, degraded, reshaping
>      Active Devices : 4
>     Working Devices : 4
>      Failed Devices : 0
>       Spare Devices : 0
> 
>              Layout : left-symmetric
>          Chunk Size : 512K
> 
> Consistency Policy : resync
> 
>      Reshape Status : 99% complete
>       Delta Devices : -1, (5->4)
>          New Layout : parity-last
> 
>                Name : lounge:0  (local to host lounge)
>                UUID : 5e89b9c4:dbdd62a9:6bcc20e0:2f58cdd2
>              Events : 876
> 
>      Number   Major   Minor   RaidDevice State
>         0       8       32        0      active sync /dev/sdc
>         1       8       48        1      active sync /dev/sdd
>         2       8       16        2      active sync /dev/sdb
>         -       0        0        3      removed
> 
>         4       8       80        4      active sync /dev/sdf
> 
> 
> So it looks like it is attempting to become a RAID 4 array (parity-last?) 
> and has got stuck. It is not hung, just in this state of limbo.
> 
> I am not sure what I can do. It would seem my drives are not any good 
> for RAID, although everything seemed fine at level 0 from the outset 
> (Maybe I got lucky).

Okay. What are those drives? I'm *guessing* your original three drives 
were WD Reds. What is the type number? If they're Reds the model number 
ends in EFAX or EFRX, if I remember correctly - EFRX are the good (CMR) 
ones and EFAX are the bad (SMR) ones.

> I need to somehow get back to a useful 
> state. If I could get back to level 0 with three drives, that would be 
> great. I could then delete some junk and back up the data to the other 
> two drives using rsync or something.
> 
> So I guess my questions are
> 
> 1 - Can I safely get back to a three-drive level 0 RAID, thereby freeing 
> the two drives I added to allow me to make a backup of the data?

I'll let others comment.

> 2 - Even if I can revert, should I move my data off and no longer use 
> RAID 0 at all until I can get some decent hard drives?

Don't mess with it yet.

> 3 - Any other cunning ideas? At the moment I think my only option, if I 
> can't revert, is to buy many TBs of storage to back up the read-only 
> file system, which I can ill afford to do!

What I personally would do is - if the old drives are okay and you can 
RMA the two new ones - RMA one of them and swap it for a BarraCuda 12TB. 
I think a 4TB Red is about £100 and a 12TB BarraCuda is about £200. Yes 
I know BarraCudas are SMR, but this is a backup drive, so it doesn't 
really matter. RMA the other and swap it for a 4TB IronWolf.

Timing here is the problem - one drive you want to RMA is currently in 
the array, and you don't want to return it until you've got both new 
drives. Get the BarraCuda and back up the array first. Then ADD the 
IronWolf and swap it in with "mdadm /dev/md0 --replace /dev/smr --with 
/dev/ironwolf". The other possibility is, IFF it was a shop and you can 
go in person, they might be able to copy the data from one drive to the 
other - "dd if=/dev/Red of=/dev/IronWolf". That will at least keep your 
data safe AND REDUNDANT.
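
To spell that out: --replace wants the array device, and the 
replacement has to be present as a spare first - a sketch with 
hypothetical device names, sdX being the new IronWolf and sdY the SMR 
member being retired:

sudo mdadm /dev/md0 --add /dev/sdX                       # new drive goes in as a spare
sudo mdadm /dev/md0 --replace /dev/sdY --with /dev/sdX   # copy onto the spare; sdY is marked faulty when done
sudo mdadm /dev/md0 --remove /dev/sdY                    # then take the old drive out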

(Oh, and if you're returning £200 of drives and swapping them for £300 
worth, they might decide to be kind ... :-)
> 
> Any advice would be great. Where I am now, I think my data is safe-ish 
> (barring a drive failure!). I just need to make it safer in the best way 
> possible; while not valuable, the data would take many hundreds of hours 
> to rebuild from scratch. I also need to end up with a system I can write 
> to and trust.
> 
That should stream all the data off the SMR drive on to the IronWolf and 
you should end up with a properly working array ... (although if the 
shop copied it, this will be irrelevant.)

(I'd put LVM on the BarraCuda, so you can then use an "inplace rsync" to 
take incremental backups - remember RAID IS NOT A BACKUP.)
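
One way that might look in practice - a sketch with made-up volume and 
mount names, assuming the BarraCuda shows up as /dev/sdX:

sudo pvcreate /dev/sdX
sudo vgcreate vg_backup /dev/sdX
sudo lvcreate -n backup -l 90%FREE vg_backup              # leave headroom for snapshots
sudo mkfs.ext4 /dev/vg_backup/backup
sudo mount /dev/vg_backup/backup /mnt/backup
sudo rsync -aHAX --delete --inplace /mnt/md0/ /mnt/backup/
sudo lvcreate -s -n backup_snap1 -L 50G vg_backup/backup  # snapshot = the incremental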

NB - if you found the wiki, why didn't you follow its advice ...
https://raid.wiki.kernel.org/index.php/Linux_Raid#When_Things_Go_Wrogn
There's a reason we ask for all that information ;-) which is why I've 
asked you for a lot of the information you should have supplied already 
:-) Yes it's a lot we ask for, but I'm guessing a lot here, and guesses 
aren't a good idea for your data ...

Cheers,
Wol


* Re: Raid working but stuck at 99.9%
       [not found]   ` <33f592e7-3408-4f9b-7146-11af526b1af8@gizzy.co.uk>
@ 2021-01-03 20:26     ` antlists
  2021-01-03 21:34       ` Rudy Zijlstra
  0 siblings, 1 reply; 4+ messages in thread
From: antlists @ 2021-01-03 20:26 UTC (permalink / raw)
  To: Teejay, antlists, linux-raid

On 02/01/2021 23:10, Teejay wrote:
> On 02/01/2021 19:32, antlists wrote:
>> On 02/01/2021 12:37, Teejay wrote:
>>> Hi,
> 
> Point taken, but it is what it is and I need to fix it. The internet is 
> full of advice about RAID - most of it contradicts itself as there is 
> little real consensus (welcome to the 21st Century :-) . As a newbie, I 
> went with what seemed right at the time. I concede I made a bad call. 
> Let's move on.
> 
That's why I took on updating the wiki - to try and provide an 
up-to-date reference site. The problem is that, like you did, people 
usually find the old duff stuff first.
>>>
>>> To upgrade the array I used the following command:
>>>
>>> sudo mdadm --grow /dev/md0 --level=5 --raid-devices=5 --add /dev/sde 
>>> /dev/sdf --backup-file=/tmp/grow_md0.bak
>>>
>>> To my surprise it returned almost instantly with no errors. So I had 
>>> a look at the status:

Did your example tell you to use --backup-file? If it did, I hope it 
wasn't the wiki!

If the --grow told you it needed a backup, I'd be surprised, but if it 
asks for one it needs it.

Once you were trying to fix things, it would be normal for it to ask for 
the backup you originally gave it ...
>>>
>>> less /proc/mdstat
>>>
>>> and it came back as being a raid 5 array and stated that it was 
>>> reshape = 0.01% and would take several million minutes to complete! 
>>> Somewhat concerned, I left it for half an hour and tried again only 
>>> to find that the number of complete blocks was the same and the time 
>>> had grown to an even more crazy number. It was clear the process had 
>>> stalled.
>>
>> uname -a ?
> 
> Linux lounge 5.4.0-58-generic #64-Ubuntu SMP Wed Dec 9 08:16:25 UTC 2020 
> x86_64 x86_64 x86_64 GNU/Linux
> 
>> mdadm --version ?
> 
> mdadm - v4.1 - 2018-10-01
> 
>>
>> This sounds similar to a problem we've regularly seen with raid-5. And 
>> it's noticeable with older Ubuntus.
> 
> My install of Ubuntu is not that old, is it? And it has all the 
> official updates.
> 
> lsb_release -a
> 
> Distributor ID:    Ubuntu
> Description:    Ubuntu 20.04.1 LTS
> Release:    20.04
> Codename:    focal
> 
That's new enough. So it's not (I hope) that problem.
>>>
>>
>> Ahhhhh ... you MAY be able to RMA them. What are they? If they're WD 
>> Reds I'd RMA them as a matter of course as "unfit for purpose". If 
>> they're BarraCudas, well, tough luck but you might get away with it. 
> RMA? - Newbie here!

Well, I don't know what the letters stand for, but the expression is a 
pretty standard term for "return to supplier". If you return anything 
that is defective, the supplier will usually ask you to "fill in an RMA".
> 
>>
>> Okay. What are those drives? I'm *guessing* your original three drives 
>> were WD Reds. What is the type number? If they're Reds the model number 
>> ends in EFAX or EFRX, if I remember correctly - EFRX are the good (CMR) 
>> ones and EFAX are the bad (SMR) ones.
> 
> That means little to me. This is what I know: the four drives that form 
> the working array are the three original ones and one of the new ones. 
> None of them are Reds. They are all the same Seagate drives, though it 
> is possible they are different internally as they were purchased at 
> different times; I have not opened them up. The array is working and 
> AFAIK it is all there; I can find no evidence to the contrary. It 
> mounts (I have only tried read-only), and I can access the data with 
> no apparent issues.
> 
Dare I suggest you need to read this page ...

http://www.catb.org/~esr/faqs/smart-questions.html

If you haven't come across ESR he may be a bit of a nutter, but he is an 
extremely good psychologist - it is well worth reading!

I gave you a link. If you went there, the very first link in it gave you 
some advice - https://raid.wiki.kernel.org/index.php/Asking_for_help

YOU DIDN'T FOLLOW IT.

One of the things it asks for is the smartctl info for your drives - ALL 
OF THEM. It'll tell you the model number of your drives. How many 
different models do you have? Are they SMR? Google the model numbers and 
see if you can find out! If you come back and say you can't make head or 
tail of what you've found, that's fine. What's not fine is if you don't try.
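
Concretely, something like this captures what the wiki asks for, 
including the exact model numbers - device names taken from the blkid 
output earlier in the thread:

for d in /dev/sd{b,c,d,e,f}; do
    sudo smartctl -x "$d" > "smart_${d##*/}.txt"
done
grep -i 'device model' smart_*.txt    # googling this string tells you whether a drive is SMR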
> 
>>
>>> I need to somehow get back to a useful state. If I could get back to 
>>> level 0 with three drives, that would be great. I could then delete 
>>> some junk and back up the data to the other two drives using rsync 
>>> or something.
>>>
>>> So I guess my questions are
>>>
>>> 1 - Can I safely get back to a three-drive level 0 RAID, thereby 
>>> freeing the two drives I added to allow me to make a backup of the data?
>>
>> I'll let others comment.
>>
>>> 2 - Even if I can revert, should I move my data off and no longer 
>>> use RAID 0 at all until I can get some decent hard drives?
>>
>> Don't mess with it yet.
>>
>>> 3 - Any other cunning ideas? At the moment I think my only option, if 
>>> I can't revert, is to buy many TBs of storage to back up the 
>>> read-only file system, which I can ill afford to do!
>>
Okay, to throw another option into the mix: get that 12TB BarraCuda and 
copy your data across, using it as your main drive. You're probably 
better off with an IronWolf, but I don't know if they come as a 12TB and 
they'd cost rather more ...

Then you can combine the other drives with btrfs as a backup volume. 
That way, if the 12TB breaks you've got a backup, and if one of the 4TBs 
breaks, btrfs means you only lose what's on that disk (which is "backed 
up" on your live disk ...)
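
A sketch of that layout, with hypothetical device names for the three 
freed 4TB drives - data "single" so a dead disk only takes its own 
files with it, metadata mirrored so the filesystem itself survives:

sudo mkfs.btrfs -L backup -d single -m raid1 /dev/sdX /dev/sdY /dev/sdZ
sudo mount /dev/sdX /mnt/backup    # mounting any member brings up the whole volume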
> 
> Sounds like you misunderstood what I wrote; sarcasm is not a great way 
> of helping someone, especially when you only half read the email!
> 
Sorry, I don't mean to be sarcastic, and offering to spend more money 
usually gets vendors on side ...

> again let's move on!
> 
and imho, you NEED to back up your data, which I think means spending 
money, whether you can afford it or not :-(

I personally don't have experience playing with broken arrays (unlike 
others on this list), so I don't want to advise you to do something that 
trashes your array and loses everything ...
>>>
> 
> I did read the wiki, promise! More than once! It does not seem to 
> cover my situation, or if it does, it makes a very good job of hiding 
> it. Like many such sites it was clearly written by someone who knows 
> exactly how everything works and forgets that the reader will not 
> necessarily have a similar level of knowledge; in other words, it is 
> not written for novices. It uses many abbreviations and has an 
> assumption of knowledge that is way beyond mine. Asking for stuff is 
> fine, but not much help if you don't know what it is asking for. While 
> I really appreciate the help, it is not useful if you go on the 
> offensive. I said up front that I am a newbie. The best piece of advice 
> I could find on the wiki was not to do anything unless I was sure and 
> to ask for help first, thus this email. Unfortunately, I did not find 
> the wiki in time to avoid getting into this mess, but when I did I 
> followed the advice and asked for help - so all I ask is for some help 
> getting out of it. If you need more info, I will do my best to provide 
> it, but please remember I am at the bottom looking up, and the view 
> from down here is not as clear as the view from where you stand; I 
> understand it can be difficult to remember that. I am an engineer too, 
> just not one that knows anything about RAID :)
> 
You describe exactly how I often feel, so we do understand. And yes, 
when your raid isn't working properly I can understand the panic. Let's 
work out WHAT is wrong. I'm hoping, since your drives were purchased at 
different times, that the problem is that only the new drives are SMR. 
If it looks like something I'm not happy dealing with, I can kick it up 
to people who have more experience.

Cheers,
Wol


* Re: Raid working but stuck at 99.9%
  2021-01-03 20:26     ` antlists
@ 2021-01-03 21:34       ` Rudy Zijlstra
  0 siblings, 0 replies; 4+ messages in thread
From: Rudy Zijlstra @ 2021-01-03 21:34 UTC (permalink / raw)
  To: antlists, Teejay, linux-raid



Op 03-01-2021 om 21:26 schreef antlists:
> On 02/01/2021 23:10, Teejay wrote:
>> On 02/01/2021 19:32, antlists wrote:
>> Ahhhhh ... you MAY be able to RMA them. What are they? If they're WD 
>> Reds I'd RMA them as a matter of course as "unfit for purpose". If 
>> they're BarraCudas, well, tough luck but you might get away with it. 
>> RMA? - Newbie here!
>
> Well, I don't know what the letters stand for, but the expression is a 
> pretty standard term for "return to supplier". If you return anything 
> that is defective, the supplier will usually ask you to "fill in an RMA".

Return Material Authorisation



