* RAID5 now recognized as RAID1
@ 2021-07-08 10:07 Nicolas Martin
  2021-07-12  7:08 ` Fine Fan
  2021-07-12  7:52 ` Wols Lists
  0 siblings, 2 replies; 4+ messages in thread
From: Nicolas Martin @ 2021-07-08 10:07 UTC (permalink / raw)
  To: linux-raid

Hi,

For a bit of context: I had a RAID5 array with 4 disks running on a QNAP NAS.
One disk started failing, so I ordered a replacement, but in the meantime the NAS became unresponsive and I had to reboot it.
Now the NAS does not (really) come back alive, and I can only log onto it with ssh.

When I run cat /proc/mdstat, this is what I get:
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md322 : active raid1 sdd5[3](S) sdc5[2](S) sdb5[1] sda5[0]
      7235136 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active raid1 sdd2[3](S) sdc2[2](S) sdb2[1] sda2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sdc4[24] sda4[1] sdb4[0] sdd4[25]
      458880 blocks super 1.0 [24/4] [UUUU____________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sdb1[0] sdd1[25] sdc1[24] sda1[26]
      530048 blocks super 1.0 [24/4] [UUUU____________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>

So, I don't know how this could have happened. I looked at the FAQ, but I can't see anything that would explain it, nor how to recover from it.

Any help appreciated.

Thanks


* Re: RAID5 now recognized as RAID1
  2021-07-08 10:07 RAID5 now recognized as RAID1 Nicolas Martin
@ 2021-07-12  7:08 ` Fine Fan
  2021-07-12  7:52 ` Wols Lists
  1 sibling, 0 replies; 4+ messages in thread
From: Fine Fan @ 2021-07-12  7:08 UTC (permalink / raw)
  To: Nicolas Martin; +Cc: linux-raid

Hi,
I actually have a QNAP NAS at home.

[admin@NASXXXXX /]# cat /etc/issue
Welcome to TS-431(192.168.1.XXX), QNAP Systems, Inc.   <<<<< As you can see, this is an old device and out of date.

It has a RAID5 of three 1 TB disks, but only 2 disks work now. I bought a new one and am still waiting for the shipment.
The web UI is still working on my side, even though I have rebooted it several times.

I think the most important thing for now is to copy your important data out.

I think you can find your data path via "df -h".
Here is my output:
[admin@NASXXXXX bin]# df -h
Filesystem                Size      Used Available Use% Mounted on
...
...
...
/dev/mapper/cachedev1     1.8T      1.2T    548.1G  70% /share/CACHEDEV1_DATA
...
...
...

Go to the mount point of /dev/mapper/cachedev1 (/share/CACHEDEV1_DATA in my case) and then use
"scp <your important files> root@other_ip_address_in_your_LAN:/path/to/directory"
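
For example, something along these lines (just a sketch; the directory name and the destination host are made up, adjust them to your setup):

[admin@NASXXXXX /]# cd /share/CACHEDEV1_DATA
[admin@NASXXXXX CACHEDEV1_DATA]# scp -r ./Photos user@192.168.1.50:/backup/nas/     # copy one directory to another machine on your LAN
[admin@NASXXXXX CACHEDEV1_DATA]# rsync -aP ./Photos user@192.168.1.50:/backup/nas/  # alternative that can resume interrupted copies, if rsync is available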

You can use "cat /proc/diskstats" to check whether the other disks are still there.
[admin@NASXXXXX /]# cat /proc/diskstats   # I would rather use "lsblk", but this device doesn't have it.
...
...
...   # As you can see, only sda and sdb are listed: sdc is plugged into the NAS, but it does not show up here, which means it is not working any more.
  43      14 nbd14 0 0 0 0 0 0 0 0 0 0 0
  43      15 nbd15 0 0 0 0 0 0 0 0 0 0 0
   8       0 sda 2666861 242260046 1982460610 47733070 164771 1228360 10752008 15960880 0 15367000 63725850
   8       1 sda1 47492 10036 3846782 3169370 49715 76658 840503 2781750 0 4585920 5950920
   8       2 sda2 2699 3160 46772 471070 17555 21902 264929 583500 0 756970 1054680
   8       3 sda3 2446860 242235903 1972738628 40493400 77962 792166 6842863 10862250 0 9402270 115057530
   8       4 sda4 169688 10947 5827481 3598940 19288 337634 2803664 1730950 0 1990760 5329690
   8       5 sda5 112 0 867 290 14 0 49 210 0 500 500
   8      16 sdb 2691840 242249364 1980699749 51394800 166919 1228269 10768808 16194030 0 15714940 67632830
   8      17 sdb1 46538 2740 2857410 4336940 49658 76714 840487 2817990 0 4794540 7154720
   8      18 sdb2 2735 3549 50410 353200 17690 21768 264929 598910 0 740140 952170
   8      19 sdb3 2443821 242238229 1972683584 42970960 80052 792134 6859679 11226530 0 9529510 54241740
   8      20 sdb4 198631 4846 5107279 3733440 19268 337653 2803664 1548260 0 1968070 5281580
   8      21 sdb5 105 0 986 190 14 0 49 200 0 390 390
   8      21 sdb5 105 0 986 190 14 0 49 200 0 390 390
  31       0 mtdblock0 20 0 160 4450 0 0 0 0 0 4450 4450
  31       1 mtdblock1 169 0 1352 19230 0 0 0 0 0 19070 19220

...
...
...
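
If your box has them, the commands below are another quick way to see which disks and md arrays the kernel currently knows about (a sketch; busybox firmware may not ship all of these):

[admin@NASXXXXX /]# cat /proc/partitions   # whole disks and partitions the kernel can see
[admin@NASXXXXX /]# cat /proc/mdstat       # current state of every md array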

Why did it happen?
I honestly don't know; hard disks are consumables.
About one year ago the 4th slot of this same NAS (at that time I had a RAID5 of four 1 TB disks on it) suddenly stopped working, which left the 4th disk unusable.
I contacted QNAP, and they asked me to send the whole NAS (without the disks) to them for testing. I did, and the result was that the chip providing the 4 SATA ports connected to the disks was broken (actually, only the 4th SATA port on that chip was broken). I had to buy a new one from QNAP because my device is out of date.
But when I got my NAS back with the new chip and plugged a new disk into the 4th port, it still could not be detected. So I spent about 20 hours moving my data out, re-creating my 4-disk RAID5 as a 3-disk RAID5, and then copying my data onto that 3-disk RAID5 QNAP NAS.


How can I recover from this?
1. Copy out your important data first (use scp, as I mentioned above).
2. Check whether the disks are still there (cat /proc/diskstats); see the sketch after this list. If the disks are there, you may need to reinstall the NAS OS on your device; if not, you have broken disks.
3. Contact QNAP, or buy new disks, build a new RAID and copy your data back.
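
For step 2, a minimal sketch of what you could run over ssh, assuming mdadm and smartctl are present in the firmware and the data-array members are the sd?3 partitions as on my box (adjust the device names to whatever /proc/diskstats shows):

[admin@NASXXXXX /]# mdadm --examine /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3   # md superblocks of the data-array members
[admin@NASXXXXX /]# smartctl -H /dev/sda                                      # basic health check of one drive
[admin@NASXXXXX /]# mdadm --assemble --readonly /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3   # at most a read-only assemble; never run mdadm --create on the old disks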


On Thu, Jul 8, 2021 at 6:07 PM Nicolas Martin
<nicolas.martin.3d@gmail.com> wrote:
>
> Hi,
>
> For a bit of context: I had a RAID5 array with 4 disks running on a QNAP NAS.
> One disk started failing, so I ordered a replacement, but in the meantime the NAS became unresponsive and I had to reboot it.
> Now the NAS does not (really) come back alive, and I can only log onto it with ssh.
>
> When I run cat /proc/mdstat, this is what I get:
> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
> md322 : active raid1 sdd5[3](S) sdc5[2](S) sdb5[1] sda5[0]
>       7235136 blocks super 1.0 [2/2] [UU]
>       bitmap: 0/1 pages [0KB], 65536KB chunk
>
> md256 : active raid1 sdd2[3](S) sdc2[2](S) sdb2[1] sda2[0]
>       530112 blocks super 1.0 [2/2] [UU]
>       bitmap: 0/1 pages [0KB], 65536KB chunk
>
> md13 : active raid1 sdc4[24] sda4[1] sdb4[0] sdd4[25]
>       458880 blocks super 1.0 [24/4] [UUUU____________________]
>       bitmap: 1/1 pages [4KB], 65536KB chunk
>
> md9 : active raid1 sdb1[0] sdd1[25] sdc1[24] sda1[26]
>       530048 blocks super 1.0 [24/4] [UUUU____________________]
>       bitmap: 1/1 pages [4KB], 65536KB chunk
>
> unused devices: <none>
>
> So, I don't know how this could have happened. I looked at the FAQ, but I can't see anything that would explain it, nor how to recover from it.
>
> Any help appreciated.
>
> Thanks
>


-- 




Fine Fan

Kernel Storage QE

ffan@redhat.com

T: 8388117
M: (+86)-15901470329



* Re: RAID5 now recognized as RAID1
  2021-07-08 10:07 RAID5 now recognized as RAID1 Nicolas Martin
  2021-07-12  7:08 ` Fine Fan
@ 2021-07-12  7:52 ` Wols Lists
  2021-07-15  4:59   ` Fine Fan
  1 sibling, 1 reply; 4+ messages in thread
From: Wols Lists @ 2021-07-12  7:52 UTC (permalink / raw)
  To: Nicolas Martin, linux-raid

On 08/07/21 11:07, Nicolas Martin wrote:
> So, I don't know how this could have happened. I looked at the FAQ, but I can't see anything that would explain it, nor how to recover from it.

https://raid.wiki.kernel.org/index.php/Asking_for_help

Cheers,
Wol


* Re: RAID5 now recognized as RAID1
  2021-07-12  7:52 ` Wols Lists
@ 2021-07-15  4:59   ` Fine Fan
  0 siblings, 0 replies; 4+ messages in thread
From: Fine Fan @ 2021-07-15  4:59 UTC (permalink / raw)
  To: Wols Lists; +Cc: Nicolas Martin, linux-raid

Hi Nicolas Martin,

I just got my new disk and plugged it into the NAS, and I am waiting for it to finish the RAID5 rebuild.
Here is what I found:
* The OS itself creates 4 RAID1 arrays for its own use, and the RAID5 with your data should be /dev/md1.

Here is the layout as a text diagram (you may need to view it in a fixed-width font for it to look right):


  +---------+        +---------+        +---------+
  |         |        |         |        |         |  /dev/md9    RAID1   /mnt/HDA_ROOT
  |   1     |        |   1     |        |   1     |      sda1
  |    543MB|        |    543MB|        |    543MB|      sdb1
  -----------        -----------        -----------      sdc1
  |         |        |         |        |         |
  |   2     |        |   2     |        |   2     |  /dev/md256  RAID1
  |    543MB|        |    543MB|        |    543MB|      sda2
  -----------        -----------        -----------      sdb2
  |         |        |         |        |         |      sdc2
  |         |        |         |        |         |
  |         |        |         |        |         |  /dev/md1    RAID5
  |         |        |         |        |         |      sda3            PV: /dev/md1
  |         |        |         |        |         |      sdb3            VG: vg288
  |         |        |         |        |         |      sdc3            LV: lv1    1.78T
  |   3     |        |   3     |        |   3     |                          lv544  18.54G
  |    990GB|        |    990GB|        |    990GB|
  |         |        |         |        |         |
  |         |        |         |        |         |
  |         |        |         |        |         |
  -----------        -----------        -----------
  |         |        |         |        |         |  /dev/md13   RAID1   /mnt/ext
  |   4     |        |   4     |        |   4     |      sda4
  |    543MB|        |    543MB|        |    543MB|      sdb4
  -----------        -----------        -----------      sdc4
  |         |        |         |        |         |
  |   5     |        |   5     |        |   5     |  /dev/md322  RAID1
  |  8544MB |        |  8544MB |        |  8544MB |      sda5
  |         |        |         |        |         |      sdb5
  +---------+        +---------+        +---------+      sdc5
     sda                sdb                sdc


Table Summary:
/dev/md9      RAID1   (sda1 sdb1 sdc1; 543MB each)     /mnt/HDA_ROOT
/dev/md256    RAID1   (sda2 sdb2 sdc2; 543MB each)
/dev/md1      RAID5   (sda3 sdb3 sdc3; 990GB each)     PV: /dev/md1  VG: vg288  LV: lv1
/dev/md13     RAID1   (sda4 sdb4 sdc4; 543MB each)     /mnt/ext
/dev/md322    RAID1   (sda5 sdb5 sdc5; 8544MB each)

As you can see, my data is on the /dev/md1 RAID5; the others are the RAID1 arrays used by the OS.
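
If you can still get a shell, something like the following should confirm the same layout on your box (a sketch; it assumes mdadm and the LVM tools are in the firmware and that /dev/md1 is the data array):

[admin@NASXXXXX /]# mdadm --detail /dev/md1   # RAID level, member partitions and their state
[admin@NASXXXXX /]# pvs                       # should list /dev/md1 as the physical volume
[admin@NASXXXXX /]# vgs                       # the volume group (vg288 in my case)
[admin@NASXXXXX /]# lvs                       # the logical volumes (lv1 holding the data, plus lv544)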

I hope this helps you a little.


On Mon, Jul 12, 2021 at 4:29 PM Wols Lists <antlists@youngman.org.uk> wrote:
>
> On 08/07/21 11:07, Nicolas Martin wrote:
> > So, I don't know how this could have happened. I looked at the FAQ, but I can't see anything that would explain it, nor how to recover from it.
>
> https://raid.wiki.kernel.org/index.php/Asking_for_help
>
> Cheers,
> Wol
>


-- 




Fine Fan

Kernel Storage QE

ffan@redhat.com

T: 8388117
M: (+86)-15901470329



end of thread, other threads:[~2021-07-15  4:58 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-08 10:07 RAID5 now recognized as RAID1 Nicolas Martin
2021-07-12  7:08 ` Fine Fan
2021-07-12  7:52 ` Wols Lists
2021-07-15  4:59   ` Fine Fan
