* Can't mount /dev/md0 Raid5
@ 2017-10-11 10:25 Joseba Ibarra
2017-10-11 10:42 ` Rudy Zijlstra
0 siblings, 1 reply; 17+ messages in thread
From: Joseba Ibarra @ 2017-10-11 10:25 UTC (permalink / raw)
To: list linux-raid
md0 is formatted as ext4. But now I can't even boot the OS when all the
disks are plugged in. One of them is broken: it makes an odd sound at
startup. If I unplug that disk the system boots fine, but then no RAID
is detected, and after assembling it mdadm says:
root@grafico:/home/jose# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Aug 5 23:10:50 2017
Raid Level : raid5
Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)
Raid Devices : 4
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Thu Sep 21 13:34:35 2017
State : active, degraded, Not Started
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : servidor:0
UUID : 0b44a3b8:83eafabc:644afc87:bdb5b1f3
Events : 3109
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 49 3 active sync /dev/sdd1
I'm not sure how to continue, since I don't see the RAID. GParted sees
the disks but not md0, and I'm a bit scared that I have lost the data.
Joseba Ibarra
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Can't mount /dev/md0 Raid5
2017-10-11 10:25 Can't mount /dev/md0 Raid5 Joseba Ibarra
@ 2017-10-11 10:42 ` Rudy Zijlstra
2017-10-11 11:14 ` Joseba Ibarra
0 siblings, 1 reply; 17+ messages in thread
From: Rudy Zijlstra @ 2017-10-11 10:42 UTC (permalink / raw)
To: Joseba Ibarra, list linux-raid
Hi Joseba,
On 11.10.17 12:25, Joseba Ibarra wrote:
> md0 is formatted as ext4. But now I can't even boot the OS when all
> the disks are plugged in. One of them is broken: it makes an odd sound
> at startup. If I unplug that disk the system boots fine, but then no
> RAID is detected, and after assembling it mdadm says:
>
>
> root@grafico:/home/jose# mdadm --detail /dev/md0
> /dev/md0:
> Version : 1.2
> Creation Time : Sat Aug 5 23:10:50 2017
> Raid Level : raid5
> Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)
> Raid Devices : 4
> Total Devices : 3
> Persistence : Superblock is persistent
>
> Update Time : Thu Sep 21 13:34:35 2017
> State : active, degraded, Not Started
> Active Devices : 3
> Working Devices : 3
> Failed Devices : 0
> Spare Devices : 0
>
> Layout : left-symmetric
> Chunk Size : 512K
>
> Name : servidor:0
> UUID : 0b44a3b8:83eafabc:644afc87:bdb5b1f3
> Events : 3109
>
> Number Major Minor RaidDevice State
> - 0 0 0 removed
> 1 8 17 1 active sync /dev/sdb1
> 2 8 33 2 active sync /dev/sdc1
> 3 8 49 3 active sync /dev/sdd1
>
>
> I'm not sure how to continue, since i don't see the RAID. GParted see
> the disks, however doesn't see the md0 and I'm bit scared if I lost
> the data content.
>
Let me see if I understand you correctly:
- with all 4 disks plugged in, your system does not boot
- with the broken disk unplugged, it boots (and from your description it
is really broken; no disk recovery is possible except by a specialised company)
- the raid does not get assembled during boot, so you do a manual assembly?
-> please provide the command you are using
From the log above, you should be able to mount /dev/md0, which
would auto-start the raid.
If that works, the next step would be to check the health of the other
disks. smartctl would be your friend.
Another useful action would be to copy all important data to a backup
before you add a new disk to replace the failed disk.
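The health check suggested above can be sketched as a small loop. This is a
hedged sketch: it assumes smartmontools is installed and that the surviving
members are /dev/sdb, /dev/sdc and /dev/sdd as elsewhere in this thread; it
is read-only and writes nothing to the disks.

```shell
for d in /dev/sdb /dev/sdc /dev/sdd; do
    echo "== $d =="
    smartctl -H "$d"                                        # overall PASSED/FAILED verdict
    smartctl -A "$d" | grep -Ei 'realloc|pending|uncorrect' # early-failure counters
done
```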
Cheers
Rudy
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Can't mount /dev/md0 Raid5
2017-10-11 10:42 ` Rudy Zijlstra
@ 2017-10-11 11:14 ` Joseba Ibarra
2017-10-11 11:29 ` Adam Goryachev
0 siblings, 1 reply; 17+ messages in thread
From: Joseba Ibarra @ 2017-10-11 11:14 UTC (permalink / raw)
To: Rudy Zijlstra, list linux-raid
Hi Rudy
1 - Yes, with all 4 disks plugged in, the system does not boot
2 - Yes, with the broken disk unplugged, it boots
3 - Yes, the raid does not assemble during boot. I assemble it manually with:
root@grafico:/home/jose# mdadm --assemble --scan /dev/md0
root@grafico:/home/jose# mdadm --assemble --scan
root@grafico:/home/jose# mdadm --assemble /dev/md0
4 - When I try to mount:
mount /dev/md0 /mnt
mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try dmesg | tail or
something like that.
So I run dmesg | tail:
root@grafico:/mnt# dmesg | tail
[ 705.021959] md: pers->run() failed ...
[ 849.719439] EXT4-fs (md0): unable to read superblock
[ 849.719564] EXT4-fs (md0): unable to read superblock
[ 849.719589] EXT4-fs (md0): unable to read superblock
[ 849.719616] UDF-fs: error (device md0): udf_read_tagged: read failed,
block=256, location=256
[ 849.719625] UDF-fs: error (device md0): udf_read_tagged: read failed,
block=512, location=512
[ 849.719638] UDF-fs: error (device md0): udf_read_tagged: read failed,
block=256, location=256
[ 849.719642] UDF-fs: error (device md0): udf_read_tagged: read failed,
block=512, location=512
[ 849.719643] UDF-fs: warning (device md0): udf_fill_super: No
partition found (1)
[ 849.719667] isofs_fill_super: bread failed, dev=md0, iso_blknum=16,
block=32
Thanks a lot for your help
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Can't mount /dev/md0 Raid5
2017-10-11 11:14 ` Joseba Ibarra
@ 2017-10-11 11:29 ` Adam Goryachev
2017-10-11 11:56 ` Joseba Ibarra
0 siblings, 1 reply; 17+ messages in thread
From: Adam Goryachev @ 2017-10-11 11:29 UTC (permalink / raw)
To: Joseba Ibarra, Rudy Zijlstra, list linux-raid
Hi Rudy,
Please send the output of all of the following commands:
cat /proc/mdstat
mdadm --manage /dev/md0 --stop
mdadm --assemble /dev/md0 /dev/sd[bcd]1
cat /proc/mdstat
mdadm --manage /dev/md0 --run
mdadm --manage /dev/md0 --readwrite
cat /proc/mdstat
Basically the above just looks at what the system has done so far,
stops/clears that, and then tries to assemble the array again; finally,
we try to start it, even with one faulty disk.
At this stage the chances look good for recovering all your data, though
I would advise getting a replacement disk for the dead one so that you
can restore redundancy as soon as possible.
Regards,
Adam
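Restoring redundancy later could look roughly like the sketch below. It is
hedged: /dev/sde is an assumed name for the replacement disk (verify with
lsblk first), and the sfdisk line overwrites the target's partition table,
so be certain it really is the new, empty disk.

```shell
sfdisk -d /dev/sdb | sfdisk /dev/sde     # clone the MBR layout from a healthy member
mdadm --manage /dev/md0 --add /dev/sde1  # add the new partition; the rebuild starts on its own
cat /proc/mdstat                         # check rebuild progress
```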
On 11/10/17 22:14, Joseba Ibarra wrote:
> Hi Rudy
>
> 1 - Yes, with all 4 disks plugged in, the system does not boot
> 2 - Yes, with the broken disk unplugged, it boots
> 3 - Yes, the raid does not assemble during boot. I assemble it manually with:
>
> root@grafico:/home/jose# mdadm --assemble --scan /dev/md0
> root@grafico:/home/jose# mdadm --assemble --scan
> root@grafico:/home/jose# mdadm --assemble /dev/md0
>
> 4 -When I try to mount
>
> mount /dev/md0 /mnt
>
> mount: wrong fs type, bad option, bad superblock on /dev/md0,
> missing codepage or helper program, or other error
>
> In some cases useful info is found in syslog - try dmesg | tail or
> something like that.
>
> I do dmesg | tail
>
> root@grafico:/mnt# dmesg | tail
> [ 705.021959] md: pers->run() failed ...
> [ 849.719439] EXT4-fs (md0): unable to read superblock
> [ 849.719564] EXT4-fs (md0): unable to read superblock
> [ 849.719589] EXT4-fs (md0): unable to read superblock
> [ 849.719616] UDF-fs: error (device md0): udf_read_tagged: read
> failed, block=256, location=256
> [ 849.719625] UDF-fs: error (device md0): udf_read_tagged: read
> failed, block=512, location=512
> [ 849.719638] UDF-fs: error (device md0): udf_read_tagged: read
> failed, block=256, location=256
> [ 849.719642] UDF-fs: error (device md0): udf_read_tagged: read
> failed, block=512, location=512
> [ 849.719643] UDF-fs: warning (device md0): udf_fill_super: No
> partition found (1)
> [ 849.719667] isofs_fill_super: bread failed, dev=md0, iso_blknum=16,
> block=32
>
> Thanks a lot for your help
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
--
Adam Goryachev
Website Managers
P: +61 2 8304 0000 adam@websitemanagers.com.au
F: +61 2 8304 0001 www.websitemanagers.com.au
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Can't mount /dev/md0 Raid5
2017-10-11 11:29 ` Adam Goryachev
@ 2017-10-11 11:56 ` Joseba Ibarra
2017-10-11 13:23 ` Adam Goryachev
2017-10-11 14:01 ` Mikael Abrahamsson
0 siblings, 2 replies; 17+ messages in thread
From: Joseba Ibarra @ 2017-10-11 11:56 UTC (permalink / raw)
To: Adam Goryachev, Rudy Zijlstra, list linux-raid
Hi Adam
root@grafico:/mnt# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md0 : inactive sdd1[3] sdb1[1] sdc1[2]
2929889280 blocks super 1.2
unused devices: <none>
root@grafico:/mnt# mdadm --manage /dev/md0 --stop
mdadm: stopped /dev/md0
root@grafico:/mnt# mdadm --assemble /dev/md0 /dev/sd[bcd]1
mdadm: /dev/md0 assembled from 3 drives - not enough to start the array
while not clean - consider --force.
root@grafico:/mnt# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
unused devices: <none>
At this point I've followed the advice and used --force:
root@grafico:/mnt# mdadm --assemble --force /dev/md0 /dev/sd[bcd]1
mdadm: Marking array /dev/md0 as 'clean'
mdadm: /dev/md0 has been started with 3 drives (out of 4).
root@grafico:/mnt# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md0 : active (auto-read-only) raid5 sdb1[1] sdd1[3] sdc1[2]
2929889280 blocks super 1.2 level 5, 512k chunk, algorithm 2
[4/3] [_UUU]
bitmap: 0/8 pages [0KB], 65536KB chunk
unused devices: <none>
Now I see the RAID, but it can't be mounted, so I'm not sure how to
back up the data. GParted shows the partition /dev/md0p1 with its used
and free space.
If I try
mount /dev/md0 /mnt
again the output is
mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try dmesg | tail or
something like that.
If I try
root@grafico:/mnt# mount /dev/md0p1 /mnt
mount: /dev/md0p1: can't read superblock
And dmesg | tail shows:
root@grafico:/mnt# dmesg | tail
[ 3263.411724] VFS: Dirty inode writeback failed for block device md0p1
(err=-5).
[ 3280.486813] md0: p1
[ 3280.514024] md0: p1
[ 3452.496811] UDF-fs: warning (device md0): udf_fill_super: No
partition found (2)
[ 3463.731052] JBD2: Invalid checksum recovering block 630194476 in log
[ 3464.933960] Buffer I/O error on dev md0p1, logical block 630194474,
lost async page write
[ 3464.933971] Buffer I/O error on dev md0p1, logical block 630194475,
lost async page write
[ 3465.928066] JBD2: recovery failed
[ 3465.928070] EXT4-fs (md0p1): error loading journal
[ 3465.936852] VFS: Dirty inode writeback failed for block device md0p1
(err=-5).
Thanks a lot for your time
Joseba Ibarra
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Can't mount /dev/md0 Raid5
2017-10-11 11:56 ` Joseba Ibarra
@ 2017-10-11 13:23 ` Adam Goryachev
2017-10-11 13:35 ` Joseba Ibarra
2017-10-11 14:01 ` Mikael Abrahamsson
1 sibling, 1 reply; 17+ messages in thread
From: Adam Goryachev @ 2017-10-11 13:23 UTC (permalink / raw)
To: Joseba Ibarra, Rudy Zijlstra, list linux-raid
On 11/10/17 22:56, Joseba Ibarra wrote:
> Hi Adam
>
> root@grafico:/mnt# cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> [raid4] [raid10]
> md0 : inactive sdd1[3] sdb1[1] sdc1[2]
> 2929889280 blocks super 1.2
>
> unused devices: <none>
>
>
> root@grafico:/mnt# mdadm --manage /dev/md0 --stop
> mdadm: stopped /dev/md0
>
>
> root@grafico:/mnt# mdadm --assemble /dev/md0 /dev/sd[bcd]1
> mdadm: /dev/md0 assembled from 3 drives - not enough to start the
> array while not clean - consider --force.
>
>
>
> root@grafico:/mnt# cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> [raid4] [raid10]
> unused devices: <none>
>
> At this point I've followed the advice and used --force:
>
> root@grafico:/mnt# mdadm --assemble --force /dev/md0 /dev/sd[bcd]1
> mdadm: Marking array /dev/md0 as 'clean'
> mdadm: /dev/md0 has been started with 3 drives (out of 4).
>
>
> root@grafico:/mnt# cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> [raid4] [raid10]
> md0 : active (auto-read-only) raid5 sdb1[1] sdd1[3] sdc1[2]
> 2929889280 blocks super 1.2 level 5, 512k chunk, algorithm 2
> [4/3] [_UUU]
> bitmap: 0/8 pages [0KB], 65536KB chunk
>
> unused devices: <none>
>
>
> Now I see the RAID, but it can't be mounted, so I'm not sure how to
> back up the data. GParted shows the partition /dev/md0p1 with its used
> and free space.
>
It still says read-only; can you try:
mdadm --manage /dev/md0 --run
or
mdadm --manage /dev/md0 --readwrite
PS: usually mounting will automatically convert from read-only to
read-write, but I recall some cases where this didn't happen for me, so
it might help you as well.
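If the array stays stuck in (auto-read-only), a read-only mount may still be
enough to copy the data off. A hedged sketch, assuming the filesystem is on
/dev/md0p1 as GParted reports; the ext4 noload option skips journal replay,
which the dmesg output earlier in the thread shows failing:

```shell
mdadm --manage /dev/md0 --readwrite   # try to clear the (auto-read-only) flag
mount -o ro,noload /dev/md0p1 /mnt    # read-only mount, skipping journal replay
```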
Regards,
Adam
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Can't mount /dev/md0 Raid5
2017-10-11 13:23 ` Adam Goryachev
@ 2017-10-11 13:35 ` Joseba Ibarra
2017-10-11 19:13 ` Adam Goryachev
0 siblings, 1 reply; 17+ messages in thread
From: Joseba Ibarra @ 2017-10-11 13:35 UTC (permalink / raw)
To: Adam Goryachev, Rudy Zijlstra, list linux-raid
Still the same. Nothing seems to have changed.
I get the same responses to the mount commands.
Adam Goryachev wrote:
> mdadm --manage /dev/md0 --readwrite
Jose Ibarra
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Can't mount /dev/md0 Raid5
2017-10-11 11:56 ` Joseba Ibarra
2017-10-11 13:23 ` Adam Goryachev
@ 2017-10-11 14:01 ` Mikael Abrahamsson
2017-10-11 17:27 ` Joseba Ibarra
2017-10-11 19:49 ` John Stoffel
1 sibling, 2 replies; 17+ messages in thread
From: Mikael Abrahamsson @ 2017-10-11 14:01 UTC (permalink / raw)
To: Joseba Ibarra; +Cc: Adam Goryachev, Rudy Zijlstra, list linux-raid
On Wed, 11 Oct 2017, Joseba Ibarra wrote:
> Now I see the RAID, but it can't be mounted, so I'm not sure how to back up
> the data. GParted shows the partition /dev/md0p1 with its used and free
> space.
Do you know what file system you had? It looks like the next step is to
run fsck -n (read-only) on md0 and/or md0p1.
What does /etc/fstab contain regarding md0?
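The suggestion above can be made concrete with a few read-only commands
(fsck -n only reports, it never writes; device names as used in this thread):

```shell
fsck -n /dev/md0      # in case the filesystem was made directly on the array
fsck -n /dev/md0p1    # the partition that GParted reports
grep md0 /etc/fstab   # how the array was mounted before the failure
```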
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Can't mount /dev/md0 Raid5
2017-10-11 14:01 ` Mikael Abrahamsson
@ 2017-10-11 17:27 ` Joseba Ibarra
2017-10-11 20:46 ` NeilBrown
2017-10-11 19:49 ` John Stoffel
1 sibling, 1 reply; 17+ messages in thread
From: Joseba Ibarra @ 2017-10-11 17:27 UTC (permalink / raw)
To: Mikael Abrahamsson, Adam Goryachev, list linux-raid, Rudy Zijlstra
Hi Mikael,
I had ext4,
and the commands give:
root@grafico:/mnt# fsck -n /dev/md0
fsck from util-linux 2.29.2
e2fsck 1.43.4 (31-Jan-2017)
ext2fs_open2(): Bad magic number in superblock
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev/md0
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
Found a gpt partition table in /dev/md0
I'm getting more scared... I have no idea what to do.
Thanks
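The alternate-superblock hint in the e2fsck output above can be turned into a
concrete, read-only next step. A hedged sketch, assuming the filesystem lives
on /dev/md0p1 as GParted reports: mke2fs -n only *simulates* creation, so it
is safe, but double-check the -n is really there before running.

```shell
mke2fs -n /dev/md0p1            # prints "Superblock backups stored on blocks: ..."
e2fsck -n -b 32768 /dev/md0p1   # read-only check against the usual first backup
```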
--
<http://64bits.es/>
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Can't mount /dev/md0 Raid5
2017-10-11 13:35 ` Joseba Ibarra
@ 2017-10-11 19:13 ` Adam Goryachev
2017-10-11 19:46 ` Joseba Ibarra
0 siblings, 1 reply; 17+ messages in thread
From: Adam Goryachev @ 2017-10-11 19:13 UTC (permalink / raw)
To: Joseba Ibarra, list linux-raid
What is the output of cat /proc/mdstat after running the readwrite below?
Did you try the --run option?
You need to make sure the array is active/running before you try to
mount it.
BTW, has anything else happened to the array, other than the drive that
failed?
I probably should have asked for this before, but what is the full output
of smartctl for each drive, and also mdadm --misc --examine /dev/sd[bcd]1
(or whichever three devices are relevant)?
Regards,
Adam
On 12/10/17 00:35, Joseba Ibarra wrote:
> Still the same. Nothing seems to have changed.
>
>
> Same answers for mount commands.
>
> Adam Goryachev wrote:
>> mdadm --manage /dev/md0 --readwrite
>
> Jose Ibarra
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Can't mount /dev/md0 Raid5
2017-10-11 19:13 ` Adam Goryachev
@ 2017-10-11 19:46 ` Joseba Ibarra
0 siblings, 0 replies; 17+ messages in thread
From: Joseba Ibarra @ 2017-10-11 19:46 UTC (permalink / raw)
To: Adam Goryachev, list linux-raid, Rudy Zijlstra,
Mikael Abrahamsson, NeilBrown
Hi Adam,
root@grafico:/mnt# mdadm --manage /dev/md0 --readwrite
root@grafico:/mnt# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md0 : active raid5 sdb1[1] sdd1[3] sdc1[2]
2929889280 blocks super 1.2 level 5, 512k chunk, algorithm 2
[4/3] [_UUU]
bitmap: 5/8 pages [20KB], 65536KB chunk
unused devices: <none>
If you mean mdadm --manage /dev/md0 --run, I have already done that and
the result is the same.
Running mdadm --detail /dev/md0:
root@grafico:/mnt# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Aug 5 23:10:50 2017
Raid Level : raid5
Array Size : 2929889280 (2794.16 GiB 3000.21 GB)
Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)
Raid Devices : 4
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Wed Oct 11 15:33:13 2017
State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : servidor:0
UUID : 0b44a3b8:83eafabc:644afc87:bdb5b1f3
Events : 3115
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 49 3 active sync /dev/sdd1
I started the thread with this:
https://marc.info/?l=linux-raid&m=150607163322491&w=2 but after some
days away travelling for work, I found the disk had stopped working.
I'll give you the current info:
root@grafico:/mnt# fdisk -l
Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa69b0d7f
Device Boot Start End Sectors Size Id Type
/dev/sdd1 2048 1953523711 1953521664 931.5G fd Linux raid
autodetect
Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x87ea0d19
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 1953523711 1953521664 931.5G fd Linux raid
autodetect
Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8b48be4a
Device Boot Start End Sectors Size Id Type
/dev/sdc1 2048 1953523711 1953521664 931.5G fd Linux raid
autodetect
Disk /dev/md0: 2.7 TiB, 3000206622720 bytes, 5859778560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
Disklabel type: gpt
Disk identifier: A87C8BBF-5876-4FA6-83F6-46DAF1BDCF75
Device Start End Sectors Size Type
/dev/md0p1 2048 5859776511 5859774464 2.7T Linux filesystem
root@grafico:/mnt# mdadm --examine /dev/sdb
/dev/sdb:
MBR Magic : aa55
Partition[0] : 1953521664 sectors at 2048 (type fd)
root@grafico:/mnt# mdadm --examine /dev/sdc
/dev/sdc:
MBR Magic : aa55
Partition[0] : 1953521664 sectors at 2048 (type fd)
root@grafico:/mnt# mdadm --examine /dev/sdd
/dev/sdd:
MBR Magic : aa55
Partition[0] : 1953521664 sectors at 2048 (type fd)
root@grafico:/mnt# mdadm --misc --examine /dev/sd[bcd]1
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 0b44a3b8:83eafabc:644afc87:bdb5b1f3
Name : servidor:0
Creation Time : Sat Aug 5 23:10:50 2017
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 1953259520 (931.39 GiB 1000.07 GB)
Array Size : 2929889280 (2794.16 GiB 3000.21 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 32271db0:38a5220c:c19968af:8fe1a3fb
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Oct 11 15:33:13 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 86b5e249 - correct
Events : 3115
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : .AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 0b44a3b8:83eafabc:644afc87:bdb5b1f3
Name : servidor:0
Creation Time : Sat Aug 5 23:10:50 2017
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 1953259520 (931.39 GiB 1000.07 GB)
Array Size : 2929889280 (2794.16 GiB 3000.21 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 9014cf8b:ea7e22e2:9274be9d:9aeee689
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Oct 11 15:33:13 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : b5009136 - correct
Events : 3115
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : .AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x9
Array UUID : 0b44a3b8:83eafabc:644afc87:bdb5b1f3
Name : servidor:0
Creation Time : Sat Aug 5 23:10:50 2017
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 1953259520 (931.39 GiB 1000.07 GB)
Array Size : 2929889280 (2794.16 GiB 3000.21 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : f95affb4:138bca63:d10df091:1f20af37
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Oct 11 15:33:13 2017
Bad Block Log : 512 entries available at offset 72 sectors - bad
blocks present.
Checksum : 1d2ae95 - correct
Events : 3115
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 3
Array State : .AAA ('A' == active, '.' == missing, 'R' == replacing)
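The key detail in the --examine output above is that all three surviving members report the same Events counter (3115) and the same Update Time; matching event counts are what make a forced assembly of the degraded array reasonably safe. A minimal sketch of that check, run here against sample lines copied from the output above rather than against the live devices:

```shell
# Compare the Events counters of the surviving members. The sample
# values below are copied from the --examine output in this thread; on
# the live system they would come from:
#   mdadm --examine /dev/sd[bcd]1 | grep Events
examine_events='Events : 3115
Events : 3115
Events : 3115'
distinct=$(printf '%s\n' "$examine_events" | awk '{print $3}' | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
    echo "events match - a forced assembly should not lose acknowledged writes"
else
    echo "events differ - post the full --examine output before using --force"
fi
```

If the counts differed by more than a handful, the right move would be to stop and ask the list before reaching for --force.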
smartctl -a gives this log:
root@grafico:/mnt# smartctl -a /dev/sdb
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-3-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Device Model: QEMU HARDDISK
Serial Number: QM00009
Firmware Version: 2.5+
User Capacity: 1.000.204.886.016 bytes [1,00 TB]
Sector Size: 512 bytes logical/physical
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ATA/ATAPI-7, ATA/ATAPI-5 published, ANSI NCITS 340-2000
Local Time is: Wed Oct 11 21:42:39 2017 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test
routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 288) seconds.
Offline data collection
capabilities: (0x19) SMART execute Offline immediate.
No Auto Offline data collection support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
No Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
No General Purpose Logging support.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 54) minutes.
SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE
UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x0003 100 100 006 Pre-fail
Always - 0
3 Spin_Up_Time 0x0003 100 100 000 Pre-fail
Always - 16
4 Start_Stop_Count 0x0002 100 100 020 Old_age
Always - 100
5 Reallocated_Sector_Ct 0x0003 100 100 036 Pre-fail
Always - 0
9 Power_On_Hours 0x0003 100 100 000 Pre-fail
Always - 1
12 Power_Cycle_Count 0x0003 100 100 000 Pre-fail
Always - 0
190 Airflow_Temperature_Cel 0x0003 069 069 050 Pre-fail
Always - 31 (Min/Max 31/31)
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]
Selective Self-tests/Logging not supported
root@grafico:/mnt# smartctl -a /dev/sdc
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-3-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Device Model: QEMU HARDDISK
Serial Number: QM00011
Firmware Version: 2.5+
User Capacity: 1.000.204.886.016 bytes [1,00 TB]
Sector Size: 512 bytes logical/physical
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ATA/ATAPI-7, ATA/ATAPI-5 published, ANSI NCITS 340-2000
Local Time is: Wed Oct 11 21:44:02 2017 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test
routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 288) seconds.
Offline data collection
capabilities: (0x19) SMART execute Offline immediate.
No Auto Offline data collection support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
No Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
No General Purpose Logging support.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 54) minutes.
SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE
UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x0003 100 100 006 Pre-fail
Always - 0
3 Spin_Up_Time 0x0003 100 100 000 Pre-fail
Always - 16
4 Start_Stop_Count 0x0002 100 100 020 Old_age
Always - 100
5 Reallocated_Sector_Ct 0x0003 100 100 036 Pre-fail
Always - 0
9 Power_On_Hours 0x0003 100 100 000 Pre-fail
Always - 1
12 Power_Cycle_Count 0x0003 100 100 000 Pre-fail
Always - 0
190 Airflow_Temperature_Cel 0x0003 069 069 050 Pre-fail
Always - 31 (Min/Max 31/31)
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]
Selective Self-tests/Logging not supported
root@grafico:/mnt# smartctl -a /dev/sdd
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-3-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Device Model: QEMU HARDDISK
Serial Number: QM00013
Firmware Version: 2.5+
User Capacity: 1.000.204.886.016 bytes [1,00 TB]
Sector Size: 512 bytes logical/physical
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ATA/ATAPI-7, ATA/ATAPI-5 published, ANSI NCITS 340-2000
Local Time is: Wed Oct 11 21:44:24 2017 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test
routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 288) seconds.
Offline data collection
capabilities: (0x19) SMART execute Offline immediate.
No Auto Offline data collection support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
No Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
No General Purpose Logging support.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 54) minutes.
SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE
UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x0003 100 100 006 Pre-fail
Always - 0
3 Spin_Up_Time 0x0003 100 100 000 Pre-fail
Always - 16
4 Start_Stop_Count 0x0002 100 100 020 Old_age
Always - 100
5 Reallocated_Sector_Ct 0x0003 100 100 036 Pre-fail
Always - 0
9 Power_On_Hours 0x0003 100 100 000 Pre-fail
Always - 1
12 Power_Cycle_Count 0x0003 100 100 000 Pre-fail
Always - 0
190 Airflow_Temperature_Cel 0x0003 069 069 050 Pre-fail
Always - 31 (Min/Max 31/31)
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]
Selective Self-tests/Logging not supported
In my humble opinion all three of these disks are OK, although the Device
Model above is "QEMU HARDDISK", so this is SMART data for the virtual disks
as seen from inside the VM; the physical drives underneath would have to be
checked from the Proxmox host.
Thanks again
Adam Goryachev escribió:
> What is the output of cat /proc/mdstat after running the readwrite below?
>
> Did you try the --run option?
>
> You need to make sure the array is active/running before you try to
> mount it.
>
> BTW, has anything else happened to the array, other than the drive
> that failed?
>
> Probably should have asked for this before, but what is the full
> output of smartctl for each drive, also mdadm --misc --examine
> /dev/sd[bcd]1 (or all three relevant devices)?
>
> Regards,
> Adam
>
>
> On 12/10/17 00:35, Joseba Ibarra wrote:
>> Still the same. Nothing seems to have changed.
>>
>>
>> Same answers for mount commands.
>>
>> Adam Goryachev escribió:
>>> mdadm --manage /dev/md0 --readwrite
>>
>> Jose Ibarra
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
--
<http://64bits.es/>
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Can't mount /dev/md0 Raid5
2017-10-11 14:01 ` Mikael Abrahamsson
2017-10-11 17:27 ` Joseba Ibarra
@ 2017-10-11 19:49 ` John Stoffel
2017-10-11 20:57 ` Joseba Ibarra
1 sibling, 1 reply; 17+ messages in thread
From: John Stoffel @ 2017-10-11 19:49 UTC (permalink / raw)
To: Mikael Abrahamsson
Cc: Joseba Ibarra, Adam Goryachev, Rudy Zijlstra, list linux-raid
>>>>> "Mikael" == Mikael Abrahamsson <swmike@swm.pp.se> writes:
Mikael> On Wed, 11 Oct 2017, Joseba Ibarra wrote:
>> Now I see the RAID, however it can't be mounted. So, I'm not sure how to back up
>> the data. Gparted shows the partition /dev/md0p1 with the used and free
>> space.
Mikael> Do you know what file system you had? Looks like next step is to try to
Mikael> run fsck -n (read-only) on md0 and/or md0p1.
Mikael> What does /etc/fstab contain regarding md0?
Did you have the RAID5 setup as a PV inside a VG? What does:
vgscan
give you back when you run it as root?
* Re: Can't mount /dev/md0 Raid5
2017-10-11 17:27 ` Joseba Ibarra
@ 2017-10-11 20:46 ` NeilBrown
[not found] ` <59DE891F.1@gmail.com>
0 siblings, 1 reply; 17+ messages in thread
From: NeilBrown @ 2017-10-11 20:46 UTC (permalink / raw)
To: Joseba Ibarra, Mikael Abrahamsson, Adam Goryachev,
list linux-raid, Rudy Zijlstra
On Wed, Oct 11 2017, Joseba Ibarra wrote:
> Hi Mikael,
>
> I had ext4
>
> and for commands:
>
> root@grafico:/mnt# fsck -n /dev/md0
> fsck de util-linux 2.29.2
> e2fsck 1.43.4 (31-Jan-2017)
> ext2fs_open2(): Bad magic number in superblock
> fsck.ext2: invalid superblock, trying backup blocks...
> fsck.ext2: Bad magic number in super-block while trying to open /dev/md0
>
> The superblock could not be read or does not describe a ext2/ext3/ext4
> filesystem.
> If the device is invalid and it really contains an ext2/ext3/ext4 filesystem
> (and not swap or ufs or something else), then the superblock is corrupt;
> and you might try running e2fsck with an alternate superblock:
> e2fsck -b 8193 <device>
> o
> e2fsck -b 32768 <device>
>
> A gpt partition table is found in /dev/md0
Mikael suggested:
>> try to run fsck -n (read-only) on md0 and/or md0p1.
But you only tried
fsck -n /dev/md0
why didn't you also try
fsck -n /dev/md0p1
??
NeilBrown
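Worth stressing here: fsck -n answers "no" to every question, so it changes nothing on disk and is safe to run on a suspect filesystem. A small self-contained demo against a throwaway image file instead of the real /dev/md0p1 (assumes e2fsprogs and coreutils are available):

```shell
# Read-only check against a scratch ext4 image instead of the real
# /dev/md0p1. The mkfs/fsck tools often live in /sbin.
export PATH="$PATH:/sbin:/usr/sbin"
img=$(mktemp)
truncate -s 8M "$img"         # small sparse image file
mkfs.ext4 -q -F "$img"        # -F: allow a regular file, no root needed
fsck -n "$img"; status=$?     # -n: answer "no" to everything, change nothing
rm -f "$img"
echo "fsck exit status: $status"
```

The same fsck -n invocation, aimed at /dev/md0p1, is what Mikael and Neil were asking for.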
>
>
> I'm getting more scared... No idea what to do
>
> Thanks
>> Mikael Abrahamsson <mailto:swmike@swm.pp.se>
>> 11 October 2017, 16:01
>> On Wed, 11 Oct 2017, Joseba Ibarra wrote:
>>
>>
>> Do you know what file system you had? Looks like next step is to try
>> to run fsck -n (read-only) on md0 and/or md0p1.
>>
>> What does /etc/fstab contain regarding md0?
>>
>> Joseba Ibarra <mailto:wajalotnet@gmail.com>
>> 11 October 2017, 13:56
>> Hi Adam
>>
>> root@grafico:/mnt# cat /proc/mdstat
>> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
>> [raid4] [raid10]
>> md0 : inactive sdd1[3] sdb1[1] sdc1[2]
>> 2929889280 blocks super 1.2
>>
>> unused devices: <none>
>>
>>
>> root@grafico:/mnt# mdadm --manage /dev/md0 --stop
>> mdadm: stopped /dev/md0
>>
>>
>> root@grafico:/mnt# mdadm --assemble /dev/md0 /dev/sd[bcd]1
>> mdadm: /dev/md0 assembled from 3 drives - not enough to start the
>> array while not clean - consider --force.
>>
>>
>>
>> root@grafico:/mnt# cat /proc/mdstat
>> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
>> [raid4] [raid10]
>> unused devices: <none>
>>
>> At this point I've followed the advice and used --force
>>
>> root@grafico:/mnt# mdadm --assemble --force /dev/md0 /dev/sd[bcd]1
>> mdadm: Marking array /dev/md0 as 'clean'
>> mdadm: /dev/md0 has been started with 3 drives (out of 4).
>>
>>
>> root@grafico:/mnt# cat /proc/mdstat
>> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
>> [raid4] [raid10]
>> md0 : active (auto-read-only) raid5 sdb1[1] sdd1[3] sdc1[2]
>> 2929889280 blocks super 1.2 level 5, 512k chunk, algorithm 2
>> [4/3] [_UUU]
>> bitmap: 0/8 pages [0KB], 65536KB chunk
>>
>> unused devices: <none>
>>
>>
>> Now I see the RAID, however it can't be mounted. So, I'm not sure how to
>> back up the data. Gparted shows the partition /dev/md0p1 with the used
>> and free space.
>>
>>
>> If I try
>>
>> mount /dev/md0 /mnt
>>
>> again the output is
>>
>> mount: wrong file system, bad option, bad superblock in /dev/md0,
>> missing codepage or helper program, or other error
>>
>> In some cases useful info is found in syslog - try dmesg | tail or
>> something like that.
>>
>> I do dmesg | tail
>>
>> If I try root@grafico:/mnt# mount /dev/md0p1 /mnt
>> mount: /dev/md0p1: can't read superblock
>>
>> And
>>
>>
>> root@grafico:/mnt# dmesg | tail
>> [ 3263.411724] VFS: Dirty inode writeback failed for block device
>> md0p1 (err=-5).
>> [ 3280.486813] md0: p1
>> [ 3280.514024] md0: p1
>> [ 3452.496811] UDF-fs: warning (device md0): udf_fill_super: No
>> partition found (2)
>> [ 3463.731052] JBD2: Invalid checksum recovering block 630194476 in log
>> [ 3464.933960] Buffer I/O error on dev md0p1, logical block 630194474,
>> lost async page write
>> [ 3464.933971] Buffer I/O error on dev md0p1, logical block 630194475,
>> lost async page write
>> [ 3465.928066] JBD2: recovery failed
>> [ 3465.928070] EXT4-fs (md0p1): error loading journal
>> [ 3465.936852] VFS: Dirty inode writeback failed for block device
>> md0p1 (err=-5).
>>
>>
>> Thanks a lot for your time
>>
>>
>> Joseba Ibarra
>>
>> Adam Goryachev <mailto:adam@websitemanagers.com.au>
>> 11 October 2017, 13:29
>> Hi Rudy,
>>
>> Please send the output of all of the following commands:
>>
>> cat /proc/mdstat
>>
>> mdadm --manage /dev/md0 --stop
>>
>> mdadm --assemble /dev/md0 /dev/sd[bcd]1
>>
>> cat /proc/mdstat
>>
>> mdadm --manage /dev/md0 --run
>>
>> mdadm --manage /dev/md0 --readwrite
>>
>> cat /proc/mdstat
>>
>>
>> Basically the above is just looking at what the system has done
>> currently, stopping/clearing that, and then trying to assemble it
>> again, finally, we try to start it, even if it has one faulty disk.
>>
>> At this stage, chances look good for recovering all your data, though
>> I would advise to get yourself a replacement disk for the dead one so
>> that you can restore redundancy as soon as possible.
>>
>> Regards,Adam
>>
>>
>>
>>
>>
>> Joseba Ibarra <mailto:wajalotnet@gmail.com>
>> 11 October 2017, 13:14
>> Hi Rudy
>>
>> 1- Yes, with all 4 disk plugged in, system does not boot
>> 2- Yes, with the broken disk unplugged, it boots
>> 3 - Yes, raid does not assemble during boot. I assemble manually doing
>>
>> root@grafico:/home/jose# mdadm --assemble --scan /dev/md0
>> root@grafico:/home/jose# mdadm --assemble --scan
>> root@grafico:/home/jose# mdadm --assemble /dev/md0
>>
>> 4 -When I try to mount
>>
>> mount /dev/md0 /mnt
>>
>> mount: wrong file system, bad option, bad superblock in /dev/md0,
>> missing codepage or helper program, or other error
>>
>> In some cases useful info is found in syslog - try dmesg | tail or
>> something like that.
>>
>> I do dmesg | tail
>>
>> root@grafico:/mnt# dmesg | tail
>> [ 705.021959] md: pers->run() failed ...
>> [ 849.719439] EXT4-fs (md0): unable to read superblock
>> [ 849.719564] EXT4-fs (md0): unable to read superblock
>> [ 849.719589] EXT4-fs (md0): unable to read superblock
>> [ 849.719616] UDF-fs: error (device md0): udf_read_tagged: read
>> failed, block=256, location=256
>> [ 849.719625] UDF-fs: error (device md0): udf_read_tagged: read
>> failed, block=512, location=512
>> [ 849.719638] UDF-fs: error (device md0): udf_read_tagged: read
>> failed, block=256, location=256
>> [ 849.719642] UDF-fs: error (device md0): udf_read_tagged: read
>> failed, block=512, location=512
>> [ 849.719643] UDF-fs: warning (device md0): udf_fill_super: No
>> partition found (1)
>> [ 849.719667] isofs_fill_super: bread failed, dev=md0, iso_blknum=16,
>> block=32
>>
>> Thanks a lot for your help
>> Rudy Zijlstra <mailto:rudy@grumpydevil.homelinux.org>
>> 11 October 2017, 12:42
>> Hi Joseba,
>>
>>
>>
>> Let me see if I understand you correctly
>>
>> - with all 4 disks plugged in, your system does not boot
>> - with the broken disk unplugged, it boots (and from your description
>> it is really broken, no DISK recovery possible unless by specialised
>> company)
>> - raid does not get assembled during boot, you do a manual assembly?
>> -> please provide the command you are using
>>
>> from the log above, you should be able to do a mount of /dev/md0 which
>> would auto-start the raid.
>>
>> If that works, the next step would be to check the health of the other
>> disks. smartctl would be your friend.
>> Another useful action would be to copy all important data to a backup
>> before you add a new disk to replace the failed disk.
>>
>> Cheers
>>
>> Rudy
>
> --
> <http://64bits.es/>
* Re: Can't mount /dev/md0 Raid5
2017-10-11 19:49 ` John Stoffel
@ 2017-10-11 20:57 ` Joseba Ibarra
0 siblings, 0 replies; 17+ messages in thread
From: Joseba Ibarra @ 2017-10-11 20:57 UTC (permalink / raw)
To: John Stoffel, Mikael Abrahamsson, Adam Goryachev, Rudy Zijlstra,
list linux-raid, NeilBrown
Hi John,
The 4 disks are in an HP ProLiant where I run Proxmox. The RAID is
attached to a particular virtual machine with Debian 9. It worked fine
until I interrupted a process that was setting permissions on a specific
directory inside the RAID. Then it crashed.
Doing vgscan
root@grafico:/mnt# vgscan
Reading volume groups from cache.
I stay root the whole time while I try to recover the RAID, or at least
the data files.
I had to remove the fstab line for /dev/md0 since it caused an error at boot.
If I add the line to /etc/fstab
UUID=xxxx-xxxx--xxxxxx /media/raid5 ext4 defaults 0 0
where UUID is the UUID for /dev/md0p1
the error I get at boot is shown in this picture:
http://64bits.es/boot.png
Thanks for your time
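On the fstab question: with plain "defaults", systemd treats the mount as required for boot, so a missing or degraded array drops the machine into emergency mode, which matches the boot.png symptom. A hedged sketch of a more forgiving entry (the UUID is a placeholder, as above; "nofail" lets the boot continue without the array, and pass 2 still lets fsck check it after the root filesystem):

```
# /etc/fstab - hypothetical entry; 'nofail' means booting continues even
# when the array has not come up
UUID=xxxx-xxxx-xxxxxx  /media/raid5  ext4  defaults,nofail  0  2
```

That way the fstab line can stay in place while the array is being repaired.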
> John Stoffel <mailto:john@stoffel.org>
> 11 October 2017, 21:49
>>>>>> "Mikael" == Mikael Abrahamsson<swmike@swm.pp.se> writes:
>
> Mikael> On Wed, 11 Oct 2017, Joseba Ibarra wrote:
>>> Now I see the RAID, however it can't be mounted. So, I'm not sure how to back up
>>> the data. Gparted shows the partition /dev/md0p1 with the used and free
>>> space.
>
> Mikael> Do you know what file system you had? Looks like next step is to try to
> Mikael> run fsck -n (read-only) on md0 and/or md0p1.
>
> Mikael> What does /etc/fstab contain regarding md0?
>
> Did you have the RAID5 setup as a PV inside a VG? What does:
>
> vgscan
>
> give you back when you run it as root?
>
> Mikael Abrahamsson <mailto:swmike@swm.pp.se>
> 11 October 2017, 16:01
> On Wed, 11 Oct 2017, Joseba Ibarra wrote:
>
>
> Do you know what file system you had? Looks like next step is to try
> to run fsck -n (read-only) on md0 and/or md0p1.
>
> What does /etc/fstab contain regarding md0?
>
> Joseba Ibarra <mailto:wajalotnet@gmail.com>
> 11 October 2017, 13:56
> Hi Adam
>
> root@grafico:/mnt# cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> [raid4] [raid10]
> md0 : inactive sdd1[3] sdb1[1] sdc1[2]
> 2929889280 blocks super 1.2
>
> unused devices: <none>
>
>
> root@grafico:/mnt# mdadm --manage /dev/md0 --stop
> mdadm: stopped /dev/md0
>
>
> root@grafico:/mnt# mdadm --assemble /dev/md0 /dev/sd[bcd]1
> mdadm: /dev/md0 assembled from 3 drives - not enough to start the
> array while not clean - consider --force.
>
>
>
> root@grafico:/mnt# cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> [raid4] [raid10]
> unused devices: <none>
>
> At this point I've followed the advice and used --force
>
> root@grafico:/mnt# mdadm --assemble --force /dev/md0 /dev/sd[bcd]1
> mdadm: Marking array /dev/md0 as 'clean'
> mdadm: /dev/md0 has been started with 3 drives (out of 4).
>
>
> root@grafico:/mnt# cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> [raid4] [raid10]
> md0 : active (auto-read-only) raid5 sdb1[1] sdd1[3] sdc1[2]
> 2929889280 blocks super 1.2 level 5, 512k chunk, algorithm 2
> [4/3] [_UUU]
> bitmap: 0/8 pages [0KB], 65536KB chunk
>
> unused devices: <none>
>
>
> Now I see the RAID, however it can't be mounted. So, I'm not sure how to
> back up the data. Gparted shows the partition /dev/md0p1 with the used
> and free space.
>
>
> If I try
>
> mount /dev/md0 /mnt
>
> again the output is
>
> mount: wrong file system, bad option, bad superblock in /dev/md0,
> missing codepage or helper program, or other error
>
> In some cases useful info is found in syslog - try dmesg | tail or
> something like that.
>
> I do dmesg | tail
>
> If I try root@grafico:/mnt# mount /dev/md0p1 /mnt
> mount: /dev/md0p1: can't read superblock
>
> And
>
>
> root@grafico:/mnt# dmesg | tail
> [ 3263.411724] VFS: Dirty inode writeback failed for block device
> md0p1 (err=-5).
> [ 3280.486813] md0: p1
> [ 3280.514024] md0: p1
> [ 3452.496811] UDF-fs: warning (device md0): udf_fill_super: No
> partition found (2)
> [ 3463.731052] JBD2: Invalid checksum recovering block 630194476 in log
> [ 3464.933960] Buffer I/O error on dev md0p1, logical block 630194474,
> lost async page write
> [ 3464.933971] Buffer I/O error on dev md0p1, logical block 630194475,
> lost async page write
> [ 3465.928066] JBD2: recovery failed
> [ 3465.928070] EXT4-fs (md0p1): error loading journal
> [ 3465.936852] VFS: Dirty inode writeback failed for block device
> md0p1 (err=-5).
>
>
> Thanks a lot for your time
>
>
> Joseba Ibarra
>
> Adam Goryachev <mailto:adam@websitemanagers.com.au>
> 11 October 2017, 13:29
> Hi Rudy,
>
> Please send the output of all of the following commands:
>
> cat /proc/mdstat
>
> mdadm --manage /dev/md0 --stop
>
> mdadm --assemble /dev/md0 /dev/sd[bcd]1
>
> cat /proc/mdstat
>
> mdadm --manage /dev/md0 --run
>
> mdadm --manage /dev/md0 --readwrite
>
> cat /proc/mdstat
>
>
> Basically the above is just looking at what the system has done
> currently, stopping/clearing that, and then trying to assemble it
> again, finally, we try to start it, even if it has one faulty disk.
>
> At this stage, chances look good for recovering all your data, though
> I would advise to get yourself a replacement disk for the dead one so
> that you can restore redundancy as soon as possible.
>
> Regards,Adam
>
>
>
>
>
> Joseba Ibarra <mailto:wajalotnet@gmail.com>
> 11 October 2017, 13:14
> Hi Rudy
>
> 1- Yes, with all 4 disk plugged in, system does not boot
> 2- Yes, with the broken disk unplugged, it boots
> 3 - Yes, raid does not assemble during boot. I assemble manually doing
>
> root@grafico:/home/jose# mdadm --assemble --scan /dev/md0
> root@grafico:/home/jose# mdadm --assemble --scan
> root@grafico:/home/jose# mdadm --assemble /dev/md0
>
> 4 -When I try to mount
>
> mount /dev/md0 /mnt
>
> mount: wrong file system, bad option, bad superblock in /dev/md0,
> missing codepage or helper program, or other error
>
> In some cases useful info is found in syslog - try dmesg | tail or
> something like that.
>
> I do dmesg | tail
>
> root@grafico:/mnt# dmesg | tail
> [ 705.021959] md: pers->run() failed ...
> [ 849.719439] EXT4-fs (md0): unable to read superblock
> [ 849.719564] EXT4-fs (md0): unable to read superblock
> [ 849.719589] EXT4-fs (md0): unable to read superblock
> [ 849.719616] UDF-fs: error (device md0): udf_read_tagged: read
> failed, block=256, location=256
> [ 849.719625] UDF-fs: error (device md0): udf_read_tagged: read
> failed, block=512, location=512
> [ 849.719638] UDF-fs: error (device md0): udf_read_tagged: read
> failed, block=256, location=256
> [ 849.719642] UDF-fs: error (device md0): udf_read_tagged: read
> failed, block=512, location=512
> [ 849.719643] UDF-fs: warning (device md0): udf_fill_super: No
> partition found (1)
> [ 849.719667] isofs_fill_super: bread failed, dev=md0, iso_blknum=16,
> block=32
>
> Thanks a lot for your help
* Re: Can't mount /dev/md0 Raid5
[not found] ` <59DE9313.50509@gmail.com>
@ 2017-10-11 21:55 ` Joseba Ibarra
0 siblings, 0 replies; 17+ messages in thread
From: Joseba Ibarra @ 2017-10-11 21:55 UTC (permalink / raw)
To: NeilBrown, John Stoffel, Mikael Abrahamsson, Adam Goryachev,
Rudy Zijlstra, list linux-raid
Hi all. Following Neil's latest advice I did fsck /dev/md0p1, which then
asked me to fix some inodes and blocks. I said yes, then I mounted it
and... EUREKA!!!!!
Just to say thank you to all of you. I'm really happy.
Thanks, thanks, thanks!!
I know all of this is about helping people, but I would love to buy you
a beer, so just send me your PayPal address by private email and you will
have your well-earned beer.
Cheers!!!
Joseba Ibarra
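The remaining step, as Adam noted earlier in the thread, is to restore redundancy with a replacement disk. A sketch of that follow-up (the device name /dev/sde is hypothetical; confirm it with lsblk first); the commands are printed rather than executed here, since they need root and the real hardware:

```shell
# Hypothetical replacement drive; verify the name before running anything.
new_disk=/dev/sde
plan="sfdisk -d /dev/sdb | sfdisk $new_disk   # copy the working partition layout
mdadm --manage /dev/md0 --add ${new_disk}1   # the array rebuilds onto the new member
cat /proc/mdstat                             # watch [_UUU] become [UUUU]"
echo "$plan"
```

Until the rebuild finishes, the array is still running without redundancy, so a backup first is the safer order of operations.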
* Re: Can't mount /dev/md0 Raid5
2017-09-22 9:13 Joseba Ibarra
@ 2017-09-27 22:38 ` NeilBrown
0 siblings, 0 replies; 17+ messages in thread
From: NeilBrown @ 2017-09-27 22:38 UTC (permalink / raw)
To: Joseba Ibarra, list linux-raid
On Fri, Sep 22 2017, Joseba Ibarra wrote:
> Hi. I have a RAID5 using 4 drives in an HP ProLiant G8 MicroServer
>
> I have Proxmox installed and the disks for the RAID are associated with a
> specific VM with Debian 9. It worked like a charm.
>
> But, some days ago I interrupted a process while giving some permissions (sudo
> chmod ....) to some folders inside the mounted /dev/md0. Then after
> restarting the VM the RAID didn't mount again. The RAID contains very
> important data for me, so I'm not sure what to do in order not to
> lose the data. So, I'm getting a bit panicked.
>
> I need some guide to rebuild the system without losing the content.
Thanks for providing lots of useful data.
>
> root@grafico:/# cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0]
> [raid1] [raid10]
> md0 : active (auto-read-only) raid5 sdb1[0] sde1[3] sdc1[1] sdd1[2]
> 2929889280 blocks super 1.2 level 5, 512k chunk, algorithm 2
> [4/4] [UUUU]
> bitmap: 0/8 pages [0KB], 65536KB chunk
It appears that md0 has been successfully assembled. What does it
contain? What do you expect it to contain? ext4? xfs? btrfs? LVM volume
??
What does "pvscan" show?
What does "fsck -n /dev/md0" show?
What happens when you "mount /dev/md0 /mnt" ?
NeilBrown
* Can't mount /dev/md0 Raid5
@ 2017-09-22 9:13 Joseba Ibarra
2017-09-27 22:38 ` NeilBrown
0 siblings, 1 reply; 17+ messages in thread
From: Joseba Ibarra @ 2017-09-22 9:13 UTC (permalink / raw)
To: list linux-raid
Hi. I have a RAID5 using 4 drives in an HP ProLiant G8 MicroServer
I have Proxmox installed and the disks for the RAID are associated with a
specific VM with Debian 9. It worked like a charm.
But, some days ago I interrupted a process while giving some permissions (sudo
chmod ....) to some folders inside the mounted /dev/md0. Then after
restarting the VM the RAID didn't mount again. The RAID contains very
important data for me, so I'm not sure what to do in order not to
lose the data. So, I'm getting a bit panicked.
I need some guide to rebuild the system without losing the content.
mdadm version is 3.4-4+b1
This is the result of the fdisk -l related with the RAID
root@grafico:/# fdisk -l
Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x087383f0
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 1953523711 1953521664 931.5G fd Linux raid autodetect
Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8b48be4a
Device Boot Start End Sectors Size Id Type
/dev/sdd1 2048 1953523711 1953521664 931.5G fd Linux raid autodetect
Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x87ea0d19
Device Boot Start End Sectors Size Id Type
/dev/sdc1 2048 1953523711 1953521664 931.5G fd Linux raid autodetect
Disk /dev/sde: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa69b0d7f
Device Boot Start End Sectors Size Id Type
/dev/sde1 2048 1953523711 1953521664 931.5G fd Linux raid autodetect
Disk /dev/md0: 2.7 TiB, 3000206622720 bytes, 5859778560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
Disklabel type: gpt
Disk identifier: A87C8BBF-5876-4FA6-83F6-46DAF1BDCF75
Device Start End Sectors Size Type
/dev/md0p1 2048 5859776511 5859774464 2.7T Linux filesystem
And here is some more information (as you can see, the /dev/sde drive has
some bad blocks):
root@grafico:/# mdadm --examine /dev/sdb
/dev/sdb:
MBR Magic : aa55
Partition[0] : 1953521664 sectors at 2048 (type fd)
root@grafico:/# mdadm --examine /dev/sdc
/dev/sdc:
MBR Magic : aa55
Partition[0] : 1953521664 sectors at 2048 (type fd)
root@grafico:/# mdadm --examine /dev/sdd
/dev/sdd:
MBR Magic : aa55
Partition[0] : 1953521664 sectors at 2048 (type fd)
root@grafico:/# mdadm --examine /dev/sde
/dev/sde:
MBR Magic : aa55
Partition[0] : 1953521664 sectors at 2048 (type fd)
root@grafico:/# mdadm --examine /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 0b44a3b8:83eafabc:644afc87:bdb5b1f3
Name : servidor:0
Creation Time : Sat Aug 5 23:10:50 2017
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 1953259520 (931.39 GiB 1000.07 GB)
Array Size : 2929889280 (2794.16 GiB 3000.21 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : ddc94c8e:aab339df:2233112a:5c938bab
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Sep 19 01:36:51 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 626e25ae - correct
Events : 3106
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
root@grafico:/# mdadm --examine /dev/sdc1
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 0b44a3b8:83eafabc:644afc87:bdb5b1f3
Name : servidor:0
Creation Time : Sat Aug 5 23:10:50 2017
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 1953259520 (931.39 GiB 1000.07 GB)
Array Size : 2929889280 (2794.16 GiB 3000.21 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 32271db0:38a5220c:c19968af:8fe1a3fb
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Sep 19 01:36:51 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 86971d3c - correct
Events : 3106
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
root@grafico:/# mdadm --examine /dev/sdd1
/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 0b44a3b8:83eafabc:644afc87:bdb5b1f3
Name : servidor:0
Creation Time : Sat Aug 5 23:10:50 2017
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 1953259520 (931.39 GiB 1000.07 GB)
Array Size : 2929889280 (2794.16 GiB 3000.21 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 9014cf8b:ea7e22e2:9274be9d:9aeee689
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Sep 19 01:36:51 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : b4e1cc29 - correct
Events : 3106
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
root@grafico:/# mdadm --examine /dev/sde1
/dev/sde1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x9
Array UUID : 0b44a3b8:83eafabc:644afc87:bdb5b1f3
Name : servidor:0
Creation Time : Sat Aug 5 23:10:50 2017
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 1953259520 (931.39 GiB 1000.07 GB)
Array Size : 2929889280 (2794.16 GiB 3000.21 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : f95affb4:138bca63:d10df091:1f20af37
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Sep 19 01:36:51 2017
Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
Checksum : 1b3e988 - correct
Events : 3106
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
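A point in the member superblocks above that matters for assembly: all four --examine outputs report Events : 3106 and State : clean, so the members agree with each other and a plain --assemble should succeed without --force. A small sketch of the check (the helper function is hypothetical, just illustrating the comparison one would do by eye):

```shell
# events_agree: succeed only when every supplied mdadm "Events"
# counter is identical; mismatched counters mean some members
# missed writes and a plain assemble may refuse them.
events_agree() {
  first=$1
  for e in "$@"; do
    [ "$e" = "$first" ] || return 1
  done
  return 0
}

# Counters taken from the --examine output above:
if events_agree 3106 3106 3106 3106; then
  echo "event counters match"
  # mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
fi
```

Had the counters diverged (as in the earlier degraded state, 3109 vs 3106), that is when forced assembly or re-adding a member would come into play.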
root@grafico:/# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Aug 5 23:10:50 2017
Raid Level : raid5
Array Size : 2929889280 (2794.16 GiB 3000.21 GB)
Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Sep 19 01:36:51 2017
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : servidor:0
UUID : 0b44a3b8:83eafabc:644afc87:bdb5b1f3
Events : 3106
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
3 8 65 3 active sync /dev/sde1
root@grafico:/# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active (auto-read-only) raid5 sdb1[0] sde1[3] sdc1[1] sdd1[2]
      2929889280 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/8 pages [0KB], 65536KB chunk
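The mdstat output shows the array as "active (auto-read-only)": md keeps a freshly assembled array read-only until something writes to it, and `mdadm --readwrite` clears the flag explicitly. A sketch of the remaining steps, guarded so it touches nothing unless the device node exists (the function name and the /mnt/raid mount point are hypothetical; adapt to taste):

```shell
# bring_online: clear the auto-read-only flag, check the filesystem
# non-destructively, then mount the partition inside the array.
# Does nothing unless the md block device is actually present.
bring_online() {
  dev=${1:-/dev/md0}
  if [ ! -b "$dev" ]; then
    echo "skipped: $dev not present"
    return 0
  fi
  mdadm --readwrite "$dev"       # drop the auto-read-only flag
  fsck.ext4 -n "${dev}p1"        # -n: report problems, change nothing
  mkdir -p /mnt/raid
  mount "${dev}p1" /mnt/raid
}

bring_online /dev/md0
```

Note again that the mount target is the partition ${dev}p1, matching the GPT layout fdisk reported inside md0.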
Joseba Ibarra
Thread overview: 17+ messages
2017-10-11 10:25 Can't mount /dev/md0 Raid5 Joseba Ibarra
2017-10-11 10:42 ` Rudy Zijlstra
2017-10-11 11:14 ` Joseba Ibarra
2017-10-11 11:29 ` Adam Goryachev
2017-10-11 11:56 ` Joseba Ibarra
2017-10-11 13:23 ` Adam Goryachev
2017-10-11 13:35 ` Joseba Ibarra
2017-10-11 19:13 ` Adam Goryachev
2017-10-11 19:46 ` Joseba Ibarra
2017-10-11 14:01 ` Mikael Abrahamsson
2017-10-11 17:27 ` Joseba Ibarra
2017-10-11 20:46 ` NeilBrown
[not found] ` <59DE891F.1@gmail.com>
[not found] ` <878tghwf6j.fsf@notabene.neil.brown.name>
[not found] ` <59DE9313.50509@gmail.com>
2017-10-11 21:55 ` Joseba Ibarra
2017-10-11 19:49 ` John Stoffel
2017-10-11 20:57 ` Joseba Ibarra
-- strict thread matches above, loose matches on Subject: below --
2017-09-22 9:13 Joseba Ibarra
2017-09-27 22:38 ` NeilBrown