From: tyranastrasz@gmx.de
To: antlists <antlists@youngman.org.uk>, linux-raid@vger.kernel.org
Subject: Re: Restoring a raid0 for data rescue
Date: Sun, 2 Aug 2020 22:38:02 +0200
Message-ID: <057b0c58-3876-0f03-af33-86f3b266a18a@gmx.de>
In-Reply-To: <d8f9d16d-b6a2-77ba-bff2-a56c62dac5df@gmx.de>

On 02.08.20 21:24, tyranastrasz@gmx.de wrote:
> On 02.08.20 21:01, antlists wrote:
>> On 02/08/2020 19:09, tyranastrasz@gmx.de wrote:
>>> Hello
>>>
>>> I have a problem with my raid0.
>>> The problematic disks (2x 1TB WD Red) were in use in my server; they
>>> have since been replaced with 3x 4TB Seagate drives in a raid5.
>>>
>>> Before I turned them off, I made a backup to an external drive (a
>>> normal HDD via USB) with rsync -avx /source /mnt/external/
>>>
>>> Whatever happened during the night, the backup is incomplete and
>>> files are missing.
>>> So I put the old raid disks back into the server and wanted to start
>>> the array, but the Intel RAID controller said that one of the disks is
>>> not a member of a raid.
>>>
>>> My server mainboard is a Gigabyte MX11-PC0.
>>>
>>> Well, I collected mdadm --examine, smartctl, mdstat and lsdrv logs
>>> and attached them to this mail.
>>>
>> Ow...
>>
>> Is this still the same Linux on the server? Because mdstat says no raid
>> personalities are installed. Either Linux has changed or you've got
>> hardware raid, in which case you'll need to read up on the motherboard
>> manual.
>>
>> I'm not sure of the exact module name, but try loading the raid
>> personality modules (something like "modprobe raid0"). If that loads
>> the raid0 driver, cat /proc/mdstat should list raid0 as a personality.
>> Once that's there, mdadm may be able to start the array.
>>
>> Until you've got a working raid driver in the kernel, I certainly can't
>> help any further, but hopefully reading the mobo manual will help. The
>> other thing to try is an up-to-date rescue disk, to see whether that
>> can read the array.
>>
>> Cheers,
>> Wol
>
> No, I have the disks in my PC now.
> The server can't boot from the disks because the Intel storage
> controller says the raid has a failure: one of the disks has no raid
> information. But when I read them both yesterday they both had it; now
> (see the last attachment) one of them has none.
> It makes no sense... I need the files.
>
> The Intel tool only offers "yeah, make a new raid, with data loss";
> that's not an option.
>
> Nara
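
(For the record, checking and loading the raid personality the way Wol
suggests would look roughly like this, assuming the standard in-kernel
module name:

  modprobe raid0            # load the raid0 personality (module name assumed)
  cat /proc/mdstat          # "raid0" should now show up under "Personalities"
  mdadm --assemble --scan   # then let mdadm try to find and start the array

Anyway, md itself works fine on my PC, so I went on:)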


I tried something that was suggested here:
https://askubuntu.com/questions/69086/mdadm-superblock-recovery

root@Nibler:~# mdadm --create /dev/md0 -v -f -l 0 -c 128 -n 2 /dev/sdd
/dev/sdb
mdadm: /dev/sdd appears to be part of a raid array:
        level=container devices=0 ctime=Thu Jan  1 01:00:00 1970
mdadm: partition table exists on /dev/sdb
mdadm: partition table exists on /dev/sdb but will be lost or
        meaningless after creating array
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
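
(If I end up having to run --create again with different parameters, I
would rather do it on copy-on-write overlays of the disks next time, so
the real drives are never written to again. A rough sketch of that,
assuming /tmp/overlay has enough free space to absorb the writes:

  mkdir -p /tmp/overlay
  for d in sdb sdd; do
      truncate -s 8G /tmp/overlay/$d.cow        # sparse COW file, size is a guess
      loop=$(losetup -f --show /tmp/overlay/$d.cow)
      size=$(blockdev --getsz /dev/$d)          # size in 512-byte sectors
      echo "0 $size snapshot /dev/$d $loop P 8" | dmsetup create cow-$d
  done
  # experiments then use /dev/mapper/cow-sdb and /dev/mapper/cow-sdd
  # instead of /dev/sdb and /dev/sdd; the originals are only read

For now, though, this is what the disks look like after the create above:)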


root@Nibler:~# mdadm --examine /dev/sdb
/dev/sdb:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x0
      Array UUID : db01d7d9:e46ce30a:792e1d3a:31618e71
            Name : Nibler:0  (local to host Nibler)
   Creation Time : Sun Aug  2 22:13:10 2020
      Raid Level : raid0
    Raid Devices : 2

  Avail Dev Size : 1953260976 (931.39 GiB 1000.07 GB)
     Data Offset : 264192 sectors
    Super Offset : 8 sectors
    Unused Space : before=264112 sectors, after=0 sectors
           State : clean
     Device UUID : 0ea95638:7e83e76b:848ff6d2:e264029b

     Update Time : Sun Aug  2 22:13:10 2020
   Bad Block Log : 512 entries available at offset 8 sectors
        Checksum : 1b2cf600 - correct
          Events : 0

      Chunk Size : 128K

    Device Role : Active device 1
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
root@Nibler:~# mdadm --examine /dev/sdd
/dev/sdd:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x0
      Array UUID : db01d7d9:e46ce30a:792e1d3a:31618e71
            Name : Nibler:0  (local to host Nibler)
   Creation Time : Sun Aug  2 22:13:10 2020
      Raid Level : raid0
    Raid Devices : 2

  Avail Dev Size : 1953260976 (931.39 GiB 1000.07 GB)
     Data Offset : 264192 sectors
    Super Offset : 8 sectors
    Unused Space : before=264112 sectors, after=0 sectors
           State : clean
     Device UUID : cef9d210:a794ef1e:6e37ee0e:34e10c52

     Update Time : Sun Aug  2 22:13:10 2020
   Bad Block Log : 512 entries available at offset 8 sectors
        Checksum : 99b37c22 - correct
          Events : 0

      Chunk Size : 128K

    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
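
One thing that stands out in the --examine output: the new 1.2
superblocks put the data 264192 sectors (129 MiB) into each disk. If the
old Intel firmware raid kept its data right at the start of the disks
(its own metadata normally lives at the end), then the re-created md0
exposes everything shifted by that amount, and the beginning of the old
volume is not reachable through md0 at all. A harmless, read-only peek at
what md0 now presents at its start (just a sketch):

  dd if=/dev/md0 bs=512 count=1 2>/dev/null | hexdump -C
  # a partition table would end in "55 aa" at bytes 510-511; an NTFS boot
  # sector would show "NTFS" right near the start of the dump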


But I still cannot access /dev/md0, /dev/md0p1 or /dev/md0p2:

root@Nibler:~# mount -o ro /dev/md0p1 /mnt/raid
NTFS signature is missing.
Failed to mount '/dev/md0p1': Invalid argument
The device '/dev/md0p1' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
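
What I can still do without any risk is look at everything read-only:
check whether the kernel sees a partition table on md0 at all, and search
the raw disks for an NTFS boot-sector signature to find out where the
data really starts. A sketch (the 2 GiB search range is an arbitrary
guess):

  fdisk -l /dev/md0                       # any partition table visible?
  blkid /dev/md0 /dev/md0p1 /dev/md0p2    # any filesystem signatures?

  # look for "NTFS    " (the OEM id at byte 3 of an NTFS boot sector)
  # in the first 2 GiB of each disk; purely read-only
  for dev in /dev/sdb /dev/sdd; do
      echo "== $dev =="
      dd if=$dev bs=1M count=2048 2>/dev/null \
        | LC_ALL=C grep -aobm 3 'NTFS    ' \
        | awk -F: '{ printf "  match at byte %d (sector %d)\n", $1-3, ($1-3)/512 }'
  done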


What can I do now?
Even if it costs money...

Nara


Thread overview: 9+ messages
2020-08-02 18:09 ` tyranastrasz
2020-08-02 19:01   ` antlists
2020-08-02 19:24     ` tyranastrasz
2020-08-02 20:38       ` tyranastrasz [this message]
2020-08-02 20:50         ` antlists
2020-08-03  0:46           ` tyranastrasz
2020-08-03  2:55             ` Phil Turmel
2020-08-03  4:37         ` NeilBrown
2020-08-04  0:51           ` tyranastrasz
