* data corruption after rebuild
From: d tbsky @ 2015-05-14  4:54 UTC
  To: linux-raid

Hi:
     I think I did something wrong and caused mdadm data corruption,
but I am curious which step brought me down. I hope someone can tell
me what happened.

     I have two hosts forming a VM HA cluster; each one has a 2TB * 2
mdadm RAID 1. I needed to test something but lacked spare disks, so I
pulled one disk from each host. After testing, I put the disks back
and let mdadm rebuild. Host B took several hours to rebuild and looks
fine. Host A took only 5 minutes to rebuild, and after the rebuild
completed, the virtual machines sitting on top of the RAID crashed one
by one. Since I know I wrote a lot of data during the testing, host A
should not have taken only 5 minutes to recover. I must have done
something wrong that confused mdadm. Below is what I did:

1. I tested a 3-disk mdadm RAID 5: sda (hostA sdb), sdb (hostB sdb), sdc
(new disk). I wrote about 5G of data for system restore testing.

2. Then I tested a 4-disk mdadm RAID 10: sda (hostA sdb), sdb (hostB
sdb), sdc (new disk), sdd (new disk). I wrote about 5G of data for
system restore testing.

3. Then I tested a 4-disk mdadm RAID 10 again, but wrote about 120G of
data for system restore testing.

Then I put the disks back into hostA and hostB (hot plug; hostA and
hostB were still running). At hostA I issued the commands below:
   mdadm --stop /dev/md126; mdadm --stop /dev/md127  (the plugged-in disk
still had RAID data on it and udev seems to have found and assembled it)
   dd if=/dev/sda of=/dev/sdb bs=1k count=1000  (to recreate the MBR
partition table)
   partprobe /dev/sdb
   mdadm --add /dev/md0 /dev/sdb1  (this is a small 500MB RAID for /boot)
   mdadm --add /dev/md1 /dev/sdb2  (this is the roughly 2TB RAID)
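
   For illustration, a minimal check sketch before re-adding such a disk
(the device names follow the ones above and are assumptions, not output
from these hosts):

   # what metadata is still on the hot-plugged disk, and how stale is it?
   mdadm --examine /dev/sdb2     # shows Array UUID, Events, Update Time
   mdadm --detail /dev/md1       # compare against the live array's Events and UUID
   # after --add, watch how much data actually gets resynced
   cat /proc/mdstat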

   mdadm seems to have been confused: it took only 5 minutes to recover.
I did the same at host B and it took several hours to recover.

   So did I do something wrong that confused the mdadm superblock? Should
I use "mdadm --zero-superblock /dev/sdb2" before I add it back to mdadm?

  Thanks a lot for any advice!

Regards,
tbskyd


* Re: data corruption after rebuild
From: Adam Goryachev @ 2015-05-14  6:00 UTC
  To: d tbsky, linux-raid

On 14/05/15 14:54, d tbsky wrote:
> Hi:
>      I think I did something wrong and caused mdadm data corruption,
> but I am curious which step brought me down. I hope someone can tell
> me what happened.
>
>      I have two hosts forming a VM HA cluster; each one has a 2TB * 2
> mdadm RAID 1. I needed to test something but lacked spare disks, so I
> pulled one disk from each host. After testing, I put the disks back
> and let mdadm rebuild. Host B took several hours to rebuild and looks
> fine. Host A took only 5 minutes to rebuild, and after the rebuild
> completed, the virtual machines sitting on top of the RAID crashed one
> by one. Since I know I wrote a lot of data during the testing, host A
> should not have taken only 5 minutes to recover. I must have done
> something wrong that confused mdadm. Below is what I did:
>
> 1. I tested a 3-disk mdadm RAID 5: sda (hostA sdb), sdb (hostB sdb), sdc
> (new disk). I wrote about 5G of data for system restore testing.
>
> 2. Then I tested a 4-disk mdadm RAID 10: sda (hostA sdb), sdb (hostB
> sdb), sdc (new disk), sdd (new disk). I wrote about 5G of data for
> system restore testing.
>
> 3. Then I tested a 4-disk mdadm RAID 10 again, but wrote about 120G of
> data for system restore testing.
>
> Then I put the disks back into hostA and hostB (hot plug; hostA and
> hostB were still running). At hostA I issued the commands below:
>    mdadm --stop /dev/md126; mdadm --stop /dev/md127  (the plugged-in disk
> still had RAID data on it and udev seems to have found and assembled it)
>    dd if=/dev/sda of=/dev/sdb bs=1k count=1000  (to recreate the MBR
> partition table)

My guess is this is where you went wrong.
Instead, use a tool that does this intelligently; a quick Google search shows this:
sgdisk -R=/dev/sdb /dev/sda
sgdisk -G /dev/sdb
Taken from:
http://unix.stackexchange.com/questions/12986/how-to-copy-the-partition-layout-of-a-whole-disk-using-standard-tools
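
A minimal alternative sketch: since these disks carry an MBR label, plain
sfdisk can copy just the partition table without touching any data blocks
(sgdisk is primarily a GPT tool; the device names here are the assumed
ones from above):

sfdisk -d /dev/sda | sfdisk /dev/sdb    # dump sda's MBR table, write it to sdb
partprobe /dev/sdb                      # tell the kernel about the new table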

My guess is that you copied some data as well as the partition table,
that you are using an mdadm metadata (superblock) version that is stored
at the beginning of the disk, and possibly that you are using a bitmap
on one server and not the other.

If you could show some details from each of the RAID arrays, then
more people can make more informed comments.

My suggestion is to create *just* the partitions, and then add them to
the array.
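
A minimal sketch of the checks behind that guess, run on both hosts
(device names are the ones used earlier and are assumptions):

# metadata 1.1/1.2 sits near the start of the member device
mdadm --examine /dev/sdb2 | grep -E 'Version|Internal Bitmap'
# does the running array use a write-intent bitmap?
mdadm --detail /dev/md1 | grep -i bitmap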

Hope that helps.

Regards,
Adam

-- 
Adam Goryachev
Website Managers
Ph: +61 2 8304 0000                            adam@websitemanagers.com.au
Fax: +61 2 8304 0001                            www.websitemanagers.com.au



* Re: data corruption after rebuild
From: d tbsky @ 2015-05-14  6:36 UTC
  To: Adam Goryachev; +Cc: linux-raid

2015-05-14 14:00 GMT+08:00 Adam Goryachev <adam@websitemanagers.com.au>:
> On 14/05/15 14:54, d tbsky wrote:
> My guess is this is where you went wrong.
> Instead, use a tool that does this intelligently; a quick Google search shows this:
> sgdisk -R=/dev/sdb /dev/sda
> sgdisk -G /dev/sdb
> Taken from:
> http://unix.stackexchange.com/questions/12986/how-to-copy-the-partition-layout-of-a-whole-disk-using-standard-tools
>
> My guess is that you copied some data as well as the partition table,
> that you are using an mdadm metadata (superblock) version that is stored
> at the beginning of the disk, and possibly that you are using a bitmap
> on one server and not the other.

    This step did look suspicious, but I have used it many times on
blank hard disks and it seemed to work.

> If you could show some details from each of the RAID arrays, then
> more people can make more informed comments.

 Sure. Below is the RAID structure of hostA & hostB:
/dev/md1:
        Version : 1.1
  Creation Time : Wed Feb 19 09:34:03 2014
     Raid Level : raid1
     Array Size : 1953177408 (1862.70 GiB 2000.05 GB)
  Used Dev Size : 1953177408 (1862.70 GiB 2000.05 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal


  And below is the RAID structure of my first test:
/dev/md1:
        Version : 1.1
  Creation Time : Mon Oct 20 18:51:47 2014
     Raid Level : raid5
     Array Size : 3906354176 (3725.39 GiB 4000.11 GB)
  Used Dev Size : 1953177088 (1862.69 GiB 2000.05 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu May 14 14:32:25 2015
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K



    And the RAID structure of my second and third tests:
/dev/md1:
        Version : 1.1
  Creation Time : Sun Sep 28 19:20:59 2014
     Raid Level : raid10
     Array Size : 1953033216 (1862.56 GiB 1999.91 GB)
  Used Dev Size : 976516608 (931.28 GiB 999.95 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu May 14 14:33:14 2015
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K
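
A hedged side note: all of these arrays report "Intent Bitmap : Internal".
With a write-intent bitmap, mdadm resyncs only the regions marked dirty
since the member went missing, which can legitimately shrink a rebuild to
minutes, provided the returning member's superblock still matches the
array. A minimal way to inspect it (device name as above, an assumption):

# dump the write-intent bitmap recorded on a member device
mdadm --examine-bitmap /dev/sdb2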
