* RAID 1 resync with both members good.
@ 2009-05-07  8:08 Simon Jackson
  2009-05-07 19:18 ` Richard Scobie
       [not found] ` <3D76E016F4A2A749A1EB1A3FD83E22BC02783863@uk-email.terastack.bluearc.com>
  0 siblings, 2 replies; 6+ messages in thread
From: Simon Jackson @ 2009-05-07  8:08 UTC (permalink / raw)
  To: linux-raid

Could someone help me understand why I am seeing the following
behaviour?

We use a pair of disks with 3 RAID1 partitions inside an appliance
system.

During testing we have seen some instances of the RAID devices resyncing
even though both members are marked as good in the output of
/proc/mdstat.

merc:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10] 
md2 : active raid1 sda3[0] sdb3[1]
7823552 blocks [2/2] [UU]
resync=DELAYED
md0 : active raid1 sda5[0] sdb5[1]
7823552 blocks [2/2] [UU]
md1 : active raid1 sda6[0] sdb6[1]
55841792 blocks [2/2] [UU]
[=================>...] resync = 88.8% (49606848/55841792) finish=8.4min
speed=12256K/sec
unused devices: <none>


Why would a resync occur if both members are marked as good?

What we usually see when a drive is failed, removed and re-added is that
the resync marks the new drive as down ("_") until the resync completes.
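
For reference, that fail / remove / re-add cycle is driven with commands roughly
like these (device names are just examples taken from the mdstat output below):

mdadm /dev/md1 --fail /dev/sdb6      # mark the member faulty
mdadm /dev/md1 --remove /dev/sdb6    # remove it from the array
mdadm /dev/md1 --add /dev/sdb6       # re-add it; mdstat shows [U_] until it rebuilds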

Firstly, why is a resync occurring when both drives are still good in the
RAID set? Is this expected behaviour or an indication of an underlying
problem?

Thanks for any assistance.

Using Debian 2.6.26-1 

Thanks Simon.  

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: RAID 1 resync with both members good.
  2009-05-07  8:08 RAID 1 resync with both members good Simon Jackson
@ 2009-05-07 19:18 ` Richard Scobie
  2009-05-07 20:19   ` Mario 'BitKoenig' Holbe
       [not found] ` <3D76E016F4A2A749A1EB1A3FD83E22BC02783863@uk-email.terastack.bluearc.com>
  1 sibling, 1 reply; 6+ messages in thread
From: Richard Scobie @ 2009-05-07 19:18 UTC (permalink / raw)
  To: Simon Jackson; +Cc: linux-raid

I could be way off the mark as I do not use Debian, but it is my 
understanding that they install a script which performs regular (weekly? 
monthly?) checks on md arrays.

These will show as resync events.
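
If it is that, the mdadm package schedules it from cron; on a typical Debian
install the entry looks roughly like this (path, schedule and options quoted
from memory, so they may differ on your release):

# /etc/cron.d/mdadm (approximate): run a redundancy check on all md arrays
57 0 * * 0 root [ -x /usr/share/mdadm/checkarray ] && /usr/share/mdadm/checkarray --cron --all --quiet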

Regards,

Richard

Simon Jackson wrote:
> Could someone help me understand why I am seeing the following
> behaviour?
> 
> We use a pair of disks with 3 RAID1 partitions inside an appliance
> system.
> 
> During testing we have seen some instances of the RAID devices resyncing
> even though both members are marked as good in the output of
> /proc/mdstat.
> 
> merc:~# cat /proc/mdstat 
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> [raid4] [raid10] 
> md2 : active raid1 sda3[0] sdb3[1]
> 7823552 blocks [2/2] [UU]
> resync=DELAYED
> md0 : active raid1 sda5[0] sdb5[1]
> 7823552 blocks [2/2] [UU]
> md1 : active raid1 sda6[0] sdb6[1]
> 55841792 blocks [2/2] [UU]
> [=================>...] resync = 88.8% (49606848/55841792) finish=8.4min
> speed=12256K/sec
> unused devices: <none>
> 
> 
> Why would a resync occur if both members are marked as good?
> 
> What we usually see when a drive is failed, removed and re-added is that
> the resync marks the new drive as down ("_") until the resync completes.
> 
> Firstly, why is a resync occurring when both drives are still good in the
> RAID set? Is this expected behaviour or an indication of an underlying
> problem?
> 
> Thanks for any assistance.
> 
> Using Debian 2.6.26-1 
> 
> Thanks Simon.  


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: RAID 1 resync with both members good.
  2009-05-07 19:18 ` Richard Scobie
@ 2009-05-07 20:19   ` Mario 'BitKoenig' Holbe
  0 siblings, 0 replies; 6+ messages in thread
From: Mario 'BitKoenig' Holbe @ 2009-05-07 20:19 UTC (permalink / raw)
  To: linux-raid

Richard Scobie <richard@sauce.co.nz> wrote:
> I could be way off the mark as I do not use Debian, but it is my 
> understanding that they install a script which performs regular (weekly? 
> monthly?) checks on md arrays.
> These will show as resync events.

Nope, since 2.6.19 they will show as check events:
      [====>................]  check = 20.9% ...
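
You can also see directly which kind of pass is running via sysfs, e.g.
(md1 here is just an example device):

cat /sys/block/md1/md/sync_action
# prints "check" for a scrub, "resync" after an unclean shutdown,
# "recover" while rebuilding onto a replaced member, "idle" otherwise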

>> Using Debian 2.6.26-1 


regards
   Mario
-- 
Oh my king ... A net luminary once wrote, roughly:
You have to read it the way I mean it, not the way I write it.
Of course I mean it the way you write it 8--)
                                    O.G. Schwenk - de.comm.chatsystems


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: RAID 1 resync with both members good.
       [not found] ` <3D76E016F4A2A749A1EB1A3FD83E22BC02783863@uk-email.terastack.bluearc.com>
@ 2009-05-08  0:32   ` NeilBrown
  2009-05-08  6:07     ` Luca Berra
  2009-05-08  7:15     ` Simon Jackson
  0 siblings, 2 replies; 6+ messages in thread
From: NeilBrown @ 2009-05-08  0:32 UTC (permalink / raw)
  To: Simon Jackson; +Cc: linux-raid

On Thu, May 7, 2009 6:08 pm, Simon Jackson wrote:
> Could someone help me understand why I am seeing the following
> behaviour?
>
> We use a pair of disks with 3 RAID1 partitions inside an appliance
> system.
>
> During testing we have seen some instances of the RAID devices resyncing
> even though both members are marked as good in the output of
> /proc/mdstat.

That is exactly as expected.
If one of the devices had failed and was being replaced by a spare, you
would see "recovery" not "resync".

Resync happens after an unclean shutdown.
In that case, both drives have good data, but they might not be the same.
The resync makes sure that all copies have exactly the same data.

So there must have been an unclean shutdown before the most recent
restart.
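
If those post-crash resyncs are a nuisance, an internal write-intent bitmap
keeps them short, since only regions written around the time of the crash are
re-synced; a sketch (the array name is just an example):

mdadm --grow --bitmap=internal /dev/md1   # add a write-intent bitmap to an existing array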

NeilBrown


>
> merc:~# cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> [raid4] [raid10]
> md2 : active raid1 sda3[0] sdb3[1]
> 7823552 blocks [2/2] [UU]
> resync=DELAYED
> md0 : active raid1 sda5[0] sdb5[1]
> 7823552 blocks [2/2] [UU]
> md1 : active raid1 sda6[0] sdb6[1]
> 55841792 blocks [2/2] [UU]
> [=================>...] resync = 88.8% (49606848/55841792) finish=8.4min
> speed=12256K/sec
> unused devices: <none>
>
>
> Why would a resync occur if both members are marked as good?
>
> What we usually see when a drive is failed, removed and re-added is that
> the resync marks the new drive as down ("_") until the resync completes.
>
> Firstly, why is a resync occurring when both drives are still good in the
> RAID set? Is this expected behaviour or an indication of an underlying
> problem?
>
> Thanks for any assistance.
>
> Using Debian 2.6.26-1
>
> Thanks Simon.
>


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: RAID 1 resync with both members good.
  2009-05-08  0:32   ` NeilBrown
@ 2009-05-08  6:07     ` Luca Berra
  2009-05-08  7:15     ` Simon Jackson
  1 sibling, 0 replies; 6+ messages in thread
From: Luca Berra @ 2009-05-08  6:07 UTC (permalink / raw)
  To: linux-raid

On Fri, May 08, 2009 at 10:32:01AM +1000, NeilBrown wrote:
>On Thu, May 7, 2009 6:08 pm, Simon Jackson wrote:
>> Could someone help me understand why I am seeing the following
>> behaviour?
>>
>> We use a pair of disks with 3 RAID1 partitions inside an appliance
>> system.
>>
>> During testing we have seen some instances of the RAID devices resyncing
>> even though both members are marked as good in the output of
>> /proc/mdstat.
>
>That is exactly as expected.
>If one of the devices had failed and was being replaced by a spare, you
>would see "recovery" not "resync".
>
>Resync happens after an unclean shutdown.
>In that case, both drives have good data, but they might not be the same.
>The resync makes sure that all copies have exactly the same data.
>
>So there must have been an unclean shutdown before the most recent
>restart.

or a stupid udev rule that gets incremental assembly wrong
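
For reference, the incremental-assembly hook is usually a udev rule along
these lines (a simplified sketch; the exact rules file and path vary by
distribution):

# e.g. /lib/udev/rules.d/64-md-raid.rules (sketch)
SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
    RUN+="/sbin/mdadm --incremental $env{DEVNAME}"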

L.

-- 
Luca Berra -- bluca@comedia.it
         Communication Media & Services S.r.l.
  /"\
  \ /     ASCII RIBBON CAMPAIGN
   X        AGAINST HTML MAIL
  / \

^ permalink raw reply	[flat|nested] 6+ messages in thread

* RE: RAID 1 resync with both members good.
  2009-05-08  0:32   ` NeilBrown
  2009-05-08  6:07     ` Luca Berra
@ 2009-05-08  7:15     ` Simon Jackson
  1 sibling, 0 replies; 6+ messages in thread
From: Simon Jackson @ 2009-05-08  7:15 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

Thanks Neil, that clarifies the situation.

-----Original Message-----
From: NeilBrown [mailto:neilb@suse.de] 
Sent: 08 May 2009 01:32
To: Simon Jackson
Cc: linux-raid@vger.kernel.org
Subject: Re: RAID 1 resync with both members good.

On Thu, May 7, 2009 6:08 pm, Simon Jackson wrote:
> Could someone help me understand why I am seeing the following
> behaviour?
>
> We use a pair of disks with 3 RAID1 partitions inside an appliance
> system.
>
> During testing we have seen some instances of the RAID devices resyncing
> even though both members are marked as good in the output of
> /proc/mdstat.

That is exactly as expected.
If one of the devices had failed and was being replaced by a spare, you
would see "recovery" not "resync".

Resync happens after an unclean shutdown.
In that case, both drives have good data, but they might not be the same.
The resync makes sure that all copies have exactly the same data.

So there must have been an unclean shutdown before the most recent
restart.

NeilBrown


>
> merc:~# cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> [raid4] [raid10]
> md2 : active raid1 sda3[0] sdb3[1]
> 7823552 blocks [2/2] [UU]
> resync=DELAYED
> md0 : active raid1 sda5[0] sdb5[1]
> 7823552 blocks [2/2] [UU]
> md1 : active raid1 sda6[0] sdb6[1]
> 55841792 blocks [2/2] [UU]
> [=================>...] resync = 88.8% (49606848/55841792) finish=8.4min
> speed=12256K/sec
> unused devices: <none>
>
>
> Why would a resync occur if both members are marked as good?
>
> What we usually see when a drive is failed, removed and re-added is that
> the resync marks the new drive as down ("_") until the resync completes.
>
> Firstly, why is a resync occurring when both drives are still good in the
> RAID set? Is this expected behaviour or an indication of an underlying
> problem?
>
> Thanks for any assistance.
>
> Using Debian 2.6.26-1
>
> Thanks Simon.
>


^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2009-05-08  7:15 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-05-07  8:08 RAID 1 resync with both members good Simon Jackson
2009-05-07 19:18 ` Richard Scobie
2009-05-07 20:19   ` Mario 'BitKoenig' Holbe
     [not found] ` <3D76E016F4A2A749A1EB1A3FD83E22BC02783863@uk-email.terastack.bluearc.com>
2009-05-08  0:32   ` NeilBrown
2009-05-08  6:07     ` Luca Berra
2009-05-08  7:15     ` Simon Jackson
