* [PATCH 0 of 4] dm-raid: various bug fixes
@ 2012-04-18  2:30 Jonathan Brassow
  2012-04-18  2:36 ` [PATCH 1 of 5] DM RAID: Set recovery flags on resume Jonathan Brassow
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Jonathan Brassow @ 2012-04-18  2:30 UTC (permalink / raw)
  To: dm-devel, linux-raid; +Cc: agk, neilb

Neil,

I've cleaned up the first two patches I sent earlier:
	[1 of 5] dm-raid-set-recovery-flags-on-resume.patch
	[2 of 5] dm-raid-record-and-handle-missing-devices.patch
and added a couple more:
	[3 of 5] dm-raid-need-safe-version-of-rdev_for_each.patch
	[4 of 5] dm-raid-use-md_error-in-place-of-faulty-bit.patch
	[5 of 5] md-raid1-further-conditionalize-fullsync.patch

Patch [5 of 5] I think needs some work.  It fixes the problem I'm seeing
and seems to go along with similar logic used for RAID5 in commit
d6b212f4b19da5301e6b6eca562e5c7a2a6e8c8d.  It also seems like a workable
solution based on the code surrounding commit
d30519fc59c5cc2f7772fa67b16b1a2426d36c95.  Can you let me know if I'm
stretching the usage of 'saved_raid_disk' too far?

Thanks,
 brassow



* [PATCH 1 of 5] DM RAID: Set recovery flags on resume
  2012-04-18  2:30 [PATCH 0 of 4] dm-raid: various bug fixes Jonathan Brassow
@ 2012-04-18  2:36 ` Jonathan Brassow
  2012-04-18  2:37 ` [PATCH 2 of 5] DM RAID: Record and handle missing devices Jonathan Brassow
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Jonathan Brassow @ 2012-04-18  2:36 UTC (permalink / raw)
  To: dm-devel, linux-raid; +Cc: neilb, agk

Properly initialize MD recovery flags when resuming device-mapper devices.

When a device-mapper device is suspended, all I/O must stop.  This is done by
calling 'md_stop_writes' and 'mddev_suspend'.  These calls in turn manipulate
the recovery flags - including setting 'MD_RECOVERY_FROZEN'.  The DM device
may have been suspended while recovery was not yet complete, so the process
needs to pick up where it left off.  Since 'mddev_resume' does not clear
'MD_RECOVERY_FROZEN' or set 'MD_RECOVERY_NEEDED', we must do it ourselves.
'MD_RECOVERY_NEEDED' can safely be set in 'mddev_resume', but 'MD_RECOVERY_FROZEN'
must be cleared outside of 'mddev_resume' because of how MD handles RAID reshaping.
(For example, a user can deliberately delay a RAID5->RAID6 reshape by setting
'MD_RECOVERY_FROZEN'; clearing it inside 'mddev_resume' would override that choice.)

Because 'mddev_resume' already unconditionally calls 'md_wakeup_thread(mddev->thread)',
there is no need for 'raid_resume' to make that call itself once it calls 'mddev_resume'.

Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>

Index: linux-upstream/drivers/md/dm-raid.c
===================================================================
--- linux-upstream.orig/drivers/md/dm-raid.c
+++ linux-upstream/drivers/md/dm-raid.c
@@ -1255,9 +1255,9 @@ static void raid_resume(struct dm_target
 	if (!rs->bitmap_loaded) {
 		bitmap_load(&rs->md);
 		rs->bitmap_loaded = 1;
-	} else
-		md_wakeup_thread(rs->md.thread);
+	}
 
+	clear_bit(MD_RECOVERY_FROZEN, &rs->md.recovery);
 	mddev_resume(&rs->md);
 }
 
Index: linux-upstream/drivers/md/md.c
===================================================================
--- linux-upstream.orig/drivers/md/md.c
+++ linux-upstream/drivers/md/md.c
@@ -400,6 +400,7 @@ void mddev_resume(struct mddev *mddev)
 	wake_up(&mddev->sb_wait);
 	mddev->pers->quiesce(mddev, 0);
 
+	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 	md_wakeup_thread(mddev->thread);
 	md_wakeup_thread(mddev->sync_thread); /* possibly kick off a reshape */
 }
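
For context, here is roughly how the resume path reads with both hunks applied (a
sketch against the dm-raid/md code of this era, not a verbatim excerpt):

	/* drivers/md/dm-raid.c: raid_resume() after this patch */
	static void raid_resume(struct dm_target *ti)
	{
		struct raid_set *rs = ti->private;

		if (!rs->bitmap_loaded) {
			bitmap_load(&rs->md);
			rs->bitmap_loaded = 1;
		}

		/*
		 * Undo the freeze imposed by md_stop_writes() at suspend time.
		 * This stays outside mddev_resume() so that a deliberate,
		 * user-requested freeze is not silently overridden.
		 */
		clear_bit(MD_RECOVERY_FROZEN, &rs->md.recovery);
		mddev_resume(&rs->md);
	}

	/* drivers/md/md.c: mddev_resume() after this patch */
	void mddev_resume(struct mddev *mddev)
	{
		mddev->suspended = 0;
		wake_up(&mddev->sb_wait);
		mddev->pers->quiesce(mddev, 0);

		/* resuming always warrants a recovery check */
		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
		md_wakeup_thread(mddev->thread);
		md_wakeup_thread(mddev->sync_thread); /* possibly kick off a reshape */
	}

(The user-requested freeze referred to above is, on a native MD array, what writing
'frozen' to the sync_action sysfs file produces.)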




* [PATCH 2 of 5] DM RAID: Record and handle missing devices
  2012-04-18  2:30 [PATCH 0 of 4] dm-raid: various bug fixes Jonathan Brassow
  2012-04-18  2:36 ` [PATCH 1 of 5] DM RAID: Set recovery flags on resume Jonathan Brassow
@ 2012-04-18  2:37 ` Jonathan Brassow
  2012-04-18  2:38 ` [PATCH 3 of 5] DM RAID: Use safe version of rdev_for_each Jonathan Brassow
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Jonathan Brassow @ 2012-04-18  2:37 UTC (permalink / raw)
  To: dm-devel, linux-raid; +Cc: neilb, agk

Missing dm-raid devices should be recorded in the superblock.

When specifying the devices that compose a DM RAID array, it is possible to denote
failed or missing devices with '-'s.  When this occurs, we must record it in the
superblock.  We do this by checking whether an array position's data device is
missing and, if so, forcing MD to rewrite the superblock by setting 'MD_CHANGE_DEVS'
in 'raid_resume'.  If the superblock is not rewritten by the resume function, it is
possible for a stale superblock to be written by an outgoing, inactive table
(during 'raid_dtr').

Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>

Index: linux-upstream/drivers/md/dm-raid.c
===================================================================
--- linux-upstream.orig/drivers/md/dm-raid.c
+++ linux-upstream/drivers/md/dm-raid.c
@@ -617,16 +617,18 @@ static int read_disk_sb(struct md_rdev *
 
 static void super_sync(struct mddev *mddev, struct md_rdev *rdev)
 {
-	struct md_rdev *r;
+	int i;
 	uint64_t failed_devices;
 	struct dm_raid_superblock *sb;
+	struct raid_set *rs = container_of(mddev, struct raid_set, md);
 
 	sb = page_address(rdev->sb_page);
 	failed_devices = le64_to_cpu(sb->failed_devices);
 
-	rdev_for_each(r, mddev)
-		if ((r->raid_disk >= 0) && test_bit(Faulty, &r->flags))
-			failed_devices |= (1ULL << r->raid_disk);
+	for (i = 0; i < mddev->raid_disks; i++)
+		if (!rs->dev[i].data_dev ||
+		    test_bit(Faulty, &(rs->dev[i].rdev.flags)))
+			failed_devices |= (1ULL << i);
 
 	memset(sb, 0, sizeof(*sb));
 
@@ -1252,6 +1254,7 @@ static void raid_resume(struct dm_target
 {
 	struct raid_set *rs = ti->private;
 
+	set_bit(MD_CHANGE_DEVS, &rs->md.flags);
 	if (!rs->bitmap_loaded) {
 		bitmap_load(&rs->md);
 		rs->bitmap_loaded = 1;
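
As a concrete illustration of the '-' notation this patch is about: in a dm-raid
table each array position is given as a <metadata_dev> <data_dev> pair, and a '-'
stands in for a device that is absent.  A rough, hypothetical raid1 table with its
second leg missing might look like this (parameter values are illustrative only;
see Documentation/device-mapper/dm-raid.txt for the authoritative syntax):

	# <start> <len> raid raid1 <#params> <params> <#devs> <meta,data pairs>
	# leg 0: metadata on 254:3, data on 254:4; leg 1: missing ("- -")
	0 1960893648 raid raid1 3 0 region_size 1024 2 254:3 254:4 - -

With this patch, the missing position ends up in the superblock's failed_devices
bitmask the next time raid_resume() forces a superblock write via MD_CHANGE_DEVS,
instead of being lost when a stale superblock is written later by the outgoing table.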




* [PATCH 3 of 5] DM RAID: Use safe version of rdev_for_each
  2012-04-18  2:30 [PATCH 0 of 4] dm-raid: various bug fixes Jonathan Brassow
  2012-04-18  2:36 ` [PATCH 1 of 5] DM RAID: Set recovery flags on resume Jonathan Brassow
  2012-04-18  2:37 ` [PATCH 2 of 5] DM RAID: Record and handle missing devices Jonathan Brassow
@ 2012-04-18  2:38 ` Jonathan Brassow
  2012-04-18  2:41 ` [PATCH 4 of 5] DM RAID: Use md_error() in place of simply setting Faulty bit Jonathan Brassow
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Jonathan Brassow @ 2012-04-18  2:38 UTC (permalink / raw)
  To: dm-devel, linux-raid; +Cc: neilb, agk

Fix segfault caused by using rdev_for_each instead of rdev_for_each_safe.

Commit dafb20fa34320a472deb7442f25a0c086e0feb33 mistakenly replaced a safe
iterator with an unsafe one when making some macro changes.
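
For reference, the two iterators differ only in whether they tolerate removal of the
current entry while walking the list.  Roughly as defined in drivers/md/md.h at the
time (shown as a sketch, not a verbatim excerpt):

	/* walks mddev->disks; misbehaves if the current rdev is unlinked mid-loop */
	#define rdev_for_each(rdev, mddev) \
		list_for_each_entry(rdev, &((mddev)->disks), same_set)

	/* caches the next entry in 'tmp', so the current rdev may safely be removed */
	#define rdev_for_each_safe(rdev, tmp, mddev) \
		list_for_each_entry_safe(rdev, tmp, &((mddev)->disks), same_set)

The error path in analyse_superblocks() removes the current rdev from the list when
its superblock fails to load, so iterating without caching the next pointer walks
into an unlinked entry, which is presumably where the reported segfault came from.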

Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>

Index: linux-upstream/drivers/md/dm-raid.c
===================================================================
--- linux-upstream.orig/drivers/md/dm-raid.c
+++ linux-upstream/drivers/md/dm-raid.c
@@ -861,7 +861,7 @@ static int analyse_superblocks(struct dm
 	int ret;
 	unsigned redundancy = 0;
 	struct raid_dev *dev;
-	struct md_rdev *rdev, *freshest;
+	struct md_rdev *rdev, *tmp, *freshest;
 	struct mddev *mddev = &rs->md;
 
 	switch (rs->raid_type->level) {
@@ -879,7 +879,7 @@ static int analyse_superblocks(struct dm
 	}
 
 	freshest = NULL;
-	rdev_for_each(rdev, mddev) {
+	rdev_for_each_safe(rdev, tmp, mddev) {
 		if (!rdev->meta_bdev)
 			continue;
 




* [PATCH 4 of 5] DM RAID: Use md_error() in place of simply setting Faulty bit
  2012-04-18  2:30 [PATCH 0 of 4] dm-raid: various bug fixes Jonathan Brassow
                   ` (2 preceding siblings ...)
  2012-04-18  2:38 ` [PATCH 3 of 5] DM RAID: Use safe version of rdev_for_each Jonathan Brassow
@ 2012-04-18  2:41 ` Jonathan Brassow
  2012-04-18  2:43 ` [PATCH 5 of 5] MD RAID1: Further conditionalize 'fullsync' Jonathan Brassow
  2012-04-18  3:48 ` [PATCH 0 of 4] dm-raid: various bug fixes NeilBrown
  5 siblings, 0 replies; 11+ messages in thread
From: Jonathan Brassow @ 2012-04-18  2:41 UTC (permalink / raw)
  To: dm-devel, linux-raid; +Cc: neilb, agk

When encountering an error while reading the superblock, call md_error.

We are currently setting the 'Faulty' bit on one of the array devices when an
error is encountered while reading the superblock of a dm-raid array.  We should
be calling md_error(), as it handles the error more completely.
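
In rough terms, the difference is that md_error() routes the failure through the MD
personality instead of only flagging the rdev.  A paraphrased sketch of the md.c code
of this era (not a verbatim excerpt):

	void md_error(struct mddev *mddev, struct md_rdev *rdev)
	{
		if (!mddev->pers || !mddev->pers->error_handler)
			return;
		/* let the raid personality mark the device Faulty and
		 * update its own degraded/bookkeeping state */
		mddev->pers->error_handler(mddev, rdev);
		/* then interrupt any sync in progress, flag that recovery
		 * is needed, and wake the md thread to act on it */
		set_bit(MD_RECOVERY_INTR, &mddev->recovery);
		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
		md_wakeup_thread(mddev->thread);
	}

A bare set_bit(Faulty, ...) skips all of that, which is why the error was not being
handled completely.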

Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>

Index: linux-upstream/drivers/md/dm-raid.c
===================================================================
--- linux-upstream.orig/drivers/md/dm-raid.c
+++ linux-upstream/drivers/md/dm-raid.c
@@ -606,7 +606,7 @@ static int read_disk_sb(struct md_rdev *
 	if (!sync_page_io(rdev, 0, size, rdev->sb_page, READ, 1)) {
 		DMERR("Failed to read superblock of device at position %d",
 		      rdev->raid_disk);
-		set_bit(Faulty, &rdev->flags);
+		md_error(rdev->mddev, rdev);
 		return -EINVAL;
 	}
 




* [PATCH 5 of 5] MD RAID1: Further conditionalize 'fullsync'
  2012-04-18  2:30 [PATCH 0 of 4] dm-raid: various bug fixes Jonathan Brassow
                   ` (3 preceding siblings ...)
  2012-04-18  2:41 ` [PATCH 4 of 5] DM RAID: Use md_error() in place of simply setting Faulty bit Jonathan Brassow
@ 2012-04-18  2:43 ` Jonathan Brassow
  2012-04-18  3:48 ` [PATCH 0 of 4] dm-raid: various bug fixes NeilBrown
  5 siblings, 0 replies; 11+ messages in thread
From: Jonathan Brassow @ 2012-04-18  2:43 UTC (permalink / raw)
  To: dm-devel, linux-raid; +Cc: neilb, agk

A RAID1 device does not necessarily need a fullsync if the bitmap can be used instead.

Similar to commit d6b212f4b19da5301e6b6eca562e5c7a2a6e8c8d in raid5.c, if a raid1
device can be brought back (i.e. after a transient failure) it shouldn't need a
complete resync.  Provided the bitmap is not too old, it will have recorded the
areas of the disk that need recovery.

** I've used 'saved_raid_disk' here similarly to RAID5, but it doesn't seem to fit
   as well.  The positions aren't really as important as they are in RAID5, and
   I'm using 'saved_raid_disk' more as an indicator that the bitmap can be used
   than for any other purpose.  Perhaps the meaning is being overloaded and a
   different solution should be found?

RFC-by: Jonathan Brassow <jbrassow@redhat.com>

Index: linux-upstream/drivers/md/raid1.c
===================================================================
--- linux-upstream.orig/drivers/md/raid1.c
+++ linux-upstream/drivers/md/raid1.c
@@ -2597,7 +2597,8 @@ static struct r1conf *setup_conf(struct 
 		if (!disk->rdev ||
 		    !test_bit(In_sync, &disk->rdev->flags)) {
 			disk->head_position = 0;
-			if (disk->rdev)
+			if (disk->rdev &&
+			    (disk->rdev->saved_raid_disk != disk->rdev->raid_disk))
 				conf->fullsync = 1;
 		} else if (conf->last_used < 0)
 			/*
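
A variant of this hunk, using the check Neil suggests further down the thread, would
key off md_rdev_init()'s default of -1 rather than comparing positions.  A sketch of
where the discussion below appears to land (not the patch as submitted here):

	/* only force a full resync for a device with no known prior slot;
	 * a returning member can rely on the bitmap instead */
	if (disk->rdev &&
	    (disk->rdev->saved_raid_disk < 0))
		conf->fullsync = 1;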




* Re: [PATCH 0 of 4] dm-raid: various bug fixes
  2012-04-18  2:30 [PATCH 0 of 4] dm-raid: various bug fixes Jonathan Brassow
                   ` (4 preceding siblings ...)
  2012-04-18  2:43 ` [PATCH 5 of 5] MD RAID1: Further conditionalize 'fullsync' Jonathan Brassow
@ 2012-04-18  3:48 ` NeilBrown
  2012-04-18 14:05   ` Brassow Jonathan
  5 siblings, 1 reply; 11+ messages in thread
From: NeilBrown @ 2012-04-18  3:48 UTC (permalink / raw)
  To: Jonathan Brassow; +Cc: dm-devel, linux-raid, agk


On Tue, 17 Apr 2012 21:30:19 -0500 Jonathan Brassow <jbrassow@redhat.com>
wrote:

> Neil,
> 
> I've cleaned up the first two patches I sent earlier:
> 	[1 of 5] dm-raid-set-recovery-flags-on-resume.patch
> 	[2 of 5] dm-raid-record-and-handle-missing-devices.patch
> and added a couple more:
> 	[3 of 5] dm-raid-need-safe-version-of-rdev_for_each.patch
> 	[4 of 5] dm-raid-use-md_error-in-place-of-faulty-bit.patch
> 	[5 of 5] md-raid1-further-conditionalize-fullsync.patch
> 
> Patch [5 of 5] I think needs some work.  It fixes the problem I'm seeing
> and seems to go along with similar logic used for RAID5 in commit
> d6b212f4b19da5301e6b6eca562e5c7a2a6e8c8d.  It also seems like a workable
> solution based on the code surrounding commit
> d30519fc59c5cc2f7772fa67b16b1a2426d36c95.  Can you let me know if I'm
> stretching the usage of 'saved_raid_disk' too far?
> 
> Thanks,
>  brassow

Thanks.

3-of-5 should go in 3.4 presumably.  The rest wait for 3.5?  Or do you think
they should be in 3.4?

5-of-5:  Maybe it would make sense just to check if saved_raid_disk >= 0 ??

This is only relevant for dm-raid isn't it?  I'd need to think through how
all that fits together again.

The rest are all fine and are in my for-next

Thanks,
NeilBrown



* Re: [PATCH 0 of 4] dm-raid: various bug fixes
  2012-04-18  3:48 ` [PATCH 0 of 4] dm-raid: various bug fixes NeilBrown
@ 2012-04-18 14:05   ` Brassow Jonathan
  2012-04-18 21:32     ` Brassow Jonathan
  0 siblings, 1 reply; 11+ messages in thread
From: Brassow Jonathan @ 2012-04-18 14:05 UTC (permalink / raw)
  To: NeilBrown; +Cc: dm-devel, linux-raid, agk


On Apr 17, 2012, at 10:48 PM, NeilBrown wrote:

> On Tue, 17 Apr 2012 21:30:19 -0500 Jonathan Brassow <jbrassow@redhat.com>
> wrote:
> 
>> Neil,
>> 
>> I've cleaned up the first two patches I sent earlier:
>> 	[1 of 5] dm-raid-set-recovery-flags-on-resume.patch
>> 	[2 of 5] dm-raid-record-and-handle-missing-devices.patch
>> and added a couple more:
>> 	[3 of 5] dm-raid-need-safe-version-of-rdev_for_each.patch
>> 	[4 of 5] dm-raid-use-md_error-in-place-of-faulty-bit.patch
>> 	[5 of 5] md-raid1-further-conditionalize-fullsync.patch
>> 
>> Patch [5 of 5] I think needs some work.  It fixes the problem I'm seeing
>> and seems to go along with similar logic used for RAID5 in commit
>> d6b212f4b19da5301e6b6eca562e5c7a2a6e8c8d.  It also seems like a workable
>> solution based on the code surrounding commit
>> d30519fc59c5cc2f7772fa67b16b1a2426d36c95.  Can you let me know if I'm
>> stretching the usage of 'saved_raid_disk' too far?
>> 
>> Thanks,
>> brassow
> 
> Thanks.
> 
> 3-of-5 should go in 3.4 presumably.  The rest wait for 3.5?  Or do you think
> they should be in 3.4?
> 
> 5-of-5:  Maybe it would make sense just to check if saved_raid_disk >= 0 ??
> 
> This is only relevant for dm-raid isn't it?  I'd need to think through how
> all that fits together again.
> 
> The rest are all fine and are in my for-next

Thanks Neil,

Yes, 3-of-5 should probably go in sooner rather than later.  Waiting on the others shouldn't hurt.

5-of-5: changing the check to 'saved_raid_disk >= 0' would be fine, but I think I would then need to initialize 'saved_raid_disk' to -1 in dm-raid.c for the normal case.  Right now, no nominal initial value is set - meaning it is '0'.  (When a device comes back from a failure, 'saved_raid_disk' is assigned its old position.)

 brassow


* Re: [PATCH 0 of 4] dm-raid: various bug fixes
  2012-04-18 14:05   ` Brassow Jonathan
@ 2012-04-18 21:32     ` Brassow Jonathan
  2012-04-18 23:58       ` [dm-devel] " NeilBrown
  0 siblings, 1 reply; 11+ messages in thread
From: Brassow Jonathan @ 2012-04-18 21:32 UTC (permalink / raw)
  To: device-mapper development; +Cc: linux-raid, agk




On Apr 18, 2012, at 9:05 AM, Brassow Jonathan wrote:

> 
> 5-of-5: changing the check to 'saved_raid_disk >= 0' would be fine, but I think I would then need to initialize 'saved_raid_disk' to -1 in dm-raid.c for the normal case.  Right now, no nominal initial value is set - meaning it is '0'.  (When a device comes back from a failure, 'saved_raid_disk' is assigned its old position.)

... that's not quite right.  I do call 'md_rdev_init', which sets 'saved_raid_disk' to -1.  Then, if the device has returned after a disappearance, I set 'saved_raid_disk' to its old position.  Therefore, 'saved_raid_disk >= 0' would be fine and wouldn't require me to set -1 in dm-raid.c.

 brassow
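
For reference, the initialization being described looks roughly like this (a sketch
of md_rdev_init() in drivers/md/md.c from this era, trimmed to the relevant fields):

	/* md_rdev_init() starts every rdev in a neutral state, e.g.: */
	rdev->desc_nr = -1;
	rdev->saved_raid_disk = -1;
	rdev->raid_disk = -1;
	rdev->flags = 0;
	/* ...remaining fields, lists and counters are zeroed/initialized... */

So a freshly created rdev starts with saved_raid_disk == -1, and dm-raid only sets it
to a valid slot when a previously seen device reappears, which is what makes the
'saved_raid_disk >= 0' test sufficient on its own.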






* Re: [dm-devel] [PATCH 0 of 4] dm-raid: various bug fixes
  2012-04-18 21:32     ` Brassow Jonathan
@ 2012-04-18 23:58       ` NeilBrown
  2012-04-19  2:42         ` Brassow Jonathan
  0 siblings, 1 reply; 11+ messages in thread
From: NeilBrown @ 2012-04-18 23:58 UTC (permalink / raw)
  To: Brassow Jonathan; +Cc: device-mapper development, linux-raid, agk


On Wed, 18 Apr 2012 16:32:00 -0500 Brassow Jonathan <jbrassow@redhat.com>
wrote:

> 
> On Apr 18, 2012, at 9:05 AM, Brassow Jonathan wrote:
> 
> > 
> > 5-of-5: changing the check to 'saved_raid_disk >= 0' would be fine, but I think I would then need to initialize 'saved_raid_disk' to -1 in dm-raid.c for the normal case.  Right now, no nominal initial value is set - meaning it is '0'.  (When a device comes back from a failure, 'saved_raid_disk' is assigned its old position.)
> 
> ... that's not quite right.  I do call 'md_rdev_init', which sets 'saved_raid_disk' to -1.  Then, if the device has returned after a disappearance, I set 'saved_raid_disk' to its old position.  Therefore, 'saved_raid_disk >= 0' would be fine and wouldn't require me to set -1 in dm-raid.c.
> 
>  brassow
> 

Excellent.  I've taken the liberty of making that change in the patch you
sent me and converted your RFC-by: to Signed-off-by:

Result can be viewed at or near the top of

http://neil.brown.name/git?p=md;a=shortlog;h=refs/heads/for-next

Please confirm that is OK to submit (eventually for 3.5).

Thanks,
NeilBrown



* Re: [dm-devel] [PATCH 0 of 4] dm-raid: various bug fixes
  2012-04-18 23:58       ` [dm-devel] " NeilBrown
@ 2012-04-19  2:42         ` Brassow Jonathan
  0 siblings, 0 replies; 11+ messages in thread
From: Brassow Jonathan @ 2012-04-19  2:42 UTC (permalink / raw)
  To: NeilBrown; +Cc: device-mapper development, linux-raid, agk


On Apr 18, 2012, at 6:58 PM, NeilBrown wrote:

> On Wed, 18 Apr 2012 16:32:00 -0500 Brassow Jonathan <jbrassow@redhat.com>
> wrote:
> 
>> 
>> On Apr 18, 2012, at 9:05 AM, Brassow Jonathan wrote:
>> 
>>> 
>>> 5-of-5: changing the check to 'saved_raid_disk >= 0' would be fine, but I think I would then need to initialize 'saved_raid_disk' to -1 in dm-raid.c for the normal case.  Right now, no nominal initial value is set - meaning it is '0'.  (When a device comes back from a failure, 'saved_raid_disk' is assigned its old position.)
>> 
>> ... that's not quite right.  I do call 'md_rdev_init', which sets 'saved_raid_disk' to -1.  Then, if the device has returned after a disappearance, I set 'saved_raid_disk' to its old position.  Therefore, 'saved_raid_disk >= 0' would be fine and wouldn't require me to set -1 in dm-raid.c.
>> 
>> brassow
>> 
> 
> Excellent.  I've taken the liberty of making that change in the patch you
> sent me and converted your RFC-by: to Signed-off-by:
> 
> Result can be viewed at or near the top of
> 
> http://neil.brown.name/git?p=md;a=shortlog;h=refs/heads/for-next
> 
> Please confirm that is OK to submit (eventually for 3.5).

Perfect, thank you.

 brassow


end of thread

Thread overview: 11+ messages
2012-04-18  2:30 [PATCH 0 of 4] dm-raid: various bug fixes Jonathan Brassow
2012-04-18  2:36 ` [PATCH 1 of 5] DM RAID: Set recovery flags on resume Jonathan Brassow
2012-04-18  2:37 ` [PATCH 2 of 5] DM RAID: Record and handle missing devices Jonathan Brassow
2012-04-18  2:38 ` [PATCH 3 of 5] DM RAID: Use safe version of rdev_for_each Jonathan Brassow
2012-04-18  2:41 ` [PATCH 4 of 5] DM RAID: Use md_error() in place of simply setting Faulty bit Jonathan Brassow
2012-04-18  2:43 ` [PATCH 5 of 5] MD RAID1: Further conditionalize 'fullsync' Jonathan Brassow
2012-04-18  3:48 ` [PATCH 0 of 4] dm-raid: various bug fixes NeilBrown
2012-04-18 14:05   ` Brassow Jonathan
2012-04-18 21:32     ` Brassow Jonathan
2012-04-18 23:58       ` [dm-devel] " NeilBrown
2012-04-19  2:42         ` Brassow Jonathan
