* mdadm --monitor: need extra feature?
@ 2012-08-21 10:41 Sergiusz Brzeziński
  2012-08-21 10:44 ` David Brown
  0 siblings, 1 reply; 9+ messages in thread
From: Sergiusz Brzeziński @ 2012-08-21 10:41 UTC (permalink / raw)
  To: linux-raid

Hi,

I use RAID1 to make a backup of the whole system. I hot-swap one drive out of
the array and insert another drive. The goal is that the only manual activity
is removing and inserting the drive; the rest has to be automated.

Every drive I use for this has a prepared RAID partition and was added to the
array once, so the md superblock already exists and its UUID matches the
working array.

After changing drives, the rebuild must be initiated (mdadm --add).

But first it must be discovered that a device with a matching UUID has appeared
in a system with a degraded array and is available. After such a discovery,
I can add the device to the degraded array and start rebuilding.
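
Done by hand, the whole procedure is roughly this (the device names /dev/md0
and /dev/sdc1 are only an example):

  # is the array degraded?
  mdadm --detail /dev/md0 | grep "State :"
  # does the freshly inserted partition carry the same array UUID?
  mdadm --examine /dev/sdc1 | grep UUID
  # if yes, add it back - the rebuild starts immediately
  mdadm /dev/md0 --add /dev/sdc1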

"mdadm --monitor" can recognize that a drive disappeared, but it can't 
recognize, thant a drive with consistent UUID appeared!

Or maybe I am wrong and have misunderstood something about mdadm and mdadm.conf?

For now I use my own script from crontab and don't use "mdadm --monitor" at
all, because it only partially does what I want.

My question is: is it possible for "mdadm --monitor" to recognize that a new
device with a matching UUID has appeared? And if not, do you think this
feature would be worth implementing?

Sergiusz

PS:

Sorry for my English :)


* Re: mdadm --monitor: need extra feature?
  2012-08-21 10:41 mdadm --monitor: need extra feature? Sergiusz Brzeziński
@ 2012-08-21 10:44 ` David Brown
  2012-08-21 11:51   ` Sergiusz Brzeziński
  0 siblings, 1 reply; 9+ messages in thread
From: David Brown @ 2012-08-21 10:44 UTC (permalink / raw)
  To: Sergiusz Brzeziński; +Cc: linux-raid

On 21/08/2012 12:41, Sergiusz Brzeziński wrote:
> Hi,
>
> I use RAID1 to make a backup of the whole system.

RAID is not a backup system.  It is there to improve uptime, minimise
downtime due to disk failures, and possibly to improve disk speed
and/or capacity.

I would recommend you first think about what you are trying to achieve
here - what are you trying to back up, how do you see restores being
done, and how efficiently are you using your hardware, your bandwidth,
your time and effort?

You would probably be better off with a normal fixed 2-disk RAID1 to
minimise the problems caused by a single disk failure, combined with an
rsync snapshot-style backup that can be fully automated and gives quick
and easy recovery of multiple old versions of files in the face of the
most common cause of data loss - human error.
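
As a minimal sketch (the paths are only illustrative), an rsync snapshot
backup can be as small as this - unchanged files are hard-linked against the
previous snapshot, so every snapshot looks complete but takes almost no extra
space:

  today=$(date +%Y-%m-%d)
  rsync -a --delete --link-dest=/backup/latest /home/ /backup/$today/
  rm -f /backup/latest
  ln -s /backup/$today /backup/latest

Restoring a file is then just an ordinary copy out of whichever dated
directory you want.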

> I hot-swap one drive out of the array and insert another drive. The
> goal is that the only manual activity is removing and inserting the
> drive; the rest has to be automated.
>
> Every drive I use for this has a prepared RAID partition and was added
> to the array once, so the md superblock already exists and its UUID
> matches the working array.
>
> After changing drives, the rebuild must be initiated (mdadm --add).
>
> But first it must be discovered that a device with a matching UUID has
> appeared in a system with a degraded array and is available. After such
> a discovery, I can add the device to the degraded array and start
> rebuilding.
>
> "mdadm --monitor" can recognize that a drive disappeared, but it can't
> recognize that a drive with a matching UUID has appeared!
>
> Or maybe I am wrong and have misunderstood something about mdadm and
> mdadm.conf?
>
> For now I use my own script from crontab and don't use "mdadm
> --monitor" at all, because it only partially does what I want.
>
> My question is: is it possible for "mdadm --monitor" to recognize that
> a new device with a matching UUID has appeared? And if not, do you
> think this feature would be worth implementing?
>
> Sergiusz
>
> PS:
>
> Sorry for my English :)



* Re: mdadm --monitor: need extra feature?
  2012-08-21 10:44 ` David Brown
@ 2012-08-21 11:51   ` Sergiusz Brzeziński
  2012-08-21 12:39     ` Adam Goryachev
  0 siblings, 1 reply; 9+ messages in thread
From: Sergiusz Brzeziński @ 2012-08-21 11:51 UTC (permalink / raw)
  To: David Brown; +Cc: linux-raid



On 21.08.2012 12:44, David Brown wrote:
> On 21/08/2012 12:41, Sergiusz Brzeziński wrote:
>> Hi,
>>
>> I use RAID1 to make a backup of the whole system.
>
> Raid is not a backup system. It is to improve uptimes, minimise downtimes due to
> disk failures, and possibly to improve disk speed and/or capacity.
>
> I would recommend you first think about what you are trying to achieve here -
> what are you trying to back up, how do you see restores being used, how
> efficiently are you using your hardware, your bandwidth, your time and effort?
>
> You would probably be better off with a normal fixed 2-disk raid1 to minimise
> the problems caused by a single disk failure, combined with an rsync snapshot
> style backup that can be fully automated and give quick and easy recovery of
> multiple old versions of files in the face of the most common cause of data loss
> - human error.
[...]

I know, I know. Raid is not a backup system :)

Or better: it is not intended to be a backup system.

But I do use RAID1 as a backup solution because I don't know of another backup
solution that gives me all of this together:
- such a simple configuration (there is no configuration!)
- such a low cost (no software cost, only the cost of the drives)
- a complete copy of the whole system (not only data but also configuration)
- such a quick start in case of hardware failure
- such a fast backup process without excessive system load
- such simple handling (just remove and insert the HDD in a hot-swap bay)

With a few more rotating disks it can emulate a very good backup solution, and
I can still make backups of critical data (dumping databases, copying files)
independently of RAID (and I do).

And even if you don't want to use it as a regular backup, it is always worth
making a bootable disk with a mirrored system partition and keeping it
somewhere outside the box for bad times - just a small system backup for a
quick start in case of hardware failure.

So maybe RAID1 is not a backup system, but in some cases it is the best backup
solution there can be! And I am an opportunist :)

Sergiusz



* Re: mdadm --monitor: need extra feature?
  2012-08-21 11:51   ` Sergiusz Brzeziński
@ 2012-08-21 12:39     ` Adam Goryachev
  2012-08-22  7:14       ` Sergiusz Brzeziński
  0 siblings, 1 reply; 9+ messages in thread
From: Adam Goryachev @ 2012-08-21 12:39 UTC (permalink / raw)
  To: Sergiusz Brzeziński; +Cc: linux-raid

On 21/08/12 21:51, Sergiusz Brzeziński wrote:
>
>
> On 21.08.2012 12:44, David Brown wrote:
>> On 21/08/2012 12:41, Sergiusz Brzeziński wrote:
>>> Hi,
>>>
>>> I use RAID1 to make a backup of the whole system.
>>
>> Raid is not a backup system. It is to improve uptimes, minimise
>> downtimes due to
>> disk failures, and possibly to improve disk speed and/or capacity.
>>
>> I would recommend you first think about what you are trying to
>> achieve here -
>> what are you trying to back up, how do you see restores being used, how
>> efficiently are you using your hardware, your bandwidth, your time
>> and effort?
>>
>> You would probably be better off with a normal fixed 2-disk raid1 to
>> minimise
>> the problems caused by a single disk failure, combined with an rsync
>> snapshot
>> style backup that can be fully automated and give quick and easy
>> recovery of
>> multiple old versions of files in the face of the most common cause
>> of data loss
>> - human error.
> [...]
>
> I know, I know. Raid is not a backup system :)
Aside from "RAID is not a backup", perhaps the more useful suggestion would be
to use the right tool for the job...

So, again, ignoring that you possibly should not be using RAID for a
backup... how about using a udev rule to notice when you plug in a drive? The
script it runs can check the UUID against your md arrays and, if it matches,
add the drive to the array.
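
As an untested sketch (the rule file name, the script path and /dev/md0 are
all made up), the udev side could be as small as:

  # /etc/udev/rules.d/99-raid-backup.rules
  ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd?[0-9]", RUN+="/usr/local/sbin/raid-backup-add.sh %k"

and the script it runs something like:

  #!/bin/sh
  # udev passes the kernel name of the new partition, e.g. "sdc1"
  dev=/dev/$1
  array_uuid=$(mdadm --detail /dev/md0 | sed -n 's/.*UUID : //p' | cut -d' ' -f1)
  part_uuid=$(mdadm --examine "$dev" 2>/dev/null | sed -n 's/.*UUID : //p' | head -n1 | cut -d' ' -f1)
  # only add the partition when its UUID really matches the array
  [ -n "$part_uuid" ] && [ "$part_uuid" = "$array_uuid" ] && mdadm /dev/md0 --add "$dev"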

BTW, I've used BackupPC (Linux-based, free software for complete backups
of Linux + Windows + pretty much any other OS, using hard links to
de-duplicate files), which would export the most recent backup of each
machine and dump it as a plain tar.bz2 file onto an external HDD. The HDD
was auto-detected based on the following criteria (which I decided were
"safe enough"):
1) the UUID matched a list of known UUIDs for backup drives in the pool
2) the first partition could be mounted with a pre-determined FS type
(mount -t ext3 blah)
3) after mounting, a specific file existed (if [ -f
/mnt/archive/special_file ])
If all of that matched, then we would create a new archive for the first
host, delete any old archive for that host, repeat for all hosts,
unmount, and send a complete report to the monitoring system.
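
In shell terms the detection boiled down to something like this (the paths and
the UUID list file are of course just placeholders):

  # 1) is this one of our known backup drives?
  uuid=$(blkid -o value -s UUID /dev/sdc1)
  grep -qx "$uuid" /etc/backup-drive-uuids || exit 0
  # 2) can we mount the first partition as the expected filesystem?
  mount -t ext3 /dev/sdc1 /mnt/archive || exit 1
  # 3) does the marker file exist?
  [ -f /mnt/archive/special_file ] || { umount /mnt/archive; exit 1; }
  # ...then create the new archives, delete the old ones, umount, report.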


Regards,
Adam


-- 
Adam Goryachev
Website Managers
www.websitemanagers.com.au



* Re: mdadm --monitor: need extra feature?
  2012-08-21 12:39     ` Adam Goryachev
@ 2012-08-22  7:14       ` Sergiusz Brzeziński
  2012-08-22  7:44         ` NeilBrown
  0 siblings, 1 reply; 9+ messages in thread
From: Sergiusz Brzeziński @ 2012-08-22  7:14 UTC (permalink / raw)
  To: Adam Goryachev; +Cc: linux-raid

On 21.08.2012 14:39, Adam Goryachev wrote:
> On 21/08/12 21:51, Sergiusz Brzeziński wrote:
>>
>>
>> On 21.08.2012 12:44, David Brown wrote:
>>> On 21/08/2012 12:41, Sergiusz Brzeziński wrote:
>>>> Hi,
>>>>
>>>> I use RAID1 to make a backup of the whole system.
>>>
>>> Raid is not a backup system. It is to improve uptimes, minimise
>>> downtimes due to
>>> disk failures, and possibly to improve disk speed and/or capacity.
>>>
>>> I would recommend you first think about what you are trying to
>>> achieve here -
>>> what are you trying to back up, how do you see restores being used, how
>>> efficiently are you using your hardware, your bandwidth, your time
>>> and effort?
>>>
>>> You would probably be better off with a normal fixed 2-disk raid1 to
>>> minimise
>>> the problems caused by a single disk failure, combined with an rsync
>>> snapshot
>>> style backup that can be fully automated and give quick and easy
>>> recovery of
>>> multiple old versions of files in the face of the most common cause
>>> of data loss
>>> - human error.
>> [...]
>>
>> I know, I know. Raid is not a backup system :)
> Aside from RAID is not a backup, perhaps the more useful suggestion
> would be to use the right tool for the job...
>
> So, again, ignoring that you possibly should not be using RAID for a
> backup... how about using a udev rule to notice when you plug in a
> drive? The script it runs can check the UUID against your md arrays
> and, if it matches, add the drive to the array.

I wrote a script that does this. It runs once an hour. I pass the md device to
the script as a parameter. It checks the state of the array with "mdadm
--detail". If something is wrong (State : degraded), it reads the UUID of that
array. Then it scans the /dev/sd* partitions and checks with "mdadm --examine"
whether the UUID matches. If so, the partition can be added with "mdadm --add".
That is why I asked about this feature in mdadm - recognising a new partition
that belongs to a monitored array. With mdadm this procedure would work in an
elegant manner.
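
Simplified, and with /dev/md0 only as an example, the core of the script looks
roughly like this:

  #!/bin/sh
  md=$1    # the md device, passed from crontab, e.g. /dev/md0
  # do nothing unless the array is degraded
  mdadm --detail "$md" | grep -q degraded || exit 0
  uuid=$(mdadm --detail "$md" | sed -n 's/.*UUID : //p' | cut -d' ' -f1)
  for part in /dev/sd*[0-9]; do
      # skip partitions that are already members of the array
      mdadm --detail "$md" | grep -q "$part" && continue
      # add the partition only if its superblock carries the same UUID
      mdadm --examine "$part" 2>/dev/null | grep -q "$uuid" && mdadm "$md" --add "$part"
  done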

>
> BTW, I've used BackupPC (Linux-based, free software for complete backups
> of Linux + Windows + pretty much any other OS, using hard links to
> de-duplicate files), which would export the most recent backup of each
> machine and dump it as a plain tar.bz2 file onto an external HDD. The HDD
> was auto-detected based on the following criteria (which I decided were
> "safe enough"):
> 1) the UUID matched a list of known UUIDs for backup drives in the pool
> 2) the first partition could be mounted with a pre-determined FS type
> (mount -t ext3 blah)
> 3) after mounting, a specific file existed (if [ -f
> /mnt/archive/special_file ])
> If all of that matched, then we would create a new archive for the first
> host, delete any old archive for that host, repeat for all hosts,
> unmount, and send a complete report to the monitoring system.
>
>
> Regards,
> Adam
>

Yes, that is probably a good backup tool. (I don't know it, but I guess it is
:) ). In both cases (mine and yours) we can change the HDD in a hot-swap bay.
The difference (for me a very important difference) is that at the moment of
removing the disk, you have just a backup, while I have a backup plus a
working, ready-to-use system with the most up-to-date files. OK, there can be
some corrupted files (hot-swap removal can cause this), but that is exactly
the reason for making a "real backup" of critical data independently of RAID
and storing it, for example, on the RAID device itself! I do this, for
example, with a Postgres database using pg_dump from cron. Then I rotate the
dumps with logrotate to keep a few older backups. So on my mirrored,
just-removed disk there is a whole system, probably ready for instant use. In
case of problems (corrupted database indices or data files, for example) I
also have "real" backups from the past on that disk.
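
For example (the paths and the database name are only illustrative), the
database part is nothing more than a cron entry plus a logrotate rule:

  # /etc/cron.d/pg-backup - dump the database every night at 02:00
  0 2 * * *  postgres  pg_dump -Fc mydb > /var/backups/mydb.dump

  # /etc/logrotate.d/pg-backup - keep a week of older dumps
  /var/backups/mydb.dump {
      daily
      rotate 7
      compress
      missingok
  }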

I don't want to argue about which solution is better. That can depend on many
circumstances and needs. As we said, "RAID is not a backup system" - so it
lacks many features that real backup systems have. I only want to say that for
me RAID1 is an excellent backup solution, regardless of whether I should be
using RAID1 for backup or not. Fortunately, nobody can prohibit using RAID
this way :) I just chose the solution that works better for me.

best regards

Sergiusz


* Re: mdadm --monitor: need extra feature?
  2012-08-22  7:14       ` Sergiusz Brzeziński
@ 2012-08-22  7:44         ` NeilBrown
  2012-08-22  9:50           ` Sergiusz Brzeziński
  0 siblings, 1 reply; 9+ messages in thread
From: NeilBrown @ 2012-08-22  7:44 UTC (permalink / raw)
  To: Sergiusz Brzeziński; +Cc: Adam Goryachev, linux-raid


On Wed, 22 Aug 2012 09:14:15 +0200 Sergiusz Brzeziński
<Sergiusz.Brzezinski@supersystem.pl> wrote:

> On 21.08.2012 14:39, Adam Goryachev wrote:
> > On 21/08/12 21:51, Sergiusz Brzeziński wrote:
> >>
> >>
> >> On 21.08.2012 12:44, David Brown wrote:
> >>> On 21/08/2012 12:41, Sergiusz Brzeziński wrote:
> >>>> Hi,
> >>>>
> >>>> I use RAID1 to make a backup of the whole system.
> >>>
> >>> Raid is not a backup system. It is to improve uptimes, minimise
> >>> downtimes due to
> >>> disk failures, and possibly to improve disk speed and/or capacity.
> >>>
> >>> I would recommend you first think about what you are trying to
> >>> achieve here -
> >>> what are you trying to back up, how do you see restores being used, how
> >>> efficiently are you using your hardware, your bandwidth, your time
> >>> and effort?
> >>>
> >>> You would probably be better off with a normal fixed 2-disk raid1 to
> >>> minimise
> >>> the problems caused by a single disk failure, combined with an rsync
> >>> snapshot
> >>> style backup that can be fully automated and give quick and easy
> >>> recovery of
> >>> multiple old versions of files in the face of the most common cause
> >>> of data loss
> >>> - human error.
> >> [...]
> >>
> >> I know, I know. Raid is not a backup system :)
> > Aside from RAID is not a backup, perhaps the more useful suggestion
> > would be to use the right tool for the job...
> >
> > So, again, ignoring that you possibly should not be using RAID for a
> > backup... how about using a udev rule to notice when you plug in a
> > drive? The script it runs can check the UUID against your md arrays
> > and, if it matches, add the drive to the array.
> 
> I wrote a script that does this. It runs once an hour. I pass the md
> device to the script as a parameter. It checks the state of the array
> with "mdadm --detail". If something is wrong (State : degraded), it
> reads the UUID of that array. Then it scans the /dev/sd* partitions and
> checks with "mdadm --examine" whether the UUID matches. If so, the
> partition can be added with "mdadm --add". That is why I asked about
> this feature in mdadm - recognising a new partition that belongs to a
> monitored array. With mdadm this procedure would work in an elegant
> manner.
>

udev really is the right way to do this.  Just get udev to run
  mdadm -I /dev/newdev
whenever a device is discovered.  It can then be automatically re-added
depending on the policy set up in mdadm.conf.
"mdadm --monitor" will not gain this functionality.  It is for monitoring
active arrays, not for monitoring new devices.
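
For example (the path pattern and domain name are just placeholders):

  # udev rule: try incremental assembly for every new block device
  ACTION=="add", SUBSYSTEM=="block", RUN+="/sbin/mdadm -I $env{DEVNAME}"

  # mdadm.conf: devices appearing on these ports may be re-added automatically
  POLICY domain=backup path=pci-0000:00:1f.2-ata-3* action=re-add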

NeilBrown




* Re: mdadm --monitor: need extra feature?
  2012-08-22  7:44         ` NeilBrown
@ 2012-08-22  9:50           ` Sergiusz Brzeziński
       [not found]             ` <5034B0A2.4080403@websitemanagers.com.au>
  0 siblings, 1 reply; 9+ messages in thread
From: Sergiusz Brzeziński @ 2012-08-22  9:50 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

On 22.08.2012 09:44, NeilBrown wrote:
[...]
>>> So, again, ignoring that you possibly should not be using RAID for a
>>> backup... how about using a udev rule to notice when you plug in a
>>> drive? The script it runs can check the UUID against your md arrays
>>> and, if it matches, add the drive to the array.
>>
>> I wrote a script that does this. It runs once an hour. I pass the md
>> device to the script as a parameter. It checks the state of the array
>> with "mdadm --detail". If something is wrong (State : degraded), it
>> reads the UUID of that array. Then it scans the /dev/sd* partitions and
>> checks with "mdadm --examine" whether the UUID matches. If so, the
>> partition can be added with "mdadm --add". That is why I asked about
>> this feature in mdadm - recognising a new partition that belongs to a
>> monitored array. With mdadm this procedure would work in an elegant
>> manner.
>>
>
> udev really is the right way to do this.  Just get udev to run
>    mdadm -I /dev/newdev
> whenever a device is discovered.  It can then be automatically re-added
> depending on the policy set up in mdadm.conf.
> "mdadm --monitor" will not gain this functionality.  It is for monitoring
> active arrays, not for monitoring new devices.
>
> NeilBrown
>

Yes, that is exactly what I need and what I asked for: udev + mdadm -I.

I didn't know about mdadm's incremental mode (mea culpa). And udev is even
already prepared for this! (at least on Ubuntu)

I never thought it would be so simple!

Thank you.

For other people looking for a similar solution, here is what I did:

1.
I switched udev_log in /etc/udev/udev.conf to "debug" to see what happens
after inserting a new drive.

2.
After inserting the drive I found the following lines in the logs:

Aug 22 11:08:47 serwer-linmot udevd[2287]: '/sbin/mdadm --incremental 
/dev/sdc3'(err) 'mdadm: not adding /dev/sdc3 to active array (without --run) 
/dev/md/0'
Aug 22 11:08:47 serwer-linmot udevd[2287]: '/sbin/mdadm --incremental /dev/sdc3' 
[2310] exit with return code 2

3.
On Ubuntu the file responsible for this is /lib/udev/rules.d/64-md-raid.rules.
I only changed one line:

ACTION=="add", RUN+="/sbin/mdadm --incremental $tempnode"
to:
ACTION=="add", RUN+="/sbin/mdadm --incremental --run $tempnode"

And that's all!

Now I don't have to use my crontab script anymore. The RAID starts rebuilding
the array immediately after the disk is inserted.
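
To check that it really works, after editing the rule it is enough to reload
udev and watch the array rebuild (/dev/md0 is just my array name):

  udevadm control --reload-rules     # make udev pick up the edited rule
  cat /proc/mdstat                   # shows a recovery line while rebuilding
  mdadm --detail /dev/md0 | grep -E "State|Rebuild"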

Sergiusz






* Re: mdadm --monitor: need extra feature?
       [not found]             ` <5034B0A2.4080403@websitemanagers.com.au>
@ 2012-08-22 10:50               ` Sergiusz Brzeziński
  2012-08-22 10:57                 ` Sergiusz Brzeziński
  0 siblings, 1 reply; 9+ messages in thread
From: Sergiusz Brzeziński @ 2012-08-22 10:50 UTC (permalink / raw)
  To: Adam Goryachev; +Cc: linux-raid

On 22.08.2012 12:12, Adam Goryachev wrote:
> On 22/08/12 19:50, Sergiusz Brzeziński wrote:
>> 3. On Ubuntu the file responsible for this is
>> /lib/udev/rules.d/64-md-raid.rules. I only changed one line:
>>
>> ACTION=="add", RUN+="/sbin/mdadm --incremental $tempnode"
>> to:
>> ACTION=="add", RUN+="/sbin/mdadm --incremental --run $tempnode"
>>
>> And that's all!
>>
>> Now I don't have to use my crontab script anymore. The RAID starts
>> rebuilding the array immediately after the disk is inserted.
> Could this cause a problem (i.e. what is the reason this is not the
> default value)?
>
> My guess is it may cause md to start an array before all "available"
> disks have been added to it. I.e. if you are plugging in three
> drives, it might run the array after plugging in the second drive, and
> then when you plug in the third drive it will need to do a resync.
>
> I suppose the worst case is a resync where it wasn't really needed.
>
> Regards,
> Adam

I gave it a try.

1.
I never had 3 drives at one time before. After inserting the third drive for
the first time (the drive had a RAID partition with the proper UUID), nothing
happened. (Or maybe something happened, but I don't know what, because I had
turned off the debugging :( )

2.
Then I ran mdadm /dev/md0 -a /dev/third_drive
It was added as a spare.

3.
I removed and re-inserted the third drive. Again nothing: no info in the log,
and it was not added as a spare.

I don't know whether this is the right or wrong behaviour (maybe the drive
should automatically appear as a spare?), but for me it is not a problem
because I don't use spares :)

best regards

Sergiusz


* Re: mdadm --monitor: need extra feature?
  2012-08-22 10:50               ` Sergiusz Brzeziński
@ 2012-08-22 10:57                 ` Sergiusz Brzeziński
  0 siblings, 0 replies; 9+ messages in thread
From: Sergiusz Brzeziński @ 2012-08-22 10:57 UTC (permalink / raw)
  To: Adam Goryachev; +Cc: linux-raid



On 22.08.2012 12:50, Sergiusz Brzeziński wrote:
> On 22.08.2012 12:12, Adam Goryachev wrote:
>> On 22/08/12 19:50, Sergiusz Brzeziński wrote:
>>> 3. On Ubuntu the file responsible for this is
>>> /lib/udev/rules.d/64-md-raid.rules. I only changed one line:
>>>
>>> ACTION=="add", RUN+="/sbin/mdadm --incremental $tempnode"
>>> to:
>>> ACTION=="add", RUN+="/sbin/mdadm --incremental --run $tempnode"
>>>
>>> And that's all!
>>>
>>> Now I don't have to use my crontab script anymore. The RAID starts
>>> rebuilding the array immediately after the disk is inserted.
>> Could this cause a problem (i.e. what is the reason this is not the
>> default value)?
>>
>> My guess is it may cause md to start an array before all "available"
>> disks have been added to it. I.e. if you are plugging in three
>> drives, it might run the array after plugging in the second drive, and
>> then when you plug in the third drive it will need to do a resync.
>>
>> I suppose the worst case is a resync where it wasn't really needed.
>>
>> Regards,
>> Adam
>
> I gave it a try.
>
> 1.
> I never had 3 drives at one time before. After inserting the third drive
> for the first time (the drive had a RAID partition with the proper UUID),
> nothing happened. (Or maybe something happened, but I don't know what,
> because I had turned off the debugging :( )
>
> 2.
> Then I ran mdadm /dev/md0 -a /dev/third_drive
> It was added as a spare.
>
> 3.
> I removed and re-inserted the third drive. Again nothing: no info in the
> log, and it was not added as a spare.
>
> I don't know whether this is the right or wrong behaviour (maybe the drive
> should automatically appear as a spare?), but for me it is not a problem
> because I don't use spares :)

CORRECTION: IT WAS AUTOMATICALLY ADDED AS A SPARE.

So I think it works as expected!

I did the test too quickly - I probably made a mistake when checking it.


Sergiusz


Thread overview: 9+ messages
2012-08-21 10:41 mdadm --monitor: need extra feature? Sergiusz Brzeziński
2012-08-21 10:44 ` David Brown
2012-08-21 11:51   ` Sergiusz Brzeziński
2012-08-21 12:39     ` Adam Goryachev
2012-08-22  7:14       ` Sergiusz Brzeziński
2012-08-22  7:44         ` NeilBrown
2012-08-22  9:50           ` Sergiusz Brzeziński
     [not found]             ` <5034B0A2.4080403@websitemanagers.com.au>
2012-08-22 10:50               ` Sergiusz Brzeziński
2012-08-22 10:57                 ` Sergiusz Brzeziński
