* replacing drives
@ 2013-04-26 14:27 Roberto Nunnari
2013-04-26 15:36 ` Tregaron Bayly
` (3 more replies)
0 siblings, 4 replies; 38+ messages in thread
From: Roberto Nunnari @ 2013-04-26 14:27 UTC (permalink / raw)
To: linux-raid
Hi all.
I'd like to replace two hd in raid1 with larger ones.
I could just add the new drives in raid1 and mount it on /opt after a
dump/restore, but I'd prefer to have just two drives instead of four:
less noise and less power consumption.
The question is: what would be the best way to go?
Tricks and tips? Drawbacks? Common errors?
Any hint/advice welcome.
Thank you. :-)
present HD: two WD caviar green 500GB
new HD: two WD caviar green 2TB
root@host1:~# uname -rms
Linux 2.6.32-46-server x86_64
root@host1:~# mdadm --version
mdadm - v2.6.7.1 - 15th October 2008
root@host1:~# cat /proc/mdstat
Personalities : [linear] [raid1] [multipath] [raid0] [raid6] [raid5]
[raid4] [raid10]
md1 : active raid1 sda2[0] sdb2[1]
7812032 blocks [2/2] [UU]
md2 : active raid1 sda3[0] sdb3[1]
431744960 blocks [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
48827328 blocks [2/2] [UU]
unused devices: <none>
root@host1:~# parted /dev/sda print
Model: ATA WDC WD5000ABPS-0 (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 50.0GB 50.0GB primary ext4 boot, raid
2 50.0GB 58.0GB 8000MB primary linux-swap(v1) raid
3 58.0GB 500GB 442GB primary ext4 raid
root@host1:~# parted /dev/sdb print
Model: ATA WDC WD5000ABPS-0 (scsi)
Disk /dev/sdb: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 50.0GB 50.0GB primary ext4 boot, raid
2 50.0GB 58.0GB 8000MB primary linux-swap(v1) raid
3 58.0GB 500GB 442GB primary ext4 raid
root@host1:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/md0 48060232 6460476 39158392 15% /
none 3052888 272 3052616 1% /dev
none 3057876 0 3057876 0% /dev/shm
none 3057876 88 3057788 1% /var/run
none 3057876 0 3057876 0% /var/lock
none 3057876 0 3057876 0% /lib/init/rw
/dev/md2 424970552 399524484 3858820 100% /opt
Thank you and best regards.
Robi
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: replacing drives
2013-04-26 14:27 replacing drives Roberto Nunnari
@ 2013-04-26 15:36 ` Tregaron Bayly
2013-04-26 15:42 ` Keith Keller
` (2 subsequent siblings)
3 siblings, 0 replies; 38+ messages in thread
From: Tregaron Bayly @ 2013-04-26 15:36 UTC (permalink / raw)
To: Roberto Nunnari; +Cc: linux-raid
> Any hint/advice welcome.
> Thank you. :-)
There's a section on extending an existing array on the raid wiki that I
believe covers what you're trying to do. I've never had any problems
using the approach that's documented there.
https://raid.wiki.kernel.org/index.php/Growing
Regards,
Tregaron
* Re: replacing drives
2013-04-26 14:27 replacing drives Roberto Nunnari
2013-04-26 15:36 ` Tregaron Bayly
@ 2013-04-26 15:42 ` Keith Keller
2013-04-26 15:53 ` Robin Hill
2013-04-26 22:20 ` Roberto Nunnari
3 siblings, 0 replies; 38+ messages in thread
From: Keith Keller @ 2013-04-26 15:42 UTC (permalink / raw)
To: linux-raid
On 2013-04-26, Roberto Nunnari <roberto.nunnari@supsi.ch> wrote:
>
> I'd like to replace two hd in raid1 with larger ones.
>
> I could just add the new drives in raid1 and mount it on /opt after a
> dump/restore, but I'd prefer to have just two drives instead of four:
> less noise and less power consumption.
>
> The question is: what would be the best way to go?
I've never tried it, but mdadm supports growing arrays. If you replace
each drive in your RAID1 and do a rebuild, when the second rebuild is
complete, you can use the --grow -z max option to tell md to resize the
array to use the new space. Then you will need to resize the filesystem
using FS tools like resize2fs or xfs_growfs.
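In commands, that replace-and-grow sequence might look like the following sketch (device and partition names are illustrative, borrowed from this setup; the run() wrapper only prints each command, so nothing executes until you remove it):

```shell
#!/bin/sh
# Sketch of replace-then-grow for one mirror (md2 as an example).
# run() only prints the command; drop the wrapper to execute for real.
run() { printf '%s\n' "$*"; }

# 1. Swap out one old member and let the mirror rebuild onto the new disk.
run mdadm /dev/md2 --fail /dev/sdb3 --remove /dev/sdb3
run mdadm /dev/md2 --add /dev/sdd3   # hypothetical new, larger partition
# ...wait for [UU] in /proc/mdstat, then repeat for the other old disk...

# 2. Both members are now large: grow the array into the new space.
run mdadm --grow /dev/md2 --size=max   # the "--grow -z max" mentioned above

# 3. Grow the filesystem to fill the array (ext4 here, so resize2fs).
run resize2fs /dev/md2
```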
--keith
--
kkeller@wombat.san-francisco.ca.us
* Re: replacing drives
2013-04-26 14:27 replacing drives Roberto Nunnari
2013-04-26 15:36 ` Tregaron Bayly
2013-04-26 15:42 ` Keith Keller
@ 2013-04-26 15:53 ` Robin Hill
2013-04-30 13:17 ` Roberto Nunnari
` (4 more replies)
2013-04-26 22:20 ` Roberto Nunnari
3 siblings, 5 replies; 38+ messages in thread
From: Robin Hill @ 2013-04-26 15:53 UTC (permalink / raw)
To: Roberto Nunnari; +Cc: linux-raid
On Fri Apr 26, 2013 at 04:27:01PM +0200, Roberto Nunnari wrote:
> Hi all.
>
> I'd like to replace two hd in raid1 with larger ones.
>
> I could just add the new drives in raid1 and mount it on /opt after a
> dump/restore, but I'd prefer to have just two drives instead of four:
> less noise and less power consumption.
>
> The question is: what would be the best way to go?
> Tricks and tips? Drawbacks? Common errors?
>
> Any hint/advice welcome.
> Thank you. :-)
>
>
> present HD: two WD caviar green 500GB
> new HD: two WD caviar green 2TB
>
I don't think these have SCTERC configuration options, so you'll need to
make sure you increase the timeout in the storage stack to prevent read
timeouts from causing drives to be prematurely kicked out of the array.
>
> root@host1:~# uname -rms
> Linux 2.6.32-46-server x86_64
>
That'll be too old for the hot-replacement functionality, but that
doesn't make much difference for RAID1 anyway.
> root@host1:~# cat /proc/mdstat
> Personalities : [linear] [raid1] [multipath] [raid0] [raid6] [raid5]
> [raid4] [raid10]
> md1 : active raid1 sda2[0] sdb2[1]
> 7812032 blocks [2/2] [UU]
>
> md2 : active raid1 sda3[0] sdb3[1]
> 431744960 blocks [2/2] [UU]
>
> md0 : active raid1 sda1[0] sdb1[1]
> 48827328 blocks [2/2] [UU]
>
> unused devices: <none>
>
The safest option would be:
- add in the new disks
- partition to at least the same size as your existing partitions (they
can be larger)
- add the new partitions into the arrays (they'll go in as spares)
- grow the arrays to 4 members (this avoids any loss of redundancy)
- wait for the resync to complete
- install grub/lilo/syslinux to the new disks
- fail and remove the old disk partitions from the arrays
- shrink the arrays back down to 2 members
- remove the old disks
Then, if you're keeping the same number of partitions but increasing the
size:
- grow the arrays to fill the partitions
- grow the filesystems to fill the arrays
or, if you're adding extra partitions:
- create new arrays on extra partitions
- format and mount
If you have hot-plug bays then you can do all this without any downtime
(you could also do one disk at a time and just grow the arrays to 3
members), otherwise you'll need to shut down to install and remove the
disks. If you only have two bays then you could fail one of the disks
then recover to a new one, but that's definitely a risky option.
That's the outline of the process anyway - if you need any details of
the actual commands then do ask.
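As a sketch in actual commands (for md0 only; sdc1/sdd1 are assumed names for the new partitions, and the run() wrapper just prints each command rather than executing it):

```shell
#!/bin/sh
# Dry-run sketch of the grow-to-4-then-shrink-to-2 swap for md0.
# run() only prints the command; drop the wrapper to execute for real.
run() { printf '%s\n' "$*"; }

# Add the new partitions; they join the array as spares.
run mdadm /dev/md0 --add /dev/sdc1 /dev/sdd1

# Grow to 4 members: the spares activate and a resync starts,
# so redundancy is never lost.
run mdadm --grow /dev/md0 --raid-devices=4

# ...wait for [UUUU] in /proc/mdstat and install the bootloader on the
# new disks before touching the old ones...

# Fail and remove the old members, then shrink back to a 2-disk mirror.
run mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
run mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
run mdadm --grow /dev/md0 --raid-devices=2
```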
HTH,
Robin
--
___
( ' } | Robin Hill <robin@robinhill.me.uk> |
/ / ) | Little Jim says .... |
// !! | "He fallen in de water !!" |
* Re: replacing drives
2013-04-26 14:27 replacing drives Roberto Nunnari
` (2 preceding siblings ...)
2013-04-26 15:53 ` Robin Hill
@ 2013-04-26 22:20 ` Roberto Nunnari
3 siblings, 0 replies; 38+ messages in thread
From: Roberto Nunnari @ 2013-04-26 22:20 UTC (permalink / raw)
To: linux-raid
Thank you all very much for your valuable advice.
I'll give it a try, maybe as soon as this weekend.
Best regards.
Robi
* Re: replacing drives
2013-04-26 15:53 ` Robin Hill
@ 2013-04-30 13:17 ` Roberto Nunnari
2013-04-30 13:20 ` Mikael Abrahamsson
2013-04-30 13:45 ` Robin Hill
2013-04-30 15:19 ` Roberto Nunnari
` (3 subsequent siblings)
4 siblings, 2 replies; 38+ messages in thread
From: Roberto Nunnari @ 2013-04-30 13:17 UTC (permalink / raw)
To: Roberto Nunnari, linux-raid
Robin Hill wrote:
> On Fri Apr 26, 2013 at 04:27:01PM +0200, Roberto Nunnari wrote:
>
>> Hi all.
>>
>> I'd like to replace two hd in raid1 with larger ones.
>>
>> I could just add the new drives in raid1 and mount it on /opt after a
>> dump/restore, but I'd prefer to have just two drives instead of four:
>> less noise and less power consumption.
>>
>> The question is: what would be the best way to go?
>> Tricks and tips? Drawbacks? Common errors?
>>
>> Any hint/advice welcome.
>> Thank you. :-)
>>
>>
>> present HD: two WD caviar green 500GB
>> new HD: two WD caviar green 2TB
>>
> I don't think these have SCTERC configuration options, so you'll need to
> make sure you increase the timeout in the storage stack to prevent read
> timeouts from causing drives to be prematurely kicked out of the array.
How do I increase that timeout?
Also, the old HDs have been up and running for over 4 years now, and I've
never had any trouble.. just from time to time a few warnings on /dev/sdb
from smartctl:
Device: /dev/sdb, ATA error count increased from 27 to 28
But I don't believe that's something to worry about..
>
>> root@host1:~# uname -rms
>> Linux 2.6.32-46-server x86_64
>>
> That'll be too old for the hot-replacement functionality, but that
> doesn't make much difference for RAID1 anyway.
ok.
>
>> root@host1:~# cat /proc/mdstat
>> Personalities : [linear] [raid1] [multipath] [raid0] [raid6] [raid5]
>> [raid4] [raid10]
>> md1 : active raid1 sda2[0] sdb2[1]
>> 7812032 blocks [2/2] [UU]
>>
>> md2 : active raid1 sda3[0] sdb3[1]
>> 431744960 blocks [2/2] [UU]
>>
>> md0 : active raid1 sda1[0] sdb1[1]
>> 48827328 blocks [2/2] [UU]
>>
>> unused devices: <none>
>>
> The safest option would be:
> - add in the new disks
> - partition to at least the same size as your existing partitions (they
> can be larger)
> - add the new partitions into the arrays (they'll go in as spares)
got till here..
> - grow the arrays to 4 members (this avoids any loss of redundancy)
now the next step.. that's a raid1 array.. is it possible to grow the
arrays to 4 members?
Thank you!
Robi
* Re: replacing drives
2013-04-30 13:17 ` Roberto Nunnari
@ 2013-04-30 13:20 ` Mikael Abrahamsson
2013-04-30 14:11 ` Roberto Nunnari
` (3 more replies)
2013-04-30 13:45 ` Robin Hill
1 sibling, 4 replies; 38+ messages in thread
From: Mikael Abrahamsson @ 2013-04-30 13:20 UTC (permalink / raw)
To: linux-raid
On Tue, 30 Apr 2013, Roberto Nunnari wrote:
> How do I increase that timeout?
for x in /sys/block/sd[a-z] ; do echo 180 > $x/device/timeout ; done
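Note that this sysfs setting does not survive a reboot; one way to make it persistent (a sketch; the rule file path and name are assumptions) is a udev rule:

```
# /etc/udev/rules.d/60-disk-timeout.rules (assumed path/name)
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", ATTR{device/timeout}="180"
```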
--
Mikael Abrahamsson email: swmike@swm.pp.se
* Re: replacing drives
2013-04-30 13:17 ` Roberto Nunnari
2013-04-30 13:20 ` Mikael Abrahamsson
@ 2013-04-30 13:45 ` Robin Hill
2013-04-30 14:05 ` Roberto Nunnari
1 sibling, 1 reply; 38+ messages in thread
From: Robin Hill @ 2013-04-30 13:45 UTC (permalink / raw)
To: Roberto Nunnari; +Cc: linux-raid
On Tue Apr 30, 2013 at 03:17:30PM +0200, Roberto Nunnari wrote:
> Robin Hill wrote:
> > On Fri Apr 26, 2013 at 04:27:01PM +0200, Roberto Nunnari wrote:
> >
> >> Hi all.
> >>
> >> I'd like to replace two hd in raid1 with larger ones.
> >>
> >> I could just add the new drives in raid1 and mount it on /opt after a
> >> dump/restore, but I'd prefer to have just two drives instead of four:
> >> less noise and less power consumption.
> >>
> >> The question is: what would be the best way to go?
> >> Tricks and tips? Drawbacks? Common errors?
> >>
> >> Any hint/advice welcome.
> >> Thank you. :-)
> >>
> >>
> >> present HD: two WD caviar green 500GB
> >> new HD: two WD caviar green 2TB
> >>
> > I don't think these have SCTERC configuration options, so you'll need to
> > make sure you increase the timeout in the storage stack to prevent read
> > timeouts from causing drives to be prematurely kicked out of the array.
>
> How do I increase that timeout?
>
Mikael's just answered this one.
> Also, the old HDs have been up and running for over 4 years now, and I've
> never had any trouble.. just from time to time a few warnings on /dev/sdb from smartctl:
>
> Device: /dev/sdb, ATA error count increased from 27 to 28
>
> But I don't believe that's something to worry about..
>
Probably not. The only counter that's really significant is the number
of reallocated sectors. As for not having had any timeout issues before,
it does depend on the setup. It may be that the disk manufacturers have
increased timeouts on newer disks (the higher data density could well
increase the odds of getting failures on the first pass), or it may be
down to vibrations in the chassis causing problems, etc. It's safer to
make sure that the storage subsystem has longer timeouts than the drives
anyway.
> >
> >> root@host1:~# uname -rms
> >> Linux 2.6.32-46-server x86_64
> >>
> > That'll be too old for the hot-replacement functionality, but that
> > doesn't make much difference for RAID1 anyway.
>
> ok.
>
>
> >
> >> root@host1:~# cat /proc/mdstat
> >> Personalities : [linear] [raid1] [multipath] [raid0] [raid6] [raid5]
> >> [raid4] [raid10]
> >> md1 : active raid1 sda2[0] sdb2[1]
> >> 7812032 blocks [2/2] [UU]
> >>
> >> md2 : active raid1 sda3[0] sdb3[1]
> >> 431744960 blocks [2/2] [UU]
> >>
> >> md0 : active raid1 sda1[0] sdb1[1]
> >> 48827328 blocks [2/2] [UU]
> >>
> >> unused devices: <none>
> >>
> > The safest option would be:
> > - add in the new disks
> > - partition to at least the same size as your existing partitions (they
> > can be larger)
> > - add the new partitions into the arrays (they'll go in as spares)
>
> got till here..
>
>
> > - grow the arrays to 4 members (this avoids any loss of redundancy)
>
> now the next step.. that's a raid1 array.. is it possible to grow the
> arrays to 4 members?
>
Yes, there's no problem with running RAID1 arrays with more than two
mirrors (with md anyway) - they're all identical so it doesn't really
make any difference how many you have.
Cheers,
Robin
--
___
( ' } | Robin Hill <robin@robinhill.me.uk> |
/ / ) | Little Jim says .... |
// !! | "He fallen in de water !!" |
* Re: replacing drives
2013-04-30 13:45 ` Robin Hill
@ 2013-04-30 14:05 ` Roberto Nunnari
2013-04-30 14:28 ` Roberto Nunnari
0 siblings, 1 reply; 38+ messages in thread
From: Roberto Nunnari @ 2013-04-30 14:05 UTC (permalink / raw)
To: Roberto Nunnari, linux-raid
Robin Hill wrote:
> On Tue Apr 30, 2013 at 03:17:30PM +0200, Roberto Nunnari wrote:
>>> - grow the arrays to 4 members (this avoids any loss of redundancy)
>> now the next step.. that's a raid1 array.. is it possible to grow the
>> arrays to 4 members?
>>
> Yes, there's no problem with running RAID1 arrays with more than two
> mirrors (with md anyway) - they're all identical so it doesn't really
> make any difference how many you have.
>
> Cheers,
> Robin
ok.. it's rebuilding.. I started with md0.. I'll wait until it finishes and
then do md1 (8GB) and after that, md2 (almost 2TB).. for now it seems to
be going well, doesn't it?
# mdadm -D /dev/md0
/dev/md0:
Version : 00.90
Creation Time : Fri Apr 22 08:20:49 2011
Raid Level : raid1
Array Size : 48827328 (46.57 GiB 50.00 GB)
Used Dev Size : 48827328 (46.57 GiB 50.00 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Tue Apr 30 16:01:40 2013
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 4
Failed Devices : 0
Spare Devices : 2
Rebuild Status : 15% complete
UUID : 1158db16:ee1fcafc:b6fab772:d376c644
Events : 0.964
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 33 1 active sync /dev/sdc1
4 8 49 2 spare rebuilding /dev/sdd1
5 8 17 3 spare rebuilding /dev/sdb1
Robi
* Re: replacing drives
2013-04-30 13:20 ` Mikael Abrahamsson
@ 2013-04-30 14:11 ` Roberto Nunnari
2013-04-30 14:22 ` Robin Hill
2013-04-30 14:40 ` Mikael Abrahamsson
2013-04-30 14:27 ` Roberto Nunnari
` (2 subsequent siblings)
3 siblings, 2 replies; 38+ messages in thread
From: Roberto Nunnari @ 2013-04-30 14:11 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: linux-raid
Mikael Abrahamsson wrote:
> On Tue, 30 Apr 2013, Roberto Nunnari wrote:
>
>> How do I increase that timeout?
>
> for x in /sys/block/sd[a-z] ; do echo 180 > $x/device/timeout ; done
>
Do you believe that this is not enough? What's the unit? Milliseconds?
# for x in /sys/block/sd[a-z] ; do cat $x/device/timeout ; done
30
30
30
30
Thank you.
Robi
* Re: replacing drives
2013-04-30 14:11 ` Roberto Nunnari
@ 2013-04-30 14:22 ` Robin Hill
2013-04-30 14:40 ` Mikael Abrahamsson
1 sibling, 0 replies; 38+ messages in thread
From: Robin Hill @ 2013-04-30 14:22 UTC (permalink / raw)
To: Roberto Nunnari; +Cc: Mikael Abrahamsson, linux-raid
On Tue Apr 30, 2013 at 04:11:55PM +0200, Roberto Nunnari wrote:
> Mikael Abrahamsson wrote:
> > On Tue, 30 Apr 2013, Roberto Nunnari wrote:
> >
> >> How do I increase that timeout?
> >
> > for x in /sys/block/sd[a-z] ; do echo 180 > $x/device/timeout ; done
> >
>
> You believe that this is not enough? What's the unit? milliseconds?
> # for x in /sys/block/sd[a-z] ; do cat $x/device/timeout ; done
> 30
> 30
> 30
> 30
>
The units are seconds. According to WD[1], their drives can take up to 2
minutes to timeout, but other manufacturers may differ. 30 seconds is
definitely too short.
[1] http://wdc.custhelp.com/app/answers/detail/a_id/1397/p/227,283/session/L3RpbWUvMTMyMTQzOTc4NS9zaWQvdVhvYmpmSms%3D
Cheers,
Robin
--
___
( ' } | Robin Hill <robin@robinhill.me.uk> |
/ / ) | Little Jim says .... |
// !! | "He fallen in de water !!" |
* Re: replacing drives
2013-04-30 13:20 ` Mikael Abrahamsson
2013-04-30 14:11 ` Roberto Nunnari
@ 2013-04-30 14:27 ` Roberto Nunnari
2013-04-30 14:39 ` Roberto Nunnari
2013-05-02 17:43 ` Roy Sigurd Karlsbakk
3 siblings, 0 replies; 38+ messages in thread
From: Roberto Nunnari @ 2013-04-30 14:27 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: linux-raid
Mikael Abrahamsson wrote:
> On Tue, 30 Apr 2013, Roberto Nunnari wrote:
>
>> How do I increase that timeout?
>
> for x in /sys/block/sd[a-z] ; do echo 180 > $x/device/timeout ; done
>
done!
Robi
* Re: replacing drives
2013-04-30 14:05 ` Roberto Nunnari
@ 2013-04-30 14:28 ` Roberto Nunnari
0 siblings, 0 replies; 38+ messages in thread
From: Roberto Nunnari @ 2013-04-30 14:28 UTC (permalink / raw)
To: Roberto Nunnari, linux-raid
Roberto Nunnari wrote:
> Robin Hill wrote:
>> On Tue Apr 30, 2013 at 03:17:30PM +0200, Roberto Nunnari wrote:
>>>> - grow the arrays to 4 members (this avoids any loss of redundancy)
>>> now the next step.. that's a raid1 array.. is it possible to grow the
>>> arrays to 4 members?
>>>
>> Yes, there's no problem with running RAID1 arrays with more than two
>> mirrors (with md anyway) - they're all identical so it doesn't really
>> make any difference how many you have.
>>
>> Cheers,
>> Robin
>
> ok.. it's rebuilding.. I started with md0.. I'll wait until it finishes and
> then do md1 (8GB) and after that, md2 (almost 2TB).. for now it seems to
> be going well, doesn't it?
>
> # mdadm -D /dev/md0
> /dev/md0:
> Version : 00.90
> Creation Time : Fri Apr 22 08:20:49 2011
> Raid Level : raid1
> Array Size : 48827328 (46.57 GiB 50.00 GB)
> Used Dev Size : 48827328 (46.57 GiB 50.00 GB)
> Raid Devices : 4
> Total Devices : 4
> Preferred Minor : 0
> Persistence : Superblock is persistent
>
> Update Time : Tue Apr 30 16:01:40 2013
> State : clean, degraded, recovering
> Active Devices : 2
> Working Devices : 4
> Failed Devices : 0
> Spare Devices : 2
>
> Rebuild Status : 15% complete
>
> UUID : 1158db16:ee1fcafc:b6fab772:d376c644
> Events : 0.964
>
> Number Major Minor RaidDevice State
> 0 8 1 0 active sync /dev/sda1
> 1 8 33 1 active sync /dev/sdc1
> 4 8 49 2 spare rebuilding /dev/sdd1
> 5 8 17 3 spare rebuilding /dev/sdb1
>
>
> Robi
rebuilt and clean! hehehe
Robi
* Re: replacing drives
2013-04-30 13:20 ` Mikael Abrahamsson
2013-04-30 14:11 ` Roberto Nunnari
2013-04-30 14:27 ` Roberto Nunnari
@ 2013-04-30 14:39 ` Roberto Nunnari
2013-04-30 14:42 ` Mikael Abrahamsson
2013-04-30 15:11 ` Phil Turmel
2013-05-02 17:43 ` Roy Sigurd Karlsbakk
3 siblings, 2 replies; 38+ messages in thread
From: Roberto Nunnari @ 2013-04-30 14:39 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: linux-raid
Mikael Abrahamsson wrote:
> On Tue, 30 Apr 2013, Roberto Nunnari wrote:
>
>> How do I increase that timeout?
>
> for x in /sys/block/sd[a-z] ; do echo 180 > $x/device/timeout ; done
what the... these are SECONDS! What on earth could delay a SATA-attached
disk read for 30 (now 180) seconds if not a disk failure? An OS hang?
Sorry about that question, but I don't understand.. I have never seen
such a problem.
Robi
* Re: replacing drives
2013-04-30 14:11 ` Roberto Nunnari
2013-04-30 14:22 ` Robin Hill
@ 2013-04-30 14:40 ` Mikael Abrahamsson
1 sibling, 0 replies; 38+ messages in thread
From: Mikael Abrahamsson @ 2013-04-30 14:40 UTC (permalink / raw)
To: Roberto Nunnari; +Cc: linux-raid
On Tue, 30 Apr 2013, Roberto Nunnari wrote:
> You believe that this is not enough? What's the unit? milliseconds?
> # for x in /sys/block/sd[a-z] ; do cat $x/device/timeout ; done
> 30
> 30
> 30
> 30
The unit is in seconds, and yes, you want the drive to report an error
before the kernel drops it, so 180 seconds is sufficient for most drives
to give up and return a read error.
--
Mikael Abrahamsson email: swmike@swm.pp.se
* Re: replacing drives
2013-04-30 14:39 ` Roberto Nunnari
@ 2013-04-30 14:42 ` Mikael Abrahamsson
2013-04-30 15:10 ` Roberto Nunnari
2013-04-30 15:11 ` Phil Turmel
1 sibling, 1 reply; 38+ messages in thread
From: Mikael Abrahamsson @ 2013-04-30 14:42 UTC (permalink / raw)
To: Roberto Nunnari; +Cc: linux-raid
On Tue, 30 Apr 2013, Roberto Nunnari wrote:
> what the... these are SECONDS! What on earth could delay a SATA-attached
> disk read for 30 (now 180) seconds if not a disk failure? An OS hang?
> Sorry about that question, but I don't understand.. I have never seen
> such a problem.
A consumer drive will spend considerable time trying to read a sector
before giving up. I believe 120 seconds is not uncommon.
RAID-rated drives will typically give up after 7 seconds.
http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery
--
Mikael Abrahamsson email: swmike@swm.pp.se
* Re: replacing drives
2013-04-30 14:42 ` Mikael Abrahamsson
@ 2013-04-30 15:10 ` Roberto Nunnari
0 siblings, 0 replies; 38+ messages in thread
From: Roberto Nunnari @ 2013-04-30 15:10 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: linux-raid
Mikael Abrahamsson wrote:
> On Tue, 30 Apr 2013, Roberto Nunnari wrote:
>
>> what the... these are SECONDS! What on earth could delay a SATA-attached
>> disk read for 30 (now 180) seconds if not a disk failure? An OS hang?
>> Sorry about that question, but I don't understand.. I have never
>> seen such a problem.
>
> A consumer drive will spend considerable time trying to read a sector
> before giving up. I believe 120 seconds is not uncommon.
>
> The raid drives will typically give up after 7 seconds.
>
> http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery
ok.. thank you for the explanation. Now I got it. :-)
Robi
* Re: replacing drives
2013-04-30 14:39 ` Roberto Nunnari
2013-04-30 14:42 ` Mikael Abrahamsson
@ 2013-04-30 15:11 ` Phil Turmel
2013-04-30 15:39 ` Roberto Spadim
1 sibling, 1 reply; 38+ messages in thread
From: Phil Turmel @ 2013-04-30 15:11 UTC (permalink / raw)
To: Roberto Nunnari; +Cc: Mikael Abrahamsson, linux-raid
On 04/30/2013 10:39 AM, Roberto Nunnari wrote:
> Mikael Abrahamsson wrote:
>> On Tue, 30 Apr 2013, Roberto Nunnari wrote:
>>
>>> How do I increase that timeout?
>>
>> for x in /sys/block/sd[a-z] ; do echo 180 > $x/device/timeout ; done
>
> what the... these are SECONDS! What on earth could delay a SATA-attached
> disk read for 30 (now 180) seconds if not a disk failure? An OS hang?
> Sorry about that question, but I don't understand.. I have never seen
> such a problem.
The worst horror stories on this mailing list are directly attributable
to this problem. Usually after months or even years of apparently
trouble-free operation.
Consumer-grade drives intended for desktop usage are not
"out-of-the-box" compatible with RAID. Some of them are configurable
after each power-up to behave like an enterprise drive. The rest must
be accommodated with extended driver timeouts.
Search the list archives for "scterc", "timeout", and "URE" (or
combinations thereof).
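For drives that do support configurable error recovery, checking and setting the ERC timers with smartctl might look like this sketch (the run() wrapper only prints the commands, so nothing executes until you remove it):

```shell
#!/bin/sh
# Dry-run sketch: query/set SCT Error Recovery Control via smartmontools.
# run() only prints the command; drop the wrapper to execute for real.
run() { printf '%s\n' "$*"; }

# Show the current read/write ERC timers (reports unsupported on
# drives without the feature).
run smartctl -l scterc /dev/sda

# Set both timers to 7.0 seconds (the unit is 100 ms). The setting is
# lost at power-down on most drives, so reapply it at every boot.
run smartctl -l scterc,70,70 /dev/sda
```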
Phil
* Re: replacing drives
2013-04-26 15:53 ` Robin Hill
2013-04-30 13:17 ` Roberto Nunnari
@ 2013-04-30 15:19 ` Roberto Nunnari
2013-05-02 13:56 ` Roberto Nunnari
` (2 subsequent siblings)
4 siblings, 0 replies; 38+ messages in thread
From: Roberto Nunnari @ 2013-04-30 15:19 UTC (permalink / raw)
To: Roberto Nunnari, linux-raid
Robin Hill wrote:
> Then, if you're keeping the same number of partitions but increasing the
> size:
> - grow the arrays to fill the partitions
> - grow the filesystems to fill the arrays
This is the scariest part.. I'd better take a full backup before
doing that.. but it's 500GB.. I don't have LVM.. and can't afford a long
service stop..
Robi
* Re: replacing drives
2013-04-30 15:11 ` Phil Turmel
@ 2013-04-30 15:39 ` Roberto Spadim
2013-05-01 1:55 ` Brad Campbell
0 siblings, 1 reply; 38+ messages in thread
From: Roberto Spadim @ 2013-04-30 15:39 UTC (permalink / raw)
To: Phil Turmel; +Cc: Roberto Nunnari, Mikael Abrahamsson, linux-raid
check disk temperature via smartctl
some SATA disks stop running above 55C and wait as long as needed (a day
if necessary) for the temperature to drop to around 53C before resuming operation
2013/4/30 Phil Turmel <philip@turmel.org>:
> On 04/30/2013 10:39 AM, Roberto Nunnari wrote:
>> Mikael Abrahamsson wrote:
>>> On Tue, 30 Apr 2013, Roberto Nunnari wrote:
>>>
>>>> How do I increase that timeout?
>>>
>>> for x in /sys/block/sd[a-z] ; do echo 180 > $x/device/timeout ; done
>>
>> what the... these are SECONDS! What on earth could delay a SATA-attached
>> disk read for 30 (now 180) seconds if not a disk failure? An OS hang?
>> Sorry about that question, but I don't understand.. I have never seen
>> such a problem.
>
> The worst horror stories on this mailing list are directly attributable
> to this problem. Usually after months or even years of apparently
> trouble-free operation.
>
> Consumer-grade drives intended for desktop usage are not
> "out-of-the-box" compatible with RAID. Some of them are configurable
> after each power-up to behave like an enterprise drive. The rest must
> be accommodated with extended driver timeouts.
>
> Search the list archives for "scterc", "timeout", and "URE" (or
> combinations thereof).
>
> Phil
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
--
Roberto Spadim
* Re: replacing drives
2013-04-30 15:39 ` Roberto Spadim
@ 2013-05-01 1:55 ` Brad Campbell
2013-05-01 15:06 ` Roberto Nunnari
2013-05-01 18:14 ` Roberto Spadim
0 siblings, 2 replies; 38+ messages in thread
From: Brad Campbell @ 2013-05-01 1:55 UTC (permalink / raw)
To: Roberto Spadim
Cc: Phil Turmel, Roberto Nunnari, Mikael Abrahamsson, linux-raid
On 30/04/13 23:39, Roberto Spadim wrote:
> check disk temperature via smartctl
> some sata disks stop running after 55C and wait any time (a day if
> needed) to get temperature down to +-53C and resume operation
>
G'day Roberto,
Have you got any data or model numbers to back this one up?
I've seen plenty of disks set the "warranty void" flag if they exceed
55C, but I've never seen a disk stop or pause (and I've run some _very_
hot).
I'd be really interested in finding out a bit more about this.
Regards,
Brad
* Re: replacing drives
2013-05-01 1:55 ` Brad Campbell
@ 2013-05-01 15:06 ` Roberto Nunnari
2013-05-01 18:14 ` Roberto Spadim
1 sibling, 0 replies; 38+ messages in thread
From: Roberto Nunnari @ 2013-05-01 15:06 UTC (permalink / raw)
To: Brad Campbell; +Cc: Roberto Spadim, Phil Turmel, Mikael Abrahamsson, linux-raid
On 05/01/2013 03:55 AM, Brad Campbell wrote:
> On 30/04/13 23:39, Roberto Spadim wrote:
>> check disk temperature via smartctl
>> some sata disks stop running after 55C and wait any time (a day if
>> needed) to get temperature down to +-53C and resume operation
>>
>
> G'day Roberto,
>
> Have you got any data or model numbers to back this one up?
> I've seen plenty of disks set the "warranty void" flag if they exceed
> 55C, but I've never seen a disk stop or pause (and I've run some _very_
> hot).
>
> I'd be really interested in finding out a bit more about this.
>
> Regards,
> Brad
Hi Brad.
No.. Luckily that has never been a problem for me.. at present my old drives
are at 39 and 44 degrees.
Regards.
Robi
* Re: replacing drives
2013-05-01 1:55 ` Brad Campbell
2013-05-01 15:06 ` Roberto Nunnari
@ 2013-05-01 18:14 ` Roberto Spadim
2013-05-02 17:49 ` Roy Sigurd Karlsbakk
1 sibling, 1 reply; 38+ messages in thread
From: Roberto Spadim @ 2013-05-01 18:14 UTC (permalink / raw)
To: Brad Campbell
Cc: Phil Turmel, Roberto Nunnari, Mikael Abrahamsson, linux-raid
2013/4/30 Brad Campbell <lists2009@fnarfbargle.com>:
> On 30/04/13 23:39, Roberto Spadim wrote:
>>
>> check disk temperature via smartctl
>> some sata disks stop running after 55C and wait any time (a day if
>> needed) to get temperature down to +-53C and resume operation
>>
>
> G'day Roberto,
>
> Have you got any data or model numbers to back this one up?
> I've seen plenty of disks set the "warranty void" flag if they exceed 55C,
> but I've never seen a disk stop or pause (and I've run some _very_ hot).
>
> I'd be really interested in finding out a bit more about this.
>
> Regards,
> Brad
I replaced 4 old disks with this 'new' one:
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-2.6.39-ARCH] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
=== START OF INFORMATION SECTION ===
Model Family: SAMSUNG SpinPoint F3
Device Model: SAMSUNG HD502HJ
Serial Number: S2BWJ60B237288
LU WWN Device Id: 5 0024e9 4009a5bf6
Firmware Version: 1AJ10001
User Capacity: 500.107.862.016 bytes [500 GB]
Sector Size: 512 bytes logical/physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: 8
ATA Standard is: ATA-8-ACS revision 6
Local Time is: Wed May 1 15:07:29 2013 BRT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 4740) seconds.
Offline data collection
capabilities: (0x5b) SMART execute Offline immediate.
Auto Offline data collection
on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 79) minutes.
SCT capabilities: (0x003f) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE
UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 100 100 051 Pre-fail
Always - 0
2 Throughput_Performance 0x0026 048 048 000 Old_age
Always - 5091
3 Spin_Up_Time 0x0023 082 081 025 Pre-fail
Always - 5638
4 Start_Stop_Count 0x0032 100 100 000 Old_age
Always - 19
5 Reallocated_Sector_Ct 0x0033 252 252 010 Pre-fail
Always - 0
7 Seek_Error_Rate 0x002e 252 252 051 Old_age
Always - 0
8 Seek_Time_Performance 0x0024 252 252 015 Old_age
Offline - 0
9 Power_On_Hours 0x0032 100 100 000 Old_age
Always - 5921
10 Spin_Retry_Count 0x0032 252 252 051 Old_age
Always - 0
11 Calibration_Retry_Count 0x0032 252 252 000 Old_age
Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age
Always - 23
191 G-Sense_Error_Rate 0x0022 252 252 000 Old_age
Always - 0
192 Power-Off_Retract_Count 0x0022 252 252 000 Old_age
Always - 0
194 Temperature_Celsius 0x0002 064 056 000 Old_age
Always - 27 (Min/Max 15/45)
195 Hardware_ECC_Recovered 0x003a 100 100 000 Old_age
Always - 0
196 Reallocated_Event_Count 0x0032 252 252 000 Old_age
Always - 0
197 Current_Pending_Sector 0x0032 252 252 000 Old_age
Always - 0
198 Offline_Uncorrectable 0x0030 252 252 000 Old_age
Offline - 0
199 UDMA_CRC_Error_Count 0x0036 200 200 000 Old_age
Always - 0
200 Multi_Zone_Error_Rate 0x002a 100 100 000 Old_age
Always - 2
223 Load_Retry_Count 0x0032 252 252 000 Old_age
Always - 0
225 Load_Cycle_Count 0x0032 100 100 000 Old_age
Always - 23
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining
LifeTime(hours) LBA_of_first_error
# 1 Extended offline Completed without error 00% 4074 -
# 2 Extended offline Completed without error 00% 383 -
Note: selective self-test log revision number (0) not 1 implies that
no selective self-test has ever been run
SMART Selective self-test log data structure revision number 0
Note: revision number not 1 implies that no selective self-test has
ever been run
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Completed [00% left] (0-65535)
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
this disk model (SAMSUNG HD502HJ) stops and waits for the temperature
to come down again. These drives have about 6 months of heavy reads
behind them and are not enterprise grade; I have about 20 units, they
are cheap, and I have never lost information with this 6-month
replacement interval. Still, I wouldn't feel safe without RAID1 and
backups, and of course it's an enterprise-level system running on
non-enterprise-level hardware (wrong, I know, but it still works).
The replacements are staggered: the first disk at 6 months, the second
at 7, the third at 8 and the fourth at 9; after that each disk is
replaced normally every 6 months (the oldest was 9 months old at its
first replacement). I've had no problems yet. It's a 4-disk RAID1
setup running a MariaDB/MySQL database with high reads and low writes;
RAID0/RAID10 don't work for this workload.
--
Roberto Spadim
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: replacing drives
2013-04-26 15:53 ` Robin Hill
2013-04-30 13:17 ` Roberto Nunnari
2013-04-30 15:19 ` Roberto Nunnari
@ 2013-05-02 13:56 ` Roberto Nunnari
2013-05-02 14:54 ` Robin Hill
2013-05-03 16:28 ` Roberto Nunnari
2013-05-10 21:35 ` Roberto Nunnari
4 siblings, 1 reply; 38+ messages in thread
From: Roberto Nunnari @ 2013-05-02 13:56 UTC (permalink / raw)
To: Roberto Nunnari, linux-raid
Robin Hill wrote:
> The safest option would be:
> - add in the new disks
> - partition to at least the same size as your existing partitions (they
> can be larger)
> - add the new partitions into the arrays (they'll go in as spares)
> - grow the arrays to 4 members (this avoids any loss of redundancy)
> - wait for the resync to complete
> - install grub/lilo/syslinux to the new disks
Ok.. got here.
> - fail and remove the old disk partitions from the arrays
Now.. I may need some guidance for the next steps.. please correct me if
I'm wrong..
to fail the old disk partitions from the arrays I should:
mdadm -f /dev/md0 /dev/sda1
mdadm -f /dev/md1 /dev/sda2
mdadm -f /dev/md2 /dev/sda3
mdadm -f /dev/md0 /dev/sdc1
mdadm -f /dev/md1 /dev/sdc2
mdadm -f /dev/md2 /dev/sdc3
and to remove the old disk partitions from the arrays I should:
mdadm -r /dev/md0 /dev/sda1
mdadm -r /dev/md1 /dev/sda2
mdadm -r /dev/md2 /dev/sda3
mdadm -r /dev/md0 /dev/sdc1
mdadm -r /dev/md1 /dev/sdc2
mdadm -r /dev/md2 /dev/sdc3
correct?
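For reference, the six fail/remove pairs above can also be driven by one loop. A dry-run sketch (each echo only prints the command, so nothing is touched; drop the echo once the list looks right; mdadm accepts --fail and --remove in a single invocation):

```shell
# Print the fail+remove command for each old partition.
# Mapping as above: md0 <-> partition 1, md1 <-> 2, md2 <-> 3,
# old drives assumed to be sda and sdc.
for md in 0 1 2; do
  part=$((md + 1))
  for disk in sda sdc; do
    echo mdadm /dev/md$md --fail /dev/$disk$part --remove /dev/$disk$part
  done
done
```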
> - shrink the arrays back down to 2 members
to shrink the arrays back down to two members:
mdadm --grow --raid-devices=2 /dev/md0
mdadm --grow --raid-devices=2 /dev/md1
mdadm --grow --raid-devices=2 /dev/md2
correct?
Thank you very much for your precious help!
Robi
* Re: replacing drives
2013-05-02 13:56 ` Roberto Nunnari
@ 2013-05-02 14:54 ` Robin Hill
2013-05-02 15:00 ` Roberto Nunnari
0 siblings, 1 reply; 38+ messages in thread
From: Robin Hill @ 2013-05-02 14:54 UTC (permalink / raw)
To: Roberto Nunnari; +Cc: linux-raid
On Thu May 02, 2013 at 03:56:54PM +0200, Roberto Nunnari wrote:
> Robin Hill wrote:
> > The safest option would be:
> > - add in the new disks
> > - partition to at least the same size as your existing partitions (they
> > can be larger)
> > - add the new partitions into the arrays (they'll go in as spares)
> > - grow the arrays to 4 members (this avoids any loss of redundancy)
> > - wait for the resync to complete
> > - install grub/lilo/syslinux to the new disks
>
> Ok.. got here.
>
>
> > - fail and remove the old disk partitions from the arrays
>
> Now.. I may need some guidance for the next steps.. please correct me if
> I'm wrong..
>
> to fail the old disk partition from the arrays I should:
> mdadm -f /dev/md0 /dev/sda1
> mdadm -f /dev/md1 /dev/sda2
> mdadm -f /dev/md2 /dev/sda3
> mdadm -f /dev/md0 /dev/sdc1
> mdadm -f /dev/md1 /dev/sdc2
> mdadm -f /dev/md2 /dev/sdc3
>
> and to remove the old disk partition from the arrays I should:
> mdadm -r /dev/md0 /dev/sda1
> mdadm -r /dev/md1 /dev/sda2
> mdadm -r /dev/md2 /dev/sda3
> mdadm -r /dev/md0 /dev/sdc1
> mdadm -r /dev/md1 /dev/sdc2
> mdadm -r /dev/md2 /dev/sdc3
>
> correct?
>
Assuming sda & sdc are your old drives, yes (they were sda & sdb in your
original mail, but possibly they've been reordered if you've rebooted
with the new drives in).
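One way to double-check that mapping before failing anything is to compare drive serials: the /dev/disk/by-id symlink names embed model and serial number. A sketch run against sample symlink-style lines (the serials here are invented; on the real host feed it `ls -l /dev/disk/by-id/ata-*` instead of the printf):

```shell
# Map kernel names (sdX) to by-id names so the *old* 500GB disks are
# the ones failed, not the new 2TB ones. Sample data stands in for the
# symlinks; on a real host the same awk works on ls -l output with
#   ls -l /dev/disk/by-id/ata-* | awk '{ sub(/^.*\//, "", $NF); print $NF, $(NF-2) }'
printf '%s\n' \
  'ata-WDC_WD5000ABPS-0_WD-SAMPLE1 -> ../../sda' \
  'ata-WDC_WD20EARS-0_WD-SAMPLE2 -> ../../sdb' |
awk '{ sub(/^.*\//, "", $NF); print $NF, $1 }'
```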
>
> > - shrink the arrays back down to 2 members
>
> to shrink the arrays back down to two members:
> mdadm --grow --raid-devices=2 /dev/md0
> mdadm --grow --raid-devices=2 /dev/md1
> mdadm --grow --raid-devices=2 /dev/md2
>
> correct?
>
Yes, that's all correct.
Cheers,
Robin
--
___
( ' } | Robin Hill <robin@robinhill.me.uk> |
/ / ) | Little Jim says .... |
// !! | "He fallen in de water !!" |
* Re: replacing drives
2013-05-02 14:54 ` Robin Hill
@ 2013-05-02 15:00 ` Roberto Nunnari
0 siblings, 0 replies; 38+ messages in thread
From: Roberto Nunnari @ 2013-05-02 15:00 UTC (permalink / raw)
To: Roberto Nunnari, linux-raid
Robin Hill wrote:
> On Thu May 02, 2013 at 03:56:54PM +0200, Roberto Nunnari wrote:
>
>> Robin Hill wrote:
>>> The safest option would be:
>>> - add in the new disks
>>> - partition to at least the same size as your existing partitions (they
>>> can be larger)
>>> - add the new partitions into the arrays (they'll go in as spares)
>>> - grow the arrays to 4 members (this avoids any loss of redundancy)
>>> - wait for the resync to complete
>>> - install grub/lilo/syslinux to the new disks
>> Ok.. got here.
>>
>>
>>> - fail and remove the old disk partitions from the arrays
>> Now.. I may need some guidance for the next steps.. please correct me if
>> I'm wrong..
>>
>> to fail the old disk partition from the arrays I should:
>> mdadm -f /dev/md0 /dev/sda1
>> mdadm -f /dev/md1 /dev/sda2
>> mdadm -f /dev/md2 /dev/sda3
>> mdadm -f /dev/md0 /dev/sdc1
>> mdadm -f /dev/md1 /dev/sdc2
>> mdadm -f /dev/md2 /dev/sdc3
>>
>> and to remove the old disk partition from the arrays I should:
>> mdadm -r /dev/md0 /dev/sda1
>> mdadm -r /dev/md1 /dev/sda2
>> mdadm -r /dev/md2 /dev/sda3
>> mdadm -r /dev/md0 /dev/sdc1
>> mdadm -r /dev/md1 /dev/sdc2
>> mdadm -r /dev/md2 /dev/sdc3
>>
>> correct?
>>
> Assuming sda & sdc are your old drives, yes (they were sda & sdb in your
> original mail, but possibly they've been reordered if you've rebooted
> with the new drives in).
yes.. they've been reordered.
Any checks I should run here, or can I rely on any error messages
printed on screen when the commands are run?
Thanks.
Robi
>
>>> - shrink the arrays back down to 2 members
>> to shrink the arrays back down to two members:
>> mdadm --grow --raid-devices=2 /dev/md0
>> mdadm --grow --raid-devices=2 /dev/md1
>> mdadm --grow --raid-devices=2 /dev/md2
>>
>> correct?
>>
> Yes, that's all correct.
>
> Cheers,
> Robin
* Re: replacing drives
2013-04-30 13:20 ` Mikael Abrahamsson
` (2 preceding siblings ...)
2013-04-30 14:39 ` Roberto Nunnari
@ 2013-05-02 17:43 ` Roy Sigurd Karlsbakk
3 siblings, 0 replies; 38+ messages in thread
From: Roy Sigurd Karlsbakk @ 2013-05-02 17:43 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: linux-raid
----- Original message -----
> On Tue, 30 Apr 2013, Roberto Nunnari wrote:
>
> > How do I increase that timeout?
>
> for x in /sys/block/sd[a-z] ; do echo 180 > $x/device/timeout ; done
I have this in my /etc/rc.local to attempt to enable SCT ERC, falling
back to increasing the timeout when that fails (on my WD Black drives):
for i in b c d e f g h
do
    dev=sd$i
    smartctl -l scterc,70,70 /dev/$dev || echo 180 > /sys/block/$dev/device/timeout
done
--
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
roy@karlsbakk.net
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
--
[In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of xenotypic etymology. In most cases, adequate and relevant synonyms exist in Norwegian.]
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: replacing drives
2013-05-01 18:14 ` Roberto Spadim
@ 2013-05-02 17:49 ` Roy Sigurd Karlsbakk
0 siblings, 0 replies; 38+ messages in thread
From: Roy Sigurd Karlsbakk @ 2013-05-02 17:49 UTC (permalink / raw)
To: Roberto Spadim
Cc: Phil Turmel, Roberto Nunnari, Mikael Abrahamsson, linux-raid,
Brad Campbell
> SCT capabilities: (0x003f) SCT Status supported.
> SCT Error Recovery Control supported.
> SCT Feature Control supported.
> SCT Data Table supported.
drive supports SCT, so enable it. See the script I just posted (from my rc.local)
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
roy@karlsbakk.net
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
* Re: replacing drives
2013-04-26 15:53 ` Robin Hill
` (2 preceding siblings ...)
2013-05-02 13:56 ` Roberto Nunnari
@ 2013-05-03 16:28 ` Roberto Nunnari
2013-05-06 11:30 ` Roberto Nunnari
2013-05-07 7:53 ` Robin Hill
2013-05-10 21:35 ` Roberto Nunnari
4 siblings, 2 replies; 38+ messages in thread
From: Roberto Nunnari @ 2013-05-03 16:28 UTC (permalink / raw)
To: Roberto Nunnari, linux-raid
Robin Hill wrote:
> The safest option would be:
> - add in the new disks
> - partition to at least the same size as your existing partitions (they
> can be larger)
> - add the new partitions into the arrays (they'll go in as spares)
> - grow the arrays to 4 members (this avoids any loss of redundancy)
> - wait for the resync to complete
> - install grub/lilo/syslinux to the new disks
> - fail and remove the old disk partitions from the arrays
> - shrink the arrays back down to 2 members
> - remove the old disks
>
> Then, if you're keeping the same number of partitions but increasing the
> size:
Ok.. got here.
> - grow the arrays to fill the partitions
> - grow the filesystems to fill the arrays
Now the scary part.. so.. here I believe I should give the following
commands:
mdadm --grow /dev/md0 --size=max
mdadm --grow /dev/md1 --size=max
mdadm --grow /dev/md2 --size=max
and after that
fsck /dev/md0
fsck /dev/md1
fsck /dev/md2
and
resize2fs /dev/md0
resize2fs /dev/md1
resize2fs /dev/md2
Correct?
.. I still have a couple of questions:
1) how do I know if there's a bitmap?
2) at present /dev/md2 usage is 100%.. could that cause any problem?
3) the new drives are 2TB drives.. Around one year ago I had trouble
on linux (it was a server dated 2006 with CentOS 5) that would not
handle drives larger than 2TB.. I wonder what will happen if one day
one drive fails and the replacement I buy is sold as 2TB but is in
reality slightly larger than 2TB.. Will linux again fail to use a
drive larger than 2TB?
At present I'm on ubuntu 10.04, all software from standard distribution.
Pitfalls I should know?
Thank you very much
Robi
* Re: replacing drives
2013-05-03 16:28 ` Roberto Nunnari
@ 2013-05-06 11:30 ` Roberto Nunnari
2013-05-07 7:53 ` Robin Hill
1 sibling, 0 replies; 38+ messages in thread
From: Roberto Nunnari @ 2013-05-06 11:30 UTC (permalink / raw)
To: Roberto Nunnari, linux-raid
Roberto Nunnari wrote:
> Robin Hill wrote:
>> The safest option would be:
>> - add in the new disks
>> - partition to at least the same size as your existing partitions (they
>> can be larger)
>> - add the new partitions into the arrays (they'll go in as spares)
>> - grow the arrays to 4 members (this avoids any loss of redundancy)
>> - wait for the resync to complete
>> - install grub/lilo/syslinux to the new disks
>> - fail and remove the old disk partitions from the arrays
>> - shrink the arrays back down to 2 members
>> - remove the old disks
>>
>> Then, if you're keeping the same number of partitions but increasing the
>> size:
>
> Ok.. got here.
>
>> - grow the arrays to fill the partitions
>> - grow the filesystems to fill the arrays
>
> Now the scary part.. so.. here I believe I should give the following
> commands:
>
> mdadm --grow /dev/md0 --size=max
> mdadm --grow /dev/md1 --size=max
> mdadm --grow /dev/md2 --size=max
>
> and after that
>
> fsck /dev/md0
> fsck /dev/md1
> fsck /dev/md2
>
> and
>
> resize2fs /dev/md0
> resize2fs /dev/md1
> resize2fs /dev/md2
>
> Correct?
>
>
> .. I still have a couple of questions:
>
> 1) how do I know if there's a bitmap?
>
> 2) at present /dev/md2 usage is 100%.. could that cause any problem?
>
> 3) the new drives are 2TG drives.. As around one year ago had trouble on
> linux (it was a server dated 2006 with CentOS 5) that would not handle
> drives larger than 2TB.. I wander what happens if one day one drive
> fails and the drive I'll buy to replace will be sold as 2TB but in
> reality slightly larger than 2TB.. what will happen? Will linux fail
> again to use a drive larger than 2TB?
> At present I'm on ubuntu 10.04, all software from standard distribution.
>
> Pitfalls I should know?
>
> Thank you very much
> Robi
Anybody on this, please?
I'd really appreciate some guidance, especially for these last steps.
Thank you and best regards.
Robi
* Re: replacing drives
2013-05-03 16:28 ` Roberto Nunnari
2013-05-06 11:30 ` Roberto Nunnari
@ 2013-05-07 7:53 ` Robin Hill
2013-05-07 10:22 ` Roberto Nunnari
2013-05-08 14:19 ` Roberto Nunnari
1 sibling, 2 replies; 38+ messages in thread
From: Robin Hill @ 2013-05-07 7:53 UTC (permalink / raw)
To: Roberto Nunnari; +Cc: linux-raid
On Fri May 03, 2013 at 06:28:02PM +0200, Roberto Nunnari wrote:
> Robin Hill wrote:
> > The safest option would be:
> > - add in the new disks
> > - partition to at least the same size as your existing partitions (they
> > can be larger)
> > - add the new partitions into the arrays (they'll go in as spares)
> > - grow the arrays to 4 members (this avoids any loss of redundancy)
> > - wait for the resync to complete
> > - install grub/lilo/syslinux to the new disks
> > - fail and remove the old disk partitions from the arrays
> > - shrink the arrays back down to 2 members
> > - remove the old disks
> >
> > Then, if you're keeping the same number of partitions but increasing the
> > size:
>
> Ok.. got here.
>
> > - grow the arrays to fill the partitions
> > - grow the filesystems to fill the arrays
>
> Now the scary part.. so.. here I believe I should give the following
> commands:
>
> mdadm --grow /dev/md0 --size=max
> mdadm --grow /dev/md1 --size=max
> mdadm --grow /dev/md2 --size=max
>
Yep, that's right. Make sure they've actually grown to the correct size
before you progress though - I have had one occasion where using
--size=max actually ended up shrinking the array and I had to manually
work out the size to use in order to recover. That was using an older
version of mdadm though, and I've not seen it happen since.
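As a rough sanity check on the numbers, the expected md2 size in 1K blocks can be estimated up front. This is a back-of-envelope sketch with stated assumptions: 2,000,398,934,016 bytes is the de-facto standard 2TB drive capacity, partition 3 is assumed to start at 58GB (decimal, as parted prints it) like the old layout, and the md superblock shaves a little more off the real figure:

```shell
# Expected order of magnitude for md2 after the grow.
# Assumes a standard 2TB drive and a partition spanning 58GB..2TB;
# the actual array will be slightly smaller (superblock overhead,
# exact partition boundaries).
part_bytes=$(( 2000398934016 - 58000000000 ))
echo $(( part_bytes / 1024 )) blocks   # compare against /proc/mdstat
```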
> and after that
>
> fsck /dev/md0
> fsck /dev/md1
> fsck /dev/md2
>
You'll need 'fsck -f' here to force it to run.
> and
>
> resize2fs /dev/md0
> resize2fs /dev/md1
> resize2fs /dev/md2
>
> Correct?
>
That should be it, yes.
>
> .. I still have a couple of questions:
>
> 1) how do I know if there's a bitmap?
>
Check /proc/mdstat - it'll report a bitmap - e.g.
md6 : active raid6 sdg[0] sdf[6] sde[5] sdi[2] sdh[1]
11721052272 blocks super 1.2 level 6, 16k chunk, algorithm 2 [5/5] [UUUUU]
bitmap: 0/30 pages [0KB], 65536KB chunk
> 2) at present /dev/md2 usage is 100%.. could that cause any problem?
>
It'll slow things down a bit but otherwise shouldn't be an issue.
> 3) the new drives are 2TG drives.. As around one year ago had trouble on
> linux (it was a server dated 2006 with CentOS 5) that would not handle
> drives larger than 2TB.. I wander what happens if one day one drive
> fails and the drive I'll buy to replace will be sold as 2TB but in
> reality slightly larger than 2TB.. what will happen? Will linux fail
> again to use a drive larger than 2TB?
>
All 2TB drives are exactly the same size. Since somewhere around the
320G/500G mark, all drive manufacturers have agreed to standardise the
drive sizes, so getting mismatches like this is a thing of the past.
> At present I'm on ubuntu 10.04, all software from standard distribution.
>
> Pitfalls I should know?
>
You'll need to use GPT partitions instead of standard MBR partitions for
drives over 2TB, but there shouldn't be any issue with handling them.
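For a hypothetical future >2TB replacement drive, the GPT labelling mentioned above could look like this. A dry-run sketch: /dev/sdX is a placeholder, and each echo only prints the command; mklabel destroys the existing partition table, so double-check the device before removing the echoes:

```shell
# Label a (placeholder) new drive with GPT and create one RAID partition.
echo parted -s /dev/sdX mklabel gpt
echo parted -s /dev/sdX mkpart primary 1MiB 100%
echo parted -s /dev/sdX set 1 raid on
```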
Cheers,
Robin
--
___
( ' } | Robin Hill <robin@robinhill.me.uk> |
/ / ) | Little Jim says .... |
// !! | "He fallen in de water !!" |
* Re: replacing drives
2013-05-07 7:53 ` Robin Hill
@ 2013-05-07 10:22 ` Roberto Nunnari
2013-05-08 14:19 ` Roberto Nunnari
1 sibling, 0 replies; 38+ messages in thread
From: Roberto Nunnari @ 2013-05-07 10:22 UTC (permalink / raw)
To: linux-raid
On 05/07/2013 09:53 AM, Robin Hill wrote:
> On Fri May 03, 2013 at 06:28:02PM +0200, Roberto Nunnari wrote:
>
>> Robin Hill wrote:
>>> The safest option would be:
>>> - add in the new disks
>>> - partition to at least the same size as your existing partitions (they
>>> can be larger)
>>> - add the new partitions into the arrays (they'll go in as spares)
>>> - grow the arrays to 4 members (this avoids any loss of redundancy)
>>> - wait for the resync to complete
>>> - install grub/lilo/syslinux to the new disks
>>> - fail and remove the old disk partitions from the arrays
>>> - shrink the arrays back down to 2 members
>>> - remove the old disks
>>>
>>> Then, if you're keeping the same number of partitions but increasing the
>>> size:
>>
>> Ok.. got here.
>>
>>> - grow the arrays to fill the partitions
>>> - grow the filesystems to fill the arrays
>>
>> Now the scary part.. so.. here I believe I should give the following
>> commands:
>>
>> mdadm --grow /dev/md0 --size=max
>> mdadm --grow /dev/md1 --size=max
>> mdadm --grow /dev/md2 --size=max
>>
> Yep, that's right. Make sure they've actually grown to the correct size
> before you progress though - I have had one occasion where using
> --size=max actually ended up shrinking the array and I had to manually
> work out the size to use in order to recover. That was using an older
> version of mdadm though, and I've not seen it happen since.
>
>> and after that
>>
>> fsck /dev/md0
>> fsck /dev/md1
>> fsck /dev/md2
>>
> You'll need 'fsck -f' here to force it to run.
>
>> and
>>
>> resize2fs /dev/md0
>> resize2fs /dev/md1
>> resize2fs /dev/md2
>>
>> Correct?
>>
> That should be it, yes.
>
>>
>> .. I still have a couple of questions:
>>
>> 1) how do I know if there's a bitmap?
>>
> Check /proc/mdstat - it'll report a bitmap - e.g.
> md6 : active raid6 sdg[0] sdf[6] sde[5] sdi[2] sdh[1]
> 11721052272 blocks super 1.2 level 6, 16k chunk, algorithm 2 [5/5] [UUUUU]
> bitmap: 0/30 pages [0KB], 65536KB chunk
>
>> 2) at present /dev/md2 usage is 100%.. could that cause any problem?
>>
> It'll slow things down a bit but otherwise shouldn't be an issue.
>
>> 3) the new drives are 2TG drives.. As around one year ago had trouble on
>> linux (it was a server dated 2006 with CentOS 5) that would not handle
>> drives larger than 2TB.. I wander what happens if one day one drive
>> fails and the drive I'll buy to replace will be sold as 2TB but in
>> reality slightly larger than 2TB.. what will happen? Will linux fail
>> again to use a drive larger than 2TB?
>>
> All 2TB drives are exactly the same size. Since somewhere around the
> 320G/500G mark, all drive manufacturers have agreed to standardise the
> drive sizes, so getting mismatches like this is a thing of the past.
>
>> At present I'm on ubuntu 10.04, all software from standard distribution.
>>
>> Pitfalls I should know?
>>
> You'll need to use GPT partitions instead of standard MBR partitions for
> drives over 2TB, but there shouldn't be any issue with handling them.
>
> Cheers,
> Robin
>
Thank you Robin.
Today I'm on holiday, but I will look at it tomorrow. :-)
Best regards.
Robi
* Re: replacing drives
2013-05-07 7:53 ` Robin Hill
2013-05-07 10:22 ` Roberto Nunnari
@ 2013-05-08 14:19 ` Roberto Nunnari
2013-05-08 15:10 ` Robin Hill
1 sibling, 1 reply; 38+ messages in thread
From: Roberto Nunnari @ 2013-05-08 14:19 UTC (permalink / raw)
To: Roberto Nunnari, linux-raid
Robin Hill wrote:
> On Fri May 03, 2013 at 06:28:02PM +0200, Roberto Nunnari wrote:
>
>> Robin Hill wrote:
>>> The safest option would be:
>>> - add in the new disks
>>> - partition to at least the same size as your existing partitions (they
>>> can be larger)
>>> - add the new partitions into the arrays (they'll go in as spares)
>>> - grow the arrays to 4 members (this avoids any loss of redundancy)
>>> - wait for the resync to complete
>>> - install grub/lilo/syslinux to the new disks
>>> - fail and remove the old disk partitions from the arrays
>>> - shrink the arrays back down to 2 members
>>> - remove the old disks
>>>
>>> Then, if you're keeping the same number of partitions but increasing the
>>> size:
>> Ok.. got here.
>>
>>> - grow the arrays to fill the partitions
>>> - grow the filesystems to fill the arrays
>> Now the scary part.. so.. here I believe I should give the following
>> commands:
>>
>> mdadm --grow /dev/md0 --size=max
>> mdadm --grow /dev/md1 --size=max
>> mdadm --grow /dev/md2 --size=max
>>
> Yep, that's right. Make sure they've actually grown to the correct size
> before you progress though - I have had one occasion where using
> --size=max actually ended up shrinking the array and I had to manually
> work out the size to use in order to recover. That was using an older
> version of mdadm though, and I've not seen it happen since.
>
>> and after that
>>
>> fsck /dev/md0
>> fsck /dev/md1
>> fsck /dev/md2
>>
> You'll need 'fsck -f' here to force it to run.
humm.. as /dev/md0 is mounted on / I probably should boot from a cd, and
run fsck and resize2fs from there.. maybe using UUIDs, right?
>
>> and
>>
>> resize2fs /dev/md0
>> resize2fs /dev/md1
>> resize2fs /dev/md2
>>
>> Correct?
>>
> That should be it, yes.
Thank you.
Robi
* Re: replacing drives
2013-05-08 14:19 ` Roberto Nunnari
@ 2013-05-08 15:10 ` Robin Hill
2013-05-08 16:05 ` Roberto Nunnari
0 siblings, 1 reply; 38+ messages in thread
From: Robin Hill @ 2013-05-08 15:10 UTC (permalink / raw)
To: Roberto Nunnari; +Cc: linux-raid
On Wed May 08, 2013 at 04:19:33PM +0200, Roberto Nunnari wrote:
> Robin Hill wrote:
> > On Fri May 03, 2013 at 06:28:02PM +0200, Roberto Nunnari wrote:
> >
> >> Robin Hill wrote:
> >>> The safest option would be:
> >>> - add in the new disks
> >>> - partition to at least the same size as your existing partitions (they
> >>> can be larger)
> >>> - add the new partitions into the arrays (they'll go in as spares)
> >>> - grow the arrays to 4 members (this avoids any loss of redundancy)
> >>> - wait for the resync to complete
> >>> - install grub/lilo/syslinux to the new disks
> >>> - fail and remove the old disk partitions from the arrays
> >>> - shrink the arrays back down to 2 members
> >>> - remove the old disks
> >>>
> >>> Then, if you're keeping the same number of partitions but increasing the
> >>> size:
> >> Ok.. got here.
> >>
> >>> - grow the arrays to fill the partitions
> >>> - grow the filesystems to fill the arrays
> >> Now the scary part.. so.. here I believe I should give the following
> >> commands:
> >>
> >> mdadm --grow /dev/md0 --size=max
> >> mdadm --grow /dev/md1 --size=max
> >> mdadm --grow /dev/md2 --size=max
> >>
> > Yep, that's right. Make sure they've actually grown to the correct size
> > before you progress though - I have had one occasion where using
> > --size=max actually ended up shrinking the array and I had to manually
> > work out the size to use in order to recover. That was using an older
> > version of mdadm though, and I've not seen it happen since.
> >
> >> and after that
> >>
> >> fsck /dev/md0
> >> fsck /dev/md1
> >> fsck /dev/md2
> >>
> > You'll need 'fsck -f' here to force it to run.
>
> humm.. as /dev/md0 is mounted on / I probably should boot from a cd, and
> run fsck and resize2fs from there.. maybe using UUIDs, right?
>
You can just skip the fsck and run resize2fs - it'll work fine on a
mounted filesystem. It'll probably be safer to do it offline though.
Cheers,
Robin
--
___
( ' } | Robin Hill <robin@robinhill.me.uk> |
/ / ) | Little Jim says .... |
// !! | "He fallen in de water !!" |
* Re: replacing drives
2013-05-08 15:10 ` Robin Hill
@ 2013-05-08 16:05 ` Roberto Nunnari
2013-05-08 17:01 ` Robin Hill
0 siblings, 1 reply; 38+ messages in thread
From: Roberto Nunnari @ 2013-05-08 16:05 UTC (permalink / raw)
To: Roberto Nunnari, linux-raid
Robin Hill wrote:
> On Wed May 08, 2013 at 04:19:33PM +0200, Roberto Nunnari wrote:
>
>> Robin Hill wrote:
>>> On Fri May 03, 2013 at 06:28:02PM +0200, Roberto Nunnari wrote:
>>>
>>>> Robin Hill wrote:
>>>>> The safest option would be:
>>>>> - add in the new disks
>>>>> - partition to at least the same size as your existing partitions (they
>>>>> can be larger)
>>>>> - add the new partitions into the arrays (they'll go in as spares)
>>>>> - grow the arrays to 4 members (this avoids any loss of redundancy)
>>>>> - wait for the resync to complete
>>>>> - install grub/lilo/syslinux to the new disks
>>>>> - fail and remove the old disk partitions from the arrays
>>>>> - shrink the arrays back down to 2 members
>>>>> - remove the old disks
>>>>>
>>>>> Then, if you're keeping the same number of partitions but increasing the
>>>>> size:
>>>> Ok.. got here.
>>>>
>>>>> - grow the arrays to fill the partitions
>>>>> - grow the filesystems to fill the arrays
>>>> Now the scary part.. so.. here I believe I should give the following
>>>> commands:
>>>>
>>>> mdadm --grow /dev/md0 --size=max
>>>> mdadm --grow /dev/md1 --size=max
>>>> mdadm --grow /dev/md2 --size=max
>>>>
>>> Yep, that's right. Make sure they've actually grown to the correct size
>>> before you progress though - I have had one occasion where using
>>> --size=max actually ended up shrinking the array and I had to manually
>>> work out the size to use in order to recover. That was using an older
>>> version of mdadm though, and I've not seen it happen since.
>>>
>>>> and after that
>>>>
>>>> fsck /dev/md0
>>>> fsck /dev/md1
>>>> fsck /dev/md2
>>>>
>>> You'll need 'fsck -f' here to force it to run.
>> humm.. as /dev/md0 is mounted on / I probably should boot from a cd, and
>> run fsck and resize2fs from there.. maybe using UUIDs, right?
>>
> You can just skip the fsck and run resize2fs - it'll work fine on a
> mounted filesystem. It'll probably be safer to do it offline though.
>
> Cheers,
> Robin
I'd rather stay on the safe side.. how do I assemble the array if I boot
from a cd?
something like:
mdadm --assemble --scan --uuid=a26bf396:31389f83:0df1722d:f404fe4c
would do the job and leave me with a /dev/mdX I can work with
(fsck and resize2fs)?
Thank you.
Robi
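For what it's worth, the live-CD sequence asked about above would be along these lines. A dry-run sketch: the echoes only print the commands, the UUID is the one quoted in the mail and must match the array actually being resized, and md0 stands in for each array in turn:

```shell
# Live-CD steps: assemble by array UUID, then fsck and resize offline.
uuid=a26bf396:31389f83:0df1722d:f404fe4c
echo mdadm --assemble --scan --uuid="$uuid"
echo fsck -f /dev/md0
echo resize2fs /dev/md0
```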
* Re: replacing drives
2013-05-08 16:05 ` Roberto Nunnari
@ 2013-05-08 17:01 ` Robin Hill
2013-05-08 17:20 ` Roberto Nunnari
0 siblings, 1 reply; 38+ messages in thread
From: Robin Hill @ 2013-05-08 17:01 UTC (permalink / raw)
To: Roberto Nunnari; +Cc: linux-raid
On Wed May 08, 2013 at 06:05:32 +0200, Roberto Nunnari wrote:
> Robin Hill wrote:
> > On Wed May 08, 2013 at 04:19:33PM +0200, Roberto Nunnari wrote:
> >
> >> Robin Hill wrote:
> >>> On Fri May 03, 2013 at 06:28:02PM +0200, Roberto Nunnari wrote:
> >>>
> >>>> Robin Hill wrote:
> >>>>> The safest option would be:
> >>>>> - add in the new disks
> >>>>> - partition to at least the same size as your existing partitions (they
> >>>>> can be larger)
> >>>>> - add the new partitions into the arrays (they'll go in as spares)
> >>>>> - grow the arrays to 4 members (this avoids any loss of redundancy)
> >>>>> - wait for the resync to complete
> >>>>> - install grub/lilo/syslinux to the new disks
> >>>>> - fail and remove the old disk partitions from the arrays
> >>>>> - shrink the arrays back down to 2 members
> >>>>> - remove the old disks
> >>>>>
> >>>>> Then, if you're keeping the same number of partitions but increasing the
> >>>>> size:
> >>>> Ok.. got here.
> >>>>
> >>>>> - grow the arrays to fill the partitions
> >>>>> - grow the filesystems to fill the arrays
> >>>> Now the scary part.. so.. here I believe I should give the following
> >>>> commands:
> >>>>
> >>>> mdadm --grow /dev/md0 --size=max
> >>>> mdadm --grow /dev/md1 --size=max
> >>>> mdadm --grow /dev/md2 --size=max
> >>>>
> >>> Yep, that's right. Make sure they've actually grown to the correct size
> >>> before you progress though - I have had one occasion where using
> >>> --size=max actually ended up shrinking the array and I had to manually
> >>> work out the size to use in order to recover. That was using an older
> >>> version of mdadm though, and I've not seen it happen since.
> >>>
> >>>> and after that
> >>>>
> >>>> fsck /dev/md0
> >>>> fsck /dev/md1
> >>>> fsck /dev/md2
> >>>>
> >>> You'll need 'fsck -f' here to force it to run.
> >> humm.. as /dev/md0 is mounted on / I probably should boot from a cd, and
> >> run fsck and resize2fs from there.. maybe using UUIDs, right?
> >>
> > You can just skip the fsck and run resize2fs - it'll work fine on a
> > mounted filesystem. It'll probably be safer to do it offline though.
> >
> > Cheers,
> > Robin
>
> I'd rather stay on the safe side.. how do I assemble the array if I boot
> from a cd?
>
> something like:
>
> mdadm --scan --assemble --uuid=a26bf396:31389f83:0df1722d:f404fe4c
>
> would do the job and leave me with a /dev/mdX I will be able to work with
> (fsck and resize2fs)?
>
That should do it, yes. If not, you can always do it explicitly with:
mdadm -A /dev/md0 /dev/sd[abcd]1
You'd need to double-check what the device names end up as though.
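[Editorial note: the whole rescue-CD session as a checklist, printed rather than executed here. The UUID is the one quoted above; /dev/md0 is an assumption -- confirm the actual name from /proc/mdstat after assembly.]

```shell
#!/bin/sh
# Print the offline fsck/resize session for review -- nothing below
# touches any disk. Substitute your own UUID and md device names.
rescue_plan() {
    cat <<'EOF'
mdadm --assemble --scan --uuid=a26bf396:31389f83:0df1722d:f404fe4c
cat /proc/mdstat                # confirm which /dev/mdX came up
fsck -f /dev/md0                # forced check, filesystem unmounted
resize2fs /dev/md0              # grow ext4 to fill the array
EOF
}
rescue_plan
```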
Cheers,
Robin
--
___
( ' } | Robin Hill <robin@robinhill.me.uk> |
/ / ) | Little Jim says .... |
// !! | "He fallen in de water !!" |
* Re: replacing drives
2013-05-08 17:01 ` Robin Hill
@ 2013-05-08 17:20 ` Roberto Nunnari
0 siblings, 0 replies; 38+ messages in thread
From: Roberto Nunnari @ 2013-05-08 17:20 UTC (permalink / raw)
To: Roberto Nunnari, linux-raid
Robin Hill wrote:
> On Wed May 08, 2013 at 06:05:32PM +0200, Roberto Nunnari wrote:
>> I'd rather stay on the safe side.. how do I assemble the array if I boot
>> from a cd?
>>
>> something like:
>>
>> mdadm --scan --assemble --uuid=a26bf396:31389f83:0df1722d:f404fe4c
>>
>> would do the job and leave me with a /dev/mdX I will be able to work with
>> (fsck and resize2fs)?
>>
> That should do it, yes. If not, you can always do it explicitly with:
> mdadm -A /dev/md0 /dev/sd[abcd]1
>
> You'd need to double-check what the device names end up as though.
>
> Cheers,
> Robin
Thank you. :-)
Robi
* Re: replacing drives
2013-04-26 15:53 ` Robin Hill
` (3 preceding siblings ...)
2013-05-03 16:28 ` Roberto Nunnari
@ 2013-05-10 21:35 ` Roberto Nunnari
4 siblings, 0 replies; 38+ messages in thread
From: Roberto Nunnari @ 2013-05-10 21:35 UTC (permalink / raw)
To: linux-raid
On 04/26/2013 05:53 PM, Robin Hill wrote:
> On Fri Apr 26, 2013 at 04:27:01PM +0200, Roberto Nunnari wrote:
>
>> Hi all.
>>
>> I'd like to replace two hd in raid1 with larger ones.
>>
>> I could just add the new drives in raid1 and mount it on /opt after a
>> dump/restore, but I'd prefer to just have two drives instead of four..
>> less noise and less power consumption.
>>
>> The question is: what would be the best way to go?
>> Tricks and tips? Drawbacks? Common errors?
>>
>> Any hint/advice welcome.
>> Thank you. :-)
>>
>>
>> present HD: two WD caviar green 500GB
>> new HD: two WD caviar green 2TB
>>
> I don't think these have SCTERC configuration options, so you'll need to
> make sure you increase the timeout in the storage stack to prevent read
> timeouts from causing drives to be prematurely kicked out of the array.
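[Editorial note: one common way to do that is raising the kernel's per-device SCSI command timeout via sysfs. A sketch under assumptions -- sda/sdb are this box's drives, 180s is a conventional value for desktop-class disks without SCTERC, and the sysfs root is parameterized only so the snippet can be tried outside /sys.]

```shell
#!/bin/sh
# Raise the SCSI command timeout so a desktop drive's slow internal error
# recovery doesn't look like a dead disk to md. On the real system leave
# the argument empty so it writes under /sys (requires root).
set_timeouts() {
    sysfs=${1:-/sys}
    for dev in sda sdb; do
        echo 180 > "$sysfs/block/$dev/device/timeout"
    done
}
# set_timeouts      # run as root on the live system
```

The setting does not survive a reboot, so it is usually reapplied from a udev rule or a boot script.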
>
>>
>> root@host1:~# uname -rms
>> Linux 2.6.32-46-server x86_64
>>
> That'll be too old for the hot-replacement functionality, but that
> doesn't make much difference for RAID1 anyway.
>
>> root@host1:~# cat /proc/mdstat
>> Personalities : [linear] [raid1] [multipath] [raid0] [raid6] [raid5]
>> [raid4] [raid10]
>> md1 : active raid1 sda2[0] sdb2[1]
>> 7812032 blocks [2/2] [UU]
>>
>> md2 : active raid1 sda3[0] sdb3[1]
>> 431744960 blocks [2/2] [UU]
>>
>> md0 : active raid1 sda1[0] sdb1[1]
>> 48827328 blocks [2/2] [UU]
>>
>> unused devices: <none>
>>
> The safest option would be:
> - add in the new disks
> - partition to at least the same size as your existing partitions (they
> can be larger)
> - add the new partitions into the arrays (they'll go in as spares)
> - grow the arrays to 4 members (this avoids any loss of redundancy)
> - wait for the resync to complete
> - install grub/lilo/syslinux to the new disks
> - fail and remove the old disk partitions from the arrays
> - shrink the arrays back down to 2 members
> - remove the old disks
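[Editorial note: for this particular box (three arrays on partitions 1-3), the steps above might translate into the command sequence below -- printed for review rather than executed, and assuming the new disks appear as /dev/sdc and /dev/sdd; verify the actual names with 'cat /proc/partitions' first.]

```shell
#!/bin/sh
# Emit the per-array replacement sequence from the steps above, one array
# at a time, so it can be checked before pasting into a root shell.
plan() {
    md=$1; part=$2
    echo "mdadm /dev/$md --add /dev/sdc$part /dev/sdd$part"
    echo "mdadm --grow /dev/$md --raid-devices=4"
    echo "# ...wait for resync to finish (watch cat /proc/mdstat)..."
    echo "mdadm /dev/$md --fail /dev/sda$part --remove /dev/sda$part"
    echo "mdadm /dev/$md --fail /dev/sdb$part --remove /dev/sdb$part"
    echo "mdadm --grow /dev/$md --raid-devices=2"
}
# Partition-to-array mapping taken from /proc/mdstat earlier in the thread.
plan md0 1
plan md1 2
plan md2 3
```

Don't forget the bootloader install on the new disks (step above) before pulling the old ones.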
>
> Then, if you're keeping the same number of partitions but increasing the
> size:
> - grow the arrays to fill the partitions
> - grow the filesystems to fill the arrays
> or, if you're adding extra partitions:
> - create new arrays on extra partitions
> - format and mount
>
> If you have hot-plug bays then you can do all this without any downtime
> (you could also do one disk at a time and just grow the arrays to 3
> members), otherwise you'll need to shut down to install and remove the
> disks. If you only have two bays then you could fail one of the disks
> then recover to a new one, but that's definitely a risky option.
>
> That's the outline of the process anyway - if you need any details of
> the actual commands then do ask.
>
> HTH,
> Robin
>
Done!
linux-raid is GREAT!! I was worried.. but it all went very smoothly and my
system is up and running with very little downtime! mdadm is very
flexible and powerful!
Thank you Robin for your support!
Best regards.
Robi
end of thread, other threads:[~2013-05-10 21:35 UTC | newest]
Thread overview: 38+ messages
2013-04-26 14:27 replacing drives Roberto Nunnari
2013-04-26 15:36 ` Tregaron Bayly
2013-04-26 15:42 ` Keith Keller
2013-04-26 15:53 ` Robin Hill
2013-04-30 13:17 ` Roberto Nunnari
2013-04-30 13:20 ` Mikael Abrahamsson
2013-04-30 14:11 ` Roberto Nunnari
2013-04-30 14:22 ` Robin Hill
2013-04-30 14:40 ` Mikael Abrahamsson
2013-04-30 14:27 ` Roberto Nunnari
2013-04-30 14:39 ` Roberto Nunnari
2013-04-30 14:42 ` Mikael Abrahamsson
2013-04-30 15:10 ` Roberto Nunnari
2013-04-30 15:11 ` Phil Turmel
2013-04-30 15:39 ` Roberto Spadim
2013-05-01 1:55 ` Brad Campbell
2013-05-01 15:06 ` Roberto Nunnari
2013-05-01 18:14 ` Roberto Spadim
2013-05-02 17:49 ` Roy Sigurd Karlsbakk
2013-05-02 17:43 ` Roy Sigurd Karlsbakk
2013-04-30 13:45 ` Robin Hill
2013-04-30 14:05 ` Roberto Nunnari
2013-04-30 14:28 ` Roberto Nunnari
2013-04-30 15:19 ` Roberto Nunnari
2013-05-02 13:56 ` Roberto Nunnari
2013-05-02 14:54 ` Robin Hill
2013-05-02 15:00 ` Roberto Nunnari
2013-05-03 16:28 ` Roberto Nunnari
2013-05-06 11:30 ` Roberto Nunnari
2013-05-07 7:53 ` Robin Hill
2013-05-07 10:22 ` Roberto Nunnari
2013-05-08 14:19 ` Roberto Nunnari
2013-05-08 15:10 ` Robin Hill
2013-05-08 16:05 ` Roberto Nunnari
2013-05-08 17:01 ` Robin Hill
2013-05-08 17:20 ` Roberto Nunnari
2013-05-10 21:35 ` Roberto Nunnari
2013-04-26 22:20 ` Roberto Nunnari