* Debian kernel stanza after aptitude kernel upgrade
@ 2010-09-21 10:39 A. Krijgsman
  2010-09-21 15:18 ` Tim Small
  0 siblings, 1 reply; 7+ messages in thread
From: A. Krijgsman @ 2010-09-21 10:39 UTC (permalink / raw)
  To: linux-raid

Dear fellow software-raid-users,

Hopefully someone can help me out with their experience, since I am not a 
die-hard Linux user and am not very familiar with the kernel modules loaded 
during the initrd.

Normally I install software RAID1 on all servers I get my hands on (if they 
do not have a RAID controller).
I always use Debian, and its installer makes creating a RAID1 setup easy.
Recently I switched two servers from a single-disk setup to a RAID1 setup 
on a running system, without the installer.

Yesterday I did an apt-get update / apt-get upgrade, and got myself a shiny 
new kernel package.
After rebooting, the system was locked up.

Stupid me! I didn't check the menu.lst of my grub, and apparently aptitude 
rebuilt the initrd for the new kernel.
The sysadmin I got the server from managed to get the md device back online 
and I can now access my server again through ssh.

I wish to avoid this kind of problem in the future (and I would prefer never 
to upgrade the kernel on a running machine again ;-))
However, since it is sometimes sensible to make those changes, I was 
wondering whether there is a way to test if my machine will boot without 
actually rebooting it.
I checked the RAID1 tutorials I used again, and re-created the initial 
ramdisk. (I noticed that I had lost the /etc/default/modules lines for md 
and raid1.)
What steps should I take into account to make sure my RAID1 array is always 
bootable?

#My menu.lst for grub:
default         0
fallback        1

#And the stanzas:

title           Debian GNU/Linux, kernel 2.6.26-2-686 RAID (hd1)
root            (hd1,0)
kernel          /boot/vmlinuz-2.6.26-2-686 root=/dev/md0 ro quiet
initrd          /boot/initrd.img-2.6.26-2-686

## ## End Default Options ##

title           Debian GNU/Linux, kernel 2.6.26-2-686
root            (hd0,0)
kernel          /boot/vmlinuz-2.6.26-2-686 root=/dev/sda1 ro quiet
initrd          /boot/initrd.img-2.6.26-2-686

#And my /etc/initramfs-tools/modules:
raid1
md

#And my /etc/modules
loop
raid1
md

Another question I would like to ask is the following.
Since Grub loads the initrd image from one of the two disks, if one fails, 
it won't boot the md root device anyway, right?
Is it the case that when /dev/sda fails, /dev/sdb becomes /dev/sda? (Or must 
I state that hd1 becomes hd0 when hd0 has failed?)
I ask because I would prefer a stanza that always boots up in degraded mode, 
rather than ending in a kernel panic ;-)
I have seen stanzas containing both disks within one stanza; I don't know 
whether this is old or still supported?

Thanks for your time to read and hopefully reply!

Regards,
Armand 



* Re: Debian kernel stanza after aptitude kernel upgrade
  2010-09-21 10:39 Debian kernel stanza after aptitude kernel upgrade A. Krijgsman
@ 2010-09-21 15:18 ` Tim Small
  2010-09-21 15:52   ` A. Krijgsman
  2010-09-21 20:29   ` Neil Brown
  0 siblings, 2 replies; 7+ messages in thread
From: Tim Small @ 2010-09-21 15:18 UTC (permalink / raw)
  To: A. Krijgsman; +Cc: linux-raid

On 21/09/10 11:39, A. Krijgsman wrote:
> Stupid me! I didn't check the menu.lst of my grub, and apparently 
> aptitude rebuilt the initrd for the new kernel.
> The sysadmin I got the server from managed to get the md device back 
> online and I can now access my server again through ssh.

Once you've installed the extra disk, I think you need to stick the 
output of

mdadm --examine --scan

into /etc/mdadm/mdadm.conf

and then run

update-initramfs -k all -u

This isn't particularly well documented, so feel free to update the 
documentation and submit a patch ;o).  You shouldn't need to hard-code 
the loading of raid1 etc. in /etc/modules.
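
A minimal sketch of the whole sequence (untested on your box; the only 
assumption is that /etc/mdadm/mdadm.conf is where your config lives):

    # append the current array definitions to mdadm.conf
    mdadm --examine --scan >> /etc/mdadm/mdadm.conf
    # eyeball the ARRAY lines it appended before going further,
    # then rebuild the initramfs for every installed kernel
    update-initramfs -k all -u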



A good quick-and-dirty hack to check that a machine will reboot 
correctly is to use qemu or kvm.  The below should be fine, but to be on 
the safe side, create a user that has read-only access to the raw hard 
drive devices, and run the following as that user:

qemu -snapshot -hda /dev/sda -hdb /dev/sdb -m 64 -net none

The "-snapshot" will make the VM use copy-on-write version of the real 
block devices.  The real OS will continue to update the block devices 
"underneath" the qemu, so the VM will get confused easily, but it's good 
enough as a check check to the question "will it reboot?".
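
One rough way to set up such a read-only user (a sketch; it assumes the 
acl package is installed, and the ACLs only last until the device nodes 
are recreated, which is fine for a one-off test):

    # create a throwaway user and grant it read-only access to the disks
    useradd -m qemutest
    setfacl -m u:qemutest:r /dev/sda /dev/sdb
    # run the snapshot VM as that user
    su - qemutest -c 'qemu -snapshot -hda /dev/sda -hdb /dev/sdb -m 64 -net none'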


>
> #My menu.lst for grub:

Err, if that's all of it, then I'd guess you're not using the Debian 
mechanisms to manage it?  I'd probably switch back to using the Debian 
management stuff; it handles adding new kernels etc. fairly well.



> Since Grub loads the initrd image from one of the two disks, if one 
> fails, it won't boot the md root device anyway, right?
> Is it the case that when /dev/sda fails, /dev/sdb becomes /dev/sda? (Or 
> must I state that hd1 becomes hd0 when hd0 has failed?)

This is a bit of a pain with grub1 - grub2 handles it a bit better.  
With all BIOSes I've seen, if the first disk dies, the second disk 
becomes BIOS disk 0x80 (i.e. (hd0) in grub).  The workaround is to run 
grub-install twice, telling grub that hd0 is sdb the second time by 
manually editing /boot/grub/device.map.  Once grub has loaded the kernel 
and initrd into RAM, then the md code should stand a reasonable chance 
of working out which drive is OK.
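
Roughly like this (a sketch - device.map is hand-edited, so double-check 
it afterwards):

    # install grub to the first disk as usual
    grub-install /dev/sda
    # edit /boot/grub/device.map so the (hd0) line points at /dev/sdb,
    # then install again so sdb can also boot as BIOS disk 0x80
    grub-install '(hd0)'
    # finally put device.map back the way it was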


Tim.

> I ask because I would prefer a stanza that always boots up in degraded 
> mode rather than ending in a kernel panic ;-)
> I have seen stanzas containing both disks within one stanza; I don't 
> know whether this is old or still supported?
>
> Thanks for your time to read and hopefully reply!
>
> Regards,
> Armand


-- 
South East Open Source Solutions Limited
Registered in England and Wales with company number 06134732.
Registered Office: 2 Powell Gardens, Redhill, Surrey, RH1 1TQ
VAT number: 900 6633 53  http://seoss.co.uk/ +44-(0)1273-808309



* Re: Debian kernel stanza after aptitude kernel upgrade
  2010-09-21 15:18 ` Tim Small
@ 2010-09-21 15:52   ` A. Krijgsman
  2010-09-21 16:06     ` Tim Small
  2010-09-21 20:29   ` Neil Brown
  1 sibling, 1 reply; 7+ messages in thread
From: A. Krijgsman @ 2010-09-21 15:52 UTC (permalink / raw)
  To: linux-raid

Dear Tim,

Thank you for your reply.
You state that after adding a disk I should run:
> update-initramfs -k all -u

This I understand, since I managed to get the RAID1 working this way the 
first time.
However, Debian's apt-get updated the kernel image, and I assume that 
process ran update-initramfs as a result.
I believe that is why I lost the ability to boot from my RAID set.
Do I need to manually rebuild the initramfs after a kernel upgrade before I 
reboot?
(And check that md and raid1 are inserted as modules in the initramfs?)

>> #My menu.lst for grub:
>
> Err, if that's all of it, then I'd guess you're not using the debian 
> mechanisms to manage it?  I'd probably switch back to using the Debian 
> management stuff, it handles adding new kernels etc. fairly well.

Sorry, that was not my entire list, just the boot stanzas I use; I figured 
that was all that was needed.
I will give qemu a try, thank you!
I will look into grub2 as well!

Regards,
Armand



* Re: Debian kernel stanza after aptitude kernel upgrade
  2010-09-21 15:52   ` A. Krijgsman
@ 2010-09-21 16:06     ` Tim Small
  0 siblings, 0 replies; 7+ messages in thread
From: Tim Small @ 2010-09-21 16:06 UTC (permalink / raw)
  To: A. Krijgsman, linux-raid@vger.kernel.org

On 21/09/10 16:52, A. Krijgsman wrote:
> However debian apt-get updated the kernel image, and I assume that 
> process ran update-initramfs because of that.

This is getting a bit OT, but yes, a new kernel install will normally 
trigger an update to the initramfs files; however, this can be disabled.  
See /etc/initramfs-tools/ and the contents of the debconf database...
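
If you want to check the result by hand, something along these lines should 
show whether the raid modules actually ended up in the image (the kernel 
version here is just the one from your stanzas):

    # list the initramfs contents and look for the md/raid1 modules
    zcat /boot/initrd.img-2.6.26-2-686 | cpio -t | grep -E 'raid1|md-mod'

The automatic rebuild itself is governed by the update_initramfs= setting 
in /etc/initramfs-tools/update-initramfs.conf, if I remember correctly.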


> I will look into grub2 as well!

Personally, I think I'd stick with grub1 on lenny, but use that 
workaround - that's what I do, and it only needs to be done when 
adding/replacing disks etc.

Tim.

-- 
South East Open Source Solutions Limited
Registered in England and Wales with company number 06134732.
Registered Office: 2 Powell Gardens, Redhill, Surrey, RH1 1TQ
VAT number: 900 6633 53  http://seoss.co.uk/ +44-(0)1273-808309



* Re: Debian kernel stanza after aptitude kernel upgrade
  2010-09-21 15:18 ` Tim Small
  2010-09-21 15:52   ` A. Krijgsman
@ 2010-09-21 20:29   ` Neil Brown
  2010-09-28 11:11     ` Tim Small
  1 sibling, 1 reply; 7+ messages in thread
From: Neil Brown @ 2010-09-21 20:29 UTC (permalink / raw)
  To: Tim Small; +Cc: A. Krijgsman, linux-raid

On Tue, 21 Sep 2010 16:18:37 +0100
Tim Small <tim@seoss.co.uk> wrote:

> On 21/09/10 11:39, A. Krijgsman wrote:
> > Stupid me! I didn't check the menu.lst of my grub, and apparently 
> > aptitude rebuilt the initrd for the new kernel.
> > The sysadmin I got the server from managed to get the md device back 
> > online and I can now access my server again through ssh.
> 
> Once you've installed the extra disk, I think you need to stick the 
> output of
> 
> mdadm --examine --scan
> 
> into /etc/mdadm/mdadm.conf

It is generally better to use 
    mdadm --detail --scan

for generating mdadm.conf as it is more likely to get the device names
right.  And when doing this by hand, always review the output to make sure it
looks right.

NeilBrown



* Re: Debian kernel stanza after aptitude kernel upgrade
  2010-09-21 20:29   ` Neil Brown
@ 2010-09-28 11:11     ` Tim Small
  2010-10-07  4:16       ` Neil Brown
  0 siblings, 1 reply; 7+ messages in thread
From: Tim Small @ 2010-09-28 11:11 UTC (permalink / raw)
  To: Neil Brown; +Cc: A. Krijgsman, linux-raid

On 21/09/10 21:29, Neil Brown wrote:
>> I think you need to stick the output of
>>
>> mdadm --examine --scan
>>
>> into /etc/mdadm/mdadm.conf
>>      
> It is generally better to use
>      mdadm --detail --scan
>
> for generating mdadm.conf as it is more likely to get the device names
> right.  And when doing this by hand, always review the output to make sure it
> looks right.
>    


Thanks for that, Neil - Debian (and thus Ubuntu) currently uses the 
output of "mdadm --examine --scan --config=partitions" when 
autogenerating the mdadm.conf output.  Should this be considered a bug?  
If so, could you give a bit of detail, and I'll open a bug for the 
script....

Thanks,

Tim.

-- 
South East Open Source Solutions Limited
Registered in England and Wales with company number 06134732.
Registered Office: 2 Powell Gardens, Redhill, Surrey, RH1 1TQ
VAT number: 900 6633 53  http://seoss.co.uk/ +44-(0)1273-808309



* Re: Debian kernel stanza after aptitude kernel upgrade
  2010-09-28 11:11     ` Tim Small
@ 2010-10-07  4:16       ` Neil Brown
  0 siblings, 0 replies; 7+ messages in thread
From: Neil Brown @ 2010-10-07  4:16 UTC (permalink / raw)
  To: Tim Small; +Cc: A. Krijgsman, linux-raid

On Tue, 28 Sep 2010 12:11:35 +0100
Tim Small <tim@seoss.co.uk> wrote:

> On 21/09/10 21:29, Neil Brown wrote:
> >> I think you need to stick the output of
> >>
> >> mdadm --examine --scan
> >>
> >> into /etc/mdadm/mdadm.conf
> >>      
> > It is generally better to use
> >      mdadm --detail --scan
> >
> > for generating mdadm.conf as it is more likely to get the device names
> > right.  And when doing this by hand, always review the output to make sure it
> > looks right.
> >    
> 
> 
> Thanks for that Neil - Debian (and thus Ubuntu) currently uses the 
> output of "mdadm --examine --scan --config=partitions" when 
> autogenerating the mdadm.conf output.  Should this be considered a bug?  
> If so, could you give a bit of detail, and I'll open a bug for the 
> script....
> 
> Thanks,
> 
> Tim.
> 

It all depends on what you want to do.

If you want a config file which describes the current configuration, then
"--detail --scan" is definitely the thing to use.
If you want a config file that records what is on the devices currently
attached to the machine, then "--examine --scan" is what you want.

Any use of "--examine --scan" is probably better left to the auto-assembly
stuff in mdadm (mdadm -As).  So Debian should probably be using --detail
--scan.  However, without a statement of the exact purpose and context, one
cannot be certain.
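
As a quick sketch of the two alternatives:

    # describes the arrays as they are currently assembled
    # (normally what you want in mdadm.conf):
    mdadm --detail --scan
    # describes what the superblocks on the attached devices claim
    # (better left to mdadm's auto-assembly, e.g. mdadm -As):
    mdadm --examine --scan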

NeilBrown

