All of lore.kernel.org
* teuthology-lock and choosing a kernel
@ 2015-03-08 12:50 Loic Dachary
       [not found] ` <6E9B72C6-1FA0-46D3-873E-EA99A52BE669@redhat.com>
  0 siblings, 1 reply; 5+ messages in thread
From: Loic Dachary @ 2015-03-08 12:50 UTC (permalink / raw)
  To: Andrew Schoen; +Cc: Ceph Development

[-- Attachment #1: Type: text/plain, Size: 2092 bytes --]

Hi Andrew,

After successfully locking a centos 6.5 VPS in the community lab with

teuthology-lock --lock-many 1 --owner loic@dachary.org --machine-type vps --os-type centos --os-version 6.5

it turns out that it has a 2.6.32 kernel by default. A more recent kernel is required to run the ceph-disk tests because they rely on /dev/loop handling partition tables as a regular disk would. After installing a 3.10 kernel from http://elrepo.org/tiki/kernel-lt and rebooting, it was no longer possible to reach the machine.
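To make the version requirement explicit, a check along these lines could guard the tests against too-old kernels (a hypothetical helper, not part of teuthology or ceph-disk):

```python
def kernel_at_least(release, minimum):
    """Return True if a kernel release string such as
    '2.6.32-431.el6.x86_64' is at least the (major, minor)
    version given in `minimum`."""
    # the version proper is everything before the first dash
    version = release.split("-")[0]
    nums = []
    for part in version.split("."):
        # stop at the first non-numeric component, if any
        if not part.isdigit():
            break
        nums.append(int(part))
    # tuple comparison handles (major, minor) lexicographically
    return tuple(nums[:len(minimum)]) >= tuple(minimum)

# The stock centos 6.5 kernel is too old for partitioned /dev/loop devices:
print(kernel_at_least("2.6.32-431.el6.x86_64", (3, 10)))       # False
# An elrepo 3.10 long-term kernel is recent enough:
print(kernel_at_least("3.10.69-1.el6.elrepo.x86_64", (3, 10)))  # True
```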

The teuthology-suite command has a -k option which suggests there is a way to specify the kernel when provisioning a machine. The command

./virtualenv/bin/teuthology-suite --dry-run -k testing --priority 101 --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org  --ceph firefly-backports

shows lines like:

2015-03-08 13:43:26,432.432 INFO:teuthology.suite:dry-run: ./virtualenv/bin/teuthology-schedule --name loic-2015-03-08_13:43:06-rgw-firefly-backports-testing-basic-multi --num 1 --worker multi --priority 101 --owner loic@dachary.org --description 'rgw/multifs/{clusters/fixed-2.yaml fs/btrfs.yaml rgw_pool_type/erasure-coded.yaml tasks/rgw_multipart_upload.yaml}' -- /tmp/schedule_suite_AQ2b6w /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/clusters/fixed-2.yaml /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/fs/btrfs.yaml /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/rgw_pool_type/erasure-coded.yaml /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/tasks/rgw_multipart_upload.yaml

which show the word "testing" as part of the job name. The https://github.com/ceph/teuthology/ page gives some more information about kernel choices, but it's non-trivial to figure out how to translate that into something that could be used in the context of teuthology-lock.

I'm not sure where to look and I would be grateful if you could give me a pointer in the right direction.

Cheers

-- 
Loïc Dachary, Artisan Logiciel Libre


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: teuthology-lock and choosing a kernel
       [not found] ` <6E9B72C6-1FA0-46D3-873E-EA99A52BE669@redhat.com>
@ 2015-03-09 14:01   ` Loic Dachary
  2015-03-09 22:32   ` Loic Dachary
  1 sibling, 0 replies; 5+ messages in thread
From: Loic Dachary @ 2015-03-09 14:01 UTC (permalink / raw)
  To: Andrew Schoen; +Cc: Ceph Development

[-- Attachment #1: Type: text/plain, Size: 3265 bytes --]


Of course! Using teuthology tasks to configure the node makes perfect sense.

Thanks!

On 09/03/2015 14:37, Andrew Schoen wrote:
> Loic,
> 
> After locking the node like normal, you can use teuthology to install the kernel you need.  Just include the kernel stanza in your yaml file.  http://ceph.com/teuthology/docs/teuthology.task.html#teuthology.task.kernel.task
> 
> Something like this:
> 
>   interactive-on-error: true
>   roles:
>      - [mon.0, client.0]
>   kernel:
>      branch: testing
>   tasks:
>      - interactive: 
> 
> Use teuthology-lock --list-targets to get the connection information for your newly locked node and add that to your yaml.
> 
> Best,
> Andrew
> 
> On Mar 8, 2015, at 7:50 AM, Loic Dachary <loic@dachary.org <mailto:loic@dachary.org>> wrote:
> 
>> Hi Andrew,
>>
>> After successfully locking a centos 6.5 VPS in the community lab with
>>
>> teuthology-lock --lock-many 1 --owner loic@dachary.org <mailto:loic@dachary.org> --machine-type vps --os-type centos --os-version 6.5
>>
>> it turns out that it has a 2.6.32 kernel by default. A more recent kernel is required to run the ceph-disk tests because they rely on /dev/loop handling partition tables as a regular disk would. After installing a 3.10 kernel from http://elrepo.org/tiki/kernel-lt and rebooting, it was no longer possible to reach the machine.
>>
>> The teuthology-suite command has a -k option which suggests there is a way to specify the kernel when provisioning a machine. The command
>>
>> ./virtualenv/bin/teuthology-suite --dry-run -k testing --priority 101 --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org <mailto:loic@dachary.org> --owner loic@dachary.org <mailto:loic@dachary.org>  --ceph firefly-backports
>>
>> shows lines like:
>>
>> 2015-03-08 13:43:26,432.432 INFO:teuthology.suite:dry-run: ./virtualenv/bin/teuthology-schedule --name loic-2015-03-08_13:43:06-rgw-firefly-backports-testing-basic-multi --num 1 --worker multi --priority 101 --owner loic@dachary.org <mailto:loic@dachary.org> --description 'rgw/multifs/{clusters/fixed-2.yaml fs/btrfs.yaml rgw_pool_type/erasure-coded.yaml tasks/rgw_multipart_upload.yaml}' -- /tmp/schedule_suite_AQ2b6w /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/clusters/fixed-2.yaml /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/fs/btrfs.yaml /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/rgw_pool_type/erasure-coded.yaml /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/tasks/rgw_multipart_upload.yaml
>>
>> which show the word "testing" as part of the job name. The https://github.com/ceph/teuthology/ page gives some more information about kernel choices, but it's non-trivial to figure out how to translate that into something that could be used in the context of teuthology-lock.
>>
>> I'm not sure where to look and I would be grateful if you could give me a pointer in the right direction.
>>
>> Cheers
>>
>> -- 
>> Loïc Dachary, Artisan Logiciel Libre
>>
> 

-- 
Loïc Dachary, Artisan Logiciel Libre




* Re: teuthology-lock and choosing a kernel
       [not found] ` <6E9B72C6-1FA0-46D3-873E-EA99A52BE669@redhat.com>
  2015-03-09 14:01   ` Loic Dachary
@ 2015-03-09 22:32   ` Loic Dachary
  2015-03-10  7:19     ` Ilya Dryomov
  1 sibling, 1 reply; 5+ messages in thread
From: Loic Dachary @ 2015-03-09 22:32 UTC (permalink / raw)
  To: Andrew Schoen; +Cc: Ceph Development


Hi Andrew,

I successfully installed a 3.19 kernel (details at http://dachary.org/?p=3594). It turns out that the loop module is compiled in and allows zero partitions by default. Since I was looking for a solution to make /dev/loop usable for tests, I rebooted with /boot/grub/grub.conf as

[ubuntu@vpm083 src]$ cat /boot/grub/grub.conf 
default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title rhel-6.5-cloudinit (3.19.0-ceph-00029-gaf5b96e)
        root (hd0,0)
        kernel /boot/vmlinuz-3.19.0-ceph-00029-gaf5b96e ro root=LABEL=79d3d2d4  loop.max_part=16
        initrd /boot/initramfs-3.19.0-ceph-00029-gaf5b96e.img


and that works fine. Do you know if I could do that from the yaml file directly? Alternatively I could use a kernel that does not have the loop module compiled in and modprobe it with max_part=16, but it's unclear to me what kernels are available and what their names are.

Thanks a lot for the help :-)

On 09/03/2015 14:37, Andrew Schoen wrote:
> Loic,
> 
> After locking the node like normal, you can use teuthology to install the kernel you need.  Just include the kernel stanza in your yaml file.  http://ceph.com/teuthology/docs/teuthology.task.html#teuthology.task.kernel.task
> 
> Something like this:
> 
>   interactive-on-error: true
>   roles:
>      - [mon.0, client.0]
>   kernel:
>      branch: testing
>   tasks:
>      - interactive: 
> 
> Use teuthology-lock --list-targets to get the connection information for your newly locked node and add that to your yaml.
> 
> Best,
> Andrew
> 
> On Mar 8, 2015, at 7:50 AM, Loic Dachary <loic@dachary.org <mailto:loic@dachary.org>> wrote:
> 
>> Hi Andrew,
>>
>> After successfully locking a centos 6.5 VPS in the community lab with
>>
>> teuthology-lock --lock-many 1 --owner loic@dachary.org <mailto:loic@dachary.org> --machine-type vps --os-type centos --os-version 6.5
>>
>> it turns out that it has a 2.6.32 kernel by default. A more recent kernel is required to run the ceph-disk tests because they rely on /dev/loop handling partition tables as a regular disk would. After installing a 3.10 kernel from http://elrepo.org/tiki/kernel-lt and rebooting, it was no longer possible to reach the machine.
>>
>> The teuthology-suite command has a -k option which suggests there is a way to specify the kernel when provisioning a machine. The command
>>
>> ./virtualenv/bin/teuthology-suite --dry-run -k testing --priority 101 --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org <mailto:loic@dachary.org> --owner loic@dachary.org <mailto:loic@dachary.org>  --ceph firefly-backports
>>
>> shows lines like:
>>
>> 2015-03-08 13:43:26,432.432 INFO:teuthology.suite:dry-run: ./virtualenv/bin/teuthology-schedule --name loic-2015-03-08_13:43:06-rgw-firefly-backports-testing-basic-multi --num 1 --worker multi --priority 101 --owner loic@dachary.org <mailto:loic@dachary.org> --description 'rgw/multifs/{clusters/fixed-2.yaml fs/btrfs.yaml rgw_pool_type/erasure-coded.yaml tasks/rgw_multipart_upload.yaml}' -- /tmp/schedule_suite_AQ2b6w /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/clusters/fixed-2.yaml /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/fs/btrfs.yaml /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/rgw_pool_type/erasure-coded.yaml /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/tasks/rgw_multipart_upload.yaml
>>
>> which show the word "testing" as part of the job name. The https://github.com/ceph/teuthology/ page gives some more information about kernel choices, but it's non-trivial to figure out how to translate that into something that could be used in the context of teuthology-lock.
>>
>> I'm not sure where to look and I would be grateful if you could give me a pointer in the right direction.
>>
>> Cheers
>>
>> -- 
>> Loïc Dachary, Artisan Logiciel Libre
>>
> 

-- 
Loïc Dachary, Artisan Logiciel Libre




* Re: teuthology-lock and choosing a kernel
  2015-03-09 22:32   ` Loic Dachary
@ 2015-03-10  7:19     ` Ilya Dryomov
  2015-03-10  7:48       ` Loic Dachary
  0 siblings, 1 reply; 5+ messages in thread
From: Ilya Dryomov @ 2015-03-10  7:19 UTC (permalink / raw)
  To: Loic Dachary; +Cc: Andrew Schoen, Ceph Development

On Tue, Mar 10, 2015 at 1:32 AM, Loic Dachary <loic@dachary.org> wrote:
> Hi Andrew,
>
> I successfully installed a 3.19 kernel (details at http://dachary.org/?p=3594). It turns out that the loop module is compiled in and allows zero partitions by default. Since I was looking for a solution to make /dev/loop usable for tests, I rebooted with /boot/grub/grub.conf as
>
> [ubuntu@vpm083 src]$ cat /boot/grub/grub.conf
> default=0
> timeout=5
> splashimage=(hd0,0)/boot/grub/splash.xpm.gz
> hiddenmenu
> title rhel-6.5-cloudinit (3.19.0-ceph-00029-gaf5b96e)
>         root (hd0,0)
>         kernel /boot/vmlinuz-3.19.0-ceph-00029-gaf5b96e ro root=LABEL=79d3d2d4  loop.max_part=16
>         initrd /boot/initramfs-3.19.0-ceph-00029-gaf5b96e.img
>
>
> and that works fine. Do you know if I could do that from the yaml file directly? Alternatively I could use a kernel that does not have the loop module compiled in and modprobe it with max_part=16, but it's unclear to me what kernels are available and what their names are.

I think you can run a set of commands from yaml (see "exec" stanza), so
you can sort of script it (GRUB_CMDLINE_LINUX=, etc), but that's never
going to be reliable.
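For illustration, such an exec stanza might look like this (a rough, untested sketch; the sed expression assumes the Red Hat-style grub.conf quoted above, and the node would still need a reboot for the new command line to take effect):

```yaml
tasks:
- exec:
    client.0:
      # append loop.max_part=16 to every kernel line in grub.conf
      - sudo sed -i 's|^\( *kernel /boot/vmlinuz.*\)$|\1 loop.max_part=16|' /boot/grub/grub.conf
- interactive:
```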

Basically the only kernel available right now is the testing flavor;
the debug flavor is some random config and has been unused for a while.
I can try changing the testing config to CONFIG_BLK_DEV_LOOP=m if you
don't find a good way to work around it.
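In config terms the suggested change is a one-liner (a sketch of the testing flavor's kernel config, not the actual file):

```
# today: loop is built in, so max_part can only be set on the boot command line
CONFIG_BLK_DEV_LOOP=y
# proposed: loop as a module, so `modprobe loop max_part=16` works at runtime
CONFIG_BLK_DEV_LOOP=m
```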

https://lists.ubuntu.com/archives/kernel-team/2009-September/007364.html
is apparently the reason it's compiled in (our config is a somewhat old
and slightly stripped down Ubuntu config).

Thanks,

                Ilya


* Re: teuthology-lock and choosing a kernel
  2015-03-10  7:19     ` Ilya Dryomov
@ 2015-03-10  7:48       ` Loic Dachary
  0 siblings, 0 replies; 5+ messages in thread
From: Loic Dachary @ 2015-03-10  7:48 UTC (permalink / raw)
  To: Ilya Dryomov; +Cc: Ceph Development


Hi Ilya,

On 10/03/2015 08:19, Ilya Dryomov wrote:
> On Tue, Mar 10, 2015 at 1:32 AM, Loic Dachary <loic@dachary.org> wrote:
>> Hi Andrew,
>>
>> I successfully installed a 3.19 kernel (details at http://dachary.org/?p=3594). It turns out that the loop module is compiled in and allows zero partitions by default. Since I was looking for a solution to make /dev/loop usable for tests, I rebooted with /boot/grub/grub.conf as
>>
>> [ubuntu@vpm083 src]$ cat /boot/grub/grub.conf
>> default=0
>> timeout=5
>> splashimage=(hd0,0)/boot/grub/splash.xpm.gz
>> hiddenmenu
>> title rhel-6.5-cloudinit (3.19.0-ceph-00029-gaf5b96e)
>>         root (hd0,0)
>>         kernel /boot/vmlinuz-3.19.0-ceph-00029-gaf5b96e ro root=LABEL=79d3d2d4  loop.max_part=16
>>         initrd /boot/initramfs-3.19.0-ceph-00029-gaf5b96e.img
>>
>>
>> and that works fine. Do you know if I could do that from the yaml file directly? Alternatively I could use a kernel that does not have the loop module compiled in and modprobe it with max_part=16, but it's unclear to me what kernels are available and what their names are.
> 
> I think you can run a set of commands from yaml (see "exec" stanza), so
> you can sort of script it (GRUB_CMDLINE_LINUX=, etc), but that's never
> going to be reliable.
> 
> Basically the only kernel available right now is the testing flavor;
> the debug flavor is some random config and has been unused for a while.
> I can try changing the testing config to CONFIG_BLK_DEV_LOOP=m if you
> don't find a good way to work around it.
> 
> https://lists.ubuntu.com/archives/kernel-team/2009-September/007364.html
> is apparently the reason it's compiled in (our config is a somewhat old
> and slightly stripped down Ubuntu config).

Thanks for exploring the history so far back ;-) I hope to find an easier path, but it's comforting to know where we stand.

Cheers

-- 
Loïc Dachary, Artisan Logiciel Libre



