* kvm-autotest -- introducing kvm_runtest_2
@ 2009-03-01 19:09 Uri Lublin
  2009-03-02 17:45 ` Ryan Harper
  0 siblings, 1 reply; 19+ messages in thread
From: Uri Lublin @ 2009-03-01 19:09 UTC (permalink / raw)
  To: KVM List; +Cc: Uri Lublin

Hello,

KVM-autotest is a test framework for kvm, based on autotest 
(http://autotest.kernel.org).

Its purpose is to keep kvm stable:
for developers, we want to catch regressions early;
for users, we want them to feel confident that kvm runs well on their machines.
We would also like to present results nicely, similar to http://test.kernel.org.


Today we have quite a few guests (including Linux, Windows, and Unix) and some
tests (guest installation, boot, reboot, and more).

Guest installation tests need the guest ISO images, so you must obtain those
images before running the installation tests (which, by default, all other
tests depend on).

Source code is located at git://git.kernel.org/pub/scm/virt/kvm/kvm-autotest.git

Interesting directories are
     client/tests/kvm_runtest_2 (newer)
     client/tests/kvm_runtest   (older, almost obsolete, not as interesting)

Wiki can be found at http://kvm.qumranet.com/kvmwiki/KVM_RegressionTest
      http://kvm.qumranet.com/kvmwiki/KVM_RegressionTest/GettingStarted
      http://kvm.qumranet.com/kvmwiki/RegressionTests/ConfigFile2

To run kvm_runtest_2 you need kvm installed on your host. Please read the
BEFORE_YOU_START file, and set up the symbolic links accordingly.

Currently kvm_runtest_2 does not install kvm, so you have to install it
yourself. kvm_runtest "knows" how to install kvm (by default it installs the
latest kvm release).
Currently kvm_runtest_2 uses default networking (-net user) and qemu 'redir',
while kvm_runtest uses tap. By default, kvm_runtest starts a dhcpd server to
serve local kvm guests.
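
For readers unfamiliar with the two modes: user-mode networking needs no host
setup, and qemu's 'redir' forwards a host port to a guest port (typically ssh).
A minimal sketch of such an invocation follows; the helper name and port
number are hypothetical, not actual kvm_runtest_2 code:

```python
# Sketch: building a qemu command line for user-mode networking with a
# host-port redirect, as described above for kvm_runtest_2.  The helper
# name and port numbers are illustrative, not actual project code.

def build_qemu_cmd(image, ssh_host_port=5555):
    """Return a qemu argv using -net user plus a redir for guest ssh."""
    return [
        "qemu-system-x86_64",
        "-hda", image,
        "-net", "nic",
        "-net", "user",                          # no host network changes needed
        "-redir", "tcp:%d::22" % ssh_host_port,  # host port -> guest port 22
    ]

cmd = build_qemu_cmd("fc8.qcow2")
print(" ".join(cmd))
```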

Please make sure you have a big enough disk for the guest images (each up to
a few GB in size).

Comments/Suggestions/Ideas/Requests/Patches are welcome.

Thanks,
    Uri.



* Re: kvm-autotest -- introducing kvm_runtest_2
  2009-03-01 19:09 kvm-autotest -- introducing kvm_runtest_2 Uri Lublin
@ 2009-03-02 17:45 ` Ryan Harper
  2009-03-04  8:58   ` Uri Lublin
  0 siblings, 1 reply; 19+ messages in thread
From: Ryan Harper @ 2009-03-02 17:45 UTC (permalink / raw)
  To: Uri Lublin; +Cc: KVM List

* Uri Lublin <uril@redhat.com> [2009-03-01 13:10]:
> Hello,
> 
> KVM-autotest is a test framework for kvm, based on autotest 
> (http://autotest.kernel.org).
> 
> Its purpose is to keep kvm stable.
> For developers, we want to find regressions early.
> For users, we want users to feel confident kvm runs well on their own 
> machine.
> Also we would like to present results nicely, similar to 
> http://test.kernel.org.
> 
> 
> Today we have quite a few guests (including linux, windows, unix) and some 
> tests (guest installation, boot, reboot and more)
> 
> For guest installation, we need the guest iso images. One needs to get the 
> iso images before running guest installation tests (which is by default 
> needed for all other tests to run).
> 
> Source code is located at 
> git://git.kernel.org/pub/scm/virt/kvm/kvm-autotest.git
> 
> Interesting directories are
>     client/tests/kvm_runtest_2 (newer)
>     client/tests/kvm_runtest   (older, almost obsolete, not as interesting)

I've been digging into kvm_runtest_2 and have some feedback, but first let me
say that runtest_2 is a huge cleanup and simplification of runtest, so
thanks. Now for some comments:

- kvm_tests.cfg has a decent learning curve to wrap your head around.
It would be useful to have some debugging output that dumps which rules
filtered out which guests... the dependencies aren't always easy to find.
My first experience with the dependency hunt was changing 'only qcow2' in
fc8_quick to raw and getting no output from kvm_config.py.  It turns out
that 'raw' has an smp2 requirement ... that sort of filtering could be shown
in debugging output, making configuration changes easier.
  - documentation of keywords and structure would be nice, explaining
  what -variant, only, and @ are doing for you, etc.
  - it seems like the definitions and rules ought to be separate from the
  last section, which defines the tests to run (the fc8_quick area), so
  adding something as simple as include support to kvm_config.py would
  be sufficient to support a common definition file with different
  testing rules.
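
The debugging idea above could look roughly like this; the data layout and
function name are hypothetical, not kvm_config.py's real structures:

```python
# Sketch of debug output for "only <keyword>" filtering: when a rule drops
# variants, say which ones and why.  Names and layout are illustrative only.

def apply_only(variants, keyword, verbose=True):
    """Keep variants whose tags include keyword; report the ones dropped."""
    kept, dropped = [], []
    for v in variants:
        (kept if keyword in v["tags"] else dropped).append(v)
    if verbose:
        for v in dropped:
            print("filtered out %s: lacks '%s' (has %s)"
                  % (v["name"], keyword, sorted(v["tags"])))
    return kept

variants = [
    {"name": "fc8.qcow2",    "tags": {"fc8", "qcow2"}},
    {"name": "fc8.raw.smp2", "tags": {"fc8", "raw", "smp2"}},
]
# 'only raw' can silently empty the list when another tag (here smp2) is
# also required -- with debug output the reason is visible immediately.
remaining = apply_only(variants, "raw")
```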

- kvm_runtest_2, as mentioned, doesn't mess with your host networking and
relies on -net user and redir; it would be good to plumb through -net tap
support that can be configured instead of always using -net user

- make the -vnc parameter configurable/optional

I noticed the references to the setup ISOs for Windows that presumably
install cygwin telnetd/sshd; are those available?  If the ISOs themselves
aren't, but the build instructions are, that would be very useful.

- guest install wizard using md5sum region matching ... ouch.  This is
quite fickle.  I've seen different kvms generate different md5sums for
the same region a couple of times.  I know distributing screenshots of
certain OSes is a grey area, but it would be nice to plumb through
screenshot comparison and make that configurable.  FWIW, I'll probably
look at pulling the screenshot comparison bits out of kvmtest and
integrating them into kvm_runtest_2.
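
The fragility is inherent to hashing: a single differently-rendered pixel in
the cropped region changes the digest entirely. A self-contained illustration
(not the project's actual matching code):

```python
# Why md5sum region matching is fickle: hashing the cropped pixels means one
# off-by-one pixel value flips the whole digest.  Illustration only.
import hashlib

def region_md5(pixels, width, x, y, w, h):
    """md5 of a w x h crop of a flat grayscale pixel buffer."""
    crop = bytearray()
    for row in range(y, y + h):
        crop += pixels[row * width + x : row * width + x + w]
    return hashlib.md5(bytes(crop)).hexdigest()

screen = bytearray(b"\x00" * (640 * 480))
ref = region_md5(screen, 640, 100, 100, 16, 16)
screen[101 * 640 + 105] ^= 1          # one pixel off by one intensity level
cur = region_md5(screen, 640, 100, 100, 16, 16)
print("digests match:", ref == cur)   # one pixel flips the whole digest
```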


- kvm_runtest_2 looks a lot more like a regular autotest test, which is
a Good Thing(TM).  There are still some things that would prevent it from
going upstream into autotest (which I assume is the long-term goal):
  - a lot of the ssh and scp work to copy the autotest client into a guest
  is already handled by autoserv
  - vm.py has a lot of infrastructure that should be integrated into
  autotest/server/kvm.py, or possibly client-side common code, to support
  the next comment
  - kvm_tests.py defines new tests as functions; each of these tests
  should be a separate client test, which sounds like a pain, but
  should allow for easier test composition and hopefully make it easier
  to add new tests that look like any other client-side test, with just
  the implementation.py and control file
    - this model moves toward eliminating kvm_runtest_2 and having the
    server side generate a set of tests to run and spawn them on a
    target system.

  I do still like the idea of having a client-side test that can just
  run on a developer's or user's system to produce results without having
  to configure all of the autotest server-side bits.


-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ryanh@us.ibm.com


* Re: kvm-autotest -- introducing kvm_runtest_2
  2009-03-02 17:45 ` Ryan Harper
@ 2009-03-04  8:58   ` Uri Lublin
  2009-03-04 18:15     ` Ryan Harper
  2009-03-09 16:23     ` Ryan Harper
  0 siblings, 2 replies; 19+ messages in thread
From: Uri Lublin @ 2009-03-04  8:58 UTC (permalink / raw)
  To: Ryan Harper; +Cc: KVM List

Ryan Harper wrote:
> * Uri Lublin <uril@redhat.com> [2009-03-01 13:10]:

Ryan,

Sorry for the late response.

>> KVM-autotest is a test framework for kvm, based on autotest 
>> (http://autotest.kernel.org).
> 
> I've been digging into kvm_runtest_2 and have some feedback, but first
> to say, runtest_2 is huge cleanup and simplification from runtest, so
> thanks. Now for some comments:
> 
> - kvm_tests.cfg has a decent learning curve to wrap your head around.
> It would have been useful to have some debugging that would dump out
> what rules were filtering out guests... the dependencies aren't always
> easy to find.  My first experience with the dep hunt is just changing
> 'only qcow2' in fc8_quick to raw and getting no output from
> kvm_config.py.  Turns out, that 'raw' has a smp2 requirement ... that
> sort of filtering could be displayed with debugging output making
> configuration changes easier.
I agree, we need to add some debug messages to kvm_config.py.
The smp-raw connection, of course, should not have been left in the
configuration file. It was just us testing the code.
>   - documentation of keywords and structure would be nice, explaining
>   what -variant , only and @ are doing for you, etc.

Please read http://kvm.qumranet.com/kvmwiki/RegressionTests/ConfigFile2

>   - it seems like the definition and rules ought to be separate from the
>   last section which defines which tests to run (the fc8_quick area), so
>   adding something as simple as include support to kvm_config.py would
>   be sufficient to support a common definition file but different
>   testing rules.
Include support is one way to do it. We thought of a different way, which is
to add rules from the control file, so the control file picks the test-list it
wants to run. Your suggestion is simpler, but with it you need to change both
a config file and a control file to change the test-list; we need to change
only the control file.
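
The control-file approach could be sketched roughly like this; the parser API
shown is hypothetical, not kvm_config.py's real interface:

```python
# Sketch: the control file itself appends the selection rules, so only the
# control file changes per test-list.  The Config API here is illustrative.

class Config:
    def __init__(self):
        self.rules = []
    def parse_file(self, path):
        self.rules.append(("file", path))      # shared definitions/variants
    def parse_string(self, text):
        self.rules.append(("inline", text.strip()))

cfg = Config()
cfg.parse_file("kvm_tests.cfg")                # common definition file
cfg.parse_string("""
    only qcow2
    only fc8_quick
""")                                           # per-run selection, in the control file
```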

> 
> - kvm_runtest_2 as mentioned doesn't mess with your host networking and
> relies on -net user and redir, would be good to plumb through -net tap
> support that can be configured instead of always using -net user
We want to add -net tap support, as that is what users usually use.
kvm_runtest does exactly that (as part of kvm_host.cfg). The drawbacks of tap
are (among others):
  - One must run tests as root, or play with sudo/chmod (also true for
/dev/kvm, but simpler there).
  - You have to have a dhcpd around. kvm_runtest by default runs a local dhcpd
to serve kvm guests (as part of its setup/cleanup tests).
  - It is a bit more difficult to configure.

> 
> - make -vnc parameter config/optional
Agreed

> 
> I noticed the references to the setup isos for windows that presumbly
> install cygwin telnetd/sshd, are those available?  if the isos
> themselves aren't, if the build instructions are, that would be very
> useful.
You are right. We do have installation ISO images for telnetd/sshd.
I did not want to commit ISO images. Also, I am not sure about licensing, and
I prefer that we generate them on the user's machine. We'll add the build
instructions to the wiki.

> 
> - guest install wizard using md5sum region matching ... ouch.  This is
> quite fickle.  I've seen different kvms generate different md5sum for
> the same region a couple of times.  I know distributing screenshots of
> certain OSes is a grey area, but it would be nice to plumb through
> screenshot comparison and make that configurable.  FWIW, I'll probably
> look at pulling the screenshot comparison bits from kvmtest and getting
> that integrated in kvm_runtest_2.
Creating a step file is not as easy as it seems, exactly for that reason. One
has to pick a part of the screenshot that is only available when input is
expected and that will be consistent. We were thinking of replacing the md5sum
with a tiny compressed image of the picked part of the screen.
We had two other implementations of guest-wizard: one that only compares
two/three consecutive screendumps (with problems from, e.g., a ticking clock),
and one similar to kvmtest. The kvmtest way is to let users create their own
screendumps to be used later. I did not want to add so many screendump images
to the repository. Step-Maker keeps the images it uses, so we can compare them
upon failure. Step-Editor lets the user change a single barrier_2 step (or
more) by looking at the original image and picking a different area.

> 
> 
> - kvm_runtest_2 looks a lot more like a regular autotest test, which is
> a Good Thing(TM).  There are still some things that would prevent it
> going upstream autotest (which I assume is the long term goal)
You assume correctly.

>   - a lot of the ssh and scp work to copy autotest client into a guest
>   is already handled by autoserv
That is true, but we want to be able to run it as a client test too. That way
a user does not have to install the server to run kvm tests on his/her machine.

>   - vm.py has a lot of infrastructure that should be integrated into
>   autotest/server/kvm.py  or possibly client-side common code to support
>   the next comment
In the long term, there should be a client-server shared directory that deals
with kvm guests (letting the host client be the server for kvm-guest clients).

>   - kvm_tests.py defines new tests as functions, each of these tests
>   should be a separate client tests  which sounds like a pain, but
>   should allow for easier test composition and hopefully make it easier
>   to add new tests that look like any other client side test with just
>   the implementation.py and control file
>     - this model moves toward eliminating kvm_runtest_2 and having a
>     server-side generate a set of tests to run and spawning those on a
>     target system.
I am not sure that I follow. Why is implementing a test as a function a pain?
The plan is to keep kvm_runtest_2 on the client side, but move the
configuration file + parser to the server (if one wants to dispatch tests from
the server). The server can dispatch the client test (using the kvm_runtest_2
"test" + dictionary + tag). We have dependencies, and we can spread unrelated
kvm tests among similar hosts (e.g. install the guest OS and run tests for
Fedora-10 64-bit on one machine, and install the guest OS and run tests for
Windows-XP 32-bit on another).
We may have to add hosts to the configuration file, or add them at run time
(from the control file?). We definitely do not want to go back to the way
kvm_runtest works (playing with symbolic links so that autotest can find and
run the test), and we do not want too many kvm_test_NAME directories under
client/tests/.

> 
>   I do still like the idea of having a client-side test that can just
>   run on a developer/user's system to produce results without having to
>   configure all of the autotest server-side bits.
Me too :-)

Thanks for all the comments and suggestions,
     Uri.





* Re: kvm-autotest -- introducing kvm_runtest_2
  2009-03-04  8:58   ` Uri Lublin
@ 2009-03-04 18:15     ` Ryan Harper
  2009-03-04 18:59       ` sudhir kumar
  2009-03-09 16:23     ` Ryan Harper
  1 sibling, 1 reply; 19+ messages in thread
From: Ryan Harper @ 2009-03-04 18:15 UTC (permalink / raw)
  To: Uri Lublin; +Cc: Ryan Harper, KVM List

* Uri Lublin <uril@redhat.com> [2009-03-04 02:59]:
> 
> >  - it seems like the definition and rules ought to be separate from the
> >  last section which defines which tests to run (the fc8_quick area), so
> >  adding something as simple as include support to kvm_config.py would
> >  be sufficient to support a common definition file but different
> >  testing rules.
> An include support is one way to do it. We thought of a different way, 
> which is
> to add rules from the control file. So the control file would pick the 
> test-list
> it wants to run. Your suggestion is simpler but you need both a config file 
> and
> a control file to change the test-list. We need to change only the control 
> file.

OK, well, I viewed almost all of the test config file as static.  The
rules about guests, features, and config variants can change over time, but
not that often, which is why I wanted an include of that mostly static data
and then something else that picks which guests and tests to run.  It does
make sense for the control file to pick the set of tests, so I think we're in
agreement, though I do still think include support is a good idea, just much
lower on the priority list.

> 
> >
> >- kvm_runtest_2 as mentioned doesn't mess with your host networking and
> >relies on -net user and redir, would be good to plumb through -net tap
> >support that can be configured instead of always using -net user
> We want to add -net tap support, as that is what users usually use.
> kvm_runtest does exactly that (a part of kvm_host.cfg). The drawbacks of 
> tap are
> (among others):
>  - One must run tests as root, or play with sudo/chmod (True for /dev/kvm, 
>  but
> simpler)
>  - You have to a have a dhcpd around. kvm_runtest by default runs a local 
>  dhcpd
> to serve kvm guests (part of setup/cleanup tests).
>  - A bit more difficult to configure.
> 

I don't want the test to *set up* tap and all of the infrastructure
like dhcp or dnsmasq... I just want to be able to configure whether a
guest is launched with -net tap or -net user.  kvm_runtest_2 punts on
building and installing kvm as well as on the networking, which is a great
idea; I just want it to be flexible enough to run with -net tap as well
as -net user.


> >
> >I noticed the references to the setup isos for windows that presumbly
> >install cygwin telnetd/sshd, are those available?  if the isos
> >themselves aren't, if the build instructions are, that would be very
> >useful.
> You are right. We do have an installation iso images for telnetd/sshd.
> I did not want to commit iso images. Also, I am not sure about licensing, 
> and I prefer that we would generate them on the user machine. We'll add the 
> build instructions to the wiki.

Agreed.  Sounds like a plan.

> >- guest install wizard using md5sum region matching ... ouch.  This is
> >quite fickle.  I've seen different kvms generate different md5sum for
> >the same region a couple of times.  I know distributing screenshots of
> >certain OSes is a grey area, but it would be nice to plumb through
> >screenshot comparison and make that configurable.  FWIW, I'll probably
> >look at pulling the screenshot comparison bits from kvmtest and getting
> >that integrated in kvm_runtest_2.
> Creating a step file is not as easy as it seems, exactly for that reason. 
> One has to pick a part of the screenshot that only available when input is 
> expected and that would be consistent. We were thinking of replacing the 
> md5sum with a tiny compressed image of the part of the image that was 
> picked.

It isn't just that step file creation isn't easy; even with a good step file
with smart region boxes, md5sum can *still* fail because KVM renders one pixel
in the region differently than on the system where the original was created,
producing false-positive failures.


> We had two other implementation for guest-wizard, one which only compares 
> two/three consecutive screendumps (problems with e.g. clock ticking), and 

Right, I like the idea of the region selection in stepmaker; it lets us
avoid areas which have VM-specific info, like the device path or clock.
I'd like to keep the current stepmaker and region code; what I'd like to
see, instead of just an md5sum compare of the cropped region, is the ability
to plug in different comparisons.  If a user does have screens from
stepmaker available, guest_wizard could attempt a screen compare
like we do in kvmtests, with match percentages.  If no screens are
available, fall back on md5 or some other comparison algorithm.
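
Something along these lines, where the thresholds and helper names are
illustrative only, not actual kvmtest or kvm_runtest_2 code:

```python
# Sketch of a pluggable comparison: use a fuzzy pixel match when reference
# screens are available, otherwise fall back on an exact md5 compare.
import hashlib

def fuzzy_match(ref, cur, threshold=0.98):
    """Fraction of identical bytes; tolerates a few stray pixels."""
    same = sum(1 for a, b in zip(ref, cur) if a == b)
    return same / max(len(ref), 1) >= threshold

def region_matches(ref_pixels, cur_pixels, expected_md5=None):
    if ref_pixels is not None:               # user supplied reference screens
        return fuzzy_match(ref_pixels, cur_pixels)
    return hashlib.md5(cur_pixels).hexdigest() == expected_md5

ref = bytes(256)
cur = bytearray(ref); cur[7] ^= 1            # one pixel differs
print(region_matches(ref, bytes(cur)))       # fuzzy path: still a match
```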

> one similar to kvmtest. The kvmtest way is to let the user create his/her 
> own screendumps to be used later. I did not want to add so many screendumps 
> images to the repository. Step-Maker keeps the images it uses, so we can 
> compare them upon failure. Step-Editor lets the user to change a single 
> barrier_2 step (or more) by looking at the original image and picking a 
> different area.

Agreed, I don't want to commit screens to the repo either; I just want
to be able to use screens if a user has them available.

> >  - a lot of the ssh and scp work to copy autotest client into a guest
> >  is already handled by autoserv
> That is true, but we want to be able to run it as client test too. That way 
> a user does not have to install the server to run kvm tests on his/her 
> machine.

While true, we should try to use the existing server code for the autotest
install.

> >  - vm.py has a lot of infrastructure that should be integrated into
> >  autotest/server/kvm.py  or possibly client-side common code to support
> >  the next comment
> In the long term, there should be a client-server shared directory that 
> deals with kvm guests (letting the host-client be the server for kvm-guests 
> clients)

I believe client/common_lib is imported into the server side as common code,
so moving the kvm infrastructure bits there will allow the server side and any
other client test to manipulate VM/KVM objects.

> 
> >  - kvm_tests.py defines new tests as functions, each of these tests
> >  should be a separate client tests  which sounds like a pain, but
> >  should allow for easier test composition and hopefully make it easier
> >  to add new tests that look like any other client side test with just
> >  the implementation.py and control file
> >    - this model moves toward eliminating kvm_runtest_2 and having a
> >    server-side generate a set of tests to run and spawning those on a
> >    target system.
> I am not sure that I follow. Why implementing a test as a function is a 
> pain ?

A test as a function of course isn't a pain.  What I meant was that if
I already have guests and I just want to do a migration test, it
would be nice to just be able to:

% cd client/tests/kvm/migration
% $AUTOTEST_BIN ./control

I'd like to move away from tests eval-ing and exec-ing other tests; it
just feels a little hacky and stands out versus other autotest client
tests.

We can probably table the discussion until we push patches at autotest
and see what that community thinks of kvm_runtest_2 at that time.


> The plan is to keep kvm_runtest_2 in the client side, but move the 
> configuration file + parser to the server (if one wants to dispatch test 
> from the server). The server can dispatch the client test (using 
> kvm_runtest_2 "test" + dictionary + tag). We have dependencies and we can 
> spread unrelated kvm tests among similar hosts (e.g install guest OS and 
> run tests for Fedora-10 64 bit on one machine, and install guest OS and run 
> tests for Windows-XP 32 bit on another).

Yeah, that sounds reasonable.

> We may have to add hosts into the configuration file, or add it while 
> running (from the control file ?). We sure do not want to go back to the 
> way kvm_runtest works (playing with symbolic links so that autotest would 
> find and run the test), and we do not want too many kvm_test_NAME 
> directories under client/tests/ .

Agreed on no symbolic links.  If we move the common kvm utils and objects
to client/common_lib, that avoids any of that hackery.

On the dir structure, I agree we shouldn't pollute client/tests with
a ton of kvm_* tests.  I'll look around upstream autotest and see if
there is another client-side test to use as an example.

> >  I do still like the idea of having a client-side test that can just
> >  run on a developer/user's system to produce results without having to
> >  configure all of the autotest server-side bits.
> Me too :-)
> 
> Thanks for all the comments and suggestions,

Sure


-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ryanh@us.ibm.com


* Re: kvm-autotest -- introducing kvm_runtest_2
  2009-03-04 18:15     ` Ryan Harper
@ 2009-03-04 18:59       ` sudhir kumar
  2009-03-04 22:23         ` Dor Laor
  0 siblings, 1 reply; 19+ messages in thread
From: sudhir kumar @ 2009-03-04 18:59 UTC (permalink / raw)
  To: Ryan Harper; +Cc: Uri Lublin, KVM List

On Wed, Mar 4, 2009 at 11:45 PM, Ryan Harper <ryanh@us.ibm.com> wrote:
> * Uri Lublin <uril@redhat.com> [2009-03-04 02:59]:
>>
>> >  - it seems like the definition and rules ought to be separate from the
>> >  last section which defines which tests to run (the fc8_quick area), so
>> >  adding something as simple as include support to kvm_config.py would
>> >  be sufficient to support a common definition file but different
>> >  testing rules.
>> An include support is one way to do it. We thought of a different way,
>> which is
>> to add rules from the control file. So the control file would pick the
>> test-list
>> it wants to run. Your suggestion is simpler but you need both a config file
>> and
>> a control file to change the test-list. We need to change only the control
>> file.
>
> OK, well, I viewed almost all of the test config file as static.  The
> rules about guests, features, config variants, can change over time, but
> not that often which is what I was wanting an include of that mostly
> static data and then something else that picked which guests and tests
> to run.  It does make sense for the control file to pick the set of
> tests, so I think we're in agreement, though I do think adding include
> support is still a good idea, but much lower on the priority list.
>
>>
>> >
>> >- kvm_runtest_2 as mentioned doesn't mess with your host networking and
>> >relies on -net user and redir, would be good to plumb through -net tap
>> >support that can be configured instead of always using -net user
>> We want to add -net tap support, as that is what users usually use.
>> kvm_runtest does exactly that (a part of kvm_host.cfg). The drawbacks of
>> tap are
>> (among others):
>>  - One must run tests as root, or play with sudo/chmod (True for /dev/kvm,
>>  but
>> simpler)
>>  - You have to a have a dhcpd around. kvm_runtest by default runs a local
>>  dhcpd
>> to serve kvm guests (part of setup/cleanup tests).
>>  - A bit more difficult to configure.
>>
>
> I don't want to have the test *setup* tap and all of the infrastructure
> like dhcp, or dnsmasq... I just want to be able to configure whether a
> guest is launched with -net tap or -net user.  kvm_runtest_2 punts on
> building and install kvm as well as the networking, which is a great
> idea, I just want to be flexible enough to run kvm_runtest_2 with -net
> tap as well as -net user.
>
>
>> >
>> >I noticed the references to the setup isos for windows that presumbly
>> >install cygwin telnetd/sshd, are those available?  if the isos
>> >themselves aren't, if the build instructions are, that would be very
>> >useful.
>> You are right. We do have an installation iso images for telnetd/sshd.
>> I did not want to commit iso images. Also, I am not sure about licensing,
>> and I prefer that we would generate them on the user machine. We'll add the
>> build instructions to the wiki.
>
> Agreed.  Sounds like a plan.
>
>> >- guest install wizard using md5sum region matching ... ouch.  This is
>> >quite fickle.  I've seen different kvms generate different md5sum for
>> >the same region a couple of times.  I know distributing screenshots of
>> >certain OSes is a grey area, but it would be nice to plumb through
>> >screenshot comparison and make that configurable.  FWIW, I'll probably
>> >look at pulling the screenshot comparison bits from kvmtest and getting
>> >that integrated in kvm_runtest_2.
>> Creating a step file is not as easy as it seems, exactly for that reason.
I agree here 100%, as per my experience.
>> One has to pick a part of the screenshot that only available when input is
>> expected and that would be consistent. We were thinking of replacing the
>> md5sum with a tiny compressed image of the part of the image that was
>> picked.
>
> It isn't just that step file creation isn't easy is that even with a
> good stepfile with smart region boxes, md5sum can *still* fail because
> KVM renders one pixel in the region differently than the system where the
> original was created; this creates false positives failures.
I have also faced this issue. Even on the same system a step file
may fail: I have seen a few (though infrequent) situations where the same
md5sum passed on one attempt and failed on another.
>
>
>> We had two other implementation for guest-wizard, one which only compares
>> two/three consecutive screendumps (problems with e.g. clock ticking), and
>
> Right, I like the idea of the region selection in stepmaker, it lets us
> avoid areas which have VM specific info, like the device path or clock.
> I'd like to keep the current stepmaker and region code, what I'd like to
> see instead of just md5sum compare of the cropped region, to be able to
> plug in different comparisons.  If a user does have screens from
> stepmaker available, guest_wizard could attempt to do screen compare
> like we do in kvmtests with match percentages.  If no screens are
> available, fallback on md5 or some other comparison algorithm.
That sounds better. Yet none of my step files worked in one go. I
remember running my test and stepmaker in parallel, and continuing to take
screenshots until one passed and putting that md5sum in the step
file. So at the end I was thoroughly mixed up with respect to the cropped
images, with no idea which cropped image corresponded to which md5sum.
>
Though the RHEL5.3 step files that I generated during text-mode
installation were quite robust.

>> one similar to kvmtest. The kvmtest way is to let the user create his/her
>> own screendumps to be used later. I did not want to add so many screendumps
>> images to the repository. Step-Maker keeps the images it uses, so we can
>> compare them upon failure. Step-Editor lets the user to change a single
>> barrier_2 step (or more) by looking at the original image and picking a
>> different area.
>
> Agreed, I don't want to commit screens to the repo either, I just want
> to be able to use screens if a user has them available.
>
I have two questions with respect to step files.
1. Timeouts: the timeouts may fall short if a step file generated on a
high-end machine is used on a very low-spec machine, or when installing N
virtual machines (say N ~ 50, 100, etc.) in parallel.
2. If there are changes to the KVM display in the future, the md5sums will
fail. Are we prepared for that?

>> >  - a lot of the ssh and scp work to copy autotest client into a guest
>> >  is already handled by autoserv
>> That is true, but we want to be able to run it as client test too. That way
>> a user does not have to install the server to run kvm tests on his/her
>> machine.
>
> While true, we should try to use the existing server code for autotest
> install.
>
>> >  - vm.py has a lot of infrastructure that should be integrated into
>> >  autotest/server/kvm.py  or possibly client-side common code to support
>> >  the next comment
>> In the long term, there should be a client-server shared directory that
>> deals with kvm guests (letting the host-client be the server for kvm-guests
>> clients)
>
> I believe client/common_lib is imported into server-side as common code,
> so moving kvm infrastructure bits there will allow server-side and any
> other client test to manipulate VM/KVM objects.
>
>>
>> >  - kvm_tests.py defines new tests as functions, each of these tests
>> >  should be a separate client tests  which sounds like a pain, but
>> >  should allow for easier test composition and hopefully make it easier
>> >  to add new tests that look like any other client side test with just
>> >  the implementation.py and control file
>> >    - this model moves toward eliminating kvm_runtest_2 and having a
>> >    server-side generate a set of tests to run and spawning those on a
>> >    target system.
>> I am not sure that I follow. Why implementing a test as a function is a
>> pain ?
>
> Test as a function of course isn't a pain.  What I meant was that if
> I've already have to guests and I just want to do a migration test, it
> would be nice to just be able to:
>
> % cd client/tests/kvm/migration
> % $AUTOTEST_BIN ./control
>
> I'd like to move away from tests eval-ing and exec-ing other tests; it
> just feels a little hacky and stands out versus other autotest client
> tests.
>
> We can probably table the discussion until we push patches at autotest
> and see what that community thinks of kvm_runtest_2 at that time.
>
>
>> The plan is to keep kvm_runtest_2 in the client side, but move the
>> configuration file + parser to the server (if one wants to dispatch test
>> from the server). The server can dispatch the client test (using
>> kvm_runtest_2 "test" + dictionary + tag). We have dependencies and we can
>> spread unrelated kvm tests among similar hosts (e.g. install guest OS and
>> run tests for Fedora-10 64 bit on one machine, and install guest OS and run
>> tests for Windows-XP 32 bit on another).
>
> Yeah, that sounds reasonable.
>
>> We may have to add hosts into the configuration file, or add it while
>> running (from the control file ?). We sure do not want to go back to the
>> way kvm_runtest works (playing with symbolic links so that autotest would
>> find and run the test), and we do not want too many kvm_test_NAME
>> directories under client/tests/.
>
> Agree with no symbolic links.  If we move common kvm utils and objects
> to client/common_lib that avoids any of that hackery.
>
> On the dir structure, I agree we don't have to pollute client/tests with
> a ton of kvm_* tests.  I'll look around upstream autotest and see if
> there is another client-side test to use as an example.
>
>> >  I do still like the idea of having a client-side test that can just
>> >  run on a developer/user's system to produce results without having to
>> >  configure all of the autotest server-side bits.
>> Me too :-)
>>
>> Thanks for all the comments and suggestions,
>
> Sure
>
>
> --
> Ryan Harper
> Software Engineer; Linux Technology Center
> IBM Corp., Austin, Tx
> ryanh@us.ibm.com
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>



-- 
Sudhir Kumar

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: kvm-autotest -- introducing kvm_runtest_2
  2009-03-04 18:59       ` sudhir kumar
@ 2009-03-04 22:23         ` Dor Laor
  0 siblings, 0 replies; 19+ messages in thread
From: Dor Laor @ 2009-03-04 22:23 UTC (permalink / raw)
  To: sudhir kumar; +Cc: Ryan Harper, Uri Lublin, KVM List

sudhir kumar wrote:
> On Wed, Mar 4, 2009 at 11:45 PM, Ryan Harper <ryanh@us.ibm.com> wrote:
>   
>> * Uri Lublin <uril@redhat.com> [2009-03-04 02:59]:
>>     
>>>>  - it seems like the definition and rules ought to be separate from the
>>>>  last section which defines which tests to run (the fc8_quick area), so
>>>>  adding something as simple as include support to kvm_config.py would
>>>>  be sufficient to support a common definition file but different
>>>>  testing rules.
>>>>         
> Include support is one way to do it. We thought of a different way, which
> is to add rules from the control file. So the control file would pick the
> test-list it wants to run. Your suggestion is simpler, but you need both a
> config file and a control file to change the test-list; we need to change
> only the control file.
>>>       
OK, well, I viewed almost all of the test config file as static.  The
rules about guests, features, and config variants can change over time,
but not that often, which is why I wanted an include of that mostly
static data and then something else that picked which guests and tests
to run.  It does make sense for the control file to pick the set of
tests, so I think we're in agreement, though I do think adding include
support is still a good idea, just much lower on the priority list.
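The include support discussed above could be prototyped along these lines.
This is only a sketch: the `include <file>` directive and the splice-in-place
behavior are assumptions for illustration, not existing kvm_config.py syntax.

```python
import os

def read_config(path, seen=None):
    """Hypothetical 'include' support for a kvm_config.py-style parser:
    splice the lines of an included file in place, resolving paths
    relative to the including file and guarding against include cycles."""
    seen = set() if seen is None else seen
    real = os.path.realpath(path)
    if real in seen:
        raise ValueError("circular include: %s" % path)
    seen.add(real)
    lines = []
    with open(path) as f:
        for line in f:
            stripped = line.strip()
            if stripped.startswith("include "):
                # splice the included file's lines in place of the directive
                inc = os.path.join(os.path.dirname(path),
                                   stripped.split(None, 1)[1])
                lines.extend(read_config(inc, seen))
            else:
                lines.append(line.rstrip("\n"))
    return lines
```

This keeps the mostly static guest/variant definitions in a shared file while
each control file's config only lists the tests it wants to run.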
>>
>>     
>>>> - kvm_runtest_2 as mentioned doesn't mess with your host networking and
>>>> relies on -net user and redir, would be good to plumb through -net tap
>>>> support that can be configured instead of always using -net user
>>>>         
>>> We want to add -net tap support, as that is what users usually use.
>>> kvm_runtest does exactly that (as part of kvm_host.cfg). The drawbacks of
>>> tap are (among others):
>>>  - One must run tests as root, or play with sudo/chmod (true for
>>>    /dev/kvm too, but simpler).
>>>  - You have to have a dhcpd around. kvm_runtest by default runs a local
>>>    dhcpd to serve kvm guests (part of its setup/cleanup tests).
>>>  - It is a bit more difficult to configure.
>>>
>>>       
>> I don't want to have the test *set up* tap and all of the infrastructure
>> like dhcp or dnsmasq... I just want to be able to configure whether a
>> guest is launched with -net tap or -net user.  kvm_runtest_2 punts on
>> building and installing kvm as well as on the networking, which is a
>> great idea; I just want to be flexible enough to run kvm_runtest_2 with
>> -net tap as well as -net user.
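Making the networking mode configurable could be as small as a helper that
builds the qemu arguments from one config value. A sketch only: the helper
name and the `mode` knob are hypothetical, while the flag spellings
(`-net user`, `-redir`, `-net tap`) follow the qemu command-line syntax of
this era.

```python
def qemu_net_args(mode, redirs=None, ifname="tap0"):
    """Build qemu networking arguments for either user-mode networking
    (with optional host->guest port redirections) or a tap device."""
    if mode == "user":
        args = ["-net", "nic", "-net", "user"]
        # host->guest TCP redirections, e.g. host port 5000 -> guest ssh (22)
        for host_port, guest_port in (redirs or []):
            args += ["-redir", "tcp:%d::%d" % (host_port, guest_port)]
        return args
    if mode == "tap":
        # assumes the tap device and a dhcpd were set up outside the test
        return ["-net", "nic", "-net", "tap,ifname=%s,script=no" % ifname]
    raise ValueError("unknown network mode: %r" % mode)

print(qemu_net_args("user", redirs=[(5000, 22)]))
```

With this, switching a guest between user and tap networking is a one-line
change in the config file rather than test code.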
>>
>>
>>     
>>>> I noticed the references to the setup isos for windows that presumably
>>>> install cygwin telnetd/sshd; are those available?  If the isos
>>>> themselves aren't, the build instructions would be very useful.
>>>>
>>> You are right. We do have installation iso images for telnetd/sshd.
>>> I did not want to commit iso images. Also, I am not sure about licensing,
>>> and I prefer that we generate them on the user's machine. We'll add the
>>> build instructions to the wiki.
>>>       
>> Agreed.  Sounds like a plan.
>>
>>     
>>>> - guest install wizard using md5sum region matching ... ouch.  This is
>>>> quite fickle.  I've seen different kvms generate different md5sum for
>>>> the same region a couple of times.  I know distributing screenshots of
>>>> certain OSes is a grey area, but it would be nice to plumb through
>>>> screenshot comparison and make that configurable.  FWIW, I'll probably
>>>> look at pulling the screenshot comparison bits from kvmtest and getting
>>>> that integrated in kvm_runtest_2.
>>>>         
>>> Creating a step file is not as easy as it seems, exactly for that reason.
>>>       
> I agree here 100% as per my experience.
>   
>>> One has to pick a part of the screenshot that is only available when
>>> input is expected and that remains consistent. We were thinking of
>>> replacing the md5sum with a tiny compressed image of the picked part of
>>> the screenshot.
>>>       
>> It isn't just that step file creation isn't easy; it's that even with a
>> good stepfile with smart region boxes, md5sum can *still* fail because
>> KVM renders one pixel in the region differently than the system where the
>> original was created, and this creates false positive failures.
>>     
> I also have faced this issue. Even on the same system the step file
> may fail. I saw a few (though not frequent) situations where the same
> md5sum passed one time and failed on another attempt.
>   
Maybe we can do some type of graphic processing on the original bitmap
to reduce its quality drastically, then do the md5sum. It won't be a 100%
reliable process, but it can deal with most problems. Anyway, I don't
think we run into these issues here.
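The quality-reduction idea above can be sketched in a few lines: quantize
each 8-bit channel value before hashing, so a one-unit rendering difference
no longer changes the digest. The function name and the number of quantization
levels are illustrative assumptions, not anything in the kvm-autotest tree.

```python
import hashlib

def fuzzy_md5(pixels, levels=8):
    """Quantize each 8-bit channel value down to `levels` buckets before
    hashing, so tiny rendering differences (a pixel off by one) no longer
    change the digest."""
    step = 256 // levels
    quantized = bytes((b // step) * step for b in pixels)
    return hashlib.md5(quantized).hexdigest()

# Two renderings of the "same" region that differ by one pixel LSB:
a = bytes([200, 199, 198, 50, 50, 50])
b = bytes([200, 199, 198, 50, 50, 51])
assert hashlib.md5(a).hexdigest() != hashlib.md5(b).hexdigest()  # exact md5 breaks
assert fuzzy_md5(a) == fuzzy_md5(b)  # coarse quantization survives it
```

As noted, this is not 100% reliable: a genuine screen difference that falls
entirely inside one quantization bucket would also go undetected.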

As a rule of thumb I like to use kickstart files instead of step maker
files when possible. That takes the timing issue out of the equation, and
it is very important for running the same test over plain qemu and kvm.
Windows also has its version of this (answer files), so even the 'gui'
OS can be used that way.
>>     
>>> We had two other implementations for guest-wizard, one which only compares
>>> two/three consecutive screendumps (problems with e.g. clock ticking), and
>>>       
>> Right, I like the idea of the region selection in stepmaker; it lets us
>> avoid areas which have VM-specific info, like the device path or clock.
>> I'd like to keep the current stepmaker and region code. What I'd like to
>> see, instead of just an md5sum compare of the cropped region, is the
>> ability to plug in different comparisons.  If a user does have screens
>> from stepmaker available, guest_wizard could attempt to do a screen
>> compare like we do in kvmtests, with match percentages.  If no screens
>> are available, fall back on md5 or some other comparison algorithm.
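The pluggable comparison described above might look like this minimal
sketch: a match-percentage comparator over the cropped region's raw bytes,
used when reference screens are available, with exact md5 as the fallback.
The 98% threshold and the function name are assumptions for illustration,
not kvmtest's actual algorithm.

```python
def region_match(region_a, region_b, threshold=0.98):
    """Fuzzy alternative to an exact-md5sum barrier: accept the match when
    at least `threshold` of the cropped region's bytes are identical."""
    if len(region_a) != len(region_b) or not region_a:
        return False
    same = sum(1 for x, y in zip(region_a, region_b) if x == y)
    return same / len(region_a) >= threshold
```

A guest-wizard barrier could then try `region_match` against the reference
crop first and only fall back to the md5 digest when no reference exists.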
>>     
> That sounds better. Yet none of my step files worked in one go. I
> remember running my test and stepmaker in parallel and continuing to
> take screenshots until one passed, then putting that md5sum in the step
> file. So at the end I was completely messed up with respect to the
> cropped images and had no idea which cropped image corresponded to which
> md5sum.
>   
> Though the RHEL5.3 step files that I generated in the text mode
> installation were quite robust.
>
>   
>>> one similar to kvmtest. The kvmtest way is to let the user create his/her
>>> own screendumps to be used later. I did not want to add so many screendump
>>> images to the repository. Step-Maker keeps the images it uses, so we can
>>> compare them upon failure. Step-Editor lets the user change a single
>>> barrier_2 step (or more) by looking at the original image and picking a
>>> different area.
>>>       
>> Agreed, I don't want to commit screens to the repo either, I just want
>> to be able to use screens if a user has them available.
>>
>>     
> I have two questions with respect to stepfiles.
> 1. The timeouts: timeouts may fall short if a step file generated on a
> high-end machine is used on a very low-end machine, or when installing
> N virtual machines (say N ~ 50 or 100) in parallel.
> 2. If the KVM display changes in the future, the md5sums will fail.
> Are we prepared for that?
>
>   
>>>>  - a lot of the ssh and scp work to copy autotest client into a guest
>>>>  is already handled by autoserv
>>>>         
>>> That is true, but we want to be able to run it as a client test too. That
>>> way a user does not have to install the server to run kvm tests on his/her
>>> machine.
>>>       
>> While true, we should try to use the existing server code for autotest
>> install.
>>
>>     
>>>>  - vm.py has a lot of infrastructure that should be integrated into
>>>>  autotest/server/kvm.py  or possibly client-side common code to support
>>>>  the next comment
>>>>         
>>> In the long term, there should be a client-server shared directory that
>>> deals with kvm guests (letting the host-client be the server for kvm-guests
>>> clients)
>>>       
>> I believe client/common_lib is imported into server-side as common code,
>> so moving kvm infrastructure bits there will allow server-side and any
>> other client test to manipulate VM/KVM objects.
>>
>>     
>>>>  - kvm_tests.py defines new tests as functions, each of these tests
>>>>  should be a separate client test, which sounds like a pain, but
>>>>  should allow for easier test composition and hopefully make it easier
>>>>  to add new tests that look like any other client side test with just
>>>>  the implementation.py and control file
>>>>    - this model moves toward eliminating kvm_runtest_2 and having a
>>>>    server-side generate a set of tests to run and spawning those on a
>>>>    target system.
>>>>         
>>> I am not sure that I follow. Why is implementing a test as a function a
>>> pain?
>>>       
>> Test as a function of course isn't a pain.  What I meant was that if
>> I already have two guests and I just want to do a migration test, it
>> would be nice to just be able to:
>>
>> % cd client/tests/kvm/migration
>> % $AUTOTEST_BIN ./control
>>
>> I'd like to move away from tests eval-ing and exec-ing other tests; it
>> just feels a little hacky and stands out versus other autotest client
>> tests.
>>
>> We can probably table the discussion until we push patches at autotest
>> and see what that community thinks of kvm_runtest_2 at that time.
>>
>>
>>     
>>> The plan is to keep kvm_runtest_2 in the client side, but move the
>>> configuration file + parser to the server (if one wants to dispatch test
>>> from the server). The server can dispatch the client test (using
>>> kvm_runtest_2 "test" + dictionary + tag). We have dependencies and we can
>>> spread unrelated kvm tests among similar hosts (e.g. install guest OS and
>>> run tests for Fedora-10 64 bit on one machine, and install guest OS and run
>>> tests for Windows-XP 32 bit on another).
>>>       
>> Yeah, that sounds reasonable.
>>
>>     
>>> We may have to add hosts into the configuration file, or add it while
>>> running (from the control file ?). We sure do not want to go back to the
>>> way kvm_runtest works (playing with symbolic links so that autotest would
>>> find and run the test), and we do not want too many kvm_test_NAME
>>> directories under client/tests/.
>>>       
>> Agree with no symbolic links.  If we move common kvm utils and objects
>> to client/common_lib that avoids any of that hackery.
>>
>> On the dir structure, I agree we don't have to pollute client/tests with
>> a ton of kvm_* tests.  I'll look around upstream autotest and see if
>> there is another client-side test to use as an example.
>>
>>     
>>>>  I do still like the idea of having a client-side test that can just
>>>>  run on a developer/user's system to produce results without having to
>>>>  configure all of the autotest server-side bits.
>>>>         
>>> Me too :-)
>>>
>>> Thanks for all the comments and suggestions,
>>>       
>> Sure
>>
>>
>> --
>> Ryan Harper
>> Software Engineer; Linux Technology Center
>> IBM Corp., Austin, Tx
>> ryanh@us.ibm.com
>>
>>     
>
>
>
>   


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: kvm-autotest -- introducing kvm_runtest_2
  2009-03-04  8:58   ` Uri Lublin
  2009-03-04 18:15     ` Ryan Harper
@ 2009-03-09 16:23     ` Ryan Harper
  2009-03-09 17:53       ` Uri Lublin
  1 sibling, 1 reply; 19+ messages in thread
From: Ryan Harper @ 2009-03-09 16:23 UTC (permalink / raw)
  To: Uri Lublin; +Cc: Ryan Harper, KVM List

* Uri Lublin <uril@redhat.com> [2009-03-04 01:59]:
> >
> >I noticed the references to the setup isos for windows that presumably
> >install cygwin telnetd/sshd; are those available?  If the isos
> >themselves aren't, the build instructions would be very useful.
> You are right. We do have installation iso images for telnetd/sshd.
> I did not want to commit iso images. Also, I am not sure about licensing,
> and I prefer that we generate them on the user's machine. We'll add the
> build instructions to the wiki.

Any ETA on the build instructions?

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ryanh@us.ibm.com

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: kvm-autotest -- introducing kvm_runtest_2
  2009-03-09 16:23     ` Ryan Harper
@ 2009-03-09 17:53       ` Uri Lublin
  0 siblings, 0 replies; 19+ messages in thread
From: Uri Lublin @ 2009-03-09 17:53 UTC (permalink / raw)
  To: Ryan Harper; +Cc: KVM List

Ryan Harper wrote:
> * Uri Lublin <uril@redhat.com> [2009-03-04 01:59]:
>>> I noticed the references to the setup isos for windows that presumably
>>> install cygwin telnetd/sshd; are those available?  If the isos
>>> themselves aren't, the build instructions would be very useful.
>> You are right. We do have installation iso images for telnetd/sshd.
>> I did not want to commit iso images. Also, I am not sure about licensing,
>> and I prefer that we generate them on the user's machine. We'll add the
>> build instructions to the wiki.
> 
> Any ETA on the build instructions?
> 

We are currently in the process of moving the wiki to a new location.
Until that happens, please try the following instructions:

To build the cdrom for ssh:
1. mkdir /tmp/ssh_for_win
2. cd /tmp/ssh_for_win
3. download setupssh.exe
4. cat > setup.bat << EOF
setupssh.exe
net user Administrator 123456
netsh firewall set opmode disable
c:
cd c:\Program Files\OpenSSH\bin
mkgroup -l >> ..\etc\group
mkpasswd -l >> ..\etc\passwd
net start opensshd
EOF
5. genisoimage -o setupssh.iso setup.bat setupssh.exe


For win2008 -- enabling telnet
1. mkdir /tmp/telnet_for_windows
2. cd /tmp/telnet_for_windows/
3. cat > setup.bat << EOF
servermanagercmd -install telnet-server
sc config TlntSvr start= auto
netsh firewall set opmode disable
net start telnet
EOF
4. genisoimage -o setuptelnet.iso setup.bat
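The two recipes above can also be scripted. A sketch that writes setup.bat
with DOS (CRLF) line endings, which cmd.exe expects, and builds the
genisoimage command without running it; the helper names are hypothetical,
and genisoimage is assumed to be installed on the host.

```python
import os

# The batch lines from the ssh recipe above.
BAT_SSH = [
    "setupssh.exe",
    "net user Administrator 123456",
    "netsh firewall set opmode disable",
    "c:",
    r"cd c:\Program Files\OpenSSH\bin",
    r"mkgroup -l >> ..\etc\group",
    r"mkpasswd -l >> ..\etc\passwd",
    "net start opensshd",
]

def write_setup_bat(workdir, lines):
    """Write setup.bat with CRLF line endings so cmd.exe parses it."""
    path = os.path.join(workdir, "setup.bat")
    with open(path, "w", newline="\r\n") as f:
        f.write("\n".join(lines) + "\n")
    return path

def genisoimage_cmd(workdir, iso_name, files):
    """Command list to master the ISO, suitable for subprocess.check_call."""
    return (["genisoimage", "-o", os.path.join(workdir, iso_name)]
            + [os.path.join(workdir, f) for f in files])
```

Example: `genisoimage_cmd("/tmp/ssh_for_win", "setupssh.iso", ["setup.bat",
"setupssh.exe"])` reproduces step 5 of the ssh recipe.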


Regards,
     Uri.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: kvm-autotest -- introducing kvm_runtest_2
  2009-03-12 14:03 ` Michael Goldish
@ 2009-03-12 14:21   ` Ryan Harper
  0 siblings, 0 replies; 19+ messages in thread
From: Ryan Harper @ 2009-03-12 14:21 UTC (permalink / raw)
  To: Michael Goldish; +Cc: Ryan Harper, KVM List, Uri Lublin

* Michael Goldish <mgoldish@redhat.com> [2009-03-12 09:04]:
> 
> > 
> > yep, used stepeditor to fix; definitely worth documenting where one
> > should be invoking stepeditor -- from the steps dir; if you don't run
> > it from there, it won't find the steps_data dir =(
> 
> Are you absolutely sure about that? That's not the way it's supposed
> to be. I tried running it on several machines and it worked every time
> regardless of where I invoked it from. Since it resides in the
> kvm_runtest_2 dir, I usually just change to that directory and type
> ./stepeditor.py. Then I use file->open and pick the steps file, and it
> works.

You're right, it was the stepfile that I opened since the data dir
variable is created from the name of the stepfile.

> 
> If you have a very recent version, you should have a dir named
> "steps_data" under "kvm_runtest_2", right next to "steps". Inside
> "steps_data" you should have the data dirs. For "steps/RHEL5.steps"

I've got whatever is latest in the public repo.

> the corresponding data dir would be "steps_data/RHEL5.steps_data/".
> If you have a slightly older version, you should have the data dirs
> inside the "steps" dir, next to the stepfiles themselves. For
> "steps/RHEL5.steps", the corresponding data dir would be
> "steps/RHEL5.steps_data/".
> 
> > I'll have to go back and re-read your email on where to put the
> > reference ppm files so one gets the reference comparison.
> 
> The paragraph above applies to the reference comparison as well.

OK, cool.

> > Right - I suppose it might be better if the names of the windows iso
> > disks matched how MS names them in MSDN. For example, kvm_runtest refers
> > to Windows2008-x64.iso, which doesn't match any name from MSDN; what we
> > have is:
> > en_windows_server_2008_datacenter_enterprise_standard_x64_dvd_X14-26714.iso
> 
> This is a very good idea. I wonder how we can find out the MSDN names
> of the ISOs we have.  BTW, did the ISO you mentioned work with
> kvm_runtest?

MSDN lists the md5 and maybe sha1 hashes for the isos on the website
where they are downloaded.

That iso works until the step where it needs to set the password for the
user, and as we've discussed, without the original ppm files, I can't
figure out why it fails to match that screen.


-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ryanh@us.ibm.com

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: kvm-autotest -- introducing kvm_runtest_2
       [not found] <337817070.1631851236866509897.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
@ 2009-03-12 14:03 ` Michael Goldish
  2009-03-12 14:21   ` Ryan Harper
  0 siblings, 1 reply; 19+ messages in thread
From: Michael Goldish @ 2009-03-12 14:03 UTC (permalink / raw)
  To: Ryan Harper; +Cc: KVM List, Uri Lublin


----- "Ryan Harper" <ryanh@us.ibm.com> wrote:

> * Michael Goldish <mgoldish@redhat.com> [2009-03-12 02:26]:
> > 
> > > > Regarding the stepfiles you created for Linux -- I can't help much
> > > > with those since I don't have the data. I do believe that if I had the
> > > > data and the stepfiles I could quickly identify the problem, so if you
> > > > think those can be sent to us, I'd like to have them.
> > > 
> > > I created a stepfile for RHEL5 and what I'm seeing is that one of the
> > > screens I captured in stepmaker ended up having a focus ring around
> > > something and on replay the focus isn't there.  This situation isn't
> > > something that a new algo will fix, as you pointed out.  I'm wondering if
> > > this is something you've seen.  I don't quite understand how it would
> > > happen since stepmaker and the replay send the same keystrokes.  I also
> > > don't see how in general this can be avoided.
> > 
> > The problem sounds familiar. Does the ring appear around one of the
> > GNOME menubars, i.e. around "Applications" or "System"? GNOME seems to
> > be somewhat nondeterministic with those rings. If you run the stepfile
> > several times, you'll notice that in most cases you'll see a focus
> > ring (or no focus ring, I don't quite remember) and the rest of the
> > time you'll get the other case.
> 
> Ding Ding Ding! =)
> 
> > 
> > This can be avoided either with experience, or a good wiki entry on
> > picking the right barriers (which we plan to create). But you don't
> > have to avoid making mistakes with stepmaker -- most types of mistakes
> > are fixed very quickly and easily with stepeditor.
> 
> yep, used stepeditor to fix; definitely worth documenting where one
> should be invoking stepeditor -- from the steps dir; if you don't run
> it from there, it won't find the steps_data dir =(

Are you absolutely sure about that? That's not the way it's supposed to be. I tried running it on several machines and it worked every time regardless of where I invoked it from. Since it resides in the kvm_runtest_2 dir, I usually just change to that directory and type ./stepeditor.py. Then I use file->open and pick the steps file, and it works.

If you have a very recent version, you should have a dir named "steps_data" under "kvm_runtest_2", right next to "steps". Inside "steps_data" you should have the data dirs. For "steps/RHEL5.steps" the corresponding data dir would be "steps_data/RHEL5.steps_data/".
If you have a slightly older version, you should have the data dirs inside the "steps" dir, next to the stepfiles themselves. For "steps/RHEL5.steps", the corresponding data dir would be "steps/RHEL5.steps_data/".

> I'll have to go back and re-read your email on where to put the
> reference ppm files so one gets the reference comparison.

The paragraph above applies to the reference comparison as well.

> > The other thing has to do with the ISO files. kvm_runtest has a very
> > important feature that we innocently forgot to implement in
> > kvm_runtest_2 -- md5sum verification of the ISO files. This means that
> > the framework currently makes no use of the "md5sum" and "md5sum_1m"
> > parameters in the config file. This means you might be using different
> > ISOs than the ones we made our stepfiles with. In that case I wouldn't
> > expect any stepfile to succeed. However, if you used these same ISOs
> > with kvm_runtest then they should be fine. In any case, I'll add the
> > feature ASAP to the git repository.
> 
> Right - I suppose it might be better if the names of the windows iso
> disks matched how MS names them in MSDN. For example, kvm_runtest refers
> to Windows2008-x64.iso, which doesn't match any name from MSDN; what we
> have is:
> en_windows_server_2008_datacenter_enterprise_standard_x64_dvd_X14-26714.iso

This is a very good idea. I wonder how we can find out the MSDN names of the ISOs we have.
BTW, did the ISO you mentioned work with kvm_runtest?

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: kvm-autotest -- introducing kvm_runtest_2
  2009-03-12  7:25 ` Michael Goldish
@ 2009-03-12 12:54   ` Ryan Harper
  0 siblings, 0 replies; 19+ messages in thread
From: Ryan Harper @ 2009-03-12 12:54 UTC (permalink / raw)
  To: Michael Goldish; +Cc: Ryan Harper, KVM List, Uri Lublin

* Michael Goldish <mgoldish@redhat.com> [2009-03-12 02:26]:
> 
> > > Regarding the stepfiles you created for Linux -- I can't help much
> > > with those since I don't have the data. I do believe that if I had the
> > > data and the stepfiles I could quickly identify the problem, so if you
> > > think those can be sent to us, I'd like to have them.
> > 
> > I created a stepfile for RHEL5 and what I'm seeing is that one of the
> > screens I captured in stepmaker ended up having a focus ring around
> > something and on replay the focus isn't there.  This situation isn't
> > something that a new algo will fix, as you pointed out.  I'm wondering if
> > this is something you've seen.  I don't quite understand how it would
> > happen since stepmaker and the replay send the same keystrokes.  I also
> > don't see how in general this can be avoided.
> 
> The problem sounds familiar. Does the ring appear around one of the
> GNOME menubars, i.e. around "Applications" or "System"? GNOME seems to
> be somewhat nondeterministic with those rings. If you run the stepfile
> several times, you'll notice that in most cases you'll see a focus
> ring (or no focus ring, I don't quite remember) and the rest of the
> time you'll get the other case.

Ding Ding Ding! =)

> 
> This can be avoided either with experience, or a good wiki entry on
> picking the right barriers (which we plan to create). But you don't
> have to avoid making mistakes with stepmaker -- most types of mistakes
> are fixed very quickly and easily with stepeditor.

yep, used stepeditor to fix; definitely worth documenting where one
should be invoking stepeditor -- from the steps dir; if you don't run it
from there, it won't find the steps_data dir =(

> 
> The fix depends on exactly what you were trying to do:
> 
> - If you sent "alt-f1" to open the menu, and in the following step
> picked the open menu (including the "Applications" caption itself) to
> make sure it was open -- use stepeditor to modify the barrier so that
> it doesn't include the "Applications" caption or anything that might
> have a ring around it.

That worked for me.


> 
> The following text was copied from your previous e-mail:
> 
> > I do have the debug dir data from these runs.  Looking at the cropped
> > ppm and screendump ppm is how I determined that there must be something
> > wrong with how the image is rendered, since the cropped ppm matches the
> > screendump output, but with whatever subtle difference generates a
> > different md5sum.
> 
> I'm not sure my previous e-mail was clear enough, so just in case it
> wasn't, let me rephrase: The cropped ppm is generated from the
> screendump ppm every time the stepfile running module receives a
> screendump from the guest in order to see if it matches a barrier.
> This is done for debugging purposes. If you somehow check, you'll see
> there is no subtle difference between those two files. It wouldn't
> make sense to find a subtle difference between them, and if you did
> find one, it certainly wouldn't indicate a stepfile problem, but
> rather a very strange bug in the framework.  You should be looking for
> subtle differences between the screendump ppm and the reference
> screendump ppm, as well as between the cropped screendump ppm and the
> reference cropped screendump ppm. By "reference" I mean coming from
> the stepmaker data. If you don't have the stepmaker data, you have no
> way of knowing what caused the difference in the md5sums.

Right -- the real win was comparing the full screendump to the reference
screendump - basically, without the reference dumps, the debug output
isn't useful.

I'll have to go back and re-read your email on where to put the
reference ppm files so one gets the refrence comparision.

> 
> 
> There are two other things I forgot to mention in my previous e-mail:
> 
> The Windows failures you're seeing might be caused by KVM bugs other
> than the one I mentioned. KVM-84 has a very strong tendency to crash
> during Windows installations. You can use the logs to find out if that
> happened in your case. If you have the latest git HEAD the exception
> info will look something like "Barrier timed out at step ... (VM is
> dead)", and if you have a slightly older version, you'll probably see
> "(guest is stuck)" at the end of the info string. You should also see
> the system consistently complaining that it can't fetch any
> screendumps from the guest (this will appear in stdout).

I've seen those on kvm-84.

> The other thing has to do with the ISO files. kvm_runtest has a very
> important feature that we innocently forgot to implement in
> kvm_runtest_2 -- md5sum verification of the ISO files. This means that
> the framework currently makes no use of the "md5sum" and "md5sum_1m"
> parameters in the config file. This means you might be using different
> ISOs than the ones we made our stepfiles with. In that case I wouldn't
> expect any stepfile to succeed. However, if you used these same ISOs
> with kvm_runtest then they should be fine. In any case, I'll add the
> feature ASAP to the git repository.
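A sketch of what the missing verification might look like. The config
parameter names "md5sum" and "md5sum_1m" come from the thread itself;
reading exactly the first megabyte for "md5sum_1m" (as a cheap sanity
check for multi-GB images) is an assumption here, as is the helper name.

```python
import hashlib

def iso_md5(path, first_mb_only=False):
    """Hash an ISO either fully ("md5sum") or just its first megabyte
    ("md5sum_1m"), reading in 64 KB chunks to keep memory use flat."""
    md5 = hashlib.md5()
    remaining = 1024 * 1024 if first_mb_only else float("inf")
    with open(path, "rb") as f:
        while remaining > 0:
            chunk = f.read(int(min(65536, remaining)))
            if not chunk:
                break
            md5.update(chunk)
            remaining -= len(chunk)
    return md5.hexdigest()
```

The framework could compare `iso_md5(iso, first_mb_only=True)` against the
config's md5sum_1m first and only fall back to the full-image hash when a
thorough check is requested.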

Right - I suppose it might be better if the names of the windows iso
disks matched how MS names them in MSDN. For example, kvm_runtest refers
to Windows2008-x64.iso, which doesn't match any name from MSDN; what we
have is:
en_windows_server_2008_datacenter_enterprise_standard_x64_dvd_X14-26714.iso

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ryanh@us.ibm.com

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: kvm-autotest -- introducing kvm_runtest_2
       [not found] <316573781.1616221236842323850.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
@ 2009-03-12  7:25 ` Michael Goldish
  2009-03-12 12:54   ` Ryan Harper
  0 siblings, 1 reply; 19+ messages in thread
From: Michael Goldish @ 2009-03-12  7:25 UTC (permalink / raw)
  To: Ryan Harper; +Cc: KVM List, Uri Lublin


----- "Ryan Harper" <ryanh@us.ibm.com> wrote:

> * Michael Goldish <mgoldish@redhat.com> [2009-03-11 03:02]:
> > > We've also been creating stepfiles for Linux guests as well that aren't
> > > here, various SLES and RHEL installs -- and I've repeatedly seen the
> > > same issue where the cropped region *should* match but doesn't, and it
> > > isn't a result of any of the very correct reasons you've listed below
> > > as to why the stepfiles might fail.
> 
> [snip]
> 
> > 
> > Regarding the stepfiles you created for Linux -- I can't help much
> > with those since I don't have the data. I do believe that if I had the
> > data and the stepfiles I could quickly identify the problem, so if you
> > think those can be sent to us, I'd like to have them.
> 
> I created a stepfile for RHEL5 and what I'm seeing is that one of the
> screens I captured in stepmaker ended up having a focus ring around
> something and on replay the focus isn't there.  This situation isn't
> something that a new algo will fix, as you pointed out.  I'm wondering if
> this is something you've seen.  I don't quite understand how it would
> happen since stepmaker and the replay send the same keystrokes.  I also
> don't see how in general this can be avoided.

The problem sounds familiar. Does the ring appear around one of the GNOME menubars, i.e. around "Applications" or "System"? GNOME seems to be somewhat nondeterministic with those rings. If you run the stepfile several times, you'll notice that in most cases you'll see a focus ring (or no focus ring, I don't quite remember) and the rest of the time you'll get the other case.

This can be avoided either with experience, or a good wiki entry on picking the right barriers (which we plan to create). But you don't have to avoid making mistakes with stepmaker -- most types of mistakes are fixed very quickly and easily with stepeditor.

The fix depends on exactly what you were trying to do:

- If you sent "alt-f1" to open the menu, and in the following step picked the open menu (including the "Applications" caption itself) to make sure it was open -- use stepeditor to modify the barrier so that it doesn't include the "Applications" caption or anything that might have a ring around it.
- If you encountered the ring before sending "alt-f1" (I don't quite remember exactly when the ring tends to appear), and you picked the menubar in a barrier in order to make sure you've reached the desktop after the boot process, you may want to pick the little icons right next to the menubar instead (those typically include Firefox and/or Evolution icons) without including the ring in the barrier (it's also good practice not to include the desktop background in a barrier if it tends to change).
- If you encountered the ring somewhere else altogether, for example around a button during installation, then I don't remember seeing this case -- installations are usually quite deterministic -- but you can try picking the text inside the button, without including the ring. If you need the button's surroundings as well, you can pick some tiny part of the button that is _outside_ the ring (I believe you have an outer ring there that is at least one or two pixels wide), as well as the surroundings. Alternatively, you can use two consecutive barriers: the first picking the surroundings and the second picking the text inside the button.
All these can be done easily with stepeditor without having to run stepmaker all over again.

If you place the stepmaker data in the right place (as I mentioned in a previous e-mail) it'll save you the time it takes to find what went wrong with the step.


The following text was copied from your previous e-mail:

> I do have the debug dir data from these runs.  Looking at the cropped
> ppm and screendump ppm is how I determined that there must be
> something
> wrong with how the image is rendered since the cropped ppm matches
> the
> screendump output, but with whatever subtle difference that generates
> a
> different md5sum.

I'm not sure my previous e-mail was clear enough, so just in case it wasn't, let me rephrase:
The cropped ppm is generated from the screendump ppm every time the stepfile running module receives a screendump from the guest, in order to see if it matches a barrier. This is done for debugging purposes. If you check, you'll see there is no difference at all between those two files. It wouldn't make sense to find a subtle difference between them, and if you did find one, it certainly wouldn't indicate a stepfile problem, but rather a very strange bug in the framework.
You should be looking for subtle differences between the screendump ppm and the reference screendump ppm, as well as between the cropped screendump ppm and the reference cropped screendump ppm. By "reference" I mean coming from the stepmaker data. If you don't have the stepmaker data, you have no way of knowing what caused the difference in the md5sums.
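The crop-then-hash step described above can be sketched as follows. This is a minimal illustration only: `md5_of_region` is a hypothetical name, and the real framework operates on PPM files rather than raw byte buffers.

```python
import hashlib

def md5_of_region(pixels, width, left, top, w, h):
    """Hash a w x h crop of a row-major RGB buffer.

    pixels: bytes of length width * height * 3 (e.g. a raw PPM payload).
    The crop is cut row by row, the way a cropped screendump is
    derived from the full screendump at runtime -- so the cropped
    image can never disagree with the screendump it came from.
    """
    m = hashlib.md5()
    for row in range(top, top + h):
        start = (row * width + left) * 3
        m.update(pixels[start:start + w * 3])
    return m.hexdigest()

# Toy 4x2 "screendump": the meaningful comparison is this hash
# against the md5sum recorded at stepmaker time, not against a crop
# of the same runtime screendump.
actual = bytes(range(4 * 2 * 3))
actual_md5 = md5_of_region(actual, 4, 1, 0, 2, 2)
```

The point of the sketch is the last comment: a runtime crop always matches the runtime screendump by construction, so any md5sum mismatch has to be hunted down against the reference (stepmaker) images.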


There are two other things I forgot to mention in my previous e-mail:

The Windows failures you're seeing might be caused by KVM bugs other than the one I mentioned. KVM-84 has a very strong tendency to crash during Windows installations. You can use the logs to find out if that happened in your case. If you have the latest git HEAD the exception info will look something like "Barrier timed out at step ... (VM is dead)", and if you have a slightly older version, you'll probably see "(guest is stuck)" at the end of the info string. You should also see the system consistently complaining that it can't fetch any screendumps from the guest (this will appear in stdout).

The other thing has to do with the ISO files. kvm_runtest has a very important feature that we innocently forgot to implement in kvm_runtest_2 -- md5sum verification of the ISO files. This means that the framework currently makes no use of the "md5sum" and "md5sum_1m" parameters in the config file. This means you might be using different ISOs than the ones we made our stepfiles with. In that case I wouldn't expect any stepfile to succeed. However, if you used these same ISOs with kvm_runtest then they should be fine. In any case, I'll add the feature ASAP to the git repository.
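For what it's worth, ISO verification of the kind described -- a full-file "md5sum" plus a quick "md5sum_1m"-style check -- could be sketched like this. `md5_of_stream` is my name, and reading "md5sum_1m" as "md5 of the first megabyte" is my assumption, not the framework's actual code.

```python
import hashlib
import io

def md5_of_stream(f, max_bytes=None, chunk=1 << 20):
    """md5 of a stream, optionally of only its first max_bytes bytes.

    With max_bytes=None this corresponds to a full "md5sum" check;
    with max_bytes=1 << 20 it would implement an "md5sum_1m"-style
    quick check (hashing only the first megabyte).
    """
    m = hashlib.md5()
    remaining = max_bytes
    while True:
        size = chunk if remaining is None else min(chunk, remaining)
        if size == 0:
            break
        data = f.read(size)
        if not data:
            break
        m.update(data)
        if remaining is not None:
            remaining -= len(data)
    return m.hexdigest()

# Example with a tiny in-memory "ISO": hash only the first 4 bytes.
prefix_md5 = md5_of_stream(io.BytesIO(b"ABCDEFGH"), max_bytes=4)
```

The prefix check is useful because a full md5 of a multi-GB ISO is slow, while a wrong ISO is usually already detectable from its first megabyte.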

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: kvm-autotest -- introducing kvm_runtest_2
  2009-03-11  8:01 ` Michael Goldish
  2009-03-11 13:12   ` Ryan Harper
@ 2009-03-11 20:58   ` Ryan Harper
  1 sibling, 0 replies; 19+ messages in thread
From: Ryan Harper @ 2009-03-11 20:58 UTC (permalink / raw)
  To: Michael Goldish; +Cc: Ryan Harper, KVM List, Uri Lublin

* Michael Goldish <mgoldish@redhat.com> [2009-03-11 03:02]:
> > We've also been creating stepfiles for Linux guests as well that
> > aren't
> > here, various SLES and RHEL installs -- and I've repeatedly seen the
> > same issue where the cropped region *should* match but isn't, and it
> > isn't a result of any of the very correct reasons you've listed below
> > as
> > to why the stepfiles might fail.

[snip]

> 
> Regarding the stepfiles you created for Linux -- I can't help much
> with those since I don't have the data. I do believe that if I had the
> data and the stepfiles I could quickly identify the problem, so if you
> think those can be sent to us, I'd like to have them.

I created a stepfile for RHEL5 and what I'm seeing is that one of the
screens I captured in stepmaker ended up having a focus ring around
something and on replay the focus isn't there.  This situation isn't
something that a new algo will fix as you pointed out.  I'm wondering if
this is something you've seen.  I don't quite understand how it would
happen since stepmaker and the replay send the same keystrokes.  I also
don't see how in general this can be avoided.

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ryanh@us.ibm.com


* Re: kvm-autotest -- introducing kvm_runtest_2
  2009-03-11  8:01 ` Michael Goldish
@ 2009-03-11 13:12   ` Ryan Harper
  2009-03-11 20:58   ` Ryan Harper
  1 sibling, 0 replies; 19+ messages in thread
From: Ryan Harper @ 2009-03-11 13:12 UTC (permalink / raw)
  To: Michael Goldish; +Cc: Ryan Harper, KVM List, Uri Lublin

* Michael Goldish <mgoldish@redhat.com> [2009-03-11 03:08]:
> > > I'd like to comment on this. I don't doubt that some fuzzy matching
> > > algorithm (such as calculating match percentages) would generally
> > be
> > > more robust. I do however doubt it would significantly lower the
> > false
> > > positive rate in our case (which is fairly low already). False
> > > positive failures in step files are typically caused by:
> > 
> > I've seen multiple failures during the windows guest installs which I
> > assume are well tested stepfiles.  For example, 2k8 installs and the
> > fails to pass the barrier when trying to set the user password for
> > the
> > first time.  The cropped region *looks* exactly like the intended
> > location on the screendump, but md5sums to something different. 
> > 
> > A recent run of 2k3 and 2k8 installs resulted in the following
> > failures:
> > 
> > Win2k3-32bit -- screenshot of "Windows Setup" and Setup is starting
> > windows, cropped region is of "Setup is starting Windows" full screen
> > dump matches this text from a human pov
> > 
> > Win2k3-64-bit -- same as above
> > 
> > Win2k8-32-bit -- screenshot of "The user's password must be changed
> > before logging in the first time with OK and cancel buttons.  -
> > cropped
> > region is of the text "The user's password must be changed before
> > logging in the first time" - matching the full screen screendump fine
> > from a human POV
> > 
> > Win2k8-64-bit -- same as above
> > 
> > We've also been creating stepfiles for Linux guests as well that
> > aren't
> > here, various SLES and RHEL installs -- and I've repeatedly seen the
> > same issue where the cropped region *should* match but isn't, and it
> > isn't a result of any of the very correct reasons you've listed below
> > as
> > to why the stepfiles might fail.
> 
> The Windows failures you're describing sound like they could be caused
> by a known KVM bug, which results in Windows installations sometimes
> booting from CDROM, instead of the HDD, immediately following the
> installation.

No, but I have seen a bug where, after an install, the guest OS reboots
and KVM fails to boot from the hard drive; exiting KVM and then booting
from the HD works fine.  We're looking into that one right now.

> 
> I assume you don't have the stepmaker data of those Windows stepfiles.

These are the stepfiles that came with kvm-autotest, so no, *I* don't
have the stepmaker data, but whoever committed the Windows stepfiles to
kvm-autotest *should* have the data ... 

> In that case, the images left by the stepfile test, scrdump.ppm and
> cropped_scrdump.ppm, are in fact the full screendump and a cropped
> region in it. They should always match perfectly, because the cropped
> one is generated from the full one at runtime. None of them reflects
> the expected guest behavior; they reflect what the stepfile test
> actually found. The only thing you have that reflects the expected
> guest behavior is the md5sum found in the stepfile.
> 
> If you happened to keep the "debug" dirs which contain the screendumps
> and test logs, and could somehow send them to me or Uri, I'd be able
> to tell you what went wrong with the test and whether it is indeed
> that KVM bug or a stepfile error. We probably could also use the
> stepfiles you were working with, because we might have changed ours
> recently, though that is unlikely because we don't change old
> stepfiles very often nowadays.

I do have the debug dir data from these runs.  Looking at the cropped
ppm and screendump ppm is how I determined that there must be something
wrong with how the image is rendered since the cropped ppm matches the
screendump output, but with whatever subtle difference that generates a
different md5sum.

I'll see about figuring out how to get the debug output to you.


> 
> Regarding the stepfiles you created for Linux -- I can't help much
> with those since I don't have the data. I do believe that if I had the
> data and the stepfiles I could quickly identify the problem, so if you
> think those can be sent to us, I'd like to have them.

OK

> 
> I'm not sure exactly what version of kvm_runtest_2 you're using (are
> you are using kvm_runtest_2?), but I think it should support

Yep, kvm_runtest_2; but I've seen the same issue on kvm_runtest.

> automatic comparison of the actual screendump with the expected
> screendump. If you have a slightly older version than the current git
> HEAD, then you should probably place your <stepfile>_data directory
> right next to <stepfile>, and whenever a stepfile test fails you'll
> get -- in addition to scrdump.ppm and cropped_scrdump.ppm --
> scrdump_reference.ppm and cropped_scrdump_reference.ppm, as well as a
> nice green-red comparison image which colors all matching pixels green
> and all mismatching ones red. That last image is very helpful when
> stepfiles require fixing. If you have the latest git HEAD, you should
> place all your <stepfile>_data dirs in a dir named "steps_data" which
> should reside next to "steps" (which should contain the stepfiles
> themselves).

Very useful information; it should be added to the wiki. We should write
a section on using stepmaker/stepeditor and best practices for picking
barriers/cropped regions.

> > Well, either there is a *bug* right now that is triggering a higher
> > rate
> > of false positives, or using a better algorithm is a requirement;
> > distributing stepfiles and md5sums that don't work isn't productive,
> > so
> > in the case that it is a bug I still suggest we pursue a more
> > resilient
> > algorithm.
> 
> Do the Windows tests you mentioned fail consistently, or have you
> witnessed any of them succeed in some of the runs?

Consistently fail, no passes so far.


-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ryanh@us.ibm.com


* Re: kvm-autotest -- introducing kvm_runtest_2
       [not found] <1170652852.1514931236758361795.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
@ 2009-03-11  8:01 ` Michael Goldish
  2009-03-11 13:12   ` Ryan Harper
  2009-03-11 20:58   ` Ryan Harper
  0 siblings, 2 replies; 19+ messages in thread
From: Michael Goldish @ 2009-03-11  8:01 UTC (permalink / raw)
  To: Ryan Harper; +Cc: KVM List, Uri Lublin


----- "Ryan Harper" <ryanh@us.ibm.com> wrote:

> * Michael Goldish <mgoldish@redhat.com> [2009-03-10 20:55]:
> > 
> > ----- "Ryan Harper" <ryanh@us.ibm.com> wrote:
> > 
> > > > >- guest install wizard using md5sum region matching ... ouch. 
> This
> > > is
> > > > >quite fickle.  I've seen different kvms generate different
> md5sum
> > > for
> > > > >the same region a couple of times.  I know distributing
> screenshots
> > > of
> > > > >certain OSes is a grey area, but it would be nice to plumb
> through
> > > > >screenshot comparison and make that configurable.  FWIW, I'll
> > > probably
> > > > >look at pulling the screenshot comparison bits from kvmtest
> and
> > > getting
> > > > >that integrated in kvm_runtest_2.
> > > > Creating a step file is not as easy as it seems, exactly for
> that
> > > reason. 
> > > > One has to pick a part of the screenshot that only available
> when
> > > input is 
> > > > expected and that would be consistent. We were thinking of
> replacing
> > > the 
> > > > md5sum with a tiny compressed image of the part of the image
> that
> > > was 
> > > > picked.
> > > 
> > > It isn't just that step file creation isn't easy is that even with
> a
> > > good stepfile with smart region boxes, md5sum can *still* fail
> > > because
> > > KVM renders one pixel in the region differently than the system
> where
> > > the
> > > original was created; this creates false positives failures.
> > 
> > I'd like to comment on this. I don't doubt that some fuzzy matching
> > algorithm (such as calculating match percentages) would generally
> be
> > more robust. I do however doubt it would significantly lower the
> false
> > positive rate in our case (which is fairly low already). False
> > positive failures in step files are typically caused by:
> 
> I've seen multiple failures during the windows guest installs which I
> assume are well tested stepfiles.  For example, 2k8 installs and the
> fails to pass the barrier when trying to set the user password for
> the
> first time.  The cropped region *looks* exactly like the intended
> location on the screendump, but md5sums to something different. 
> 
> A recent run of 2k3 and 2k8 installs resulted in the following
> failures:
> 
> Win2k3-32bit -- screenshot of "Windows Setup" and Setup is starting
> windows, cropped region is of "Setup is starting Windows" full screen
> dump matches this text from a human pov
> 
> Win2k3-64-bit -- same as above
> 
> Win2k8-32-bit -- screenshot of "The user's password must be changed
> before logging in the first time with OK and cancel buttons.  -
> cropped
> region is of the text "The user's password must be changed before
> logging in the first time" - matching the full screen screendump fine
> from a human POV
> 
> Win2k8-64-bit -- same as above
> 
> We've also been creating stepfiles for Linux guests as well that
> aren't
> here, various SLES and RHEL installs -- and I've repeatedly seen the
> same issue where the cropped region *should* match but isn't, and it
> isn't a result of any of the very correct reasons you've listed below
> as
> to why the stepfiles might fail.

The Windows failures you're describing sound like they could be caused by a known KVM bug, which results in Windows installations sometimes booting from CDROM, instead of the HDD, immediately following the installation.

I assume you don't have the stepmaker data of those Windows stepfiles. In that case, the images left by the stepfile test, scrdump.ppm and cropped_scrdump.ppm, are in fact the full screendump and a cropped region in it. They should always match perfectly, because the cropped one is generated from the full one at runtime. None of them reflects the expected guest behavior; they reflect what the stepfile test actually found. The only thing you have that reflects the expected guest behavior is the md5sum found in the stepfile.

If you happened to keep the "debug" dirs which contain the screendumps and test logs, and could somehow send them to me or Uri, I'd be able to tell you what went wrong with the test and whether it is indeed that KVM bug or a stepfile error. We probably could also use the stepfiles you were working with, because we might have changed ours recently, though that is unlikely because we don't change old stepfiles very often nowadays.

Regarding the stepfiles you created for Linux -- I can't help much with those since I don't have the data. I do believe that if I had the data and the stepfiles I could quickly identify the problem, so if you think those can be sent to us, I'd like to have them.

I'm not sure exactly what version of kvm_runtest_2 you're using (are you using kvm_runtest_2?), but I think it should support automatic comparison of the actual screendump with the expected screendump. If you have a slightly older version than the current git HEAD, then you should probably place your <stepfile>_data directory right next to <stepfile>, and whenever a stepfile test fails you'll get -- in addition to scrdump.ppm and cropped_scrdump.ppm -- scrdump_reference.ppm and cropped_scrdump_reference.ppm, as well as a nice green-red comparison image which colors all matching pixels green and all mismatching ones red. That last image is very helpful when stepfiles require fixing. If you have the latest git HEAD, you should place all your <stepfile>_data dirs in a dir named "steps_data" which should reside next to "steps" (which should contain the stepfiles themselves).
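The green-red comparison image is simple to reproduce. Below is a minimal sketch operating on Python pixel lists rather than PPM files; it is not the framework's actual code.

```python
def comparison_image(a, b):
    """Build a green-red comparison image from two equal-size RGB
    pixel sequences: matching pixels become green, mismatching red.

    a, b: sequences of (r, g, b) tuples in row-major order.
    """
    if len(a) != len(b):
        raise ValueError("screendumps differ in size")
    GREEN, RED = (0, 255, 0), (255, 0, 0)
    return [GREEN if pa == pb else RED for pa, pb in zip(a, b)]
```

Such an image makes a one-pixel mismatch immediately visible as a lone red dot, which is exactly why it helps when deciding whether a barrier or the renderer is at fault.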

> > - an unexpected popup window covering the test region
> > - a dialog which has a different position every time (and varies by
> >   many pixels)
> > - a barrier that passes before the controls get input focus, which
> >   causes the following keystrokes to have no effect
> > - in text mode, sometimes a line of text is printed unexpectedly
> and
> >   causes the entire screen to scroll up
> > - addition/modification of a KVM feature which changes the course
> of
> >   the installation
> 
> > 
> > I may have left something out. In any case, all these problems are
> > solved by picking better barrier regions, but none can be solved by
> > using a more forgiving comparison method. I have encountered a
> single
> > installation that rendered a single pixel in an indeterministic
> > fashion, and though this problem was easily solved by correcting
> the
> > barrier (using a stepfile editor), I do agree we might get a small
> > decrease in the false positive rate if we use a more forgiving
> > algorithm.
> 
> Well, either there is a *bug* right now that is triggering a higher
> rate
> of false positives, or using a better algorithm is a requirement;
> distributing stepfiles and md5sums that don't work isn't productive,
> so
> in the case that it is a bug I still suggest we pursue a more
> resilient
> algorithm.

Do the Windows tests you mentioned fail consistently, or have you witnessed any of them succeed in some of the runs?

> > However, there is also a risk: a more forgiving algorithm may
> forgive
> > KVM for rendering errors. It may also make it risky to pick
> barriers
> > that are meant to catch small text; I believe a button with a "Yes"
> > caption and a button with a "No" caption would have a very high
> match
> > percentage, especially if you have to pick the whole button, or
> maybe
> > even some of its surroundings (and you often do).
> 
> Noted, though I think as you indicated above, smart selection of the
> cropped region goes a long way toward avoiding these sorts of
> collisions.
> 
> > 
> > I still believe it's a good idea to look into other methods (we're
> > already doing that) and start experimenting with them.
> 
> Cool.  Obviously without the original ppm files from the stepmaker
> run,
> we can't determine if a different algo would help so we're generating
> new stepfiles and ppm data and trying to reproduce the md5sum
> mismatch
> issues.  If there is anything I can do to help with the algo work let
> me
> know.

Thanks, I certainly will. I also appreciate your help so far.

Michael


* Re: kvm-autotest -- introducing kvm_runtest_2
  2009-03-11  1:54 ` Michael Goldish
@ 2009-03-11  2:58   ` Ryan Harper
  0 siblings, 0 replies; 19+ messages in thread
From: Ryan Harper @ 2009-03-11  2:58 UTC (permalink / raw)
  To: Michael Goldish; +Cc: Ryan Harper, KVM List, Uri Lublin

* Michael Goldish <mgoldish@redhat.com> [2009-03-10 20:55]:
> 
> ----- "Ryan Harper" <ryanh@us.ibm.com> wrote:
> 
> > > >- guest install wizard using md5sum region matching ... ouch.  This
> > is
> > > >quite fickle.  I've seen different kvms generate different md5sum
> > for
> > > >the same region a couple of times.  I know distributing screenshots
> > of
> > > >certain OSes is a grey area, but it would be nice to plumb through
> > > >screenshot comparison and make that configurable.  FWIW, I'll
> > probably
> > > >look at pulling the screenshot comparison bits from kvmtest and
> > getting
> > > >that integrated in kvm_runtest_2.
> > > Creating a step file is not as easy as it seems, exactly for that
> > reason. 
> > > One has to pick a part of the screenshot that only available when
> > input is 
> > > expected and that would be consistent. We were thinking of replacing
> > the 
> > > md5sum with a tiny compressed image of the part of the image that
> > was 
> > > picked.
> > 
> > It isn't just that step file creation isn't easy; it's that even
> > with a good stepfile with smart region boxes, md5sum can *still*
> > fail because KVM renders one pixel in the region differently than
> > the system where the original was created; this creates
> > false-positive failures.
> 
> I'd like to comment on this. I don't doubt that some fuzzy matching
> algorithm (such as calculating match percentages) would generally be
> more robust. I do however doubt it would significantly lower the false
> positive rate in our case (which is fairly low already). False
> positive failures in step files are typically caused by:

I've seen multiple failures during the windows guest installs, which I
assume are well-tested stepfiles.  For example, 2k8 installs and then
fails to pass the barrier when trying to set the user password for the
first time.  The cropped region *looks* exactly like the intended
location on the screendump, but md5sums to something different. 

A recent run of 2k3 and 2k8 installs resulted in the following failures:

Win2k3-32bit -- screenshot of "Windows Setup" and Setup is starting
windows, cropped region is of "Setup is starting Windows" full screen
dump matches this text from a human pov

Win2k3-64-bit -- same as above

Win2k8-32-bit -- screenshot of "The user's password must be changed
before logging in the first time with OK and cancel buttons.  - cropped
region is of the text "The user's password must be changed before
logging in the first time" - matching the full screen screendump fine
from a human POV

Win2k8-64-bit -- same as above

We've also been creating stepfiles for Linux guests that aren't here,
various SLES and RHEL installs -- and I've repeatedly seen the same
issue where the cropped region *should* match but doesn't, and it isn't
a result of any of the very correct reasons you've listed below as to
why the stepfiles might fail.

> 
> - an unexpected popup window covering the test region
> - a dialog which has a different position every time (and varies by
>   many pixels)
> - a barrier that passes before the controls get input focus, which
>   causes the following keystrokes to have no effect
> - in text mode, sometimes a line of text is printed unexpectedly and
>   causes the entire screen to scroll up
> - addition/modification of a KVM feature which changes the course of
>   the installation

> 
> I may have left something out. In any case, all these problems are
> solved by picking better barrier regions, but none can be solved by
> using a more forgiving comparison method. I have encountered a single
> installation that rendered a single pixel in an indeterministic
> fashion, and though this problem was easily solved by correcting the
> barrier (using a stepfile editor), I do agree we might get a small
> decrease in the false positive rate if we use a more forgiving
> algorithm.

Well, either there is a *bug* right now that is triggering a higher rate
of false positives, or using a better algorithm is a requirement;
distributing stepfiles and md5sums that don't work isn't productive, so
in the case that it is a bug I still suggest we pursue a more resilient
algorithm.

> 
> However, there is also a risk: a more forgiving algorithm may forgive
> KVM for rendering errors. It may also make it risky to pick barriers
> that are meant to catch small text; I believe a button with a "Yes"
> caption and a button with a "No" caption would have a very high match
> percentage, especially if you have to pick the whole button, or maybe
> even some of its surroundings (and you often do).

Noted, though I think as you indicated above, smart selection of the
cropped region goes a long way toward avoiding these sorts of
collisions.

> 
> I still believe it's a good idea to look into other methods (we're
> already doing that) and start experimenting with them.

Cool.  Obviously without the original ppm files from the stepmaker run,
we can't determine if a different algo would help so we're generating
new stepfiles and ppm data and trying to reproduce the md5sum mismatch
issues.  If there is anything I can do to help with the algo work let me
know.

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ryanh@us.ibm.com


* Re: kvm-autotest -- introducing kvm_runtest_2
       [not found] <1419870903.1471901236736357942.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
@ 2009-03-11  1:54 ` Michael Goldish
  2009-03-11  2:58   ` Ryan Harper
  0 siblings, 1 reply; 19+ messages in thread
From: Michael Goldish @ 2009-03-11  1:54 UTC (permalink / raw)
  To: Ryan Harper; +Cc: KVM List, Uri Lublin


----- "Ryan Harper" <ryanh@us.ibm.com> wrote:

> > >- guest install wizard using md5sum region matching ... ouch.  This
> is
> > >quite fickle.  I've seen different kvms generate different md5sum
> for
> > >the same region a couple of times.  I know distributing screenshots
> of
> > >certain OSes is a grey area, but it would be nice to plumb through
> > >screenshot comparison and make that configurable.  FWIW, I'll
> probably
> > >look at pulling the screenshot comparison bits from kvmtest and
> getting
> > >that integrated in kvm_runtest_2.
> > Creating a step file is not as easy as it seems, exactly for that
> reason. 
> > One has to pick a part of the screenshot that only available when
> input is 
> > expected and that would be consistent. We were thinking of replacing
> the 
> > md5sum with a tiny compressed image of the part of the image that
> was 
> > picked.
> 
> It isn't just that step file creation isn't easy; it's that even with
> a good stepfile with smart region boxes, md5sum can *still* fail
> because KVM renders one pixel in the region differently than the
> system where the original was created; this creates false-positive
> failures.

I'd like to comment on this. I don't doubt that some fuzzy matching algorithm (such as calculating match percentages) would generally be more robust. I do however doubt it would significantly lower the false positive rate in our case (which is fairly low already). False positive failures in step files are typically caused by:

- an unexpected popup window covering the test region
- a dialog which has a different position every time (and varies by many pixels)
- a barrier that passes before the controls get input focus, which causes the following keystrokes to have no effect
- in text mode, sometimes a line of text is printed unexpectedly and causes the entire screen to scroll up
- addition/modification of a KVM feature which changes the course of the installation

I may have left something out. In any case, all these problems are solved by picking better barrier regions, but none can be solved by using a more forgiving comparison method. I have encountered a single installation that rendered a single pixel in an indeterministic fashion, and though this problem was easily solved by correcting the barrier (using a stepfile editor), I do agree we might get a small decrease in the false positive rate if we use a more forgiving algorithm.

However, there is also a risk: a more forgiving algorithm may forgive KVM for rendering errors. It may also make it risky to pick barriers that are meant to catch small text; I believe a button with a "Yes" caption and a button with a "No" caption would have a very high match percentage, especially if you have to pick the whole button, or maybe even some of its surroundings (and you often do).

I still believe it's a good idea to look into other methods (we're already doing that) and start experimenting with them.
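A match-percentage comparison of the kind discussed could look like the sketch below. The names and the 0.98 threshold are illustrative only. Note how it tolerates a stray pixel yet, as noted above, risks accepting near-identical captions.

```python
def match_fraction(a, b):
    """Fraction of identical pixels between two equal-size pixel
    sequences -- the 'match percentage' style of fuzzy comparison."""
    if len(a) != len(b) or not a:
        raise ValueError("need two non-empty, equal-size regions")
    same = sum(1 for pa, pb in zip(a, b) if pa == pb)
    return same / len(a)

def regions_match(a, b, threshold=0.98):
    """Accept a barrier if at least `threshold` of the pixels agree.

    A single stray pixel no longer fails the step -- but a "Yes" vs
    "No" caption, differing in only a few pixels of a large region,
    might slip through, which is exactly the risk described above.
    """
    return match_fraction(a, b) >= threshold
```

Tuning the threshold per barrier (tight for small text, loose for large regions) would be one way to balance the two failure modes.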

Thanks,
Michael


* Re: kvm-autotest -- introducing kvm_runtest_2
       [not found] <160776987.914431236209784966.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
@ 2009-03-04 23:52 ` Uri Lublin
  0 siblings, 0 replies; 19+ messages in thread
From: Uri Lublin @ 2009-03-04 23:52 UTC (permalink / raw)
  To: sudhir kumar; +Cc: KVM List, Ryan Harper

From: "sudhir kumar" <smalikphy@gmail.com>
>On Wed, Mar 4, 2009 at 11:45 PM, Ryan Harper <ryanh@us.ibm.com> wrote:
>>> * Uri Lublin <uril@redhat.com> [2009-03-04 02:59]:
>>> >- guest install wizard using md5sum region matching ... ouch.  This is
>>> >quite fickle.  I've seen different kvms generate different md5sum for
>>> >the same region a couple of times.  I know distributing screenshots of
>>> >certain OSes is a grey area, but it would be nice to plumb through
>>> >screenshot comparison and make that configurable.  FWIW, I'll probably
>>> >look at pulling the screenshot comparison bits from kvmtest and getting
>>> >that integrated in kvm_runtest_2.
>>> Creating a step file is not as easy as it seems, exactly for that reason.
>I agree here 100% as per my experience.
>>> One has to pick a part of the screenshot that is only available
>>> when input is expected and that would be consistent. We were
>>> thinking of replacing the 
>>> md5sum with a tiny compressed image of the part of the image that was
>>> picked.
>>
>> It isn't just that step file creation isn't easy is that even with a
>> good stepfile with smart region boxes, md5sum can *still* fail because
>> KVM renders one pixel in the region differently than the system where the
>> original was created; this creates false positives failures.
>I also have faced this issue. Even on the same system the step file
>may fail. I saw a few (though infrequent) situations where the same
>md5sum passed one time and failed in another attempt.

I think we need something more forgiving than md5sum that
would still be pretty small. Image compression/reduction
is what we are thinking of (but we have yet to prove it works
and to select the best algorithm for our needs).
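One way to read the compression/reduction idea -- purely my sketch, not the algorithm actually being evaluated -- is to downscale the region before fingerprinting it, so that single-pixel rendering differences average away while the fingerprint stays small.

```python
def reduce_region(pixels, width, height, factor=4):
    """Downscale an RGB pixel grid by averaging factor x factor blocks.

    pixels: list of (r, g, b) tuples, row-major. Hashing or storing
    the reduced image gives a small fingerprint that tolerates
    one-pixel rendering differences, unlike an exact md5sum.
    """
    out = []
    for by in range(0, height, factor):
        for bx in range(0, width, factor):
            block = [pixels[y * width + x]
                     for y in range(by, min(by + factor, height))
                     for x in range(bx, min(bx + factor, width))]
            n = len(block)
            out.append(tuple(sum(p[i] for p in block) // n
                             for i in range(3)))
    return out
```

The trade-off is the one already raised in the thread: the more aggressively the region is reduced, the more likely two genuinely different screens (say, small text) collapse to the same fingerprint.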

>>> We had two other implementation for guest-wizard, one which only compares
>>> two/three consecutive screendumps (problems with e.g. clock ticking), and
>>
>> Right, I like the idea of the region selection in stepmaker, it lets us
>> avoid areas which have VM specific info, like the device path or clock.
>> I'd like to keep the current stepmaker and region code, what I'd like to
>> see instead of just md5sum compare of the cropped region, to be able to
>> plug in different comparisons.  If a user does have screens from
>> stepmaker available, guest_wizard could attempt to do screen compare
>> like we do in kvmtests with match percentages.  If no screens are
>> available, fallback on md5 or some other comparison algorithm.

>That sounds better. Yet none of my step files worked in one go. I
>remember running my test and stepmaker in parallel and continuing to
>take screenshots until one passed, and putting that md5sum in the step
>file. 

That's an interesting approach. I haven't tried that one before.

>So at the end I was completely mixed up with respect to the cropped
>images and I had no idea which cropped image corresponded to which
>md5sum.

The step file records the step number (which is the image number)
as a comment for the step.
I'm sure (and hope) that after you create a few step files, you'd pick the
right subimage and the whole process would be much easier.

>Though the RHEL5.3 step files that I generated in the text mode
>installation were quite strong.

That's nice to know.

>>> one similar to kvmtest. The kvmtest way is to let the user create his/her
>>> own screendumps to be used later. I did not want to add so many screendump
>>> images to the repository. Step-Maker keeps the images it uses, so we can
>>> compare them upon failure. Step-Editor lets the user change a single
>>> barrier_2 step (or more) by looking at the original image and picking a
>>> different area.
>>
>> Agreed, I don't want to commit screens to the repo either, I just want
>> to be able to use screens if a user has them available.
>>
>I have two questions with respect to step files.
>1. The timeouts: Timeouts may fall short if a step file generated on a
>high-end machine is used on a very low-spec machine, or when installing N
>virtual machines (say N ~ 50, 100 etc.) in parallel.

For those reasons, among others, we set the timeout to 5 times
what the step actually took. Since creating the step file is human-driven
(compared to running the installation), the effective multiplier is even
higher. That also creates a problem when guest installation fails.

We thought of a very quick and simple benchmark. We would run it in
step-maker and write the score in the step file. Then we would run it
again before guest installation and adjust the timeouts accordingly.
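
A rough sketch of that idea (the toy benchmark, the 5x safety factor and
the scaling formula here are all assumptions for illustration, not actual
kvm-autotest code):

```python
import time


def benchmark_score(iterations=200000):
    """Toy CPU benchmark: a higher score means a faster machine.
    Step-maker would record this score in the step file."""
    start = time.time()
    x = 0
    for i in range(iterations):
        x += i * i
    elapsed = time.time() - start
    return iterations / max(elapsed, 1e-9)


def adjusted_timeout(step_duration, recorded_score, current_score, factor=5):
    """Scale the recorded step duration by how much slower this host is
    than the one that created the step file, then apply the safety factor.
    Never scale the timeout *down* below the recorded duration."""
    slowdown = recorded_score / max(current_score, 1e-9)
    return step_duration * max(slowdown, 1.0) * factor
```

So a step that took 10 seconds on the recording machine would get a
100-second timeout on a host that benchmarks half as fast, but still a
50-second timeout on a faster host.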

>2. If there are changes in KVM display in future the md5sum will fail.
>So are we prepared for that?

It happened to us when suspend capability was added. I do not think you
can be prepared for that. The before/after screens are different. We have
step-editor for that. Using step-editor we can change the
step that failed due to the change (adjust it to the "after" screendump
or choose a different region), and hopefully guest installation would
be successful again.

Uri.


* Re: kvm-autotest -- introducing kvm_runtest_2
       [not found] <1786372222.913761236208477884.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
@ 2009-03-04 23:25 ` Uri Lublin
  0 siblings, 0 replies; 19+ messages in thread
From: Uri Lublin @ 2009-03-04 23:25 UTC (permalink / raw)
  To: Ryan Harper; +Cc: KVM List

From: "Ryan Harper" <ryanh@us.ibm.com>
> Uri Lublin <uril@redhat.com> [2009-03-04 02:59]:
>> 
>> >  - it seems like the definition and rules ought to be separate from the
>> >  last section which defines which tests to run (the fc8_quick area), so
>> >  adding something as simple as include support to kvm_config.py would
>> >  be sufficient to support a common definition file but different
>> >  testing rules.
>> Include support is one way to do it. We thought of a different way,
>> which is to add rules from the control file. So the control file would
>> pick the test-list it wants to run. Your suggestion is simpler, but you
>> need both a config file and a control file to change the test-list. We
>> need to change only the control file.
>
>OK, well, I viewed almost all of the test config file as static.  The
>rules about guests, features, and config variants can change over time,
>but not that often, which is why I wanted an include of that mostly
>static data and then something else that picked which guests and tests
>to run.  It does make sense for the control file to pick the set of
>tests, so I think we're in agreement, though I do think adding include
>support is still a good idea, but much lower on the priority list.

OK. Shouldn't be too hard.

>>
>> 
>> >
>> >- kvm_runtest_2 as mentioned doesn't mess with your host networking and
>> >relies on -net user and redir, would be good to plumb through -net tap
>> >support that can be configured instead of always using -net user
>> We want to add -net tap support, as that is what users usually use.
>> kvm_runtest does exactly that (as part of kvm_host.cfg). The drawbacks
>> of tap are (among others):
>>  - One must run tests as root, or play with sudo/chmod (true for
>>  /dev/kvm too, but simpler)
>>  - You have to have a dhcpd around. kvm_runtest by default runs a local
>>  dhcpd to serve kvm guests (part of the setup/cleanup tests).
>>  - It is a bit more difficult to configure.
>> 

>I don't want to have the test *setup* tap and all of the infrastructure
>like dhcp, or dnsmasq... I just want to be able to configure whether a
>guest is launched with -net tap or -net user.  kvm_runtest_2 punts on
>building and installing kvm as well as the networking, which is a great
>idea; I just want to be flexible enough to run kvm_runtest_2 with -net
>tap as well as -net user.

I agree we want to enable -net tap; I've just mentioned its drawbacks.
We want to add kvm_install to kvm_runtest_2.
It's important to build kvm on the client.
It would be nice to also use it to build kvm on guests.


>> >
>> >I noticed the references to the setup isos for windows that presumably
>> >install cygwin telnetd/sshd, are those available?  if the isos
>> >themselves aren't, if the build instructions are, that would be very
>> >useful.
>> You are right. We do have installation iso images for telnetd/sshd.
>> I did not want to commit iso images. Also, I am not sure about licensing,
>> and I prefer that we generate them on the user's machine. We'll add the
>> build instructions to the wiki.
>
>Agreed.  Sounds like a plan.


>> >- guest install wizard using md5sum region matching ... ouch.  This is
>> >quite fickle.  I've seen different kvms generate different md5sum for
>> >the same region a couple of times.  I know distributing screenshots of
>> >certain OSes is a grey area, but it would be nice to plumb through
>> >screenshot comparison and make that configurable.  FWIW, I'll probably
>> >look at pulling the screenshot comparison bits from kvmtest and getting
>> >that integrated in kvm_runtest_2.
>> Creating a step file is not as easy as it seems, exactly for that reason.
>> One has to pick a part of the screenshot that is only available when input
>> is expected and that would be consistent. We were thinking of replacing
>> the md5sum with a tiny compressed image of the part of the image that was
>> picked.

>It isn't just that step file creation isn't easy; even with a
>good stepfile with smart region boxes, md5sum can *still* fail because
>KVM renders one pixel in the region differently than on the system where
>the original was created; this creates false-positive failures.

We need something more "forgiving" than md5sum that would still
be a compact representation of the region box. We may be able to
use an image compression algorithm (we need to investigate that).
That's what I meant by "tiny compressed image".

>> We had two other implementations for guest-wizard, one which only compares
>> two/three consecutive screendumps (problems with e.g. clock ticking), and

>Right, I like the idea of the region selection in stepmaker, it lets us
>avoid areas which have VM-specific info, like the device path or clock.
>I'd like to keep the current stepmaker and region code; what I'd like to
>see, instead of just an md5sum compare of the cropped region, is the
>ability to plug in different comparisons.  If a user does have screens
>from stepmaker available, guest_wizard could attempt to do a screen
>compare like we do in kvmtests with match percentages.  If no screens
>are available, fall back on md5 or some other comparison algorithm.

As mentioned above, an image compression/reduction algorithm may be
better than md5sum. With kvmtest, setting the percentage is based on
assumptions too. Setting the percentage too high (100%) may result
in the same problem as md5sum, and setting it too low may find a
match before the guest is ready to receive input. I would not mind
adding the kvmtest way, but I am not sure I want it to be the
default. After some tuning, we do not have that many false
positives. We can let the user choose.
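
The kvmtest-style percentage match under discussion boils down to
something like the following (an illustrative sketch, not the actual
kvmtest code), which makes the threshold trade-off visible: a threshold of
1.0 is exactly as brittle as md5sum, while a low threshold can accept a
screen that is not really the expected one.

```python
def match_percentage(pixels_a, pixels_b):
    """Fraction of pixel values that are exactly equal in two same-size
    screendumps (flattened lists). 1.0 means identical images."""
    if len(pixels_a) != len(pixels_b) or not pixels_a:
        return 0.0
    same = sum(1 for a, b in zip(pixels_a, pixels_b) if a == b)
    return same / len(pixels_a)
```

With a threshold of, say, 0.95, a one-pixel rendering difference still
passes; with a threshold of 1.0 it fails, just as an md5sum compare would.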

>>> one similar to kvmtest. The kvmtest way is to let the user create his/her
>>> own screendumps to be used later. I did not want to add so many screendump
>>> images to the repository. Step-Maker keeps the images it uses, so we can
>>> compare them upon failure. Step-Editor lets the user change a single
>>> barrier_2 step (or more) by looking at the original image and picking a
>>> different area.
>>
>>Agreed, I don't want to commit screens to the repo either, I just want
>>to be able to use screens if a user has them available.

>> >  - a lot of the ssh and scp work to copy the autotest client into a
>> >  guest is already handled by autoserv
>> That is true, but we want to be able to run it as a client test too. That
>> way a user does not have to install the server to run kvm tests on
>> his/her machine.

>While true, we should try to use the existing server code for autotest
>install.

OK.

>> >  - vm.py has a lot of infrastructure that should be integrated into
>> >  autotest/server/kvm.py  or possibly client-side common code to support
>> >  the next comment
>> In the long term, there should be a client-server shared directory that
>> deals with kvm guests (letting the host client be the server for
>> kvm-guest clients).

>I believe client/common_lib is imported into server-side as common code,
>so moving kvm infrastructure bits there will allow server-side and any
>other client test to manipulate VM/KVM objects.

For kvm-install that can be done. Running autotest tests on kvm guests 
would not be as easy.

>> 
>> >  - kvm_tests.py defines new tests as functions; each of these tests
>> >  should be a separate client test, which sounds like a pain, but
>> >  should allow for easier test composition and hopefully make it easier
>> >  to add new tests that look like any other client-side test with just
>> >  the implementation.py and control file
>> >    - this model moves toward eliminating kvm_runtest_2 and having the
>> >    server side generate a set of tests to run and spawn those on a
>> >    target system.
>> I am not sure that I follow. Why is implementing a test as a function a
>> pain?

>Test as a function of course isn't a pain.  What I meant was that if
>I already have two guests and I just want to do a migration test, it
>would be nice to just be able to:
>
>% cd client/tests/kvm/migration
>% $AUTOTEST_BIN ./control

That would be nice. Actually, we used to keep it that way with kvm_runtest,
but there were too many dependencies so we dropped it. It is good
only for specific cases (e.g. migration of a Fedora-8 32-bit guest
with 512 MB ...). One can easily write a control file that uses the
kvm_runtest_2 "test" to do that. A simple configuration
file is another, not quite as simple, way (just 1-2 dictionaries in the
list).

>I'd like to move away from tests eval-ing and exec-ing other tests; it
>just feels a little hacky and stands out versus other autotest client
>tests.

I think testing kvm is more complex than testing e.g. kernelbuild.
Control files don't have to be one-liners.
I think that having a config-file parser in the control file, and calling
the appropriate tests, reduces complexity and code duplication. Think of
the guests x tests matrix we want to test.
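
The guests x tests matrix is the crux: even a handful of guests and tests
multiply into dozens of combinations, which is why generating the test
list from a config file beats hand-writing a control file per combination.
A minimal sketch (guest and test names here are made up for illustration;
the real kvm_config.py grammar is much richer, with variants, dependencies
and exceptions):

```python
import itertools

# A few guests and tests expand into a full matrix of test dictionaries,
# the kind of list the config-file parser hands to the control file.
guests = ["Fedora-8-32", "Fedora-10-64", "WinXP-32"]
tests = ["install", "boot", "reboot", "migrate"]

test_list = [{"guest": g, "test": t}
             for g, t in itertools.product(guests, tests)]
```

Three guests and four tests already yield twelve runs from a few lines of
config; maintaining twelve separate control files instead would be the
real code duplication.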

>We can probably table the discussion until we push patches at autotest
>and see what that community thinks of kvm_runtest_2 at that time.

OK.

>> The plan is to keep kvm_runtest_2 on the client side, but move the
>> configuration file + parser to the server (if one wants to dispatch tests
>> from the server). The server can dispatch the client test (using the
>> kvm_runtest_2 "test" + dictionary + tag). We have dependencies, and we
>> can spread unrelated kvm tests among similar hosts (e.g. install guest
>> OS and run tests for Fedora-10 64-bit on one machine, and install guest
>> OS and run tests for Windows-XP 32-bit on another).

>Yeah, that sounds reasonable.

>> We may have to add hosts to the configuration file, or add them at
>> runtime (from the control file?). We certainly do not want to go back to
>> the way kvm_runtest works (playing with symbolic links so that autotest
>> would find and run the test), and we do not want too many kvm_test_NAME
>> directories under client/tests/ .

>Agree with no symbolic links.  If we move common kvm utils and objects
>to client/common_lib that avoids any of that hackery.

>On the dir structure, I agree we don't have to pollute client/tests with
>a ton of kvm_* tests.  I'll look around upstream autotest and see if
>there is another client-side test to use as an example.

OK.



end of thread, other threads:[~2009-03-12 14:22 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-03-01 19:09 kvm-autotest -- introducing kvm_runtest_2 Uri Lublin
2009-03-02 17:45 ` Ryan Harper
2009-03-04  8:58   ` Uri Lublin
2009-03-04 18:15     ` Ryan Harper
2009-03-04 18:59       ` sudhir kumar
2009-03-04 22:23         ` Dor Laor
2009-03-09 16:23     ` Ryan Harper
2009-03-09 17:53       ` Uri Lublin
     [not found] <1786372222.913761236208477884.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
2009-03-04 23:25 ` Uri Lublin
     [not found] <160776987.914431236209784966.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
2009-03-04 23:52 ` Uri Lublin
     [not found] <1419870903.1471901236736357942.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
2009-03-11  1:54 ` Michael Goldish
2009-03-11  2:58   ` Ryan Harper
     [not found] <1170652852.1514931236758361795.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
2009-03-11  8:01 ` Michael Goldish
2009-03-11 13:12   ` Ryan Harper
2009-03-11 20:58   ` Ryan Harper
     [not found] <316573781.1616221236842323850.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
2009-03-12  7:25 ` Michael Goldish
2009-03-12 12:54   ` Ryan Harper
     [not found] <337817070.1631851236866509897.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
2009-03-12 14:03 ` Michael Goldish
2009-03-12 14:21   ` Ryan Harper
