* [Qemu-devel] Libvirt upstream CI efforts
       [not found] <20190118140336.GA19921@beluga.usersys.redhat.com>
@ 2019-02-21 14:39 ` Erik Skultety
  2019-02-21 17:56   ` [Qemu-devel] [libvirt] " Daniel P. Berrangé
                     ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Erik Skultety @ 2019-02-21 14:39 UTC (permalink / raw)
  To: libvir-list; +Cc: Yash Mankad, Cleber Rosa Junior, qemu-devel

Hi,
I'm starting this thread to continue the ongoing efforts to
bring actual integration testing to libvirt. The status quo is that
we build libvirt (along with our unit test suite) using different OS-flavoured
VMs in ci.centos.org. Andrea put a tremendous amount of work into not only
automating the whole process of creating the VMs, but also into giving a
dev a way to re-create the same environment locally, without jenkins, using
lcitool.

#TL;DR (if you're from QEMU, no TLDR for you ;), there are questions to answer)
- we need to run functional tests upstream on ci.centos.org
    -> pure VM testing environment (nested for migration) vs Docker images
- we need to host the upstream test suite somewhere
    -> main libvirt.git repo vs libvirt-jenkins-ci.git vs new standalone repo
- what framework to use for the test suite
    -> TCK vs avocado-vt vs plain avocado

#THE LONG STORY SHORT
As far as the functional test suite goes, there's an already existing
integration with the avocado-vt and a massive number of test cases at [1]
which is currently not used for upstream testing, primarily because of the huge
number of test cases (and also many unnecessary legacy test cases). An
alternative set of functional test cases is available as part of the
libvirt-tck framework [2]. The obvious question now is how can we build upon
any of this and introduce proper functional testing of upstream libvirt to our
jenkins environment at ci.centos.org, so I formulated the following discussion
points as I think these are crucial to sort out before we move on to the test
suite itself:

* Infrastructure/Storage requirements (need for hosting pre-built images?)
     - one of the main goals we should strive for with upstream CI is that
       every developer should be able to run the integration test suite on
       their own machine (conveniently) prior to submitting their patchset to
       the list
     - we need a reproducible environment to ensure that we don't get different
       results across different platforms (including ci.centos.org); therefore
       we could provide pre-built images with the environment already set up to
       run the suite in an L1 guest.
     - as for performing migration tests, we could utilize nested virt
     - should we go this way, having some publicly accessible storage to host
       all the pre-built images is a key problem to solve

           -> an estimate of how much we're currently using: roughly 130G from
              our 500G allocation at ci.centos.org to store 8 qcow2 images + 2
              freebsd isos

           -> we're also fairly generous with how much we allocate for a guest
              image as most of the guests don't even use half of the 20G
              allocation

           -> considering sparsifying the pre-built images and compressing
              them, plus adding a ton of dependencies to run the suite, and
              extending the pool of distros by including ubuntu 16 + 18,
              200-250G is IMHO quite a generous estimate of our real need

           -> we need to find a party willing to give us the estimated amount
              of publicly accessible storage and consider whether we'd need any
              funds for that

           -> we'd have to also talk to other projects that have done a similar
              thing about possible caveats related to hosting images, e.g.
              bandwidth

           -> as for ci.centos.org, it does provide a publicly accessible
              folder where projects can store artifacts (the documentation even
              mentions VM images), though there might be a limit [3]

     - alternatively, we could use Docker images to test migration instead of
       nested virt (and not only migration)
           -> we'd lose support for non-Linux platforms like FreeBSD, which
              we would not if we used nested virt

* Hosting the test suite itself
     - the main point to discuss here is whether the test suite should be part
       of the main libvirt repo, following QEMU's lead by example, or whether
       it should live in a separate repo (a new one, or as part of
       libvirt-jenkins-ci [4])
           -> the question here for QEMU folks is:

       *"What was the rationale for QEMU to decide to have avocado-qemu as
        part of the main repo?"*

* What framework to use for the test suite
     - libvirt-tck because it already contains a bunch of very useful tests as
       mentioned in the beginning
     - using the avocado-vt plugin because that's what the existing
       libvirt-test-provider [1] is about
     - pure avocado for its community popularity and continuous development,
       once again following QEMU's lead by example
           -> and again a question for QEMU folks:

       *"What was QEMU's take on this and why did they decide to go with
        avocado-qemu?"*


* Integrating the test suite with the main libvirt.git repo
     - if we host the suite as part of libvirt-jenkins-ci as mentioned in the
       previous section then we could make libvirt-jenkins-ci a submodule of
       libvirt.git and enhance the toolchain by having something like 'make
       integration' that would prepare the selected guests and execute the test
       suite in them (only on demand)

Regards,
Erik

[1] https://github.com/autotest/tp-libvirt
[2] https://libvirt.org/testtck.html
[3] https://wiki.centos.org/QaWiki/CI/GettingStarted#head-a46ee49e8818ef9b50225c4e9d429f7a079758d2
[4] https://github.com/libvirt/libvirt-jenkins-ci


* Re: [Qemu-devel] [libvirt] Libvirt upstream CI efforts
  2019-02-21 14:39 ` [Qemu-devel] Libvirt upstream CI efforts Erik Skultety
@ 2019-02-21 17:56   ` Daniel P. Berrangé
  2019-02-21 19:06     ` Cleber Rosa
  2019-02-21 18:50   ` [Qemu-devel] " Cleber Rosa
  2019-02-22 16:37   ` [Qemu-devel] [libvirt] " Daniel P. Berrangé
  2 siblings, 1 reply; 6+ messages in thread
From: Daniel P. Berrangé @ 2019-02-21 17:56 UTC (permalink / raw)
  To: Erik Skultety; +Cc: libvir-list, Yash Mankad, qemu-devel, Cleber Rosa Junior

On Thu, Feb 21, 2019 at 03:39:15PM +0100, Erik Skultety wrote:
> Hi,
> I'm starting this thread in order to continue with the ongoing efforts to
> bring actual integration testing to libvirt. Currently, the status quo is that
> we build libvirt (along with our unit test suite) using different OS-flavoured
> VMs in ci.centos.org. Andrea put a tremendous amount of work to not only
> automate the whole process of creating the VMs but also having a way for a
> dev to re-create the same environment locally without jenkins by using the
> lcitool.

Note that it is more than just libvirt on the ci.centos.org host. Our
current built project list covers libosinfo, libvirt, libvirt-cim,
libvirt-dbus, libvirt-glib, libvirt-go, libvirt-go-xml, libvirt-ocaml,
libvirt-perl, libvirt-python, libvirt-sandbox, libvirt-tck, osinfo-db,
osinfo-db-tools, virt-manager & virt-viewer.

For the C libraries in that list, we've also built & tested for
mingw32/64. All the projects also build RPMs.

In addition to ci.centos.org we have Travis CI testing for several
of the projects - libvirt, libvirt-go, libvirt-go-xml, libvirt-dbus,
libvirt-rust and libvirt-python. In the libvirt case this uses Docker
containers, but the others just use the native Travis environment. Travis is
the only place we get macOS coverage for libvirt.

Finally everything is x86-only right now, though I've been working on
using Debian to build cross-compiler container environments to address
that limitation.

We also have patchew scanning libvir-list and running syntax-check
across patches, though it has not been running very reliably in
recent times, which is a shame.


> #THE LONG STORY SHORT
> As far as the functional test suite goes, there's an already existing
> integration with the avocado-vt and a massive number of test cases at [1]
> which is currently not used for upstream testing, primarily because of the huge
> number of test cases (and also many unnecessary legacy test cases).
> An alternative set of functional test cases is available as part of the
> libvirt-tck framework [2]. The obvious question now is how can we build upon
> any of this and introduce proper functional testing of upstream libvirt to our
> jenkins environment at ci.centos.org, so I formulated the following discussion
> points as I think these are crucial to sort out before we move on to the test
> suite itself:
> 
> * Infrastructure/Storage requirements (need for hosting pre-build images?)
>      - one of the main goals we should strive for with upstream CI is that
>        every developer should be able to run the integration test suite on
>        their own machine (conveniently) prior to submitting their patchset to
>        the list

Any test suite that developers are expected to run before submissions
needs to be reasonably fast to run, and above all it needs to be very
reliable. If it is slow, or wastes time by giving false positives, developers
will quickly learn to not bother running it.

This necessarily implies that what developers run will only be a small
subset of what the CI systems run.

Developers just need to be able to then reproduce failures from CI
in some manner locally to debug things after the fact. 

>      - we need a reproducible environment to ensure that we don't get different
>        results across different platforms (including ci.centos.org), therefore
>        we could provide pre-built images with environment already set up to run
>        the suite in an L1 guest.
>      - as for performing migration tests, we could utilize nested virt

Migration testing doesn't fundamentally need nested virt. It just needs two
separate, isolated libvirt instances. From the POV of libvirt, we're just
testing our integration with QEMU, for which it is sufficient to use TCG,
not KVM. This could be done with any two VMs, or two container environments.
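To make the idea concrete, here is a purely illustrative shell sketch of such a migration check; the connection URIs and the 'demo-guest' domain name are placeholders, and it assumes two libvirt daemons are already reachable:

```shell
#!/bin/sh
# Purely illustrative: migrate a TCG guest between two independent
# libvirt instances. No nested virt or KVM is required -- only two
# isolated daemons. The URIs and 'demo-guest' name are placeholders.
SRC_URI="qemu+ssh://env1.example.org/system"
DST_URI="qemu+ssh://env2.example.org/system"

# Only attempt the calls where virsh actually exists; the commands
# themselves are the point of the sketch.
if command -v virsh >/dev/null 2>&1; then
    # A <domain type='qemu'> (TCG) guest is enough to exercise
    # libvirt's migration integration with QEMU.
    virsh -c "$SRC_URI" start demo-guest

    # Live-migrate to the second instance and confirm where it landed.
    virsh -c "$SRC_URI" migrate --live demo-guest "$DST_URI"
    virsh -c "$DST_URI" domstate demo-guest
else
    echo "virsh not available; nothing to run" >&2
fi
```

Whether the two endpoints are VMs or containers, only the URIs change.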

>      - should we go this way, having some publicly accessible storage to host
>        all the pre-built images is a key problem to solve
> 
>            -> an estimate of how much we're currently using: roughly 130G from
>               our 500G allocation at ci.centos.org to store 8 qcow2 images + 2
>               freebsd isos
> 
>            -> we're also fairly generous with how much we allocate for a guest
>               image as most of the guests don't even use half of the 20G
>               allocation
> 
>            -> considering sparsifying the pre-built images and compressing them
>               + adding a ton of dependencies to run the suite, extending the
>               pool of distros by including ubuntu 16 + 18, 200-250G is IMHO
>               quite a generous estimate of our real need
> 
>            -> we need to find a party willing to give us the estimated amount
>               of publicly accessible storage and consider whether we'd need any
>               funds for that
> 
>            -> we'd have to also talk to other projects that have done a similar
>               thing about possible caveats related to hosting images, e.g.
>               bandwidth
> 
>            -> as for ci.centos.org, it does provide publicly accessible folder
>               where projects can store artifacts (the documentation even
>               mentions VM images), there might a limit though [3]
> 
>      - alternatively, we could use Docker images to test migration instead of
>        nested virt (and not only migration)
>            -> we'd loose support for non-Linux platforms like FreeBSD which we
>               would not if we used nested

This is a false dichotomy, as use of Docker and VM images are not mutually
exclusive.

The problems around need for large disk storage and bandwidth requirements
for hosting disk images are a nice illustration of why the use of containers
for build & test environments has grown so quickly to become a defacto standard
approach.

The image storage & bandwidth issue becomes someone else's problem, where that
someone else is Docker Hub or Quay.io, and thus incurs no financial or admin
costs to the project. When using public services though, we should of course
be careful not to get locked into a specific vendor's service. Fortunately
docker images are widely supported enough that this isn't a big issue, as
we've already proved by switching from Docker Hub to Quay.io for our current
images.

The added benefit of containers is that developers don't then require a system
with physical or nested virt in order to run the environment. The containers
can run efficiently on any hardware available, physical or virtual.

The vast majority of our targeted build platforms are Linux based, so can
be hosted via containers. The *BSD platforms can remain using disk images.

Provided that developers have an automated mechanism for creating the *BSD
images (using lcitool as today), then I don't see a compelling need to
actually provide hosting for pre-built VM disk images. Developers can build
them locally as & when they are needed.



In terms of infrastructure I think the most critical thing we are lacking
is the hardware resource for actually running the CI systems, which is a
definite blocker if we want to run any kind of extensive functional /
integration tests.

We could make better use of our current ci.centos.org server by switching
the Linux environments to use Docker. This would reduce the memory footprint
of each environment significantly, as we'd not be statically partitioning
up RAM to each env. It would improve our CPU utilization by allowing each job
to access all host CPUs, with the host OS balancing load. Currently each VM
only gets 2 vCPUs, out of 8 in the host. So at times when only 1 job is
running we've wasted 3/4 of our CPU resource.  We could increase all the VMs
to have 8 vCPUs, which could improve things, but that still leaves 2
schedulers involved, so it won't be as resource efficient as containers.

Regardless of any improvements to current utilization though, I don't see
the current hardware having sufficient capacity to run serious integration
tests, especially if we want the integration tests run on multiple OS
targets.

IOW the key blocker is a 2nd server that we can register to ci.centos.org for
running jenkins jobs.  Our original server was a gift from the CentOS project
IIUC. If CentOS don't have the capacity to provide a second server, then I
think we should push Red Hat to fund it, given how fundamental the libvirt
project is to Red Hat.

> * Hosting the test suite itself
>      - the main point to discuss here is whether the test suite should be part
>        of the main libvirt repo following QEMU's lead by example or should they
>        live inside a separate repo (a new one or as part of
>        libvirt-jenkins-ci [4]

The libvirt-jenkins-ci repository is for tools/scripts/config to manage the
CI infrastructure itself. No actual tests belong there.

I don't think they need to be in the libvirt.git repository either. Libvirt
has long taken the approach of keeping independent parts of the project in
their own distinct repository, allowing them to live & evolve as best suits
their needs.  We indeed already have external repos containing integration
tests, such as the TCK and the (now largely unused) libvirt-Test-API.

Having it in a separate repo doesn't prevent us from making it easy to run
the test suite from the master libvirt.git. It is trivial to have make
rules that will pull in the external repo content. We've already done that
with libvirt-go-xml, where we pull in libvirt.git to provide XML files for
testing against.

>            -> the question here for QEMU folks is:
> 
>        *"What was the rationale for QEMU to decide to have avocado-qemu as
>         part of the main repo?"*
 
> * What framework to use for the test suite
>      - libvirt-tck because it already contains a bunch of very useful tests as
>        mentioned in the beginning
>      - using the avocado-vt plugin because that's what's the existing
>        libvirt-test-provider [1] is about
>      - pure avocado for its community popularity and continuous development and
>        once again follow QEMU leading by example
>            -> and again a question for QEMU folks:

I think there are two distinct questions / decision points there. There
is the harness that controls execution & reports the results of the tests,
and there is the framework for actually writing individual tests.

The libvirt-TCK originally used Perl's Test::Harness for running and
reporting the tests. The actual test cases use the TAP protocol
for their output. The test cases written in Perl use Test::More for
generating TAP output, while the test cases written in shell just write
TAP format results directly.

The test cases can thus be executed by anything that knows how to
consume the TAP format. Likewise, tests can be written in Python,
Go, $whatever, as long as it can emit TAP format.
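As an illustrative sketch (not an actual TCK test case, and the check descriptions are made up), a shell test emitting TAP directly can be as simple as:

```shell
#!/bin/sh
# Minimal TAP-emitting shell test: print the plan first, then one
# result line per check. Any TAP consumer (prove, a TAP-aware CI
# harness, ...) can execute this.
plan="1..2"

# Check 1: is virsh available? TAP's "# SKIP" directive lets the
# check pass gracefully on machines without libvirt installed.
if command -v virsh >/dev/null 2>&1; then
    check1="ok 1 - virsh binary found"
else
    check1="ok 1 - virsh binary found # SKIP virsh not installed"
fi

# Check 2: a trivial always-evaluable check, showing the ok/not ok form.
if [ -n "$(uname)" ]; then
    check2="ok 2 - uname reports a platform"
else
    check2="not ok 2 - uname reports a platform"
fi

printf '%s\n%s\n%s\n' "$plan" "$check1" "$check2"
```

Running it under `prove` (or any other TAP harness) reports pass/fail per check, which is exactly the loose coupling described above.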

I think such independence is useful, as it makes it easy to integrate
tests with distinct harnesses.

I also think there's really not any single "best" test suite. We
already have multiple, and they have different levels of coverage,
not least of the API bindings.

For example, by virtue of using Perl, the TCK provides integration
testing of the Sys::Virt API bindings to libvirt.

The avocado-vt gives the same benefit to the Python bindings.

We should just make it easy to run all of the suites that we might
find useful, rather than trying to pick a perfect one.

I should note that the TCK project is not merely intended for upstream
dev. It was also intended as something for downstream users/admins/
vendors to use as a way to validate that their specific installation
of libvirt was operating correctly. As such it goes to some trouble
to avoid damaging the host system, so that developers can safely
run it on a precious machine. They don't need to set up a throwaway
box to run it in, & it can be launched with zero config & do something
sensible.

>        *"What was QEMU's take on this and why did they decide to go with
>         avocado-qemu?"*

Note it is a bit more complicated than this for QEMU, as there are actually
many test systems in QEMU:

 - Unit tests emitting TAP format with GLib's TAP harness
 - QTests functional tests emitting TAP format with GLib's TAP harness
 - Block I/O functional/integration tests emitting a custom format
   with its own harness
 - Acceptance (integration) tests using Avocado


> * Integrating the test suite with the main libvirt.git repo
>      - if we host the suite as part of libvirt-jenkins-ci as mentioned in the
>        previous section then we could make libvirt-jenkins-ci a submodule of
>        libvirt.git and enhance the toolchain by having something like 'make
>        integration' that would prepare the selected guests and execute the test
>        suite in them (only on demand)

Git submodules have the (both useful & annoying) feature that they
are tied to a specific commit of the submodule. Tying to a specific
commit certainly makes sense for build deps like gnulib, but I don't
think it's so clear-cut for the test suite. I think it would be useful
not to have to update the submodule commit hash in libvirt.git every
time a new useful test was added to the test repo.

IOW, it is probably sufficient to simply have "make" do a normal
git clone of the external repo so it always gets fresh test content.
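A sketch of what such a rule could run is below; to keep it self-contained, a throwaway local repo stands in for the (hypothetical) external tests repo URL:

```shell
#!/bin/sh
# Sketch of the clone-fresh approach a 'make integration'-style rule
# could take: always fetch the current tip of the external test repo,
# rather than pinning a submodule commit hash. A throwaway local repo
# stands in for the real remote so the sketch runs anywhere.
set -e
WORK=$(mktemp -d)

# Stand-in for the external tests repo (hypothetical content).
git init -q "$WORK/tests-remote"
git -C "$WORK/tests-remote" -c user.email=ci@example.org -c user.name=ci \
    commit -q --allow-empty -m "initial tests"

# What the make rule would do on each run: clone fresh, or
# fast-forward an existing checkout to pick up new tests.
if [ -d "$WORK/suite/.git" ]; then
    git -C "$WORK/suite" pull --ff-only
else
    git clone -q "$WORK/tests-remote" "$WORK/suite"
fi

latest=$(git -C "$WORK/suite" log --oneline -1)
echo "$latest"
rm -rf "$WORK"
```

Since the checkout always tracks the remote's tip, new tests land without any change to libvirt.git itself.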

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


* Re: [Qemu-devel] Libvirt upstream CI efforts
  2019-02-21 14:39 ` [Qemu-devel] Libvirt upstream CI efforts Erik Skultety
  2019-02-21 17:56   ` [Qemu-devel] [libvirt] " Daniel P. Berrangé
@ 2019-02-21 18:50   ` Cleber Rosa
  2019-02-27 14:56     ` Wainer dos Santos Moschetta
  2019-02-22 16:37   ` [Qemu-devel] [libvirt] " Daniel P. Berrangé
  2 siblings, 1 reply; 6+ messages in thread
From: Cleber Rosa @ 2019-02-21 18:50 UTC (permalink / raw)
  To: Erik Skultety, libvir-list; +Cc: Yash Mankad, qemu-devel



On 2/21/19 9:39 AM, Erik Skultety wrote:
> Hi,
> I'm starting this thread in order to continue with the ongoing efforts to
> bring actual integration testing to libvirt. Currently, the status quo is that
> we build libvirt (along with our unit test suite) using different OS-flavoured
> VMs in ci.centos.org. Andrea put a tremendous amount of work to not only
> automate the whole process of creating the VMs but also having a way for a
> dev to re-create the same environment locally without jenkins by using the
> lcitool.
> 

Nice to meet you, lcitool!  I spent some time looking at and testing it, and
I see tremendous value in allowing developers to have the same experience
locally (or anywhere else they choose), as opposed to only behind a
black(-ish) box environment.  Yash may remember some of our
conversations about that.  The problem lcitool solves is a common one (I'm
facing it myself for "deployment checks", AKA integration tests, of
Avocado itself)[1].

Hopefully not diverting too much from the main topic, but I'd like to
ask if there was a specific reason for installing guests instead of
reusing something like virt-builder?  This is my "provision" step that I
use locally:

  $ virsh destroy $DOMAIN; \
    virt-builder \
      --ssh-inject=root:file:$SSH_PUB_KEY \
      --selinux-relabel \
      --root-password=password:$PASSWORD \
      --output=$VM_BASE_DIR/$DOMAIN.qcow2 \
      --format=qcow2 \
      --install python2 \
      $GUEST_TYPE \
    && virsh start $DOMAIN

This seems to be quicker and simpler than maintaining kickstart files.
It also covers more guests (it should work for FreeBSD, which seems to have
some caveats in lcitool).  Ideally, I'd like ansible to be responsible
for it (and it'd be fine for it to call this or something else).  But I
haven't looked at how well ansible will take this (maybe a dynamic
inventory implementation is all that's needed).

> #TL;DR (if you're from QEMU, no TLDR for you ;), there are questions to answer)
> - we need to run functional tests upstream on ci.centos.org
>     -> pure VM testing environment (nested for migration) vs Docker images
> - we need to host the upstream test suite somewhere
>     -> main libvirt.git repo vs libvirt-jenkins-ci.git vs new standalone repo
> - what framework to use for the test suite
>     -> TCK vs avocado-vt vs plain avocado
> 
> #THE LONG STORY SHORT
> As far as the functional test suite goes, there's an already existing
> integration with the avocado-vt and a massive number of test cases at [1]
> which is currently not used for upstream testing, primarily because of the huge
> number of test cases (and also many unnecessary legacy test cases). An
> alternative set of functional test cases is available as part of the
> libvirt-tck framework [2]. The obvious question now is how can we build upon
> any of this and introduce proper functional testing of upstream libvirt to our
> jenkins environment at ci.centos.org, so I formulated the following discussion
> points as I think these are crucial to sort out before we move on to the test
> suite itself:
> 
> * Infrastructure/Storage requirements (need for hosting pre-build images?)
>      - one of the main goals we should strive for with upstream CI is that
>        every developer should be able to run the integration test suite on
>        their own machine (conveniently) prior to submitting their patchset to
>        the list
>      - we need a reproducible environment to ensure that we don't get different
>        results across different platforms (including ci.centos.org), therefore
>        we could provide pre-built images with environment already set up to run
>        the suite in an L1 guest.

This seems to match the virt-builder approach.

>      - as for performing migration tests, we could utilize nested virt
>      - should we go this way, having some publicly accessible storage to host
>        all the pre-built images is a key problem to solve
> 
>            -> an estimate of how much we're currently using: roughly 130G from
>               our 500G allocation at ci.centos.org to store 8 qcow2 images + 2
>               freebsd isos
> 

Maybe this just needs to become a repository that developers can also
download from?  This would require the FreeBSD ISOs (and installation)
to be converted into similar pre-built images, though.

>            -> we're also fairly generous with how much we allocate for a guest
>               image as most of the guests don't even use half of the 20G
>               allocation
> 
>            -> considering sparsifying the pre-built images and compressing them
>               + adding a ton of dependencies to run the suite, extending the
>               pool of distros by including ubuntu 16 + 18, 200-250G is IMHO
>               quite a generous estimate of our real need
> 
>            -> we need to find a party willing to give us the estimated amount
>               of publicly accessible storage and consider whether we'd need any
>               funds for that
> 
>            -> we'd have to also talk to other projects that have done a similar
>               thing about possible caveats related to hosting images, e.g.
>               bandwidth

We're hosting a very small number of images (and small ones in size) here:

  https://avocado-project.org/data/assets/

There's at least one image that gets downloaded on every single
Avocado-VT installation (vt-bootstrap) by default.  I have to admit I
haven't monitored the bandwidth usage, but it hasn't gone over the quota
(and we're paying ~5 USD/month for that server).

> 
>            -> as for ci.centos.org, it does provide publicly accessible folder
>               where projects can store artifacts (the documentation even
>               mentions VM images), there might a limit though [3]
> 
>      - alternatively, we could use Docker images to test migration instead of
>        nested virt (and not only migration)
>            -> we'd loose support for non-Linux platforms like FreeBSD which we
>               would not if we used nested
> 

One must pay attention to capabilities, seccomp and the other layers added
by containers.  I'm not fully confident that the results of
virtualization testing under a container (especially failures) are just
as good as results from a non-containerized environment.  But I may be
on track to changing my opinion on this matter.

> * Hosting the test suite itself
>      - the main point to discuss here is whether the test suite should be part
>        of the main libvirt repo following QEMU's lead by example or should they
>        live inside a separate repo (a new one or as part of
>        libvirt-jenkins-ci [4]
>            -> the question here for QEMU folks is:
> 
>        *"What was the rationale for QEMU to decide to have avocado-qemu as
>         part of the main repo?"*
> 

Whenever you have an external test suite, you lose the automatic
version matching of the component you're testing.  Then conditionals,
abstractions, and special treatment for the components being tested tend
to plague everything.  Avocado-VT/tp-{qemu,libvirt} are examples of test
framework repositories that may still support 10 years or so of
different software versions.  The end result is *not* nice because:

  * Abstraction increases to support multiple versions of everything
  * The learning curve goes through the roof
  * Developers don't take the time to learn a complex framework full of
    abstractions
  * QE does take the time, because they usually need to support more than
    one version of a piece of software
  * Developers and QE now have their own silos

You could overcome some of that by keeping policies on supported
versions, babysitting and deprecating code, but I firmly believe that
those housekeeping tasks are bound to fail.

There's one thing developers will take immediate action on, and that is
when a "make check[-functional]" fails... so a test suite in this sense
needs to be intrusive and affect a developer's common workflow.

> * What framework to use for the test suite
>      - libvirt-tck because it already contains a bunch of very useful tests as
>        mentioned in the beginning
>      - using the avocado-vt plugin because that's what's the existing
>        libvirt-test-provider [1] is about
>      - pure avocado for its community popularity and continuous development and
>        once again follow QEMU leading by example
>            -> and again a question for QEMU folks:
> 
>        *"What was QEMU's take on this and why did they decide to go with
>         avocado-qemu?"*
> 

Well, "avocado-qemu" did not exist when we initially pursued this task.
Besides the points above, as to why keep the tests as part of the
main repo: we understand that there are a lot of common problems in
testing.  They're usually solved over and over again, in an ad-hoc
manner for each project.

(Pure) Avocado was for a long time nothing but a speculation on what we
believed most projects would need for their testing (plus an Avocado-VT
compatibility layer).  We got some things right, and some things wrong.

During the last year or so, a number of Avocado features have been added
for the sake of QEMU testing (for the "avocado-qemu" initiative), but I
bet that a user reading the documentation won't guess that.  Those
features are abstract and should work for any other project.

So, along the way, we gained confidence that the testing stack could be
shared and split, and that tests living within the main repo end up
looking simple and effective.  The "glue" between tests and framework is
quite thin, and the bootstrap can be done transparently as part of the
"make check-acceptance" target.  We haven't heard any resistance to this
approach from developers, so, so far, we believe we're on the right
track.

> 
> * Integrating the test suite with the main libvirt.git repo
>      - if we host the suite as part of libvirt-jenkins-ci as mentioned in the
>        previous section then we could make libvirt-jenkins-ci a submodule of
>        libvirt.git and enhance the toolchain by having something like 'make
>        integration' that would prepare the selected guests and execute the test
>        suite in them (only on demand)
> 

Yes, this is the type of experience that should ultimately be delivered.

> Regards,
> Erik
> 
> [1] https://github.com/autotest/tp-libvirt
> [2] https://libvirt.org/testtck.html
> [3] https://wiki.centos.org/QaWiki/CI/GettingStarted#head-a46ee49e8818ef9b50225c4e9d429f7a079758d2
> [4] https://github.com/libvirt/libvirt-jenkins-ci
> 

[1]
https://github.com/avocado-framework/avocado/tree/master/selftests/deployment

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]
[  7ABB 96EB 8B46 B94D 5E0F  E9BB 657E 8D33 A5F2 09F3  ]


* Re: [Qemu-devel] [libvirt] Libvirt upstream CI efforts
  2019-02-21 17:56   ` [Qemu-devel] [libvirt] " Daniel P. Berrangé
@ 2019-02-21 19:06     ` Cleber Rosa
  0 siblings, 0 replies; 6+ messages in thread
From: Cleber Rosa @ 2019-02-21 19:06 UTC (permalink / raw)
  To: Daniel P. Berrangé, Erik Skultety
  Cc: libvir-list, Yash Mankad, qemu-devel



On 2/21/19 12:56 PM, Daniel P. Berrangé wrote:
> On Thu, Feb 21, 2019 at 03:39:15PM +0100, Erik Skultety wrote:
>> Hi,
>> I'm starting this thread in order to continue with the ongoing efforts to
>> bring actual integration testing to libvirt. Currently, the status quo is that
>> we build libvirt (along with our unit test suite) using different OS-flavoured
>> VMs in ci.centos.org. Andrea put a tremendous amount of work to not only
>> automate the whole process of creating the VMs but also having a way for a
>> dev to re-create the same environment locally without jenkins by using the
>> lcitool.
> 
> Note that it is more than just libvirt on the ci.centos.org host. Our
> current built project list covers libosinfo, libvirt, libvirt-cim,
> libvirt-dbus, libvirt-glib, libvirt-go, libvirt-go-xml, libvirt-ocaml,
> libvirt-perl, libvirt-python, libvirt-sandbox, libvirt-tck, osinfo-db,
> osinfo-db-tools, virt-manager & virt-viewer
> 
> For the C libraries in that list, we've also built & tested for
> mingw32/64. All the projects also build RPMs.
> 
> In addition to ci.centos.org we have Travis CI testing for several
> of the projects - libvirt, libvirt-go, libvirt-go-xml, libvirt-dbus,
> libvirt-rust and libvirt-python. In the libvirt case this uses Docker
> containers, but others just use native Travis environment. Travis is
> the only place we get macOS coverage for libvirt.
> 
> Finally everything is x86-only right now, though I've been working on
> using Debian to build cross-compiler container environments to address
> that limitation.
> 
> We also have patchew scanning libvir-list and running syntax-check
> across patches, though it has not been running very reliably in
> recent times, which is a shame.
> 
> 
>> #THE LONG STORY SHORT
>> As far as the functional test suite goes, there's an already existing
>> integration with the avocado-vt and a massive number of test cases at [1]
>> which is currently not used for upstream testing, primarily because of the huge
>> number of test cases (and also many unnecessary legacy test cases).
>> An alternative set of functional test cases is available as part of the
>> libvirt-tck framework [2]. The obvious question now is how can we build upon
>> any of this and introduce proper functional testing of upstream libvirt to our
>> jenkins environment at ci.centos.org, so I formulated the following discussion
>> points as I think these are crucial to sort out before we move on to the test
>> suite itself:
>>
>> * Infrastructure/Storage requirements (need for hosting pre-build images?)
>>      - one of the main goals we should strive for with upstream CI is that
>>        every developer should be able to run the integration test suite on
>>        their own machine (conveniently) prior to submitting their patchset to
>>        the list
> 
> Any test suite that developers are expected to run before submissions
> needs to be reasonably fast to run, and above all it needs to be very
> reliable. If it is slow, or wastes time by giving false positives, developers
> will quickly learn not to bother running it.
> 
> This necessarily implies that what developers run will only be a small
> subset of what the CI systems run.
> 
> Developers just need to be able to then reproduce failures from CI
> in some manner locally to debug things after the fact. 
> 
>>      - we need a reproducible environment to ensure that we don't get different
>>        results across different platforms (including ci.centos.org), therefore
>>        we could provide pre-built images with the environment already set up to run
>>        the suite in an L1 guest.
>>      - as for performing migration tests, we could utilize nested virt
> 
> Migration testing doesn't fundamentally need nested virt. It just needs two
> separate isolated libvirt instances. From POV of libvirt, we're just testing
> our integration with QEMU, for which it is sufficient to use TCG, not KVM.
> This could be done with any two VMs, or two container environments.
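As a concrete sketch of what Daniel describes, the shell fragment below drives two independent libvirt instances with virsh; the host names, guest name and URIs are made up purely for illustration and don't refer to any real infrastructure:

```shell
#!/bin/sh
# Hypothetical migration smoke test between two isolated libvirt
# instances.  All names and URIs are illustrative assumptions.
SRC_URI=${SRC_URI:-qemu+ssh://ci-worker-1/system}   # source libvirtd
DST_URI=${DST_URI:-qemu+ssh://ci-worker-2/system}   # destination libvirtd
GUEST=${GUEST:-migration-smoke-test}                # a small TCG (non-KVM) guest

run_migration_check() {
    # Start the guest on the source, live-migrate it, then confirm
    # it is visible on the destination.
    virsh -c "$SRC_URI" start "$GUEST" &&
    virsh -c "$SRC_URI" migrate --live "$GUEST" "$DST_URI" &&
    virsh -c "$DST_URI" domstate "$GUEST"
}

# Only meaningful on a host that can reach both libvirt instances:
# run_migration_check
```

Since it only needs two libvirt URIs, the same sketch works whether the two instances live in VMs or in containers.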
> 
>>      - should we go this way, having some publicly accessible storage to host
>>        all the pre-built images is a key problem to solve
>>
>>            -> an estimate of how much we're currently using: roughly 130G from
>>               our 500G allocation at ci.centos.org to store 8 qcow2 images + 2
>>               freebsd isos
>>
>>            -> we're also fairly generous with how much we allocate for a guest
>>               image as most of the guests don't even use half of the 20G
>>               allocation
>>
>>            -> considering sparsifying the pre-built images and compressing them
>>               + adding a ton of dependencies to run the suite, extending the
>>               pool of distros by including ubuntu 16 + 18, 200-250G is IMHO
>>               quite a generous estimate of our real need
>>
>>            -> we need to find a party willing to give us the estimated amount
>>               of publicly accessible storage and consider whether we'd need any
>>               funds for that
>>
>>            -> we'd have to also talk to other projects that have done a similar
>>               thing about possible caveats related to hosting images, e.g.
>>               bandwidth
>>
>>            -> as for ci.centos.org, it does provide publicly accessible folder
>>               where projects can store artifacts (the documentation even
>>               mentions VM images), there might be a limit though [3]
>>
>>      - alternatively, we could use Docker images to test migration instead of
>>        nested virt (and not only migration)
>>            -> we'd lose support for non-Linux platforms like FreeBSD, which we
>>               would not if we used nested virt
> 
> This is a false dichotomy, as use of Docker and VM images are not mutually
> exclusive.
> 
> The problems around need for large disk storage and bandwidth requirements
> for hosting disk images are a nice illustration of why the use of containers
> for build & test environments has grown so quickly to become a defacto standard
> approach.
> 
> The image storage & bandwidth issue becomes someone else's problem, where that
> someone else is Docker Hub or Quay.io, and thus incurs no financial or admin
> costs to the project. When using public services though, we should of course
> be careful not to get locked into a specific vendor's service. Fortunately
> docker images are widely supported enough that this isn't a big issue, as
> we've already proved by switching from Docker Hub to Quay.io for our current
> images.
> 
> The added benefit of containers is that developers don't then require a system
> with physical or nested virt in order to run the environment. The containers
> can run efficiently on any hardware available, physical or virtual.
> 
> The vast majority of our targeted build platforms are Linux based, so can
> be hosted via containers. The *BSD platforms can remain using disk images.
> 
> Provided that developers have an automated mechanism for creating the *BSD
> images (using lcitool as today), then I don't see a compelling need to
> actually provide hosting for pre-built VM disk images. Developers can build
> them locally as & when they are needed.
> 
> 
> 
> In terms of infrastructure I think the most critical thing we are lacking
> is the hardware resource for actually running the CI systems, which is a
> definite blocker if we want to run any kind of extensive functional /
> integration tests.
> 
> We could make better use of our current ci.centos.org server by switching
> the Linux environments to use Docker. This would reduce the memory footprint
> of each environment significantly, as we'd not be statically partitioning
> up RAM to each env. It would improve our CPU utilization by allowing each job
> to access all host CPUs, with the host OS doing the balancing. Currently each VM only
> gets 2 vCPUs, out of 8 in the host. So in times where only 1 job is running
> we've wasted 3/4 of our CPU resource.  We could increase all the VMs to have
> 8 vCPUs, which could improve things but it still has 2 schedulers involved,
> so won't be as resource efficient as containers.
> 
> Regardless of any improvements to current utilization though, I don't see
> the current hardware having sufficient capacity to run serious integration
> tests, especially if we want the integration tests run on multiple OS
> targets.
> 
> IOW the key blocker is a 2nd server that we can register to ci.centos.org for
> running jenkins jobs.  Our original server was a gift from the CentOS project
> IIUC. If CentOS don't have the capacity to provide a second server, then I
> think we should push Red Hat to fund it, given how fundamental the libvirt
> project is to Red Hat.
> 
>> * Hosting the test suite itself
>>      - the main point to discuss here is whether the test suite should be part
>>        of the main libvirt repo following QEMU's lead by example or should they
>>        live inside a separate repo (a new one or as part of
>>        libvirt-jenkins-ci [4]
> 
> The libvirt-jenkins-ci repository is for tools/scripts/config to manage the
> CI infrastructure itself. No actual tests belong there.
> 
> I don't think they need to be in the libvirt.git repository either. Libvirt
> has long taken the approach of keeping independent parts of the project in
> their own distinct repository, allowing them to live & evolve as best suits
> their needs.  We indeed already have external repos containing integration
> tests such as the TCK and the (largely unused now) libvirt-Test-API.
> 
> Having it in a separate repo doesn't prevent us from making it easy to run
> the test suite from the master libvirt.git. It is trivial to have make
> rules that will pull in the external repo content. We've already done that
> with libvirt-go-xml, where we pull in libvirt.git to provide XML files for
> testing against.
> 
>>            -> the question here for QEMU folks is:
>>
>>        *"What was the rationale for QEMU to decide to have avocado-qemu as
>>         part of the main repo?"*
>  
>> * What framework to use for the test suite
>>      - libvirt-tck because it already contains a bunch of very useful tests as
>>        mentioned in the beginning
>>      - using the avocado-vt plugin because that's what's the existing
>>        libvirt-test-provider [1] is about
>>      - pure avocado for its community popularity and continuous development and
>>        once again follow QEMU leading by example
>>            -> and again a question for QEMU folks:
> 
> I think there's two distinct questions / decision points there. There
> the harness that controls execution & reporting results of the tests,
> and there is the framework for actually writing individual tests.
> 
> The libvirt-TCK originally used Perl's Test::Harness for running and
> reporting the tests. The actual test cases are using the TAP protocol
> for their output. The test cases written in Perl use Test::More for
> generating TAP output; the test cases written in shell just write
> TAP format results directly.
> 
> The test cases can thus be executed by anything that knows how to
> consume the TAP format. Likewise tests can be written in Python,
> Go, $whatever, as long as it can emit TAP format.
> 
> I think such independence is useful as it makes it easy to integrate
> tests with distinct harnesses.
> 

I agree that putting all your eggs in a single basket can be a bad
thing, but IMO, requiring developers to write code that emits TAP (as
simple as it is) is a clear sign that things are out of place.

I believe most developers would not be able to write TAP-compatible
output by heart.  This is just to say that "there should be one obvious
and easy way to do it".  I like the way qemu-iotests behave, because
they don't place this type of burden on the test writer; still, they
can be written in a number of ways (shell, Python unittest, plain
Python, etc).
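(For reference, the TAP output being discussed really is tiny; a shell test case in the style Daniel describes boils down to something like the sketch below, where the `check` helper is illustrative and not part of the TCK or any existing suite:)

```shell
#!/bin/sh
# Hand-written TAP output from a shell test case.  The "check" helper
# is an illustrative assumption, not part of any existing suite.
count=0
echo "1..2"    # the TAP plan: two test points follow

check() {      # check <description> <command...>
    desc=$1; shift
    count=$((count + 1))
    if "$@" >/dev/null 2>&1; then
        echo "ok $count - $desc"
    else
        echo "not ok $count - $desc"
    fi
}

check "/tmp exists" test -d /tmp
check "true succeeds" true
```

Anything that understands TAP (Test::Harness, prove, etc.) can then consume that output.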

> I also think there's really not any single "best" test suite. We
> already have multiple, and they have different levels of coverage
> not least of the API bindings.
> 
> For example, by virtue of using Perl, the TCK provides integration
> testing of the Sys::Virt API bindings to libvirt.
> 
> The avocado-vt gives the same benefit to the Python bindings.
> 

Not that this is super important, but Avocado-VT doesn't use the libvirt
bindings, and neither does tp-libvirt (long long story).

> We should just make it easy to run all of the suites that we might
> find useful rather than trying pick a perfect one.
> 

There can be many indeed.  From a product perspective, it'd be nice to
make the contributor's life easier by giving a few pointers on how to
write a test for what they are contributing.  And ideally it should be
as simple as possible.

But this is all pretty obvious :)

- Cleber.

> I should note that the TCK project is not merely intended for upstream
> dev. It was also intended as something for downstream users/admins/
> vendors to use as a way to validate that their specific installation
> of libvirt was operating correctly. As such it goes to some trouble
> to avoid damaging the host system, so that developers can safely
run it on a precious machine. They don't need to set up a throwaway
> box to run it in & it can be launched with zero config & do something
> sensible.
> 
>>        *"What was QEMU's take on this and why did they decide to go with
>>         avocado-qemu?"*
> 
> Note it is a bit more complicated than this for QEMU, as there are actually
> many test systems in QEMU:
> 
>  - Unit tests emitting TAP format with GLib's TAP harness
>  - QTests functional tests emitting TAP format with GLib's TAP harness
>  - Block I/O functional/integration tests emitting a custom format
>    with its own harness
>  - Acceptance (integration) tests using Avocado
> 
> 
>> * Integrating the test suite with the main libvirt.git repo
>>      - if we host the suite as part of libvirt-jenkins-ci as mentioned in the
>>        previous section then we could make libvirt-jenkins-ci a submodule of
>>        libvirt.git and enhance the toolchain by having something like 'make
>>        integration' that would prepare the selected guests and execute the test
>>        suite in them (only on demand)
> 
> Git submodules have the (both useful & annoying) feature that they
> are tied to a specific commit of the submodule. Tying to a specific
> commit certainly makes sense for build deps like gnulib, but I don't
> think it's so clear-cut for the test suite. I think it would be useful
> not to have to update the submodule commit hash in libvirt.git any
> time a new useful test was added to the test repo.
> 
> IOW, it is probably sufficient to simply have "make" do a normal
> git clone of the external repo so it always gets fresh test content.
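A rough sketch of what the body of such a "make" recipe could do follows; the repository URL and directory names are made up for illustration, not an existing interface:

```shell
#!/bin/sh
# Sketch of a hypothetical "make integration" recipe body: fetch the
# external test suite fresh rather than pinning a submodule commit.
# The example URL and paths are illustrative assumptions.

sync_tests_repo() {   # sync_tests_repo <repo-url> <checkout-dir>
    repo=$1 dir=$2
    if [ -d "$dir/.git" ]; then
        # Existing checkout: fast-forward to the latest tests
        git -C "$dir" pull --ff-only
    else
        git clone "$repo" "$dir"
    fi
}

# e.g.:  sync_tests_repo https://libvirt.org/git/libvirt-tck.git build/tck
```

Each run then picks up whatever tests are currently in the external repo, with no commit hash to update in libvirt.git.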
> 
> Regards,
> Daniel
> 

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]
[  7ABB 96EB 8B46 B94D 5E0F  E9BB 657E 8D33 A5F2 09F3  ]


* Re: [Qemu-devel] [libvirt] Libvirt upstream CI efforts
  2019-02-21 14:39 ` [Qemu-devel] Libvirt upstream CI efforts Erik Skultety
  2019-02-21 17:56   ` [Qemu-devel] [libvirt] " Daniel P. Berrangé
  2019-02-21 18:50   ` [Qemu-devel] " Cleber Rosa
@ 2019-02-22 16:37   ` Daniel P. Berrangé
  2 siblings, 0 replies; 6+ messages in thread
From: Daniel P. Berrangé @ 2019-02-22 16:37 UTC (permalink / raw)
  To: Erik Skultety; +Cc: libvir-list, Yash Mankad, qemu-devel, Cleber Rosa Junior

On Thu, Feb 21, 2019 at 03:39:15PM +0100, Erik Skultety wrote:
> number of test cases (and also many unnecessary legacy test cases). An
> alternative set of functional test cases is available as part of the
> libvirt-tck framework [2]. The obvious question now is how can we build upon
> any of this and introduce proper functional testing of upstream libvirt to our
> jenkins environment at ci.centos.org, so I formulated the following discussion
> points as I think these are crucial to sort out before we move on to the test
> suite itself:

Having thought about this some more I think it would be helpful to outline
the various areas of testing libvirt is missing / could benefit from, as
I think it is broader than just running an integration test suite on the
ci.centos.org

Listing in order of the phase of development, not priority....


 - Testing by developers before code submissions

   Developers today (usually) run unit tests + syntax check before
   submission, though even that is forgotten at times.

   Ideally some level of functional testing would be commonly performed
   too.

   Amount of time devs are likely to want to spend on testing will depend
   on the scope of the work being done. ie they're not going to test
   on all distros, with all integration tests for simple patches. Also
   no desire to make devs run QEMU tests on patches which are changing
   Xen code and vice versa.

   Essentially goal is to give developers confidence that they have
   not done something terrible before submission. Not expecting devs
   to catch all bugs themselves at this stage.


 - Testing of patches posted to mailing list pre merge

   patchew.org currently monitors patch postings to libvir-list
   and imports each posting into a new github branch. In theory
   it runs syntax-check against them & reports failures but this
   has not been reliable.

   Highly desirable to have all patches go through build + unit
   tests at this point, across multiple distros. It is common for
   devs to break mingw and/or *BSD and/or macOS builds, since the
   vast majority of the dev focus is Linux.  There is generally long
   enough between patch posting & review approval that build+unit
   tests should be doable with sufficient patchew worker resource.


   Extra brownie points if the build + tests ran across each
   individual patch to prove that  git bisect-ability isn't
   broken. This would require significantly more worker
   resources though. This is the only place bi-sect could
   be tested as anything beyond is too late.


   Running functional tests at this point would be beneficial,
   on the general principle that the sooner we find a problem,
   the cheaper it is to fix & less impact it has on people.
   Massively dependent on worker resource.


 - Testing of latest git master post merge

   This is where almost all of our current effort has gone.
   
   ci.centos.org does build & unit testing of all libvirt
   components, fully chained together, on Linux + BSD using VMs

   Travis CI does testing of individual libvirt components
   on Linux + macOS, using containers for Linux.

   Both of these are x86 only thus far. Through use of Debian
   cross compilers we can get non-x86 coverage for builds,
   but not much else without finding some real hardware.

   Desirable to have functional testing here to detect problems
   before they get into any formal release. Dependent on resource
   to run on ci.centos.org or Travis, or another system we might
   get access to. Still likely to be x86 only.


 - Testing during RPM builds

   When building new packages for distros, 'make check' is usually
   run. This has caught problems appearing in distros which have
   sometimes been missed by ci.centos.org.

   Desirable to have functional testing here in order to prevent
   breakage making its way into distros, by aborting the build.

   Runs on all Fedora architectures which is a big plus, since
   all earlier upstream testing resources are x86 only.

   The environment is quite restrictive as it is inside Koji/Brew
   but the libguestfs test suite has shown it's possible to do very
   thorough testing of Libvirt & QEMU in this context that frequently
   identifies bugs in libvirt & QEMU & kernel & other Fedora/RHEL
   components.

   Fedora has an automated system that frequently rebuilds the
   RPMs to check FTBFS (fail to build from source) status of
   packages to detect regressions over time.


 - Testing of composed distros

   Real integration testing belongs here, as it's validating the
   exact software build & deployment setup that users will ultimately
   run with.

   The test environment is more flexible than during RPM build,
   but by the time it runs the update is already in the distro
   repos, potentially breaking downstream users (libguestfs).

   Not 100% sure, but I think the Fedora CI is x86 only.



 - Testing by developers investigating CI failures

   For any of the above steps which are run by any kind of automated
   system there needs to be a way for developers to reproduce the
   same test environment in an easy manner.

   For ci.centos.org we can re-generate the VMs.
   For Travis we can pull the docker images from quay.io
   For koji/brew we can run mock locally

   None of these are a perfect match though, as they can't reproduce
   the exact hardware setup, or the load characteristics, just the
   software install setup.



It is clear we have lots of places where we should/could be doing
functional testing, and none of them is going to cover all the
bases.

The environments in which we need to be able to do testing are
also quite varied in scope. Some places (ci.centos) have freedom
to bring full VMs customized as we desire, others (Travis, Gitlab)
are supporting docker containers with arbitrary Linux, others (brew,
koji) we just have to accept whatever environment we're executing
in.

In terms of developers, we can't rely on the ability to run VMs, because
they may already be running inside a VM, and lacking nested-virt.

Essentially I think this means we need to make it practical to run
the functional tests as is, in whatever the current execution
environment is. If that works, then pretty much by implication,
it ought to be possible to optionally launch it inside a VM, or
inside a container to reproduce the precise environment of a
particular test system.


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


* Re: [Qemu-devel] Libvirt upstream CI efforts
  2019-02-21 18:50   ` [Qemu-devel] " Cleber Rosa
@ 2019-02-27 14:56     ` Wainer dos Santos Moschetta
  0 siblings, 0 replies; 6+ messages in thread
From: Wainer dos Santos Moschetta @ 2019-02-27 14:56 UTC (permalink / raw)
  To: Cleber Rosa, Erik Skultety, libvir-list; +Cc: Yash Mankad, qemu-devel


On 02/21/2019 03:50 PM, Cleber Rosa wrote:
>
> On 2/21/19 9:39 AM, Erik Skultety wrote:
>> Hi,
>> I'm starting this thread in order to continue with the ongoing efforts to
>> bring actual integration testing to libvirt. Currently, the status quo is that
>> we build libvirt (along with our unit test suite) using different OS-flavoured
>> VMs in ci.centos.org. Andrea put a tremendous amount of work to not only
>> automate the whole process of creating the VMs but also having a way for a
>> dev to re-create the same environment locally without jenkins by using the
>> lcitool.
>>
> Nice to meet you, lcitool!  I spent some time looking at and testing it, and
> I see tremendous value in allowing developers to have the same experience
> locally (or anywhere else they choose) as opposed to only behind a
> black(-ish) box environment.  Yash may remember some of our
> conversations about that.  The problem lcitool solves is common (I'm
> having that myself for "deployment checks", AKA integration tests, of
> Avocado itself)[1].
>
> Hopefully not diverting too much from the main topic, but I'd like to
> ask if there was a specific reason for installing guests instead of
> reusing something like virt-builder?  This is my "provision" step that I
> use locally:
>
>    $ virsh destroy $DOMAIN; virt-builder
> --ssh-inject=root:file:$SSH_PUB_KEY --selinux-relabel
> --root-password=password:$PASSWORD --output=$VM_BASE_DIR/$DOMAIN.qcow2
> --format=qcow2 --install python2 $GUEST_TYPE && virsh start $DOMAIN
>
> Which seems to be quicker and simpler than maintaining kickstart files.
>   It also covers more guests (it should work for FreeBSD, which seems to have
> some caveats on lcitool).  Ideally, I'd like ansible to be responsible
> for it (and it'd be fine that it calls this or something else).  But I
> haven't looked at how well ansible will take this (maybe a dynamic
> inventory implementation is all that's needed).

At Red Hat, the downstream CI system for QEMU uses Linchpin[1] to 
provision bare-metal machines on Beaker [2].

Linchpin is an Ansible-based tool that allows provisioning (and destroying)
resources with various providers, for example Libvirt, OpenStack, and
Duffy [3] (the service behind ci.centos.org). On provisioning success, it
generates an Ansible inventory file which can be used to run tasks on the
resources.

It would be good if we could adopt that tool upstream, so that we have a
"common language" for provisioning resources across the Libvirt and QEMU CI
projects, eventually making Linchpin a better tool in the process.

Okay, there are limitations... it does not provision Docker containers 
yet, for example. I'm working on it [4] though.

Regards,

Wainer

[1] https://linchpin.readthedocs.io/en/latest/index.html
[2] https://beaker-project.org/
[3] https://wiki.centos.org/QaWiki/CI/Duffy
[4] https://github.com/CentOS-PaaS-SIG/linchpin/pull/977


>
>> #TL;DR (if you're from QEMU, no TLDR for you ;), there are questions to answer)
>> - we need to run functional tests upstream on ci.centos.org
>>      -> pure VM testing environment (nested for migration) vs Docker images
>> - we need to host the upstream test suite somewhere
>>      -> main libvirt.git repo vs libvirt-jenkins-ci.git vs new standalone repo
>> - what framework to use for the test suite
>>      -> TCK vs avocado-vt vs plain avocado
>>
>> #THE LONG STORY SHORT
>> As far as the functional test suite goes, there's an already existing
>> integration with the avocado-vt and a massive number of test cases at [1]
>> which is currently not used for upstream testing, primarily because of the huge
>> number of test cases (and also many unnecessary legacy test cases). An
>> alternative set of functional test cases is available as part of the
>> libvirt-tck framework [2]. The obvious question now is how can we build upon
>> any of this and introduce proper functional testing of upstream libvirt to our
>> jenkins environment at ci.centos.org, so I formulated the following discussion
>> points as I think these are crucial to sort out before we move on to the test
>> suite itself:
>>
>> * Infrastructure/Storage requirements (need for hosting pre-build images?)
>>       - one of the main goals we should strive for with upstream CI is that
>>         every developer should be able to run the integration test suite on
>>         their own machine (conveniently) prior to submitting their patchset to
>>         the list
>>       - we need a reproducible environment to ensure that we don't get different
>>         results across different platforms (including ci.centos.org), therefore
>>         we could provide pre-built images with the environment already set up to run
>>         the suite in an L1 guest.
> This seems to match the virt-builder approach.
>
>>       - as for performing migration tests, we could utilize nested virt
>>       - should we go this way, having some publicly accessible storage to host
>>         all the pre-built images is a key problem to solve
>>
>>             -> an estimate of how much we're currently using: roughly 130G from
>>                our 500G allocation at ci.centos.org to store 8 qcow2 images + 2
>>                freebsd isos
>>
> Maybe this just needs to become a repository that developers can also
> download from?  This would require the FreeBSD ISOs (and installation)
> to be converted into a similar pre-built image setup, though.
>
>>             -> we're also fairly generous with how much we allocate for a guest
>>                image as most of the guests don't even use half of the 20G
>>                allocation
>>
>>             -> considering sparsifying the pre-built images and compressing them
>>                + adding a ton of dependencies to run the suite, extending the
>>                pool of distros by including ubuntu 16 + 18, 200-250G is IMHO
>>                quite a generous estimate of our real need
>>
>>             -> we need to find a party willing to give us the estimated amount
>>                of publicly accessible storage and consider whether we'd need any
>>                funds for that
>>
>>             -> we'd have to also talk to other projects that have done a similar
>>                thing about possible caveats related to hosting images, e.g.
>>                bandwidth
> We're hosting a very small number of images (which are also small in size) here:
>
>    https://avocado-project.org/data/assets/
>
> There's at least one image that gets downloaded on every single
> Avocado-VT installation (vt-bootstrap) by default.  I have to admit I
> haven't monitored the bandwidth usage, but it hasn't gone over the quota
> (and we're paying ~5 USD/month for that server).
>
>>             -> as for ci.centos.org, it does provide publicly accessible folder
>>                where projects can store artifacts (the documentation even
>>                mentions VM images), there might be a limit though [3]
>>
>>       - alternatively, we could use Docker images to test migration instead of
>>         nested virt (and not only migration)
>>             -> we'd lose support for non-Linux platforms like FreeBSD, which we
>>                would not if we used nested virt
>>
> One must pay attention to capabilities, seccomp and other layers added
> to containers.  I'm not fully confident that the results of
> virtualization testing under a container (especially failures) are just
> as good as results from a non-containerized environment.  But I may be
> on track to changing my opinion on this matter.
>
>> * Hosting the test suite itself
>>       - the main point to discuss here is whether the test suite should be part
>>         of the main libvirt repo following QEMU's lead by example or should they
>>         live inside a separate repo (a new one or as part of
>>         libvirt-jenkins-ci [4]
>>             -> the question here for QEMU folks is:
>>
>>         *"What was the rationale for QEMU to decide to have avocado-qemu as
>>          part of the main repo?"*
>>
> Whenever you have an external test suite, you lose the automatic
> version matching of the component you're testing.  Then conditionals,
> abstractions, special treatments for the components we're testing tend
> to plague everything.  Avocado-VT/tp-{qemu,libvirt} are examples of test
> framework repositories that may still support 10 years or so of
> different software versions.  The end result is *not* nice because:
>
>    * Abstraction increases to support multiple versions of everything
>    * Learning curve goes through the roof
>    * Developers don't take the time to learn a complex framework full of
> abstractions
>    * QE does take the time, because they usually need to support more
> than one version of a piece of software
>    * Developers and QE now have their own silos
>
> You could overcome some of that by keeping policies on supported
> versions, babysitting and deprecating code, but I firmly believe that
> those housekeeping tasks are bound to fail.
>
> There's one thing developers will take immediate action on, and that is
> when a "make check[-functional]" fails... so a test suite in this sense
> need to be intrusive and affect a developer's common workflow.
>
>> * What framework to use for the test suite
>>       - libvirt-tck because it already contains a bunch of very useful tests as
>>         mentioned in the beginning
>>       - using the avocado-vt plugin because that's what's the existing
>>         libvirt-test-provider [1] is about
>>       - pure avocado for its community popularity and continuous development and
>>         once again follow QEMU leading by example
>>             -> and again a question for QEMU folks:
>>
>>         *"What was QEMU's take on this and why did they decide to go with
>>          avocado-qemu?"*
>>
> Well, "avocado-qemu" did not exist when we initially pursued this task.
> Besides the points above, as to why we keep the tests as part of the
> main repo: we understand that there are a lot of common problems in
> testing, which are usually solved over and over again, in an ad-hoc
> manner for each project.
>
> (Pure) Avocado was for a long time nothing but a speculation about what
> we believed most projects would need for their testing (plus an
> Avocado-VT compatibility layer).  We got some things right, and some
> things wrong.
>
> During the last year or so, a number of Avocado features have been added
> for the sake of QEMU testing (for the "avocado-qemu" initiative), but I
> bet that a user reading the documentation won't guess that.  Those
> features are abstract and should work for any other project.
>
> So, along the way, we had confidence that the testing stack could be
> shared and split, and that tests living within the main repo ended up
> looking simple and effective.  The "glue" between tests and framework is
> quite thin and the bootstrap can be done transparently as part of the
> "make check-acceptance" target.  We haven't heard any resistance from
> developers to this approach, so, so far, we believe we're on the right
> track.
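
The "thin glue" point above can be illustrated with a minimal sketch: tests live in-tree, environment bootstrap is checked transparently, and an unprepared environment produces skips rather than failures. This is not QEMU's actual avocado-qemu code; it uses stdlib unittest and hypothetical names to show the shape of the glue.

```python
# Minimal in-tree "glue" sketch (hypothetical, stdlib-only; not the real
# avocado-qemu code): one small base class is the entire framework layer.
import shutil
import subprocess
import unittest

class FunctionalTest(unittest.TestCase):
    """Base class: the only in-tree 'framework' glue."""

    def setUp(self):
        # Transparent bootstrap check: skip, rather than fail, when the
        # environment is not prepared, so the target stays green for
        # developers who did not opt in.
        if shutil.which("virsh") is None:
            self.skipTest("virsh not installed; functional env not prepared")

class VersionTest(FunctionalTest):
    def test_version_reported(self):
        out = subprocess.run(["virsh", "--version"],
                             capture_output=True, text=True, check=True)
        self.assertRegex(out.stdout.strip(), r"^\d+\.\d+")
```

A make target would then just invoke the runner, e.g. `python -m unittest discover tests/functional`, mirroring what "make check-acceptance" does with the avocado runner.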
>
>> * Integrating the test suite with the main libvirt.git repo
>>       - if we host the suite as part of libvirt-jenkins-ci as mentioned in the
>>         previous section then we could make libvirt-jenkins-ci a submodule of
>>         libvirt.git and enhance the toolchain by having something like 'make
>>         integration' that would prepare the selected guests and execute the test
>>         suite in them (only on demand)
>>
> Yes, this is the type of experience that should ultimately be delivered.
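
The opt-in 'make integration' experience sketched above could be driven by something like the following. The guest names, the `lcitool run` invocation and the `check-functional` target are all hypothetical placeholders, not libvirt's or lcitool's actual interfaces.

```python
# Hypothetical driver for an opt-in 'make integration' target: prepare the
# guests below (lcitool would do the real work) and run the functional
# suite in each.  Every name and command here is illustrative only.
import subprocess

GUESTS = ["libvirt-fedora-29", "libvirt-debian-9"]  # hypothetical guest names

def run_suite(guest: str, dry_run: bool = True) -> int:
    """Run the functional suite inside one guest; return its exit status."""
    cmd = ["lcitool", "run", guest, "--", "make", "check-functional"]
    if dry_run:
        # only show what would be executed
        print("would run:", " ".join(cmd))
        return 0
    return subprocess.run(cmd).returncode

def integration(dry_run: bool = True) -> bool:
    """Mirror 'make integration': fail if any guest's suite fails."""
    return all(run_suite(g, dry_run) == 0 for g in GUESTS)

print("integration ok" if integration() else "integration FAILED")
```

Because the target is only run on demand, the everyday "make check" workflow stays fast while the full guest matrix remains one command away.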
>
>> Regards,
>> Erik
>>
>> [1] https://github.com/autotest/tp-libvirt
>> [2] https://libvirt.org/testtck.html
>> [3] https://wiki.centos.org/QaWiki/CI/GettingStarted#head-a46ee49e8818ef9b50225c4e9d429f7a079758d2
>> [4] https://github.com/libvirt/libvirt-jenkins-ci
>>
> [1] https://github.com/avocado-framework/avocado/tree/master/selftests/deployment
>



Thread overview: 6+ messages
     [not found] <20190118140336.GA19921@beluga.usersys.redhat.com>
2019-02-21 14:39 ` [Qemu-devel] Libvirt upstream CI efforts Erik Skultety
2019-02-21 17:56   ` [Qemu-devel] [libvirt] " Daniel P. Berrangé
2019-02-21 19:06     ` Cleber Rosa
2019-02-21 18:50   ` [Qemu-devel] " Cleber Rosa
2019-02-27 14:56     ` Wainer dos Santos Moschetta
2019-02-22 16:37   ` [Qemu-devel] [libvirt] " Daniel P. Berrangé
