* [Fuego] First steps with fuego
@ 2017-05-09  9:49 Rafael Gago Castano
  2017-05-09 19:04 ` Bird, Timothy
  0 siblings, 1 reply; 18+ messages in thread
From: Rafael Gago Castano @ 2017-05-09  9:49 UTC (permalink / raw)
  To: fuego

Hello,

We are a group using a LAVA setup, but we are not completely happy with it and are trying alternatives. What we like about Fuego is that it seems more hackable, it runs the server from a container (hopefully letting us develop tests locally), and it doesn't try to bring in a lot of web technologies/languages.

So far I have some questions.

I have built the PDF under fuego/docs (fuego-docs.pdf) and I'm trying to follow it on Ubuntu 16.04. I'm now blocked at step 4. The Jenkins web interface is not showing anything specific to Fuego; I see just a vanilla Jenkins server with no tabs, not something like this:

http://bird.org/fuego-files/fuego-dashboard-history.png

I was able to submit some tests (the "Initial test" section in the README file) using the ftc tool, and I saw some of them failing (docker.default.Functional.LTP, docker.default.Functional.glib), but the Jenkins interface doesn't let me do anything. I'm not very well versed in Jenkins and Docker setups, so I might be doing something wrong. What should I check?

Then I saw no reference to what LAVA calls multinode tests (just for someone reading this who doesn't know LAVA: a test involving more than one board, to be able to test e.g. communication hardware). If this is achievable with Fuego, how would we do it?

BR,
Rafa.


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Fuego] First steps with fuego
  2017-05-09  9:49 [Fuego] First steps with fuego Rafael Gago Castano
@ 2017-05-09 19:04 ` Bird, Timothy
  2017-05-10  6:32   ` Rafael Gago Castano
  2017-05-10 13:12   ` Rafael Gago Castano
  0 siblings, 2 replies; 18+ messages in thread
From: Bird, Timothy @ 2017-05-09 19:04 UTC (permalink / raw)
  To: Rafael Gago Castano, fuego

> From: Rafael Gago Castano on Tuesday, May 09, 2017 2:49 AM
>
> We are a group using a LAVA setup, but we are not completely happy
> with it and are trying alternatives. What we like about Fuego is that it seems
> more hackable, it runs the server from a container (hopefully letting us
> develop tests locally), and it doesn't try to bring in a lot of web
> technologies/languages.
Thanks for taking a look at Fuego!  I hope we can answer your questions
and address any issues you have.

> 
> So far I have some questions.
> 
> I have built the PDF under fuego/docs (fuego-docs.pdf) and I'm trying to
> follow it on Ubuntu 16.04. I'm now blocked at step 4. The Jenkins web
> interface is not showing anything specific to Fuego; I see just a vanilla Jenkins
> server with no tabs, not something like this:
> 
> http://bird.org/fuego-files/fuego-dashboard-history.png

Yeah - the PDF doc is unfortunately a bit out of date.  We had a major change
in the Jenkins interface, and in the method of populating the interface, in our
1.1 release in March.  I'm sorry this didn't get updated for that
release.  The file on our wiki that you point to shows the old, pre-1.1 interface.

The best location for information on getting up and running is the wiki.  This page:
http://bird.org/fuego/Fuego_Quickstart_Guide
hopefully has some good information you can use (and up-to-date screenshots).
There are also some screenshots at:
http://bird.org/fuego/Jenkins_User_Interface

> 
> I was able to submit some tests (the "Initial test" section in the README file)
> using the ftc tool, and I saw some of them failing
> (docker.default.Functional.LTP, docker.default.Functional.glib), but the
> Jenkins interface doesn't let me do anything. I'm not very well versed in
> Jenkins and Docker setups, so I might be doing something wrong. What
> should I check?
Actually, docker is a special target for playing around with Fuego by
running tests on the Fuego docker container.  We actually discovered
a few real bugs in docker with Fuego, that we haven't been able to fix
yet.  So the LTP bug you saw is real.  There are currently 3 errors from
LTP that should be passing but are failing with the "docker"
board; these are under investigation now.  We may just blacklist those
particular sub-tests, so that they don't cause undue anxiety when running Fuego.

In the future, our plan is to have an 'ftc' sub-command you can run to save
your current results (including test failures that you feel you can ignore),
as the new "reference" results that future tests on your hardware will
compare against to determine success or failure.  There was a tool for this
in the past, but it was an awk script, and had some usability problems, and
we have just recently refactored a lot of our results parsing in conjunction
with a unified results output and processing, across all tests (both
Benchmark and Functional).

Unfortunately, this area of Fuego is undergoing
some churn at the moment.  You can detect errors with Fuego, using
the Jenkins interface, and you can detect functional test regressions
and benchmark results (performance) regressions.  But you can't use
the Jenkins interface to blacklist the failing tests (indicate that you
don't want to run them, or indicate that you want to ignore those
results). And actually investigating the cause of an error is still left
as an exercise to the user (although our links to the test logs should
help).

We are still investigating the docker bug with the Functional.glib test, to
see what the actual problem is.   I hope to have this resolved and/or
explained in Fuego version 1.2. To be honest, the 1.1 release was a bit
rushed, and we probably should have removed Functional.glib from the
testplans for the docker board, to avoid these issues (and then
re-enabled it in the testplans when the issue was resolved.)

You bring up an important point - which is that we should probably have
some release notes that explain known problems, especially with one of
our "demo" boards (the 'docker' pseudo-board).  One thing we are striving
for is a good out-of-the box experience, and it looks like we've failed with
the 1.1 release.

> 
> Then I saw no reference to what LAVA calls multinode tests (just
> for someone reading this who doesn't know LAVA: a test involving more
> than one board, to be able to test e.g. communication hardware). If this is
> achievable with Fuego, how would we do it?

In a way, every test in Fuego is a multi-node test, with the host being one
end of the test.  The way Fuego is structured, the base script for the test
runs on the host, inside the docker container, and communicates with
the board.  For many tests, the base script just: 1) builds the software,
2) deploys it to the target, 3) executes it on the board, 4) collects the results,
and 5) analyzes them.
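
To make the flow concrete, here is a minimal, self-contained mock of those five phases. test_deploy, test_run, and test_pre_check are named elsewhere in this thread; the other function names and the 'cmd'/'put' helpers are local stand-ins invented for this sketch, not Fuego's real API:

```shell
#!/bin/bash
# Simplified mock of the five base-script phases described above.
# 'cmd' and 'put' are stand-ins for illustration only.

cmd() { echo "[board] $*"; }              # stand-in: run a command on the board
put() { echo "[copy] $1 -> board:$2"; }   # stand-in: copy a file to the board

test_build()         { echo "1) build: compile the test program on the host"; }
test_deploy()        { put ./hello_test /tmp; }
test_run()           { cmd /tmp/hello_test; }
test_fetch_results() { echo "4) collect: pull logs back from the board"; }
test_processing()    { echo "5) analyze: compare the log against expected results"; }

test_build
test_deploy
test_run
test_fetch_results
test_processing
```

The real base script is sourced by Fuego's core, which supplies the actual transport helpers and calls the phases in order.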

However, we do have a few tests that perform other actions on the host,
or rely on processes running on the host.
For example, the Functional.netperf test relies on a netperf server running on
the host (an instance is started in the container if one is not already running).
A test can execute whatever steps it wants to, including communicating
with nodes, machines, or test equipment that is not on the target board.
These steps would be put into the base script's test_pre_check() or test_run()
functions.  test_pre_check is usually reserved for checking that test prerequisites
are met, which in the case of a multi-node test might include reserving off-target
testing hardware (such as video scanners), or setting up
the connections for things like bus testing (e.g. establishing the connections to the
device under test for CAN bus testing or USB testing).

Then the test_run() function should do things like initiating the actual saving
of results, or starting and stopping communication from off-board endpoints.

See http://bird.org/fuego/Adding_a_test for more information about the
functions in a test base script.

One concept in Fuego related to this, that I think is important, is that
we intend the contents of the Fuego docker container to be a
"test distribution", or in other words, a distribution of Linux, based
on Debian, that is specifically geared for testing.  Not only does Fuego
come with actual test programs (and the test packages themselves
include source, if needed, to build the tests for the target boards),
but the Fuego test distribution is also intended to include host-side
testing software, as well as (potentially) the software
needed to drive external devices and test equipment.  It is a pain
for the average developer to collect, install, and integrate this
software onto their desktop machines, and that's one aspect of
testing that we'd like to share between developers (test program
setup, hardware and tools configuration, etc.)  You can see the
first steps in this direction by the inclusion in Fuego of the netperf
server, and several utilities (like serio and ttc) to handle different
board communication methods.  We'll be expanding this in the
years to come, so that more and more tests can be run with just
the single docker installation of Fuego, requiring much less pre-test
configuration and specialized expertise by testers.

I hope this answers your questions.  Thanks again for
experimenting with Fuego and providing feedback.
 -- Tim



* Re: [Fuego] First steps with fuego
  2017-05-09 19:04 ` Bird, Timothy
@ 2017-05-10  6:32   ` Rafael Gago Castano
  2017-05-10 13:12   ` Rafael Gago Castano
  1 sibling, 0 replies; 18+ messages in thread
From: Rafael Gago Castano @ 2017-05-10  6:32 UTC (permalink / raw)
  To: Bird, Timothy, fuego

Thanks for the detailed response! 

We have some tests in LAVA that communicate from board to board, but they are old and can be rewritten to communicate with the host instead, so I'll keep looking at Fuego during lulls in my workload.


* Re: [Fuego] First steps with fuego
  2017-05-09 19:04 ` Bird, Timothy
  2017-05-10  6:32   ` Rafael Gago Castano
@ 2017-05-10 13:12   ` Rafael Gago Castano
  2017-05-10 19:30     ` Bird, Timothy
  1 sibling, 1 reply; 18+ messages in thread
From: Rafael Gago Castano @ 2017-05-10 13:12 UTC (permalink / raw)
  To: Bird, Timothy, fuego

I got some time today to keep investigating. It was trivial to integrate with one of our boards, so the wiki is up to date and good. It was a very easy and trouble-free setup.

Then I wanted, as an exercise, to write a test for RS485 (the same case applies to CAN), but I have been looking at "fuego-core/engine/tests" and I couldn't answer the questions myself.

1.) Our current setup powers the device on/off (custom commands in LAVA), fetches a kernel, dtb and rootfs from our build servers, and copies the images to the DUT's RAM for RAM booting (tftp + U-Boot commands integrated in LAVA). How are these steps best integrated into Fuego?

LAVA has this feature integrated, but we already have an internal Python program that handles this, so we don't need Fuego to talk with U-Boot, just to know the best place to launch the tool. Spontaneously, with what I know, I can only think of doing it in the "test_pre_check" function or as the first (dummy) test of a DUT-specific testplan, but both would have their drawbacks.

2.) Let's say that we have the desired image running on the device and we want to test RS485 TX/RX at different baudrates. Now with LAVA we boot, wait until both machines are booted (using the "lava-send" and "lava-wait" synchronization primitives), set the baudrate on both sides, start the receiver side (either host or DUT), synchronize with the sender side (either host or DUT), start sending, and then start the cycle again with another baudrate/TX-RX config. We do the same type of test for CAN (using cansequence).

I haven't figured out how I would implement such test with fuego. Both sender and receiver would need to be able to signal a failure, but there is only a "run_test" function and it's running on the DUT. Is there any way to implement this or any sane workaround?

3.) This one is very easy to add/contribute, but some inbuilt support to specify inside a test a git + branch + commit sha instead of a tarball would be nice to avoid duplication of tests for different library versions (e.g. when testing previous image releases that get backports).

BR,
Rafa.



* Re: [Fuego] First steps with fuego
  2017-05-10 13:12   ` Rafael Gago Castano
@ 2017-05-10 19:30     ` Bird, Timothy
       [not found]       ` <HE1PR06MB3036959D51FD5FBFC02208C8C2ED0@HE1PR06MB3036.eurprd06.prod.outlook.com>
  0 siblings, 1 reply; 18+ messages in thread
From: Bird, Timothy @ 2017-05-10 19:30 UTC (permalink / raw)
  To: Rafael Gago Castano, fuego

> -----Original Message-----
> From: Rafael Gago on Wednesday, May 10, 2017 6:12 AM
> I got some time today to keep investigating. It was trivial to integrate
> with one of our boards, so the wiki is up to date and good. It was a very
> easy and trouble-free setup.

Thanks.  That's good to hear.
 
> Then I wanted, as an exercise, to write a test for RS485 (the same case
> applies to CAN), but I have been looking at "fuego-core/engine/tests"
> and I couldn't answer the questions myself.
> 
> 1.) Our current setup powers the device on/off (custom
> commands in LAVA), fetches a kernel, dtb and rootfs from our build servers,
> and copies the images to the DUT's RAM for RAM booting (tftp + U-Boot
> commands integrated in LAVA). How are these steps best integrated into
> Fuego?
> 
> LAVA has this feature integrated, but we already have an internal Python
> program that handles this, so we don't need Fuego to talk with U-Boot, just to
> know the best place to launch the tool. Spontaneously, with what I know, I
> can only think of doing it in the "test_pre_check" function or as the first
> (dummy) test of a DUT-specific testplan, but both would have their drawbacks.

It depends on what the actual test is.  If you are trying to determine if something
is going wrong with those steps themselves, then it would be appropriate
to put them in the "test_run()" function.  If these are just precursors to prepare
for the actual testing, and the "actual" test is of some other functionality, then
they should go in "test_pre_check()", probably with something to validate that
they succeeded before proceeding with the actual test.

If this is for setup for a sequence of tests, then I think making a dummy test,
as you indicated, is the right thing.  You could add it as the first step of a Fuego
testplan, or you could make your own Jenkins job to handle the job sequence.
There are two Jenkins plugins which can handle this type of thing:

https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Join+Plugin

Both of these allow for jobs to be chained together, as precursors or successors
to other jobs.

I am not a LAVA expert, but my understanding is that LAVA always does a reboot
between tests.  (I may be confusing this with KernelCI.)  Fuego does not have this
model by default, but you can construct Jenkins jobs to perform this type of
system-level sequence of 'build, deploy and reboot' if desired. 
This presentation might be helpful:
Continuous Integration and Autotest Environment Using Fuego - by Kenji Tadano & Kengo Ibe (Mitsubishi Electric), Oct 2016 at ELCE 2016
You can find the links for the PDF and video at:
http://bird.org/fuego/Presentations#2016

If lots of people do testing this way, we should probably improve our support
for it, or come up with guidelines for a good way to do it.

> 2.) Let's say that we have the desired image running on the device and we
> want to test RS485 TX/RX at different baudrates. Now with LAVA we boot,
> wait until both machines are booted (using the "lava-send" and "lava-wait"
> synchronization primitives) set the baudrate in both sides, start the receiver
> side (either host or DUT), synchronize with the sender side (either host or
> DUT) , start sending and then we start the cycle again with another baudrate
> - TX/RX cfg. We do the same type of test for CAN too (using cansequence).

Hmmm.  The 'ftc' command does have a 'wait-for' sub-command for this type
of thing.  However, I don't know of any tests that use it, and I don't know how
it compares to the lava-send/lava-wait protocols.  When I've used it in the
past I've done simple things like check for the existence of a file, where the "signaling"
side has done something like "touch /tmp/target_ready".

It currently only checks conditions on the host.  This is a deficiency.  It should also
check for conditions on the target.  Something could be jury-rigged, but it would
require some multi-processing in the test_run function.
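
The polling pattern behind this kind of wait can be sketched as follows. wait_for here is a hypothetical local helper written just for this sketch, not ftc's actual implementation; the background job stands in for the remote "signaling" side:

```shell
#!/bin/bash
# Hypothetical polling helper: block until a shell condition succeeds
# or a timeout (in seconds) expires.  A sketch, not the real ftc code.

wait_for() {
    local timeout=$1; shift
    local elapsed=0
    until eval "$@"; do
        if [ "$elapsed" -ge "$timeout" ]; then
            return 1                      # timed out
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    return 0
}

# The "signaling" side (a background job standing in for the other
# machine) touches a file; the waiting side polls for it.
sync_file=$(mktemp -u)
( sleep 0.3; touch "$sync_file" ) &
wait_for 10 "test -f $sync_file" && echo "target ready"
rm -f "$sync_file"
```

One-second polling granularity keeps the loop cheap; a shorter sleep would tighten the latency at the cost of more transport round trips.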

> I haven't figured out how I would implement such test with fuego. Both
> sender and receiver would need to be able to signal a failure, but there is
> only a "run_test" function and it's running on the DUT. Is there any way to
> implement this or any sane workaround?

test_run() is running on the host, and can start and stop both sides.
But it is not threaded.  It would need to start separate processes
on the sending side and the receiving side of the test, and keep them in
sync.  If one side is the host, then 'ftc wait-for' could be used for this.
You could have a sequence in test_run that looked like the code below.

I'm just brainstorming this, and haven't tried any of it, but here's
some code.  It assumes that the sender on the host can detect
a failure or will time out, and will do "touch /tmp/sender_done" when
it is finished.

# start the test log on the board
report "echo Starting baud_rate test"
for baud_rate in $FUNCTIONAL_RS485_RXTX_BAUD_RATE_LIST ; do 
   # start a listener on the target, and append its stdout to the test log
   report_append "start_listener $baud_rate"

   # start a sender on the host, and collect its stdout locally (on the host)
   $TEST_HOME/start_sender $baud_rate >/tmp/sender_log

   # wait (up to 100 seconds) for process on host to signal completion
   ftc wait-for -t 100 "test -f /tmp/sender_done"
   # kill sender on host (should check ftc exit code to see if this is needed)
   pkill sender

   # terminate listener on board, if it hasn't already exited
   kill_procs listener

   # append the log from the host to the board's test log
   put /tmp/sender_log /tmp
   report_append "cat /tmp/sender_log"
done

BAUD_RATE_LIST would be in the spec file for the test
(named here as Functional.RS485_rxtx).

start_sender is a made-up program that runs on the host to 
start one side of the connection.  start_listener is a made-up
program that runs on the target board to start the other side
of the connection.  This code assumes it starts a process called
'listener' on the board, and this code kills that listener when the
test is over, if it is still running. start_listener is deployed to the target
in test_deploy, and start_sender is executed directly from
the test's home directory (e.g. fuego-core/engine/tests/Functional.RS485_rxtx)

OK - just going through this mental exercise has shown some
deficiencies in Fuego's host/target logging model and synchronization
that we should correct.  There should definitely be a method of adding 
something from the host to the log, in one step.  And we should
extend ftc to support checking the condition of something on target.
I'm not sure if the logging shown here will work or not.  start_listener
needs to return immediately, and leave the 'listener' process executing
on the board, still outputting to the log.  If this doesn't work by default,
another layer of output capture and appending to the log could be done,
but that's more awkward.

The function "report()" starts the log on the board, and "report_append()"
adds more material to it.  Since we also want information from the host
(or from a 3rd machine), we have to collect that ourselves and add it to
the log directly.

The tmp file (sender_log) on the host should definitely be put into a more
unique location (test specific, and with a unique temp filename), to avoid
collisions between tests running on multiple boards.

But this shows proof of concept for how this can be done.

> 
> 3.) This one is very easy to add/contribute, but some inbuilt support to
> specify inside a test a git + branch + commit sha instead of a tarball would be
> nice to avoid duplication of tests for different library versions (e.g. when
> testing previous image releases that get backports).
That's a good use case to support.

Actually, we're adding support for getting source for a test from git in Fuego
v1.2.  Some preliminary support is in the 'next' branch, but it still needs a 
bit of work (It doesn't support specifying the commit ID yet).  See the
function "unpack" in fuego-core/engine/scripts/functions.sh.
 
 -- Tim



* Re: [Fuego] First steps with fuego
       [not found]         ` <ECADFF3FD767C149AD96A924E7EA6EAF1FA88A98@USCULXMSG01.am.sony.com>
@ 2017-05-12  9:38           ` Rafael Gago Castano
  2017-05-12 18:05             ` Bird, Timothy
  0 siblings, 1 reply; 18+ messages in thread
From: Rafael Gago Castano @ 2017-05-12  9:38 UTC (permalink / raw)
  To: Bird, Timothy; +Cc: fuego

For some reason I didn't CC the previous mail to the mailing list but just to you.

> OK.  cmd *should* return the error code from the remote command.
> If it's not, then that's a bug.

Yes, I was in a rush yesterday and had no time to do things properly, but I wrote a terse while loop using "test -f" to poll for file existence on the DUT and it failed, so I wrongly assumed that it was a "cmd" error.

If I write this in the body of a test, the test gets interrupted early and never prints "after" (I'm using the ssh transport):

    echo "before"
    cmd "test -f /tmp/nonexistant-file"
    echo "after"

On the logs I see:

++ echo before
before
++ cmd 'test -f /tmp/nonexistant'
++ report_devlog 'cmd: test -f /tmp/nonexistant'
++ echo 'cmd: test -f /tmp/nonexistant'
++ ov_transport_cmd 'test -f /tmp/nonexistant'
++ case "$TRANSPORT" in
++ sshpass -e ssh -o ServerAliveInterval=30 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=30 -p 22 root@192.168.1.2 'test -f /tmp/nonexistant'
Warning: Permanently added '192.168.1.2' (ECDSA) to the list of known hosts.
+ signal_handler
+ echo 'in signal_handler'
in signal_handler

> Just FYI - I experimented a bit, and was able to do some conditional
> waiting on the target with:
>  ftc wait-for -t 60 "/bin/bash -c \"source $LOGDIR/prolog.sh ; ov_transport_cmd test -f /tmp/sync_file\""

> This could be improved syntactically, but it did work.  I envision extended 'ftc wait-for' to support
> something like:
>    ftc wait-for "target: test -f /tmp/sync_file"
> for this style of board-side conditional test.

As Fuego writes the test to run on the host, with the host issuing shell commands to the DUT (kind of master-slave), the host-to-DUT synchronization is already implicit. There is already a clear "happens before" relationship in place.

It's the other way around, the host syncing to the DUT, that I couldn't fix. I was attempting something similar to this (untested):

FUEGO_SYNC_PREFIX=/tmp/fuego-sync/

# Just clear all the sync files from previous runs
function on_test_start() {
    cmd rm -rf $FUEGO_SYNC_PREFIX && mkdir $FUEGO_SYNC_PREFIX
    #error handling
}

function wait_for_dut() {
    local FILE=$FUEGO_SYNC_PREFIX$1
    local SEC=999999
    if [[ ! -z $2 ]]; then 
        SEC=$2 
    fi
    local ELAPSED=0
    while [[ -n cmd "test -f $FILE" ]]; do
        if [[ $ELAPSED -lt $SEC ]]; then 
            return 1
        fi
        sleep 1
        ELAPSED=$((ELAPSED + 1))
    done
    cmd "rm $FILE"
    return 0
}

> This is probably something that would be good to add to the board file.

I hadn't thought that the board files are just sourced and that I can add variables there. That's powerful. In our case we have more than one serial port on each device, though.

> That looks great.  I don't know the failure modes for the serial port transfers
> so I can't say how robust this is, but it looks OK to me.  As long as stty observes
> the timeouts, then things shouldn't get stuck.
>
> I have a minor preference for using TAP format for the test output for these types
> of simple test  - basically formatting the output line as:
> ok <testnum> test description
> But that's not required, and I think this looks like a nice test.
>
> I'm trying to think how I can try this here.  I have my test board here with the
> serial port connected to the serial console, but maybe I could wire up a second
> one for testing.

Don't bother; I should have posted this as an RFC, I just found it easier to share it as a patch than by copying snippets. As the test is written now, it isn't guaranteed to work correctly: to do this test properly, the sender (HOST) would need to start sending (echo) only after making sure that the DUT is listening (after the launched "dut_rx.sh" script is past the "cat" command, i.e. cat launched as a subprocess). The test is failing for me today.



* Re: [Fuego] First steps with fuego
       [not found]       ` <HE1PR06MB3036959D51FD5FBFC02208C8C2ED0@HE1PR06MB3036.eurprd06.prod.outlook.com>
       [not found]         ` <ECADFF3FD767C149AD96A924E7EA6EAF1FA88A98@USCULXMSG01.am.sony.com>
@ 2017-05-12 14:33         ` Rafael Gago Castano
  2017-05-12 18:39           ` Bird, Timothy
  1 sibling, 1 reply; 18+ messages in thread
From: Rafael Gago Castano @ 2017-05-12 14:33 UTC (permalink / raw)
  To: Bird, Timothy; +Cc: fuego

As I don't have an infinite budget to investigate Fuego, I decided to skip looking at that signal error on "ls" and to assume that we can do communication tests with Fuego; it may need some refinements, but there isn't any big barrier. As a matter of fact, for our use case it's simpler than using LAVA multinode tests.

Then I started looking at the device "power off + power on + flashing + wait for boot" procedure. I tried to do it as a test, but there was a problem: when the device is rebooting it has no connectivity, so the test fails:

SDKTARGETSYSROOT=/opt/slp/dingo-next/sdk/sysroots/cortexa9hf-neon-oe-linux-gnueabi
Target_PreCleanup=true
_=/usr/bin/env
ssh: connect to host 192.168.1.2 port 22: No route to host
ssh: connect to host 192.168.1.2 port 22: No route to host

*** ABORTED ***

Fuego error reason: Cannot connect to 192.168.1.2 via ssh

in signal_handler
##### doing fuego phase: post_test ########

I guess that doing "power off + power on + flashing + wait for boot" would require a different approach than writing a test.

At first I thought about having two new types of tests, setup and teardown, but these aren't conceptually tests. IMO these belong to the board, so just adding optional "setup" and "teardown" functions to the board file could be a good starting point.
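
Purely as a hypothetical sketch of that idea (none of these hook names or helper tools exist in Fuego; they are invented here for illustration, with the two tools standing in for our internal Python program):

```shell
# Hypothetical board-file additions -- NOT a real Fuego feature as of
# this thread; hook and tool names are invented for illustration.
IPADDR="192.168.1.2"
LOGIN="root"
TRANSPORT="ssh"

board_setup() {
    # power-cycle, tftp-load and boot the image (stand-in for our tool)
    our-flash-tool --board mydut --images latest
    # block until the DUT answers, before any test in the plan runs
    ftc wait-for -t 300 "ping -c 1 $IPADDR"
}

board_teardown() {
    our-power-tool --board mydut --off
}
```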

Then the problem would be deciding when to run them. Running them as a test (of a new type) makes it very explicit to see and control when they run. Having them in the board file would need rules, or maybe flags for "ftc", because the setup/teardown functions should definitely run as the first and last actions of a testplan/batch, but they would be detrimental when developing tests.

Then the third approach would be to leave the board setup and teardown to the Jenkins side and let the user handle the integration. This gives the user more scattered configuration (boards configured in two places) but keeps Fuego simpler. I'm not very fluent with Jenkins hacking, but it seems doable too.

Which approach seems the best?


* Re: [Fuego] First steps with fuego
  2017-05-12  9:38           ` Rafael Gago Castano
@ 2017-05-12 18:05             ` Bird, Timothy
  2017-05-15  8:21               ` Rafael Gago Castano
  0 siblings, 1 reply; 18+ messages in thread
From: Bird, Timothy @ 2017-05-12 18:05 UTC (permalink / raw)
  To: Rafael Gago Castano; +Cc: fuego



> -----Original Message-----
> From: Rafael Gago Castano on Friday, May 12, 2017 2:39 AM
>
> For some reason I didn't CC the previous mail to the mailing list but just to
> you.
> 
> > OK.  cmd *should* return the error code from the remote command.
> > If it's not, then that's a bug.
> 
> Yes, I was in a rush yesterday and had no time to do things properly, but
> I wrote a terse while loop using "test -f" to poll for file existence on the
> DUT and it failed, so I wrongly assumed that it was a "cmd" error.
> 
> If I write this in the body of a test, the test gets interrupted early and never
> prints "after" (I'm using the ssh transport):
> 
>     echo "before"
>     cmd "test -f /tmp/nonexistant-file"
>     echo "after"
> 
> On the logs I see:
> 
> ++ echo before
> before
> ++ cmd 'test -f /tmp/nonexistant'
> ++ report_devlog 'cmd: test -f /tmp/nonexistant'
> ++ echo 'cmd: test -f /tmp/nonexistant'
> ++ ov_transport_cmd 'test -f /tmp/nonexistant'
> ++ case "$TRANSPORT" in
> ++ sshpass -e ssh -o ServerAliveInterval=30 -o StrictHostKeyChecking=no -o
> UserKnownHostsFile=/dev/null -o ConnectTimeout=30 -p 22
> root@192.168.1.2 'test -f /tmp/nonexistant'
> Warning: Permanently added '192.168.1.2' (ECDSA) to the list of known hosts.
> + signal_handler
> + echo 'in signal_handler'
> in signal_handler

The way Fuego core operates now, the test scripts are run with bash set -e
in effect.  That is, any command error (outside of an if, assignment, or
expression) will issue an error trap and move directly to post_test.
A command failure is interpreted as a hard error.  Possibly this is too aggressive,
but it is the same as the default behavior of Jenkins.

If you have a command whose failure does not indicate test failure
(something optional, or a test), then you can wrap it like so:

  set +e
  cmd "this might fail, but don't stop the test"
  set -e

We have been contemplating changing this, but that's how it works right now.
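
This behavior can be demonstrated with a small self-contained script. Each demo runs in a fresh bash process so the caller's context cannot interfere with 'set -e'; the EXIT trap stands in for Fuego's signal_handler, and the missing-file path is illustrative:

```shell
#!/bin/bash
# Demonstration of the 'set -e' behavior described above.

# Without the wrapper: the failing command aborts the script at that
# point, the trap fires, and "after" is never printed.
demo_plain() {
    bash <<'EOF'
set -e
trap 'echo "in signal_handler"' EXIT
echo "before"
test -f /nonexistent/some-file    # fails -> control jumps to the trap
echo "after"                      # never reached
EOF
}

# With the set +e / set -e wrapper: the failure is tolerated and the
# script reaches "after" before exiting normally.
demo_wrapped() {
    bash <<'EOF'
set -e
trap 'echo "in signal_handler"' EXIT
echo "before"
set +e
test -f /nonexistent/some-file    # may fail, but does not stop the test
set -e
echo "after"
EOF
}

demo_plain || true    # prints: before / in signal_handler
echo "---"
demo_wrapped          # prints: before / after / in signal_handler
```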

> > Just FYI - I experimented a bit, and was able to do some conditional
> > waiting on the target with:
> >  ftc wait-for -t 60 "/bin/bash -c \"source $LOGDIR/prolog.sh ;
> ov_transport_cmd test -f /tmp/sync_file\""
> 
> > This could be improved syntactically, but it did work.  I envision extended
> 'ftc wait-for' to support
> > something like:
> >    ftc wait-for "target: test -f /tmp/sync_file"
> > for this style of board-side conditional test.
> 
> As Fuego writes the test to run on the host, with the host issuing shell
> commands to the DUT (kind of master-slave), the host-to-DUT
> synchronization is already implicit. There is already a clear "happens before"
> relationship in place.
> 
> It's the other way around, the HOST syncing to the DUT that I couldn't fix. I
> was attempting something similar to this (untested):
> 
> FUEGO_SYNC_PREFIX=/tmp/fuego-sync/
> 
> # Just clear all the sync files from previous runs
> function on_test_start() {
>     cmd rm -rf $FUEGO_SYNC_PREFIX && mkdir $FUEGO_SYNC_PREFIX
>     #error handling
> }
> 
> function wait_for_dut() {
>     local FILE=$FUEGO_SYNC_PREFIX$1
>     local SEC=999999
>     if [[ ! -z $2 ]]; then
>         SEC=$2
>     fi
>     local ELAPSED=0
>     while [[ -n cmd "test -f $FILE" ]]; do
I'm not sure I follow the use of '-n' here.  Is 'test -f' noisy in one case
and silent in the other?  I would think that maybe it should be:
   while cmd "test -f $FILE" ; do

>         if [[ $ELAPSED -lt $SEC ]]; then
>             return 1
>         fi
>         sleep 1
>         ELAPSED=$((ELAPSED + 1))
>     done
>     cmd "rm $FILE"
>     return 0
> }
>

This looks like a good way to provide a generalized sync solution from target
to host, based on files.  I hope you don't mind if I copy parts of this if I implement
a synchronize function for the core. :-)

> > This is probably something that would be good to add to the board file.
> 
> I hadn't realized that the board files are just sourced and that I can add
> variables there. That's powerful. In our case we have more than one serial
> port on each device though.
> 
> > That looks great.  I don't know the failure modes for the serial port transfers
> > so I can't say how robust this is, but it looks OK to me.  As long as stty observes
> > the timeouts, then things shouldn't get stuck.
> >
> > I have a minor preference for using TAP format for the test output for these types
> > of simple tests - basically formatting the output line as:
> > ok <testnum> test description
> > But that's not required, and I think this looks like a nice test.
> >
> > I'm trying to think how I can try this here.  I have my test board here with the
> > serial port connected to the serial console, but maybe I could wire up a second
> > one for testing.
> 
> Don't bother, I should have posted this as an RFC; I just found it easier to share
> it as a patch than by copying snippets. As the test is written now it isn't
> guaranteed to work correctly; to do this test properly the sender (HOST)
> would need to start sending (echo) only after making sure that the DUT is
> listening (after the launched "dut_rx.sh" script is past the "cat" command, i.e.
> cat launched as a subprocess). The test is failing for me today.

OK - but it's our first serial port, multi-node test, and I was excited to see it.
:-)
 -- Tim


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Fuego] First steps with fuego
  2017-05-12 14:33         ` Rafael Gago Castano
@ 2017-05-12 18:39           ` Bird, Timothy
  2017-05-15 13:57             ` Rafael Gago Castano
  0 siblings, 1 reply; 18+ messages in thread
From: Bird, Timothy @ 2017-05-12 18:39 UTC (permalink / raw)
  To: Rafael Gago Castano; +Cc: fuego


> -----Original Message-----
> From: Rafael Gago Castano Friday, May 12, 2017 7:34 AM
> To: Bird, Timothy <Tim.Bird@sony.com>
> Cc: fuego@lists.linuxfoundation.org
> Subject: Re: First steps with fuego
> 
> As I don't have an infinite budget to investigate fuego, I decided to skip looking at
> that signal error on "ls" and to assume that we can do communication tests
> with fuego; it may need some refinements but there isn't any big barrier.

OK good.  Don't hesitate to ask questions.  We don't want people to burn
out trying to understand Fuego (which is admittedly a bit complicated),
and it's very helpful for us to see what issues come up and what
questions people have, so we can improve Fuego.

> As a matter of fact, for our use case it's simpler than using LAVA multinode
> tests.
That's good to hear.  Hopefully we can add some synchronization primitives
to match the functionality available with LAVA more easily.

> 
> Then I started looking at the device "power off + power on + flashing + wait
> for boot" procedure. I tried to do it as a test but that had a problem: while the
> device is rebooting it has no connectivity, so the test fails:
> 
> SDKTARGETSYSROOT=/opt/slp/dingo-next/sdk/sysroots/cortexa9hf-neon-oe-linux-gnueabi
> Target_PreCleanup=true
> _=/usr/bin/env
> ssh: connect to host 192.168.1.2 port 22: No route to host
> ssh: connect to host 192.168.1.2 port 22: No route to host
> 
> *** ABORTED ***
> 
> Fuego error reason: Cannot connect to 192.168.1.2 via ssh
> 
> in signal_handler
> ##### doing fuego phase: post_test ########
> 
> I guess that doing "power off + power on + flashing + wait for boot" would
> require a different approach than writing a test.
> 
> At first I thought about having two new types of tests: setup and teardown,
> but these aren't conceptually tests. IMO these belong to the board, so just
> adding optional "setup" and "teardown" functions to the board file could
> be a good starting point.
> 
> Then the problem would be deciding when to run them. Running them as a
> test (of a new type) makes it very explicit and easy to control when they run.
> Having them in the board file would need rules or maybe flags for "ftc",
> because the setup/teardown functions should definitely run as the first and
> last actions of a testplan/batch, but they would get in the way when
> developing tests.
It sounds like this is the best approach.  See below for some existing 
fuego functionality that might be useful for this.

> 
> Then the third approach would be to leave the board setup and teardown to
> the Jenkins side and let the user handle the integration. This gives the user
> a more scattered configuration (boards configured in two places) but
> keeps Fuego simpler. I'm not very fluent with Jenkins hacking but it seems
> doable too.
> 
> Which approach seems the best?

There are variables now, that you can define on a board, to handle
link setup and link teardown.  I think these could also be used for
the purpose of system redeploy and board boot.

If you define TARGET_SETUP_LINK in your board file, as the name
of a function, then Fuego will call that function during the 'pre_test' phase.

This is not well-documented on the wiki (there's one obscure
reference to it on this page: http://bird.org/fuego/function_pre_test).
The reason it is not well-documented yet is that this functionality
is currently undergoing some change.  We are switching to using
explicit transport routines, which can be overridden in the board file.
That is, I've just added support for new routines ov_transport_connect
and ov_transport_disconnect that are intended to replace this
functionality.  However, we plan on leaving in support for this
functionality for legacy reasons, and you should be able to use the
current technique to do anything you want to get the board
operational and ready for network communication (including build,
system deploy, and boot), in your own custom routine.

Here's an example (I haven't tested):
You put this in your <board>.board file for the DUT.

function my_custom_board_bringup {
  echo "do a bunch of interesting stuff here"
}
TARGET_SETUP_LINK="my_custom_board_bringup"

By default, things defined in the board file will be executed for
every test. However, you might make the setup conditional
by testing some condition inside your custom setup function, to decide
when the operation is needed.  For example, maybe you could set
a variable in a spec file that is examined in the custom routine to decide
whether to skip it.

Note that similar functionality for link teardown (or board shutdown)
is available using TARGET_TEARDOWN_LINK, which is called in the
post_test phase by Fuego.

I hope this addresses what you need.
 -- Tim



* Re: [Fuego] First steps with fuego
  2017-05-12 18:05             ` Bird, Timothy
@ 2017-05-15  8:21               ` Rafael Gago Castano
  0 siblings, 0 replies; 18+ messages in thread
From: Rafael Gago Castano @ 2017-05-15  8:21 UTC (permalink / raw)
  To: Bird, Timothy; +Cc: fuego

> If you have a command whose failure does not indicate test failure
> (something optional, or a check), then you can wrap it like so:
>
>  set +e
>  cmd "this might fail, but don't stop the test"
>  set -e
>
> We have been contemplating changing this, but that's how it works right now.

Then there is no need to change anything IMO; the +e / -e setting can be wrapped in a function.
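Something like this sketch, for instance. `cmd_may_fail` is a name I made up (not a Fuego helper), and `cmd` is stubbed to run locally so the snippet can be tried standalone:

```shell
set -e

# Local stub for Fuego's 'cmd' (which would run the command on the DUT);
# here it just runs the command on the host so the sketch is self-contained.
cmd() { eval "$*"; }

# Run a command whose failure should not abort the test,
# preserving its exit status for the caller to inspect.
cmd_may_fail() {
    set +e
    cmd "$@"
    local rc=$?
    set -e
    return $rc
}

cmd_may_fail "false" || echo "optional step failed, continuing"
# prints: optional step failed, continuing
cmd "echo mandatory step"
# prints: mandatory step
```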

> >     while [[ -n cmd "test -f $FILE" ]]; do
> I'm not sure I follow the use of '-n' here.  Is 'test -f' noisy in one case
> and silent in the other?  I would think that maybe it should be:
>   while cmd "test -f $FILE" ; do

It's actually a mistake. It's meant to loop for as long as the sync file doesn't exist:

    until cmd "test -f $FILE" ; do
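For completeness, here is a self-contained sketch of the corrected loop. It is still untested against a real DUT; `cmd` is stubbed to run locally so the snippet can be tried standalone:

```shell
set -e

# Local stub for Fuego's 'cmd'; in Fuego this would execute the
# command on the DUT over the configured transport.
cmd() { eval "$*"; }

FUEGO_SYNC_PREFIX=/tmp/fuego-sync/
mkdir -p "$FUEGO_SYNC_PREFIX"

# Wait until the DUT creates $FUEGO_SYNC_PREFIX$1, giving up after $2
# seconds (defaults to a very long wait). Consumes the sync file on success.
# Returns 0 on sync, 1 on timeout.
wait_for_dut() {
    local FILE=$FUEGO_SYNC_PREFIX$1
    local SEC=${2:-999999}
    local ELAPSED=0
    until cmd "test -f $FILE"; do
        if [ "$ELAPSED" -ge "$SEC" ]; then
            return 1
        fi
        sleep 1
        ELAPSED=$((ELAPSED + 1))
    done
    cmd "rm $FILE"
    return 0
}

# Demonstration: pretend the DUT already created the sync file.
touch "${FUEGO_SYNC_PREFIX}ready"
wait_for_dut ready 5 && echo "synced"
# prints: synced
```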

> This looks like a good way to provide a generalized sync solution from target
> to host, based on files.  I hope you don't mind if I copy parts of this if I implement
> a synchronize function for the core. :-)

Sure!
    


* Re: [Fuego] First steps with fuego
  2017-05-12 18:39           ` Bird, Timothy
@ 2017-05-15 13:57             ` Rafael Gago Castano
  2017-05-15 22:37               ` Bird, Timothy
  0 siblings, 1 reply; 18+ messages in thread
From: Rafael Gago Castano @ 2017-05-15 13:57 UTC (permalink / raw)
  To: Bird, Timothy; +Cc: fuego

> Here's an example (I haven't tested):
> You put this in your <board>.board file for the DUT.
> 
> function my_custom_board_bringup {
>   echo "do a bunch of interesting stuff here"
> }
> TARGET_SETUP_LINK="my_custom_board_bringup"

This doesn't work as-is. 

/fuego-core/engine/scripts/functions.sh: line 264: BOARD_SETUP: command not found

If I add an "echo" to the board file it doesn't echo anything.  If I add an eval to line 264 of functions.sh it still fails. Grepping the repository for TARGET_SETUP_LINK gives only one result (functions.sh:264).

I had no time to look at the "ftc" source but is it possible that ftc is filtering out the functions?

The obvious workaround is to place the setup/teardown scripts in /usr/bin and pass parameters to them, but it would be nice to keep all the Fuego-related things together and explicit.





* Re: [Fuego] First steps with fuego
  2017-05-15 13:57             ` Rafael Gago Castano
@ 2017-05-15 22:37               ` Bird, Timothy
  2017-05-16 14:38                 ` Rafael Gago Castano
  0 siblings, 1 reply; 18+ messages in thread
From: Bird, Timothy @ 2017-05-15 22:37 UTC (permalink / raw)
  To: Rafael Gago Castano; +Cc: fuego

> -----Original Message-----
> From: Rafael Gago Castano on  Monday, May 15, 2017 6:57 AM
> 
> > Here's an example (I haven't tested):
> > You put this in your <board>.board file for the DUT.
> >
> > function my_custom_board_bringup {
> >   echo "do a bunch of interesting stuff here"
> > }
> > TARGET_SETUP_LINK="my_custom_board_bringup"
> 
> This doesn't work as-is.
> 
> /fuego-core/engine/scripts/functions.sh: line 264: BOARD_SETUP: command
> not found
> 
> If I add an "echo" to the board file it doesn't echo anything.  If I add an
> eval to line 264 of functions.sh it still fails. Grepping the repository for
> TARGET_SETUP_LINK gives only one result (functions.sh:264).
> 
> I had no time to look at the "ftc" source but is it possible that ftc is filtering
> out the functions?
It is indeed.  Sorry for that.  I should have tested it here before sending it.
I was able to get it to work with a little more effort. 

Here is what is happening:
The board file, although it looks like a shell script, is actually not. It is
processed by a program called ovgen.py (overlay generator).  This reads the
board file, Fuego's "class" files, and the test spec, and produces a
file called prolog.sh.  This is stored in the log directory for the test run.
prolog.sh is indeed a shell script, and it is sourced into the running shell.
In this process, regular functions are stripped from the board file.
I'm not sure of the reason for this, and plan to look into whether it is
necessary.

In any event, you *can* define something called an override function
in the board file, and that *will* appear in prolog.sh, and thus in the
running shell environment of the base script.  However, you currently
can only override an existing base class function.  So, to make a short
story long, here's what you can do:

Add a stub function to fuego-core/engine/overlays/base/base-board.fuegoclass,
like so:
function ov_board_setup() {
	return
}
By convention, functions in the base class start with "ov_"
which is short for "overlay".  I'm not sure that's required, but
it won't hurt for now.

Now, define an override function in your board file, like so:

override-func ov_board_setup() {
	echo "do your setup operations here"
}
TARGET_SETUP_LINK="ov_board_setup"

This would go in fuego/fuego-ro/boards/<your_board_name>.board

Please let me know if this works for you.  If not, can you please
post the console log for the test?  Make sure you define FUEGO_DEBUG=1
in the job configuration for the test.

Thanks.
 -- Tim



* Re: [Fuego] First steps with fuego
  2017-05-15 22:37               ` Bird, Timothy
@ 2017-05-16 14:38                 ` Rafael Gago Castano
  2017-05-17  5:06                   ` Bird, Timothy
  0 siblings, 1 reply; 18+ messages in thread
From: Rafael Gago Castano @ 2017-05-16 14:38 UTC (permalink / raw)
  To: Bird, Timothy; +Cc: fuego

> Add a stub function to fuego-core/engine/overlays/base/base-board.fuegoclass,
> like so:
> function ov_board_setup() {
>         return
> }
> By convention, functions in the base class start with "ov_"
> which is short for "overlay".  I'm not sure that's required, but
> it won't hurt for now.
> 
> Now, define an override function in your board file, like so:
> 
> override-func ov_board_setup() {
>         echo "do your setup operations here"
> }
> TARGET_SETUP_LINK="ov_board_setup"
> 
> This would go in fuego/fuego-ro/boards/<your_board_name>.board
> 
> Please let me know if this works for you. 

It does. Thanks; now that you posted it, I remember reading something about override-func in the manual.

I ended up implementing board setup/teardown functions that are only enabled by a variable set by a setup/teardown test. So now I have a setup test, a serial RX test, and a teardown test that are working and integrated with our environment. That's quite an achievement considering how little time I have spent.

Then I added the setup and teardown tests as the first and last ones of a test plan, but it looks like Jenkins doesn't respect the testplan .json file order in the generated .batch file; it looks as if the tests are sorted alphabetically.

How are test runs meant to be scheduled/triggered on Jenkins without user intervention?  I tried ftc put-request but it doesn't return a request id.

PS: I sent another variant of the serial RX test to the mailing list. That one is more compact and doesn't require any modification to Fuego. Previously I was trying to synchronize through "primitives" (I was still thinking as if I were using LAVA); Fuego has implicit host-DUT synchronization.

    


* Re: [Fuego] First steps with fuego
  2017-05-16 14:38                 ` Rafael Gago Castano
@ 2017-05-17  5:06                   ` Bird, Timothy
  2017-05-17  6:35                     ` Rafael Gago Castano
  0 siblings, 1 reply; 18+ messages in thread
From: Bird, Timothy @ 2017-05-17  5:06 UTC (permalink / raw)
  To: Rafael Gago Castano; +Cc: fuego



> -----Original Message-----
> From: Rafael Gago Castano on Tuesday, May 16, 2017 7:38 AM
> 
> > Add a stub function to fuego-core/engine/overlays/base/base-board.fuegoclass,
> > like so:
> > function ov_board_setup() {
> >         return
> > }
> > By convention, functions in the base class start with "ov_"
> > which is short for "overlay".  I'm not sure that's required, but
> > it won't hurt for now.
> >
> > Now, define an override function in your board file, like so:
> >
> > override-func ov_board_setup() {
> >         echo "do your setup operations here"
> > }
> > TARGET_SETUP_LINK="ov_board_setup"
> >
> > This would go in fuego/fuego-ro/boards/<your_board_name>.board
> >
> > Please let me know if this works for you.
> 
> It does. Thanks, now that you posted it I remember about reading something
> about override-func on the manual.
> 
> I ended up implementing board setup/teardown functions that are only
> enabled by a variable set by a setup/teardown test. So now I have a setup
> test, a serial RX test, and a teardown test that are working and integrated
> with our environment. That's quite an achievement considering how little
> time I have spent.
> 
> Then I added the setup and teardown tests as the first and last ones of a
> test plan, but it looks like Jenkins doesn't respect the testplan .json file
> order in the generated .batch file; it looks as if the tests are sorted
> alphabetically.
I vaguely recall something about test ordering.  I'll have to look into this.
I believe the latest Jenkins versions have a different pipeline model, that
may allow better specification of test ordering.  Does anyone else know
about the batch job ordering in Jenkins?

> 
> How are test runs meant to be scheduled/triggered on jenkins without user
> intervention?  I tried ftc put-request but it doesn't return a request id.

'ftc put-request' is for sending a job to the Fuego global server (which is
only a prototype at the moment).  I saw your jobs there.  This feature is not
complete yet, but is eventually intended to allow developers to send tests
between sites, and request a test to be run on someone else's hardware.
The ftc command you probably want is 'ftc build-job'.  This can be used from
the command line to have Jenkins initiate one of its jobs.

I'm not sure if this command was working in the 1.1 release, but there's
also a Jenkins command line feature that can be used to accomplish the
same thing.  See here:
https://jenkins.io/doc/book/managing/cli/

This is if you want to start jobs from the command line.
Jenkins also supports a variety of mechanisms for triggering jobs, including a cron-like
feature to start a job at a given time, as well as triggering based on git commits, via
various plugins.  Let me know if you want links to some resources on how to do this.
(or just Google "jenkins trigger build").
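For example, a cron-style schedule (a hypothetical one; adjust to taste) entered in the job's "Build periodically" trigger field would look like:

```text
# Jenkins schedule fields: MINUTE HOUR DOM MONTH DOW.
# 'H' lets Jenkins spread start times to avoid load spikes;
# this runs the job nightly at some minute during the 02:00 hour.
H 2 * * *
```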

Note that in Jenkins terminology a "build" is an execution of a test "job".
If what you're trying to do is start one build that then starts another one, and then
another, serialized in cascade fashion, then I think you can use the Multijob
plugin.  See https://wiki.jenkins-ci.org/display/JENKINS/Multijob+Plugin

I haven't used it, but it looks like it would accomplish the ordering you are interested in.
 -- Tim



* Re: [Fuego] First steps with fuego
  2017-05-17  5:06                   ` Bird, Timothy
@ 2017-05-17  6:35                     ` Rafael Gago Castano
  2017-05-17 15:33                       ` Bird, Timothy
  0 siblings, 1 reply; 18+ messages in thread
From: Rafael Gago Castano @ 2017-05-17  6:35 UTC (permalink / raw)
  To: Bird, Timothy; +Cc: fuego

Thank you for the help and references. I'll check that. 

Tomorrow I will present my findings in a meeting, and it will be decided whether we go forward and invest more in Fuego, with the objective of eventually replacing our working LAVA setup. I hope we go forward with this. If we don't, I'll at least try to provide feedback about what the others thought.


* Re: [Fuego] First steps with fuego
  2017-05-17  6:35                     ` Rafael Gago Castano
@ 2017-05-17 15:33                       ` Bird, Timothy
  2017-05-22  8:23                         ` Rafael Gago Castano
  0 siblings, 1 reply; 18+ messages in thread
From: Bird, Timothy @ 2017-05-17 15:33 UTC (permalink / raw)
  To: Rafael Gago Castano; +Cc: fuego



> -----Original Message-----
> From: Rafael Gago Castano  on Tuesday, May 16, 2017 11:36 PM
> Thank you for the help and references. I'll check that.
> 
> Tomorrow I will show my findings on a meeting and it will be decided if we go
> forward and invest more in Fuego with the objective of replacing our working
> LAVA setup in the future. I hope to go forward with this. If we don't at least
> I'll try to provide feedback about what the others thought.

Thanks very much!  Even if you decide to stick with LAVA, getting feedback on
what Fuego is missing, or could improve, is very helpful.

 -- Tim


* Re: [Fuego] First steps with fuego
  2017-05-17 15:33                       ` Bird, Timothy
@ 2017-05-22  8:23                         ` Rafael Gago Castano
  2017-05-22 20:51                           ` Bird, Timothy
  0 siblings, 1 reply; 18+ messages in thread
From: Rafael Gago Castano @ 2017-05-22  8:23 UTC (permalink / raw)
  To: Bird, Timothy, fuego

It looks like we'll continue looking at fuego :)


* Re: [Fuego] First steps with fuego
  2017-05-22  8:23                         ` Rafael Gago Castano
@ 2017-05-22 20:51                           ` Bird, Timothy
  0 siblings, 0 replies; 18+ messages in thread
From: Bird, Timothy @ 2017-05-22 20:51 UTC (permalink / raw)
  To: Rafael Gago Castano, fuego

> -----Original Message-----
> From: Rafael Gago Castano on Monday, May 22, 2017 1:23 AM 
> It looks like we'll continue looking at fuego :)

That's great news!  I hope we can provide you with valuable tests
and QA services for Linux.
 -- Tim



end of thread, other threads:[~2017-05-22 20:51 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-05-09  9:49 [Fuego] First steps with fuego Rafael Gago Castano
2017-05-09 19:04 ` Bird, Timothy
2017-05-10  6:32   ` Rafael Gago Castano
2017-05-10 13:12   ` Rafael Gago Castano
2017-05-10 19:30     ` Bird, Timothy
     [not found]       ` <HE1PR06MB3036959D51FD5FBFC02208C8C2ED0@HE1PR06MB3036.eurprd06.prod.outlook.com>
     [not found]         ` <ECADFF3FD767C149AD96A924E7EA6EAF1FA88A98@USCULXMSG01.am.sony.com>
2017-05-12  9:38           ` Rafael Gago Castano
2017-05-12 18:05             ` Bird, Timothy
2017-05-15  8:21               ` Rafael Gago Castano
2017-05-12 14:33         ` Rafael Gago Castano
2017-05-12 18:39           ` Bird, Timothy
2017-05-15 13:57             ` Rafael Gago Castano
2017-05-15 22:37               ` Bird, Timothy
2017-05-16 14:38                 ` Rafael Gago Castano
2017-05-17  5:06                   ` Bird, Timothy
2017-05-17  6:35                     ` Rafael Gago Castano
2017-05-17 15:33                       ` Bird, Timothy
2017-05-22  8:23                         ` Rafael Gago Castano
2017-05-22 20:51                           ` Bird, Timothy
