* [Qemu-devel] tcmu-runner and QEMU
@ 2014-08-29 17:22 Benoît Canet
  2014-08-29 18:38 ` Andy Grover
                   ` (2 more replies)
  0 siblings, 3 replies; 24+ messages in thread
From: Benoît Canet @ 2014-08-29 17:22 UTC (permalink / raw)
  To: qemu-devel; +Cc: kwolf, pbonzini, agrover, stefanha


Hi list,

Following Paolo's suggestion, I started discussing privately with
Andy about integrating LIO and the QEMU block layer together using
tcmu-runner: https://github.com/agrover/tcmu-runner.

The rationale is that it would be very handy to be able to export one of the numerous QEMU
image formats over iSCSI or FCoE via the LIO kernel target.

For example, a cloud provider would be able to provision either a bare metal instance
(some hardware knows how to boot from iSCSI and FCoE) or a virtualized instance while
using the same QCOW2 backing chain.

The consequence is that the end user would be able to switch back and forth between
small virtualized hardware and monster bare metal hardware while keeping the same data
in the same volumes.

Quoting Andy:
"My initial thought is that we don't want to make tcmu-runner
QEMU-specific, what we really want is tcmu-runner to be able to use
QEMU's multitude of BlockDrivers. Ideally the BlockDrivers could be
compiled as loadable modules that could then be loaded by QEMU or
tcmu-runner. Or if that's not possible then we might need to build a
tcmu-runner handler as part of QEMU, similar to how qemu-nbd is built?"

The truth is that QEMU block drivers don't know how to do much on their own,
so we probably must bring the whole QEMU block layer into a tcmu-runner handler plugin.

Another reason to do this is that the QEMU block layer brings features like taking
snapshots or streaming snapshots that a cloud provider would want to keep while exporting
QCOW2 over iSCSI or FCoE.

These operations are usually done by passing something like
"--qmp tcp:localhost,4444,server,nowait" as a QEMU command line argument, then
connecting to this JSON socket and sending commands to QEMU.
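
For reference, talking to that socket looks roughly like this (a minimal sketch in
Python, standard library only; the host and port are the ones from the example above,
and asynchronous events are simply skipped):

import json
import socket

def qmp_command(chan, command, arguments=None):
    # Send one QMP command and return its reply, skipping asynchronous events.
    msg = {"execute": command}
    if arguments:
        msg["arguments"] = arguments
    chan.write(json.dumps(msg) + "\n")
    chan.flush()
    while True:
        reply = json.loads(chan.readline())
        if "return" in reply or "error" in reply:
            return reply

chan = socket.create_connection(("localhost", 4444)).makefile("rw")
json.loads(chan.readline())                  # server greeting: {"QMP": {...}}
qmp_command(chan, "qmp_capabilities")        # mandatory handshake
print(qmp_command(chan, "query-block"))      # e.g. list the block devices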

I made some patches to split this QMP machinery out of the QEMU binary, but I still
don't know how a tcmu-runner handler plugin would be able to receive this command
line configuration.

Some other configuration would also be needed to set up the QEMU block layer properly:
for example, which cache mode should the handler use?

So passing configuration to the QEMU block plugin would be the first critical point.

The second problem is that the QEMU block layer is big and filled with scary stuff like
threads and coroutines, but I think only trying to write the tcmu-runner handler will
tell if it's doable.

Best regards

Benoît

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-08-29 17:22 [Qemu-devel] tcmu-runner and QEMU Benoît Canet
@ 2014-08-29 18:38 ` Andy Grover
  2014-08-29 18:51   ` Benoît Canet
  2014-08-30 14:46 ` Richard W.M. Jones
  2014-09-02  9:25 ` Stefan Hajnoczi
  2 siblings, 1 reply; 24+ messages in thread
From: Andy Grover @ 2014-08-29 18:38 UTC (permalink / raw)
  To: Benoît Canet, qemu-devel; +Cc: kwolf, pbonzini, stefanha

On 08/29/2014 10:22 AM, Benoît Canet wrote:
> The truth is that QEMU block drivers don't know how to do much on their own,
> so we probably must bring the whole QEMU block layer into a tcmu-runner handler plugin.

Woah! Really? ok...

> Another reason to do this is that the QEMU block layer brings features like taking
> snapshots or streaming snapshots that a cloud provider would want to keep while exporting
> QCOW2 over iSCSI or FCoE.
>
> These operations are usually done by passing something like
> "--qmp tcp:localhost,4444,server,nowait" as a QEMU command line argument, then
> connecting to this JSON socket and sending commands to QEMU.

The LIO TCMU backend and tcmu-runner provide for a configstring that is 
associated with a given backstore. This is made available to the 
handler, and sounds like just what qmp needs.

> I made some patches to split this QMP machinery out of the QEMU binary, but I still
> don't know how a tcmu-runner handler plugin would be able to receive this command
> line configuration.

The flow would be:
1) admin configures a LIO backstore of type "user", size 10G, and gives 
it a configstring like "qmp/tcp:localhost,4444,server,nowait"
2) admin exports the backstore via whatever LIO-supported fabric(s) 
(e.g. iSCSI)
3) tcmu-runner is notified of the new user backstore from step 1, finds 
the handler associated with "qmp", calls 
handler->open("tcp:localhost,4444,server,nowait")
4) qmp handler parses string and does whatever it needs to do
5) handler receives SCSI commands as they arrive
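
Purely to illustrate step 3 (tcmu-runner itself is C and this handler API is still
settling down, so treat the following as pseudocode with made-up names):

# Hypothetical sketch: split a configstring such as
# "qmp/tcp:localhost,4444,server,nowait" into the handler name and the
# handler's private configuration, then hand the latter to the handler.
handlers = {}                        # registry filled by the loaded handler plugins

def open_backstore(configstring):
    subtype, _, cfg = configstring.partition("/")   # "qmp", "tcp:localhost,4444,..."
    handler = handlers[subtype]                     # find the handler for "qmp"
    return handler.open(cfg)                        # handler parses its own config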

> The second problem is that the QEMU block layer is big and filled with scary stuff like
> threads and coroutines, but I think only trying to write the tcmu-runner handler will
> tell if it's doable.

Yeah, could be tricky but would be pretty cool if it works. Let me know 
how I can help, or with any questions.

Regards -- Andy

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-08-29 18:38 ` Andy Grover
@ 2014-08-29 18:51   ` Benoît Canet
  2014-08-29 22:36     ` Andy Grover
  0 siblings, 1 reply; 24+ messages in thread
From: Benoît Canet @ 2014-08-29 18:51 UTC (permalink / raw)
  To: Andy Grover; +Cc: Benoît Canet, kwolf, qemu-devel, stefanha, pbonzini

The Friday 29 Aug 2014 à 11:38:14 (-0700), Andy Grover wrote :
> On 08/29/2014 10:22 AM, Benoît Canet wrote:
> >The truth is that QEMU block drivers don't know how to do much on their own,
> >so we probably must bring the whole QEMU block layer into a tcmu-runner handler plugin.
> 
> Woah! Really? ok...
> 
> >Another reason to do this is that the QEMU block layer brings features like taking
> >snapshots or streaming snapshots that a cloud provider would want to keep while exporting
> >QCOW2 over iSCSI or FCoE.
> >
> >These operations are usually done by passing something like
> >"--qmp tcp:localhost,4444,server,nowait" as a QEMU command line argument, then
> >connecting to this JSON socket and sending commands to QEMU.
> 
> The LIO TCMU backend and tcmu-runner provide for a configstring that is
> associated with a given backstore. This is made available to the handler,
> and sounds like just what qmp needs.
> 
> >I made some patches to split this QMP machinery out of the QEMU binary, but I still
> >don't know how a tcmu-runner handler plugin would be able to receive this command
> >line configuration.
> 
> The flow would be:
> 1) admin configures a LIO backstore of type "user", size 10G, and gives it a
> configstring like "qmp/tcp:localhost,4444,server,nowait"
> 2) admin exports the backstore via whatever LIO-supported fabric(s) (e.g.
> iSCSI)
> 3) tcmu-runner is notified of the new user backstore from step 1, finds the
> handler associated with "qmp", calls
> handler->open("tcp:localhost,4444,server,nowait")
> 4) qmp handler parses string and does whatever it needs to do
> 5) handler receives SCSI commands as they arrive

QMP is just a way to control QEMU via a socket: it is not particularly block related.

On the other hand, bringing the whole block layer into a tcmu-runner handler
would mean that there would be _one_ QMP socket opened
(by means of wonderful QEMU module static variables :) to control the multiple
exported block devices.

So I think the configuration must be passed before an individual open occurs:
it would be global to the .so implementing the tcmu-runner handler.

But I don't see how to do it with the current API.

Best regards

Benoît

> 
> >The second problem is that the QEMU block layer is big and filled with scary stuff like
> >threads and coroutines, but I think only trying to write the tcmu-runner handler will
> >tell if it's doable.
> 
> Yeah, could be tricky but would be pretty cool if it works. Let me know how
> I can help, or with any questions.
> 
> Regards -- Andy
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-08-29 18:51   ` Benoît Canet
@ 2014-08-29 22:36     ` Andy Grover
  2014-08-29 22:46       ` Benoît Canet
  0 siblings, 1 reply; 24+ messages in thread
From: Andy Grover @ 2014-08-29 22:36 UTC (permalink / raw)
  To: Benoît Canet; +Cc: kwolf, pbonzini, qemu-devel, stefanha

On 08/29/2014 11:51 AM, Benoît Canet wrote:
> QMP is just a way to control QEMU via a socket: it is not particularly block related.
>
> On the other hand, bringing the whole block layer into a tcmu-runner handler
> would mean that there would be _one_ QMP socket opened
> (by means of wonderful QEMU module static variables :) to control the multiple
> exported block devices.
>
> So I think the configuration must be passed before an individual open occurs:
> it would be global to the .so implementing the tcmu-runner handler.
>
> But I don't see how to do it with the current API.

This discussion leads me to think we need to step back and discuss our 
requirements. I am looking for flexible backstores for SCSI-based 
fabrics, with as little new code as possible. I think you are looking 
for a way to export QEMU block devices over iSCSI and other fabrics?

I don't think making a LIO userspace handler into basically a 
full-fledged secondary QEMU server instance is the way to go. What I 
think better serves your requirements is to enable QEMU to configure LIO.

In a previous email you wrote:
> Another reason to do this is that the QEMU block layer brings
> features like taking snapshots or streaming snapshots that a cloud
> provider would want to keep while exporting QCOW2 over iSCSI or FCoE.

Whether a volume is exported over iSCSI or FCoE or not shouldn't affect 
how it is managed. QMP commands should go to the single QEMU server, 
which can then optionally configure LIO to export the volume. That 
leaves us with the issue that we'd need to arbitrate access to the 
backing file if taking a streaming snapshot (qemu and tcmu-runner 
processes both accessing the img), but that should be straightforward, 
or at least work that can be done in a second phase of development.

Thoughts?

Regards -- Andy

p.s. offline Monday.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-08-29 22:36     ` Andy Grover
@ 2014-08-29 22:46       ` Benoît Canet
  0 siblings, 0 replies; 24+ messages in thread
From: Benoît Canet @ 2014-08-29 22:46 UTC (permalink / raw)
  To: Andy Grover; +Cc: Benoît Canet, kwolf, qemu-devel, stefanha, pbonzini

The Friday 29 Aug 2014 à 15:36:41 (-0700), Andy Grover wrote :
> On 08/29/2014 11:51 AM, Benoît Canet wrote:
> >QMP is just a way to control QEMU via a socket: it is not particularly block related.
> >
> >On the other hand, bringing the whole block layer into a tcmu-runner handler
> >would mean that there would be _one_ QMP socket opened
> >(by means of wonderful QEMU module static variables :) to control the multiple
> >exported block devices.
> >
> >So I think the configuration must be passed before an individual open occurs:
> >it would be global to the .so implementing the tcmu-runner handler.
> >
> >But I don't see how to do it with the current API.
> 
> This discussion leads me to think we need to step back and discuss our
> requirements. I am looking for flexible backstores for SCSI-based fabrics,
> with as little new code as possible.

> I think you are looking for a way to
> export QEMU block devices over iSCSI and other fabrics?

True.

> 
> I don't think making a LIO userspace handler into basically a full-fledged
> secondary QEMU server instance is the way to go. What I think better serves
> your requirements is to enable QEMU to configure LIO.

OK, as long as there is an efficient way for QEMU to process LIO requests.

> 
> In a previous email you wrote:
> >Another reason to do this is that the QEMU block layer brings
> >features like taking snapshots or streaming snapshots that a cloud
> >provider would want to keep while exporting QCOW2 over iSCSI or FCoE.
> 
> Whether a volume is exported over iSCSI or FCoE or not shouldn't affect how
> it is managed. QMP commands should go to the single QEMU server, which can
> then optionally configure LIO to export the volume. That leaves us with the
> issue that we'd need to arbitrate access to the backing file if taking a
> streaming snapshot (qemu and tcmu-runner processes both accessing the img),
> but that should be straightforward, or at least work that can be done in a
> second phase of development.

It makes sense for QEMU to trigger the LIO export by itself because it will be
easier to integrate this feature into libvirt or another management stack.

Best regards

Benoît

> 
> Thoughts?
> 
> Regards -- Andy
> 
> p.s. offline Monday.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-08-29 17:22 [Qemu-devel] tcmu-runner and QEMU Benoît Canet
  2014-08-29 18:38 ` Andy Grover
@ 2014-08-30 14:46 ` Richard W.M. Jones
  2014-08-30 15:53   ` Benoît Canet
  2014-09-02  9:25 ` Stefan Hajnoczi
  2 siblings, 1 reply; 24+ messages in thread
From: Richard W.M. Jones @ 2014-08-30 14:46 UTC (permalink / raw)
  To: Benoît Canet; +Cc: kwolf, pbonzini, agrover, qemu-devel, stefanha

For the benefit of those who have absolutely no idea what you're
talking about, could you write a simpler summary of what you're trying
to do?

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
libguestfs lets you edit virtual machines.  Supports shell scripting,
bindings from many languages.  http://libguestfs.org

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-08-30 14:46 ` Richard W.M. Jones
@ 2014-08-30 15:53   ` Benoît Canet
  2014-08-30 16:02     ` Richard W.M. Jones
  0 siblings, 1 reply; 24+ messages in thread
From: Benoît Canet @ 2014-08-30 15:53 UTC (permalink / raw)
  To: Richard W.M. Jones
  Cc: Benoît Canet, kwolf, qemu-devel, stefanha, pbonzini, agrover

The Saturday 30 Aug 2014 à 15:46:41 (+0100), Richard W.M. Jones wrote :
> For the benefit of those who have absolutely no idea what you're
> talking about, could you write a simpler summary of what you're trying
> to do?
> 
> Rich.

Hello,

Most cloud providers sell virtualized instances using either Xen or KVM.

However, another trend is to provide bare metal instances for people who want the highest
CPU and network performance possible (typically people doing computations with MPI).

So a cloud end user needs to be able to instantiate a virtual machine, use it for a while,
then stop the virtual machine, change the hardware type to bare metal and restart the instance
while keeping the same boot volume.

QEMU keeps a virtual machine's data stored in one of its numerous storage backend formats
like QCOW2 or QED.

If the cloud provider wants to be able to boot QCOW2 or QED images on bare metal machines,
he will need to export the QCOW2 or QED images over the network.

So far only qemu-nbd allows this, and it neither performs well nor is really convenient
to boot from on a bare metal machine.

To summarize, I am looking for a way to export a QCOW2 or QED image as an iSCSI or FCoE
target while keeping all the goodies these formats provide (taking snapshots for backup,
streaming, mirroring).

Reusing LIO code would help tremendously to simplify this task.

Best regards

Benoît

> 
> -- 
> Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> libguestfs lets you edit virtual machines.  Supports shell scripting,
> bindings from many languages.  http://libguestfs.org
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-08-30 15:53   ` Benoît Canet
@ 2014-08-30 16:02     ` Richard W.M. Jones
  2014-08-30 16:04       ` Richard W.M. Jones
                         ` (2 more replies)
  0 siblings, 3 replies; 24+ messages in thread
From: Richard W.M. Jones @ 2014-08-30 16:02 UTC (permalink / raw)
  To: Benoît Canet; +Cc: kwolf, pbonzini, agrover, qemu-devel, stefanha

On Sat, Aug 30, 2014 at 05:53:43PM +0200, Benoît Canet wrote:
> The Saturday 30 Aug 2014 à 15:46:41 (+0100), Richard W.M. Jones wrote :
> > For the benefit of those who have absolutely no idea what you're
> > talking about, could you write a simpler summary of what you're trying
> > to do?
> > 
> > Rich.
> 
> Hello,
> 
> Most cloud providers sell virtualized instances using either Xen or KVM.
>
> However, another trend is to provide bare metal instances for people
> who want the highest CPU and network performance possible (typically
> people doing computations with MPI).
>
> So a cloud end user needs to be able to instantiate a virtual
> machine, use it for a while, then stop the virtual machine, change the
> hardware type to bare metal and restart the instance while keeping
> the same boot volume.
>
> QEMU keeps a virtual machine's data stored in one of its numerous
> storage backend formats like QCOW2 or QED.
>
> If the cloud provider wants to be able to boot QCOW2 or QED images on
> bare metal machines, he will need to export the QCOW2 or QED images
> over the network.
>
> So far only qemu-nbd allows this, and it neither performs well nor is
> really convenient to boot from on a bare metal machine.

So I think what you want is a `qemu-iscsi'?  ie. the same as qemu-nbd,
but with an iSCSI frontend (to replace the NBD server).

I think this is an excellent idea, although AIUI iSCSI is a pretty
complex protocol.  (I wrote an NBD server, and the protocol is almost
trivial, albeit as you say, performing badly).

> To summarize, I am looking for a way to export a QCOW2 or QED image as
> an iSCSI or FCoE target while keeping all the goodies these formats
> provide (taking snapshots for backup, streaming, mirroring).
>
> Reusing LIO code would help tremendously to simplify this task.

I guess so.  Are you planning to integrate bits of LIO into qemu, or
bits of qemu into LIO?

The latter has been tried various times, without much success.  See
the many examples of people trying to make the qemu block driver code
into a separate library, and failing.

Writing an iSCSI front end to qemu would be good, but qemu has some
very particular policies about what code can be introduced, so that
could be tricky too ...

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
libguestfs lets you edit virtual machines.  Supports shell scripting,
bindings from many languages.  http://libguestfs.org

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-08-30 16:02     ` Richard W.M. Jones
@ 2014-08-30 16:04       ` Richard W.M. Jones
  2014-08-30 17:22         ` Benoît Canet
  2014-08-30 16:51       ` Benoît Canet
  2014-08-31 20:03       ` Andy Grover
  2 siblings, 1 reply; 24+ messages in thread
From: Richard W.M. Jones @ 2014-08-30 16:04 UTC (permalink / raw)
  To: Benoît Canet; +Cc: kwolf, pbonzini, agrover, qemu-devel, stefanha

BTW, what is "tcmu-runner"?  The github repo you pointed to is ... opaque.

Rich.


-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch
http://libguestfs.org/virt-builder.1.html

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-08-30 16:02     ` Richard W.M. Jones
  2014-08-30 16:04       ` Richard W.M. Jones
@ 2014-08-30 16:51       ` Benoît Canet
  2014-08-31 20:03       ` Andy Grover
  2 siblings, 0 replies; 24+ messages in thread
From: Benoît Canet @ 2014-08-30 16:51 UTC (permalink / raw)
  To: Richard W.M. Jones
  Cc: Benoît Canet, kwolf, qemu-devel, stefanha, pbonzini, agrover

The Saturday 30 Aug 2014 à 17:02:12 (+0100), Richard W.M. Jones wrote :
> On Sat, Aug 30, 2014 at 05:53:43PM +0200, Benoît Canet wrote:
> > The Saturday 30 Aug 2014 à 15:46:41 (+0100), Richard W.M. Jones wrote :
> > > For the benefit of those who have absolutely no idea what you're
> > > talking about, could you write a simpler summary of what you're trying
> > > to do?
> > > 
> > > Rich.
> > 
> > Hello,
> > 
> > Most cloud providers sell virtualized instances using either Xen or KVM.
> >
> > However, another trend is to provide bare metal instances for people
> > who want the highest CPU and network performance possible (typically
> > people doing computations with MPI).
> >
> > So a cloud end user needs to be able to instantiate a virtual
> > machine, use it for a while, then stop the virtual machine, change the
> > hardware type to bare metal and restart the instance while keeping
> > the same boot volume.
> >
> > QEMU keeps a virtual machine's data stored in one of its numerous
> > storage backend formats like QCOW2 or QED.
> >
> > If the cloud provider wants to be able to boot QCOW2 or QED images on
> > bare metal machines, he will need to export the QCOW2 or QED images
> > over the network.
> >
> > So far only qemu-nbd allows this, and it neither performs well nor is
> > really convenient to boot from on a bare metal machine.
> 
> So I think what you want is a `qemu-iscsi'?  ie. the same as qemu-nbd,
> but with an iSCSI frontend (to replace the NBD server).
> 
> I think this is an excellent idea, although AIUI iSCSI is a pretty
> complex protocol.  (I wrote an NBD server, and the protocol is almost
> trivial, albeit as you say, performing badly).
> 
> > To summarize, I am looking for a way to export a QCOW2 or QED image as
> > an iSCSI or FCoE target while keeping all the goodies these formats
> > provide (taking snapshots for backup, streaming, mirroring).
> >
> > Reusing LIO code would help tremendously to simplify this task.
> 
> I guess so.  Are you planning to integrate bits of LIO into qemu, or
> bits of qemu into LIO?
> 
> The latter has been tried various times, without much success.  See
> the many examples of people trying to make the qemu block driver code
> into a separate library, and failing.

Paolo pointed me to Andy's current work, so I started this discussion:
http://thread.gmane.org/gmane.linux.kernel/1771465.

I know the QEMU block layer well enough to be aware that it's riddled
with static variables, so I think using bits of Andy's current work on top of
a qemu-lio command would be the way to go.

> Writing an iSCSI front end to qemu would be good, but qemu has some
> very particular policies about what code can be introduced, so that
> could be tricky too ...

Where can I read these policies?

Best regards

Benoît

> 
> Rich.
> 
> -- 
> Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> libguestfs lets you edit virtual machines.  Supports shell scripting,
> bindings from many languages.  http://libguestfs.org

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-08-30 16:04       ` Richard W.M. Jones
@ 2014-08-30 17:22         ` Benoît Canet
  2014-08-30 21:50           ` Benoît Canet
  0 siblings, 1 reply; 24+ messages in thread
From: Benoît Canet @ 2014-08-30 17:22 UTC (permalink / raw)
  To: Richard W.M. Jones
  Cc: Benoît Canet, kwolf, qemu-devel, stefanha, pbonzini, agrover

The Saturday 30 Aug 2014 à 17:04:12 (+0100), Richard W.M. Jones wrote :
> BTW, what is "tcmu-runner"?  The github repo you pointed to is ... opaque.

Andy Groover is working on a way to implement LIO storage backends in userspace.
tcmu-runner is a daemon able to load storage backend plugins for this machinery.

Best regards

Benoît

> 
> Rich.
> 
> 
> -- 
> Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> virt-builder quickly builds VMs from scratch
> http://libguestfs.org/virt-builder.1.html
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-08-30 17:22         ` Benoît Canet
@ 2014-08-30 21:50           ` Benoît Canet
  0 siblings, 0 replies; 24+ messages in thread
From: Benoît Canet @ 2014-08-30 21:50 UTC (permalink / raw)
  To: Benoît Canet
  Cc: kwolf, Richard W.M. Jones, qemu-devel, stefanha, pbonzini, agrover

The Saturday 30 Aug 2014 à 19:22:09 (+0200), Benoît Canet wrote :
> The Saturday 30 Aug 2014 à 17:04:12 (+0100), Richard W.M. Jones wrote :
> > BTW, what is "tcmu-runner"?  The github repo you pointed to is ... opaque.
> 
> Andy Groover.

My apologies for the misspelling.

>is working on a way to implement LIO storage backends in userspace.
> tcmu-runner is a daemon able to load storage backend plugins for this machinery.
> 
> Best regards
> 
> Benoît
> 
> > 
> > Rich.
> > 
> > 
> > -- 
> > Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
> > Read my programming and virtualization blog: http://rwmj.wordpress.com
> > virt-builder quickly builds VMs from scratch
> > http://libguestfs.org/virt-builder.1.html
> > 
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-08-30 16:02     ` Richard W.M. Jones
  2014-08-30 16:04       ` Richard W.M. Jones
  2014-08-30 16:51       ` Benoît Canet
@ 2014-08-31 20:03       ` Andy Grover
  2014-08-31 20:38         ` Benoît Canet
  2014-09-01  8:08         ` Paolo Bonzini
  2 siblings, 2 replies; 24+ messages in thread
From: Andy Grover @ 2014-08-31 20:03 UTC (permalink / raw)
  To: Richard W.M. Jones, Benoît Canet
  Cc: kwolf, pbonzini, qemu-devel, stefanha

On 08/30/2014 09:02 AM, Richard W.M. Jones wrote:
> On Sat, Aug 30, 2014 at 05:53:43PM +0200, Benoît Canet wrote:
>> If the cloud provider wants to be able to boot QCOW2 or QED images on
>> bare metal machines, he will need to export the QCOW2 or QED images
>> over the network.
>>
>> So far only qemu-nbd allows this, and it neither performs well nor is
>> really convenient to boot from on a bare metal machine.
>
> So I think what you want is a `qemu-iscsi'?  ie. the same as qemu-nbd,
> but with an iSCSI frontend (to replace the NBD server).

You want qemu to be able to issue SCSI commands over iSCSI? I thought 
qemu used libiscsi for this, to be the initiator. What Benoit and I have 
been discussing is the other side, enabling qemu to configure LIO to 
handle requests from other initiators (either VMs or iron) over iSCSI or 
FCoE, but backed by qcow2 disk images. The problem being LIO doesn't 
speak qcow2 yet.

> I guess so.  Are you planning to integrate bits of LIO into qemu, or
> bits of qemu into LIO?

My current thinking is 1) enable qemu to configure the LIO kernel target 
(it's all straightforward via configfs, but add a nice library to qemu 
to hide the details) and 2) enable LIO to use qcow2 and other formats 
besides raw images to back exported LUNs. This is where the LIO 
userspace passthrough and tcmu-runner come in, because we want to do 
this in userspace, not as kernel code, so we have to pass SCSI commands 
up to a userspace helper daemon.
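
To give an idea of what 1) amounts to, here is a rough sketch of the configfs
operations (Python; the paths and control attributes are the ones of the existing
fileio backstore and are only illustrative -- a real helper library for qemu would
also create portals and ACLs and handle errors):

import os

CORE = "/sys/kernel/config/target/core"
ISCSI = "/sys/kernel/config/target/iscsi"
IQN = "iqn.2014-09.org.example:qcow2-export"        # example target name

def cfs_write(path, value):
    with open(path, "w") as f:
        f.write(value)

# 1) create and enable a backstore (a plain fileio one here)
bs = os.path.join(CORE, "fileio_0", "disk1")
os.makedirs(bs)
cfs_write(os.path.join(bs, "control"),
          "fd_dev_name=/var/lib/images/disk1.raw,fd_dev_size=10737418240")
cfs_write(os.path.join(bs, "enable"), "1")

# 2) expose it as LUN 0 of an iSCSI target portal group
lun = os.path.join(ISCSI, IQN, "tpgt_1", "lun", "lun_0")
os.makedirs(lun)
os.symlink(bs, os.path.join(lun, "disk1"))          # link the LUN to the backstore
cfs_write(os.path.join(ISCSI, IQN, "tpgt_1", "enable"), "1")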

> The latter has been tried various times, without much success.  See
> the many examples of people trying to make the qemu block driver code
> into a separate library, and failing.

What's been the sticking point?

Regards -- Andy

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-08-31 20:03       ` Andy Grover
@ 2014-08-31 20:38         ` Benoît Canet
  2014-09-01  8:32           ` Paolo Bonzini
  2014-09-01  8:08         ` Paolo Bonzini
  1 sibling, 1 reply; 24+ messages in thread
From: Benoît Canet @ 2014-08-31 20:38 UTC (permalink / raw)
  To: Andy Grover
  Cc: Benoît Canet, kwolf, qemu-devel, Richard W.M. Jones,
	stefanha, pbonzini

The Sunday 31 Aug 2014 à 13:03:14 (-0700), Andy Grover wrote :
> On 08/30/2014 09:02 AM, Richard W.M. Jones wrote:
> >On Sat, Aug 30, 2014 at 05:53:43PM +0200, Benoît Canet wrote:
> >>If the cloud provider wants to be able to boot QCOW2 or QED images on
> >>bare metal machines, he will need to export the QCOW2 or QED images
> >>over the network.
> >>
> >>So far only qemu-nbd allows this, and it neither performs well nor is
> >>really convenient to boot from on a bare metal machine.
> >
> >So I think what you want is a `qemu-iscsi'?  ie. the same as qemu-nbd,
> >but with an iSCSI frontend (to replace the NBD server).
> 
> You want qemu to be able to issue SCSI commands over iSCSI? I thought qemu
> used libiscsi for this, to be the initiator. What Benoit and I have been
> discussing is the other side, enabling qemu to configure LIO to handle
> requests from other initiators (either VMs or iron) over iSCSI or FCoE, but
> backed by qcow2 disk images. The problem being LIO doesn't speak qcow2 yet.
> 
> >I guess so.  Are you planning to integrate bits of LIO into qemu, or
> >bits of qemu into LIO?
> 
> My current thinking is 1) enable qemu to configure the LIO kernel target
> (it's all straightforward via configfs, but add a nice library to qemu to
> hide the details) and 2) enable LIO to use qcow2 and other formats besides
> raw images to back exported LUNs. This is where the LIO userspace
> passthrough and tcmu-runner come in, because we want to do this in
> userspace, not as kernel code, so we have to pass SCSI commands up to a
> userspace helper daemon.
> 
> >The latter has been tried various times, without much success.  See
> >the many examples of people trying to make the qemu block driver code
> >into a separate library, and failing.
> 

The problem with QEMU block drivers is that they use either coroutines
or QEMU's custom AIO callbacks, so reusing them without the block layer is
not doable.

As for the QEMU block layer as a whole, it maintains linked lists of block device
states and similar stuff as static global variables
(see https://github.com/qemu/qemu/blob/master/block.c#L96).

So having more than one instance of the block layer running is not doable.

I am not aware of anyone having succeeded in turning it into a proper .so.

Extracting it into a binary acting as an NBD target was done with qemu-nbd, though.

Best regards

Benoît

> What's been the sticking point?
> 
> Regards -- Andy
> 
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-08-31 20:03       ` Andy Grover
  2014-08-31 20:38         ` Benoît Canet
@ 2014-09-01  8:08         ` Paolo Bonzini
  1 sibling, 0 replies; 24+ messages in thread
From: Paolo Bonzini @ 2014-09-01  8:08 UTC (permalink / raw)
  To: Andy Grover, Richard W.M. Jones, Benoît Canet
  Cc: kwolf, qemu-devel, stefanha

On 31/08/2014 22:03, Andy Grover wrote:
>> So I think what you want is a `qemu-iscsi'?  ie. the same as qemu-nbd,
>> but with an iSCSI frontend (to replace the NBD server).
> 
> You want qemu to be able to issue SCSI commands over iSCSI? I thought
> qemu used libiscsi for this, to be the initiator. What Benoit and I have
> been discussing is the other side, enabling qemu to configure LIO to
> handle requests from other initiators

Right, you're talking about the same thing; qemu-nbd is an NBD server
(matching a "target" in SCSI speak) that understands qcow2.

Paolo

> (either VMs or iron) over iSCSI or
> FCoE, but backed by qcow2 disk images. The problem being LIO doesn't
> speak qcow2 yet.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-08-31 20:38         ` Benoît Canet
@ 2014-09-01  8:32           ` Paolo Bonzini
  0 siblings, 0 replies; 24+ messages in thread
From: Paolo Bonzini @ 2014-09-01  8:32 UTC (permalink / raw)
  To: Benoît Canet, Andy Grover
  Cc: kwolf, qemu-devel, stefanha, Richard W.M. Jones

On 31/08/2014 22:38, Benoît Canet wrote:
> The problem with QEMU block drivers is that they use either coroutines
> or QEMU's custom AIO callbacks, so reusing them without the block layer is
> not doable.

Not really true.  The QEMU block layer can be wrapped relatively easily
in a GSource.  This is how QEMU uses it, in fact.

As to global state, each .so is only loaded once in an executable, so if
TCMU loaded two QEMU plugins the .so would point to the block devices
from each plugin.

The problem is more the QMP interface, I think.

> As for the QEMU block layer as a whole, it maintains linked lists of block device
> states and similar stuff as static global variables
> (see https://github.com/qemu/qemu/blob/master/block.c#L96).
> 
> So having more than one instance of the block layer running is not doable.
> 
> I am not aware of anyone having succeeded in turning it into a proper .so.

There was libqemublock.  I think it stopped just because the author
turned to something else, not because there were particular problems
with the design.

Paolo

> Extracting it into a binary acting as an NBD target was done with qemu-nbd, though.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-08-29 17:22 [Qemu-devel] tcmu-runner and QEMU Benoît Canet
  2014-08-29 18:38 ` Andy Grover
  2014-08-30 14:46 ` Richard W.M. Jones
@ 2014-09-02  9:25 ` Stefan Hajnoczi
  2014-09-03  0:20   ` Andy Grover
  2 siblings, 1 reply; 24+ messages in thread
From: Stefan Hajnoczi @ 2014-09-02  9:25 UTC (permalink / raw)
  To: Benoît Canet; +Cc: kwolf, pbonzini, agrover, qemu-devel


On Fri, Aug 29, 2014 at 07:22:18PM +0200, Benoît Canet wrote:
> Following Paolo's suggestion, I started discussing privately with
> Andy about integrating LIO and the QEMU block layer together using
> tcmu-runner: https://github.com/agrover/tcmu-runner.

I looked at this briefly when Andy posted the userspace target patches
to the target-devel list.

The easiest approach is to write a tool similar to qemu-nbd that speaks
the userspace target protocol (i.e. mmap the shared memory).
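
Very roughly, and assuming the UIO-based interface of the current kernel patches
(the mailbox/command-ring layout defined by the kernel is not parsed here, and the
device index is made up):

import mmap
import os

uio = "uio0"                                   # whichever UIO device TCMU created
size = int(open("/sys/class/uio/%s/maps/map0/size" % uio).read(), 16)
fd = os.open("/dev/" + uio, os.O_RDWR)
ring = mmap.mmap(fd, size)                     # the shared memory region

# A real tool would now read the mailbox header at the start of `ring`, walk the
# command ring, turn each SCSI CDB into image I/O, write completions back, and
# read/write the UIO device to wait for and signal doorbells.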

If the tcmu setup code is involved, maybe providing a libtcmu with the
setup code would be useful.  I suspect that other projects may want to
integrate userspace target support too.  It's easier to let people add
it to their codebase rather than hope they bring their codebase into
tcmu-runner.

The qemu-lio tool would live in the QEMU codebase and reuse all the
infrastructure.  For example, it could include a QMP monitor just like
the one you are adding to qemu-nbd.

Stefan


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-09-02  9:25 ` Stefan Hajnoczi
@ 2014-09-03  0:20   ` Andy Grover
  2014-09-03  7:34     ` Paolo Bonzini
  2014-09-03 13:11     ` Stefan Hajnoczi
  0 siblings, 2 replies; 24+ messages in thread
From: Andy Grover @ 2014-09-03  0:20 UTC (permalink / raw)
  To: Stefan Hajnoczi, Benoît Canet; +Cc: kwolf, pbonzini, qemu-devel

On 09/02/2014 02:25 AM, Stefan Hajnoczi wrote:
> The easiest approach is to write a tool similar to qemu-nbd that speaks
> the userspace target protocol (i.e. mmap the shared memory).

> If the tcmu setup code is involved, maybe providing a libtcmu with the
> setup code would be useful.  I suspect that other projects may want to
> integrate userspace target support too.  It's easier to let people add
> it to their codebase rather than hope they bring their codebase into
> tcmu-runner.

What other projects were you thinking of?

From my perspective, QEMU is singular. QEMU's block support seems to
cover just about everything, even ceph, gluster, and sheepdog!

We certainly don't want to duplicate that code so a qemu-lio-tcmu in 
qemu.git like qemu-nbd, basically statically linking the BlockDriver 
object files, sounds like the first thing to try.

We can make tcmu-runner a library (libtcmu) if it makes sense, but let's 
do some work to try the current way and see how it goes before 
"flipping" it.

> The qemu-lio tool would live in the QEMU codebase and reuse all the
> infrastructure.  For example, it could include a QMP monitor just like
> the one you are adding to qemu-nbd.

Benoit and I talked a little about QMP on another part of the thread... 
I said I didn't think we needed a QMP monitor in qemu-lio-tcmu, but let 
me spin up on qemu a little more and I'll be able to speak more 
intelligently.

-- Andy

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-09-03  0:20   ` Andy Grover
@ 2014-09-03  7:34     ` Paolo Bonzini
  2014-09-03 13:11     ` Stefan Hajnoczi
  1 sibling, 0 replies; 24+ messages in thread
From: Paolo Bonzini @ 2014-09-03  7:34 UTC (permalink / raw)
  To: Andy Grover, Stefan Hajnoczi, Benoît Canet; +Cc: kwolf, qemu-devel

On 03/09/2014 02:20, Andy Grover wrote:
>> The qemu-lio tool would live in the QEMU codebase and reuse all the
>> infrastructure.  For example, it could include a QMP monitor just like
>> the one you are adding to qemu-nbd.
> 
> Benoit and I talked a little about QMP on another part of the thread...
> I said I didn't think we needed a QMP monitor in qemu-lio-tcmu, but let
> me spin up on qemu a little more and I'll be able to speak more
> intelligently.

You do need it.  If you think of it from the "traditional NAS"
viewpoint, it's how you do things like snapshots, mirroring, RAID
recovery, and all that.

Paolo

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-09-03  0:20   ` Andy Grover
  2014-09-03  7:34     ` Paolo Bonzini
@ 2014-09-03 13:11     ` Stefan Hajnoczi
  2014-09-04 13:24       ` Benoît Canet
  1 sibling, 1 reply; 24+ messages in thread
From: Stefan Hajnoczi @ 2014-09-03 13:11 UTC (permalink / raw)
  To: Andy Grover; +Cc: Benoît Canet, kwolf, qemu-devel, pbonzini


On Tue, Sep 02, 2014 at 05:20:55PM -0700, Andy Grover wrote:
> On 09/02/2014 02:25 AM, Stefan Hajnoczi wrote:
> > The qemu-lio tool would live in the QEMU codebase and reuse all the
> > infrastructure.  For example, it could include a QMP monitor just like
> > the one you are adding to qemu-nbd.
> 
> Benoit and I talked a little about QMP on another part of the thread... I
> said I didn't think we needed a QMP monitor in qemu-lio-tcmu, but let me
> spin up on qemu a little more and I'll be able to speak more intelligently.

The QEMU block layer has useful features that are available as QMP
commands:

For example, the drive-mirror QMP command copies a disk image to a new
location while still servicing I/O requests.  This is used when an
administrator needs to migrate disk images to a new file system or
storage devices without downtime.

There are other commands for snapshots and backup which are issued via
QMP.

It might even make sense to make the tcmu interface available at
run-time in QEMU like the run-time NBD server.  This allows you to get
at read-only point-in-time snapshots while the guest is accessing the
disk.  See the nbd-server-start command in qapi/block.json.
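
For comparison, starting the run-time NBD server and exporting a drive over QMP
looks like this (sketch; the QMP socket address, listen address and drive name are
assumptions, and interleaved events are not handled):

import json
import socket

chan = socket.create_connection(("localhost", 4444)).makefile("rw")
json.loads(chan.readline())                    # QMP greeting

def qmp(command, arguments=None):
    msg = {"execute": command}
    if arguments:
        msg["arguments"] = arguments
    chan.write(json.dumps(msg) + "\n")
    chan.flush()
    return json.loads(chan.readline())

qmp("qmp_capabilities")
qmp("nbd-server-start",
    {"addr": {"type": "inet", "data": {"host": "0.0.0.0", "port": "10809"}}})
qmp("nbd-server-add", {"device": "drive0"})    # exported read-only by default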

Stefan


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-09-03 13:11     ` Stefan Hajnoczi
@ 2014-09-04 13:24       ` Benoît Canet
  2014-09-04 15:15         ` Andy Grover
  0 siblings, 1 reply; 24+ messages in thread
From: Benoît Canet @ 2014-09-04 13:24 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Benoît Canet, kwolf, Andy Grover, qemu-devel, pbonzini

The Wednesday 03 Sep 2014 à 14:11:59 (+0100), Stefan Hajnoczi wrote :
> On Tue, Sep 02, 2014 at 05:20:55PM -0700, Andy Grover wrote:
> > On 09/02/2014 02:25 AM, Stefan Hajnoczi wrote:
> > > The qemu-lio tool would live in the QEMU codebase and reuse all the
> > > infrastructure.  For example, it could include a QMP monitor just like
> > > the one you are adding to qemu-nbd.
> > 
> > Benoit and I talked a little about QMP on another part of the thread... I
> > said I didn't think we needed a QMP monitor in qemu-lio-tcmu, but let me
> > spin up on qemu a little more and I'll be able to speak more intelligently.
> 
> The QEMU block layer has useful features that are available as QMP
> commands:
> 
> For example, the drive-mirror QMP command copies a disk image to a new
> location while still servicing I/O requests.  This is used when an
> administrator needs to migrate disk images to a new file system or
> storage devices without downtime.
> 
> There are other commands for snapshots and backup which are issued via
> QMP.
> 
> It might even make sense to make the tcmu interface available at
> run-time in QEMU like the run-time NBD server.  This allows you to get
> at read-only point-in-time snapshots while the guest is accessing the
> disk.  See the nbd-server-start command in qapi/block.json.
> 
> Stefan

Andy: ping

I hope we didn't scare you with our monster block backend and its
associated QMP socket ;)

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-09-04 13:24       ` Benoît Canet
@ 2014-09-04 15:15         ` Andy Grover
  2014-09-04 15:59           ` Benoît Canet
  2014-09-04 20:16           ` Stefan Hajnoczi
  0 siblings, 2 replies; 24+ messages in thread
From: Andy Grover @ 2014-09-04 15:15 UTC (permalink / raw)
  To: Benoît Canet, Stefan Hajnoczi; +Cc: kwolf, pbonzini, qemu-devel

On 09/04/2014 06:24 AM, Benoît Canet wrote:
>> There are other commands for snapshots and backup which are issued via
>> QMP.
>>
>> It might even make sense to make the tcmu interface available at
>> run-time in QEMU like the run-time NBD server.  This allows you to get
>> at read-only point-in-time snapshots while the guest is accessing the
>> disk.  See the nbd-server-start command in qapi/block.json.
>>
>> Stefan
>
> Andy: ping
>
> I hope we didn't scare you with our monster block backend and its
> associated QMP socket ;)

Hi Benoît,

No, I've gone off to work on an initial proof-of-concept implementation
of a qemu-lio-tcmu.so module, hopefully it'll be ready to look at 
shortly and then we can shoot arrows at it. :)

But in the meantime, do you have a use case or user story for the QMP 
support that might help me understand better how it might all fit together?

Regards -- Andy

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-09-04 15:15         ` Andy Grover
@ 2014-09-04 15:59           ` Benoît Canet
  2014-09-04 20:16           ` Stefan Hajnoczi
  1 sibling, 0 replies; 24+ messages in thread
From: Benoît Canet @ 2014-09-04 15:59 UTC (permalink / raw)
  To: Andy Grover
  Cc: Benoît Canet, kwolf, qemu-devel, Stefan Hajnoczi, pbonzini

The Thursday 04 Sep 2014 à 08:15:39 (-0700), Andy Grover wrote :
> On 09/04/2014 06:24 AM, Benoît Canet wrote:
> >>There are other commands for snapshots and backup which are issued via
> >>QMP.
> >>
> >>It might even make sense to make the tcmu interface available at
> >>run-time in QEMU like the run-time NBD server.  This allows you to get
> >>at read-only point-in-time snapshots while the guest is accessing the
> >>disk.  See the nbd-server-start command in qapi/block.json.
> >>
> >>Stefan
> >
> >Andy: ping
> >
> >I hope we didn't scare you with our monster block backend and its
> >associated QMP socket ;)
> 
> Hi Benoît,
> 
> No, I've gone off to work on an initial proof-of-concept implementation of a
> qemu-lio-tcmu.so module, hopefully it'll be ready to look at shortly and
> then we can shoot arrows at it. :)

Great!!

> 
> But in the meantime, do you have a use case or user story for the QMP
> support that might help me understand better how it might all fit together?
> 

Ok,


1) The cloud end user story
---------------------------

My customer has implemented the AWS EC2 API to build his cloud.
Simply put, they are an AWS EC2 clone.

The EC2 API provides a way for the end users (customers of my customer) to
take snapshots of their VMs' volumes programmatically.
OpenStack surely provides this too.

The end users take snapshots every day since it's an easy way for
them to get a point-in-time copy of their virtual machine.
They can accumulate over 600 snapshots as the days pass.

So the end users really need snapshots.

For an EC2-compatible cloud, the only sane way to trigger a snapshot is to use a QMP socket.

Right now my customer provides virtualized instances, but the new use case is to provide
bare metal instances. tcmu could be used for this.

I guess the OpenStack people will want to do the same at some point.

The point is that the end users will still want to take snapshots of their bare metal
instances' volumes. Not having this would break the EC2 API and make the instances
unusable.

2) The cloud provider story
---------------------------

End users take snapshots like crazy, and eventually the QCOW2 backing chain will
make QEMU open more than FD_MAX files at once. QEMU will crash.

Also, the cloud provider has an interest in shortening the backing chains
(dropping the oldest snapshots) in order to minimize the risk caused by the corruption
of a common QCOW2 ancestor snapshot.

QEMU allows this with streaming. Streaming is triggered with a QMP command.

3) The CEPH user story
----------------------

I know Red Hat is big on CEPH, so:
CEPH provides its own kind of snapshotting, and QEMU supports it via QMP.

4) The any block operation you want to do with QEMU storage story
-----------------------------------------------------------------

Simple things such as collecting block device statistics are also done via QMP.
And so on...

You can look at qapi/block-core.json to get an idea of how useful QMP is when
working with QEMU block devices.
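
To make 1) and 4) concrete, here is what two of these operations look like as raw
QMP messages (sketch; the socket address and the device name "drive0" are assumptions,
and interleaved events are not filtered out):

import json
import socket

chan = socket.create_connection(("localhost", 4444)).makefile("rw")
json.loads(chan.readline())                    # QMP greeting

def qmp(command, arguments=None):
    msg = {"execute": command}
    if arguments:
        msg["arguments"] = arguments
    chan.write(json.dumps(msg) + "\n")
    chan.flush()
    return json.loads(chan.readline())

qmp("qmp_capabilities")

# 4) per-device I/O statistics
print(qmp("query-blockstats"))

# 1) external QCOW2 snapshot of drive0 (the current image becomes the backing file)
print(qmp("blockdev-snapshot-sync",
          {"device": "drive0",
           "snapshot-file": "/images/drive0-snap1.qcow2",
           "format": "qcow2"}))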

Best regards

Benoît

> Regards -- Andy
> 
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] tcmu-runner and QEMU
  2014-09-04 15:15         ` Andy Grover
  2014-09-04 15:59           ` Benoît Canet
@ 2014-09-04 20:16           ` Stefan Hajnoczi
  1 sibling, 0 replies; 24+ messages in thread
From: Stefan Hajnoczi @ 2014-09-04 20:16 UTC (permalink / raw)
  To: Andy Grover
  Cc: Benoît Canet, kwolf, qemu-devel, Stefan Hajnoczi, pbonzini


On Thu, Sep 04, 2014 at 08:15:39AM -0700, Andy Grover wrote:
> But in the meantime, do you have a use case or user story for the QMP
> support that might help me understand better how it might all fit together?

From my previous email:

"administrator needs to migrate disk images to a new file system or
storage devices without downtime."

Here is some more detail about how that works:

QEMU has a drive-mirror QMP command that copies the disk image to a new
location while continuing to service I/O.  In other words live storage
migration, no downtime.

A tool needs to connect to the QMP unix domain socket and issue the
drive-mirror command.  Then it needs to wait until the QMP event signalling
drive-mirror completion is raised, and then issue the block-job-complete QMP
command to atomically switch to the new image file (it is now safe to
delete the old image file).
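
Put together, the flow is something like this (sketch; host/port, the target path and
the device name "drive0" are assumptions, and error handling is omitted):

import json
import socket

chan = socket.create_connection(("localhost", 4444)).makefile("rw")
json.loads(chan.readline())                    # QMP greeting

def send(command, arguments=None):
    msg = {"execute": command}
    if arguments:
        msg["arguments"] = arguments
    chan.write(json.dumps(msg) + "\n")
    chan.flush()

def recv():
    return json.loads(chan.readline())

send("qmp_capabilities"); recv()

# 1) start mirroring drive0 to its new location, copying the whole image
send("drive-mirror", {"device": "drive0", "target": "/new-storage/drive0.qcow2",
                      "format": "qcow2", "sync": "full"})
recv()

# 2) wait for the event that says the mirror has caught up with the guest
while recv().get("event") != "BLOCK_JOB_READY":
    pass

# 3) atomically pivot drive0 to the new image; the old file can then be deleted
send("block-job-complete", {"device": "drive0"})
recv()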

Stefan


^ permalink raw reply	[flat|nested] 24+ messages in thread

Thread overview: 24+ messages
2014-08-29 17:22 [Qemu-devel] tcmu-runner and QEMU Benoît Canet
2014-08-29 18:38 ` Andy Grover
2014-08-29 18:51   ` Benoît Canet
2014-08-29 22:36     ` Andy Grover
2014-08-29 22:46       ` Benoît Canet
2014-08-30 14:46 ` Richard W.M. Jones
2014-08-30 15:53   ` Benoît Canet
2014-08-30 16:02     ` Richard W.M. Jones
2014-08-30 16:04       ` Richard W.M. Jones
2014-08-30 17:22         ` Benoît Canet
2014-08-30 21:50           ` Benoît Canet
2014-08-30 16:51       ` Benoît Canet
2014-08-31 20:03       ` Andy Grover
2014-08-31 20:38         ` Benoît Canet
2014-09-01  8:32           ` Paolo Bonzini
2014-09-01  8:08         ` Paolo Bonzini
2014-09-02  9:25 ` Stefan Hajnoczi
2014-09-03  0:20   ` Andy Grover
2014-09-03  7:34     ` Paolo Bonzini
2014-09-03 13:11     ` Stefan Hajnoczi
2014-09-04 13:24       ` Benoît Canet
2014-09-04 15:15         ` Andy Grover
2014-09-04 15:59           ` Benoît Canet
2014-09-04 20:16           ` Stefan Hajnoczi
