Date: Fri, 29 Aug 2014 20:51:21 +0200
From: Benoît Canet
To: Andy Grover
Cc: Benoît Canet, kwolf@redhat.com, qemu-devel@nongnu.org, stefanha@redhat.com, pbonzini@redhat.com
Subject: Re: [Qemu-devel] tcmu-runner and QEMU
Message-ID: <20140829185121.GA31376@irqsave.net>
In-Reply-To: <5400C896.2040600@redhat.com>
References: <20140829172218.GD16755@irqsave.net> <5400C896.2040600@redhat.com>

On Friday 29 Aug 2014 at 11:38:14 (-0700), Andy Grover wrote:
> On 08/29/2014 10:22 AM, Benoît Canet wrote:
> > The truth is that QEMU block drivers don't know how to do much on their own,
> > so we probably must bring the whole QEMU block layer into a tcmu-runner handler plugin.
>
> Woah! Really? ok...
>
> > Another reason to do this is that the QEMU block layer brings features like taking
> > snapshots or streaming snapshots that a cloud provider would want to keep while
> > exporting QCOW2 over iSCSI or FCoE.
> >
> > Doing these operations is usually done by passing something like
> > "--qmp tcp:localhost,4444,server,nowait" on the QEMU command line, then
> > connecting to this JSON socket and sending commands to QEMU.
>
> The LIO TCMU backend and tcmu-runner provide for a configstring that is
> associated with a given backstore. This is made available to the handler,
> and sounds like just what qmp needs.
>
> > I made some patches to split this QMP machinery out of the QEMU binary, but I still
> > don't know how a tcmu-runner handler plugin would be able to receive this command
> > line configuration.
>
> The flow would be:
> 1) admin configures a LIO backstore of type "user", size 10G, and gives it a
>    configstring like "qmp/tcp:localhost,4444,server,nowait"
> 2) admin exports the backstore via whatever LIO-supported fabric(s) (e.g. iSCSI)
> 3) tcmu-runner is notified of the new user backstore from step 1, finds the
>    handler associated with "qmp", and calls
>    handler->open("tcp:localhost,4444,server,nowait")
> 4) the qmp handler parses the string and does whatever it needs to do
> 5) the handler receives SCSI commands as they arrive

QMP is just a way to control QEMU via a socket: it is not particularly block related.

On the other hand, bringing the whole block layer into a tcmu-runner handler would mean
that there is _one_ QMP socket opened (by means of wonderful QEMU module static
variables :)) to control all of the exported block devices.

So I think this configuration must be passed before any individual open occurs:
it would be global to the .so implementing the tcmu-runner handler.
But I don't see how to do that with the current API.
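
To make that concrete, here is a very rough sketch of the shape I have in mind. It is
not written against the real tcmu-runner headers: the entry points and the qmp_* helper
names are invented for illustration, following the open()-gets-the-configstring model
you describe above. The only things it tries to show are the single module-wide QMP
connection shared by every exported device, and the standard QMP handshake (read the
greeting, send qmp_capabilities) before any command is issued.

    /* qmp_handler_sketch.c -- NOT real tcmu-runner code, just an illustration.
     * The handler entry points below guess at an interface shaped like the one
     * described above (open() receives the per-backstore configstring); the real
     * API may look quite different.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* One QMP control connection for the whole .so: this is the part that is
     * global, not per exported device. */
    static int qmp_fd = -1;

    /* Connect to QEMU's QMP socket (numeric IP only in this sketch) and do the
     * mandatory handshake: QEMU sends a greeting and we must answer with
     * qmp_capabilities before it will accept any other command. */
    static int qmp_global_init(const char *host, int port)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };
        char buf[4096];

        if (inet_pton(AF_INET, host, &addr.sin_addr) != 1)
            return -1;
        qmp_fd = socket(AF_INET, SOCK_STREAM, 0);
        if (qmp_fd < 0 || connect(qmp_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            return -1;

        if (read(qmp_fd, buf, sizeof(buf)) < 0)        /* {"QMP": {...}} greeting */
            return -1;
        dprintf(qmp_fd, "{ \"execute\": \"qmp_capabilities\" }\n");
        if (read(qmp_fd, buf, sizeof(buf)) < 0)        /* {"return": {}} */
            return -1;
        return 0;
    }

    /* Per-device open: by the time this runs the global QMP link must already
     * exist, so the configstring here would only need to name the block device
     * inside QEMU, not the QMP endpoint. */
    static int qmp_handler_open(const char *cfgstring)
    {
        if (qmp_fd < 0)
            return -1;   /* nothing has configured the module-wide QMP socket yet */
        printf("exporting the device described by '%s'\n", cfgstring);
        return 0;
    }

The missing piece is exactly qmp_global_init(): with a purely per-backstore open()
there is no natural place to call it once for the whole .so, which is why I think some
handler-wide configuration hook would be needed.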
Best regards

Benoît

> > The second problem is that the QEMU block layer is big and filled with scary stuff like
> > threads and coroutines but I think only trying to write the tcmu-runner handler will
> > tell if it's doable.
>
> Yeah, could be tricky but would be pretty cool if it works. Let me know how
> I can help, or with any questions.
>
> Regards -- Andy