From mboxrd@z Thu Jan 1 00:00:00 1970
From: Stefano Stabellini
Subject: Re: [PATCH 6/6] xl/libxl: implement QDISK libxl_device_disk_local_attach
Date: Fri, 30 Mar 2012 15:55:15 +0100
References: <1332856772-30292-6-git-send-email-stefano.stabellini@eu.citrix.com> <20337.63226.633651.340221@mariner.uk.xensource.com> <20341.47455.258834.382780@mariner.uk.xensource.com>
In-Reply-To: <20341.47455.258834.382780@mariner.uk.xensource.com>
To: Ian Jackson
Cc: "xen-devel@lists.xensource.com", Ian Campbell, Stefano Stabellini
List-Id: xen-devel@lists.xenproject.org

On Fri, 30 Mar 2012, Ian Jackson wrote:
> Stefano Stabellini writes ("Re: [PATCH 6/6] xl/libxl: implement QDISK libxl_device_disk_local_attach"):
> > On Tue, 27 Mar 2012, Ian Jackson wrote:
> > > Stefano Stabellini writes ("[PATCH 6/6] xl/libxl: implement QDISK libxl_device_disk_local_attach"):
> > > > - Spawn a QEMU instance at boot time to handle disk local attach
> > > >   requests.
> > >
> > > This is a bit strange. Why does this need to be a single daemon?
> > > Can't we have one qemu per disk?
> >
> > Why is it a bit strange? It has always been the case that QEMU PV would
> > take care of all the PV backends for a single domain. Moreover, why would
> > you want more QEMUs when you can handle everything you need with just
> > one and a single thread (except the inevitable xenstore thread)?
>
> Offhand I can think of at least two reasons to prefer separate qemus
> (at least one per domain):

We do have one per domain; in this case, one for dom0. Keep in mind that
the destination here is domain 0.

> Firstly, the performance scalability will be improved if we don't have
> to funnel all the IO through a single process.
Given that the process in question is using AIO, it doesn't matter much.

> Secondly, it avoids propagating failures of any kind from one domain
> to another.

There are no two domains involved here, only one: domain 0. Also, should
we then spawn a new Linux kernel for each domain we want a backend for?
Even with our disaggregated architecture we never thought of having one
backend/driver domain per guest domain.

> Thirdly, it will make it easier to do disaggregation later if we feel
> like it.

Disaggregation is orthogonal to this: we are going to have a QEMU disk
backend in each domain in which we want to be able to do local_attach.