From: Stefano Stabellini
Subject: Re: QEMU 2.2.0 in Xen 4.6
Date: Tue, 27 Jan 2015 10:47:15 +0000
To: Eric Shelton
Cc: Anthony Perard, xen-devel@lists.xensource.com, Ian Campbell,
 Stefano Stabellini
List-Id: xen-devel@lists.xenproject.org

On Mon, 26 Jan 2015, Eric Shelton wrote:
> On Mon, Jan 26, 2015 at 9:37 AM, Stefano Stabellini wrote:
> > On Fri, 23 Jan 2015, Eric Shelton wrote:
> >> On Jan 23, 2015 10:10 AM, "Stefano Stabellini" wrote:
> >> >
> >> > On Fri, 23 Jan 2015, Ian Campbell wrote:
> >> > > On Fri, 2015-01-23 at 14:42 +0000, Stefano Stabellini wrote:
> >> > >
> >> > > > HVM guest ---------(PV)----------> QEMU in Dom0 for guest
> >> > > >     |
> >> > > >     --(emulation)--> QEMU Stubdom -(syscall)-> Linux Stubdom ---(PV)--> QEMU in Dom0 for stubdom
> >> > >
> >> > > Here, and throughout what you said I think, "QEMU in Dom0 for guest"
> >> > > could equally well be e.g. "blkback in driver domain for guest",
> >> > > likewise the "... for stubdom" too.
> >> > >
> >> > > i.e. the PV backend for the stubdom or the guest doesn't necessarily
> >> > > need to be QEMU and doesn't necessarily need to be in dom0.
> >> >
> >> > Indeed
> >> >
> >>
> >> Thank you both.
> >>
> >> There is one other thing that would be helpful to understand. Anthony
> >> had to patch the Linux kernel running in the stubdom to allow memory
> >> mapping. What mappings are needed between dom0 and the stub domain,
> >> and between the stub domain and the HVM guest domain? I am guessing
> >> this comes into play for the display, as it seems that the dom0 qemu
> >> instance is running the VNC server.
> >
> > I don't remember the details of it anymore. CC'ing Anthony, who might
> > have a better idea.
>
> Actually, it looks like what I need to understand is how the startup
> of (1) QEMU in Dom0, (2) QEMU in the stubdom, and (3) the HVM domain
> is synchronized (or supposed to be synchronized). I assume the
> desired order is (2) -> (1) -> (3).

We have two QEMUs in Dom0: one to provide backends to the stubdom and
one to provide backends to the HVM guest. Which one is (1)? If (1) is
the QEMU providing the backends for the stubdomain, then you want
(1) -> (2) -> (3).

> It looks like
> /local/domain/0/device-model/{hvm-domid}/state is used to notify xl
> that QEMU in Dom0 is running, which prompts xl to unpause the HVM
> domain, making sure that (1) occurs before (3). The problem I am
> running into is that, at least with QEMU 2.0, nothing ensures that
> QEMU in the stubdom is up and running before unpausing the HVM
> domain. This causes hvmloader to fail, as (3) occurs before (2), and
> there is no device model in place yet.
>
> So, what mechanism is being used with qemu-traditional to ensure QEMU
> in the stubdom is running before the HVM domain is unpaused?

I know it is confusing, but I am pretty sure that
/local/domain/0/device-model/{hvm-domid}/state is there to notify the
toolstack that the QEMU in the stubdom is up and running: the QEMU
writing to that path is the one running in the stubdom. I think we are
probably missing a comparable mechanism for upstream QEMU. The
toolstack (xl and libxl) talks to qemu-traditional via xenstore, but
talks to upstream QEMU via QMP over a unix socket.
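For reference, the qemu-traditional handshake is just a xenstore write
plus a wait: the device model writes "running" to the state node, and
the toolstack holds the guest paused until it sees that value. A rough
sketch of that wait as seen from dom0, assuming the xenstore-read and
xl binaries from xen-tools are in PATH and using a made-up guest domid
of 12:

    import subprocess
    import time

    DOMID = 12  # made-up guest domid, for illustration only
    STATE_PATH = "/local/domain/0/device-model/%d/state" % DOMID

    def read_state():
        # Return the node's contents, or None if it does not exist yet.
        try:
            out = subprocess.check_output(["xenstore-read", STATE_PATH])
        except subprocess.CalledProcessError:
            return None
        return out.decode("ascii", "replace").strip()

    deadline = time.time() + 30
    while time.time() < deadline:
        if read_state() == "running":
            # Device model is up; only now is it safe to unpause.
            subprocess.check_call(["xl", "unpause", str(DOMID)])
            break
        time.sleep(0.5)
    else:
        raise RuntimeError("device model never reached 'running'")

libxl does the equivalent internally with a proper xenstore watch
rather than polling.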
I think you first need to get QMP working in a stubdom, maybe by using
a PV console as the transport. Afterwards you could use QMP to check
the state of QEMU, and pause and unpause the domain accordingly.
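Once a QMP transport is in place, the check itself is small. A minimal
sketch of the QMP side, assuming QEMU was started with something like
-qmp unix:/var/run/xen/qmp-libxl-12,server,nowait (the socket path and
domid are made up for the example):

    import json
    import socket

    QMP_SOCKET = "/var/run/xen/qmp-libxl-12"  # made-up path/domid

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(QMP_SOCKET)
    f = sock.makefile("rw")

    def command(name):
        # Send one QMP command and return its "return" payload,
        # skipping any asynchronous events interleaved in the stream.
        f.write(json.dumps({"execute": name}) + "\n")
        f.flush()
        while True:
            resp = json.loads(f.readline())
            if "return" in resp:
                return resp["return"]
            if "error" in resp:
                raise RuntimeError(resp["error"]["desc"])

    json.loads(f.readline())       # consume the QMP greeting
    command("qmp_capabilities")    # leave capabilities-negotiation mode
    status = command("query-status")
    print("device model state: %s" % status["status"])
    # Only once this reports "running" would the toolstack unpause the
    # HVM domain (e.g. with "xl unpause <domid>").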