* live migrating hvm from 4.4 to 4.7 fails in ioreq server
@ 2016-05-11 12:18 Olaf Hering
  2016-05-11 12:22 ` Andrew Cooper
  0 siblings, 1 reply; 23+ messages in thread
From: Olaf Hering @ 2016-05-11 12:18 UTC (permalink / raw)
  To: xen-devel

Migrating an HVM guest from staging-4.4 to staging fails:

# cat /var/log/xen/qemu-dm-fv-x64-sles12sp1-clean--incoming.log
char device redirected to /dev/pts/3 (label serial0)
xen: ioreq server create: Cannot allocate memory
qemu-system-x86_64: xen hardware virtual machine initialisation failed

Looks like hvm_alloc_ioreq_gmfn finds no bit in
d->arch.hvm_domain.ioreq_gmfn.mask.  Is there a slim chance that 4.4 does not
know about HVM_PARAM_NR_IOREQ_SERVER_PAGES, and as a result 4.7 fails to
configure the guest properly?

domU.cfg looks like this:

name="x"
memory=256
serial="pty"
builder="hvm"
boot="cd"
disk=[
        'file:/disk0.raw,hda,w',
        'file:/some.iso,hdc:cdrom,r',
]
vif=[
        'bridge=br0'
]
keymap="de"
vfb = [
        'type=vnc,vncunused=1,keymap=de'
]
usb=1
usbdevice='tablet'


Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-05-11 12:18 live migrating hvm from 4.4 to 4.7 fails in ioreq server Olaf Hering
@ 2016-05-11 12:22 ` Andrew Cooper
  2016-05-11 12:30   ` Olaf Hering
  2016-05-11 12:38   ` Paul Durrant
  0 siblings, 2 replies; 23+ messages in thread
From: Andrew Cooper @ 2016-05-11 12:22 UTC (permalink / raw)
  To: Olaf Hering, xen-devel, Paul Durrant

On 11/05/16 13:18, Olaf Hering wrote:
> Migrating a HVM guest from staging-4.4 to staging fails:
>
> # cat /var/log/xen/qemu-dm-fv-x64-sles12sp1-clean--incoming.log
> char device redirected to /dev/pts/3 (label serial0)
> xen: ioreq server create: Cannot allocate memory
> qemu-system-x86_64: xen hardware virtual machine initialisation failed
>
> Looks like hvm_alloc_ioreq_gmfn finds no bit in
> d->arch.hvm_domain.ioreq_gmfn.mask.  Is there a slim chance that 4.4 does not
> know about HVM_PARAM_NR_IOREQ_SERVER_PAGES, and as a result 4.7 fails to
> configure the guest properly?

HVM_PARAM_NR_IOREQ_SERVER_PAGES was introduced in 4.6 iirc.  CC'ing Paul
who did this work.

~Andrew


* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-05-11 12:22 ` Andrew Cooper
@ 2016-05-11 12:30   ` Olaf Hering
  2016-05-11 13:07     ` Olaf Hering
  2016-05-11 12:38   ` Paul Durrant
  1 sibling, 1 reply; 23+ messages in thread
From: Olaf Hering @ 2016-05-11 12:30 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Paul Durrant, xen-devel

On Wed, May 11, Andrew Cooper wrote:

> On 11/05/16 13:18, Olaf Hering wrote:
> > Migrating a HVM guest from staging-4.4 to staging fails:
> >
> > # cat /var/log/xen/qemu-dm-fv-x64-sles12sp1-clean--incoming.log
> > char device redirected to /dev/pts/3 (label serial0)
> > xen: ioreq server create: Cannot allocate memory
> > qemu-system-x86_64: xen hardware virtual machine initialisation failed
> >
> > Looks like hvm_alloc_ioreq_gmfn finds no bit in
> > d->arch.hvm_domain.ioreq_gmfn.mask.  Is there a slim chance that 4.4 does not
> > know about HVM_PARAM_NR_IOREQ_SERVER_PAGES, and as a result 4.7 fails to
> > configure the guest properly?
> 
> HVM_PARAM_NR_IOREQ_SERVER_PAGES was introduced in 4.6 iirc.  CC'ing Paul
> who did this work.

Migration from staging-4.4 to staging-4.6 fails in the same way. We did
not have a 4.6-based Xen, so no one noticed until now.

Olaf


* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-05-11 12:22 ` Andrew Cooper
  2016-05-11 12:30   ` Olaf Hering
@ 2016-05-11 12:38   ` Paul Durrant
  2016-05-12 10:55     ` Wei Liu
  1 sibling, 1 reply; 23+ messages in thread
From: Paul Durrant @ 2016-05-11 12:38 UTC (permalink / raw)
  To: Andrew Cooper, Olaf Hering, xen-devel

> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: 11 May 2016 13:23
> To: Olaf Hering; xen-devel@lists.xen.org; Paul Durrant
> Subject: Re: [Xen-devel] live migrating hvm from 4.4 to 4.7 fails in ioreq
> server
> 
> On 11/05/16 13:18, Olaf Hering wrote:
> > Migrating a HVM guest from staging-4.4 to staging fails:
> >
> > # cat /var/log/xen/qemu-dm-fv-x64-sles12sp1-clean--incoming.log
> > char device redirected to /dev/pts/3 (label serial0)
> > xen: ioreq server create: Cannot allocate memory
> > qemu-system-x86_64: xen hardware virtual machine initialisation failed
> >
> > Looks like hvm_alloc_ioreq_gmfn finds no bit in
> > d->arch.hvm_domain.ioreq_gmfn.mask.  Is there a slim chance that 4.4
> > does not know about HVM_PARAM_NR_IOREQ_SERVER_PAGES, and as a result
> > 4.7 fails to configure the guest properly?
> 
> HVM_PARAM_NR_IOREQ_SERVER_PAGES was introduced in 4.6 iirc.  CC'ing
> Paul who did this work.

The problem is that the new QEMU will assume that the guest was provisioned with ioreq server pages. Somehow it needs to know to behave as a 'default' ioreq server (as qemu-trad would), in which case the compatibility code in the hypervisor would DTRT. I guess it would be ok to just have QEMU fall back to the old 'default' HVM param mechanism if creation of an IOREQ server fails. The only other way out would be to allow Xen to 'steal' the default server's pages if it doesn't exist.
The former obviously requires a patch to QEMU (but the compat code already exists as a compile-time option, so it's probably a small-ish change) and the latter requires a patch to Xen. Which is preferable at this stage?

  Paul


* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-05-11 12:30   ` Olaf Hering
@ 2016-05-11 13:07     ` Olaf Hering
  2016-05-12 10:53       ` Wei Liu
  0 siblings, 1 reply; 23+ messages in thread
From: Olaf Hering @ 2016-05-11 13:07 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Paul Durrant, xen-devel

On Wed, May 11, Olaf Hering wrote:

> Migration from staging-4.4 to staging-4.6 fails in the same way. We did
> not have a 4.6-based Xen, so no one noticed until now.

And migration from staging-4.5 to staging works. So this leaves
staging-4.4 as the only problematic source.

Olaf


* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-05-11 13:07     ` Olaf Hering
@ 2016-05-12 10:53       ` Wei Liu
  0 siblings, 0 replies; 23+ messages in thread
From: Wei Liu @ 2016-05-12 10:53 UTC (permalink / raw)
  To: Olaf Hering; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, xen-devel

On Wed, May 11, 2016 at 03:07:16PM +0200, Olaf Hering wrote:
> On Wed, May 11, Olaf Hering wrote:
> 
> > Migration from staging-4.4 to staging-4.6 fails in the same way. We did
> > not have a 4.6-based Xen, so no one noticed until now.
> 
> And migration from staging-4.5 to staging works as well. So this leaves
> staging-4.4.
> 

`git log` shows that support for multiple ioreq servers was added in
4.5. That probably explains it.

Wei.


* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-05-11 12:38   ` Paul Durrant
@ 2016-05-12 10:55     ` Wei Liu
  2016-05-12 12:39       ` Paul Durrant
  0 siblings, 1 reply; 23+ messages in thread
From: Wei Liu @ 2016-05-12 10:55 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, Olaf Hering, Wei Liu, xen-devel

On Wed, May 11, 2016 at 12:38:46PM +0000, Paul Durrant wrote:
> > [snip]
> 
> The problem is because the new QEMU will assume that the guest was provisioned with ioreq server pages. Somehow it needs to know to behave as a 'default' ioreq server (as qemu trad would) in which case the compatibility code in the hypervisor would DTRT. I guess it would be ok to just have QEMU fall back to the old 'default' HVM param mechanism if creation of an IOREQ server fails. The only other way out would be allow Xen to 'steal' the default server's pages if it doesn't exist.
> The former obviously requires a patch to QEMU (but the compat code already exists as a compile-time option so it's probably a small-ish change) and the latter requires a patch to Xen. Which is more preferable at this stage?
> 

Please help me understand: both ways require patching the latest xen.git or
qemu.git, not patching Xen 4.4 or the QEMU shipped with 4.4. Right?

Wei.


* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-05-12 10:55     ` Wei Liu
@ 2016-05-12 12:39       ` Paul Durrant
  2016-05-12 13:01         ` Wei Liu
  0 siblings, 1 reply; 23+ messages in thread
From: Paul Durrant @ 2016-05-12 12:39 UTC (permalink / raw)
  Cc: Andrew Cooper, Olaf Hering, Wei Liu, xen-devel

> -----Original Message-----
> From: Wei Liu [mailto:wei.liu2@citrix.com]
> Sent: 12 May 2016 11:56
> To: Paul Durrant
> Cc: Andrew Cooper; Olaf Hering; xen-devel@lists.xen.org; Wei Liu
> Subject: Re: [Xen-devel] live migrating hvm from 4.4 to 4.7 fails in ioreq
> server
> 
> [snip]
> 
> Please help me understand: both ways require patching latest xen.git or
> qemu.git, not patching xen 4.4 or the qemu shipped in 4.4. Right?
> 

Right. We either have to make QEMU accept that a VM can't support the ioreq server hypercalls, or have Xen make them work for old VMs. Nothing has to be done to the older Xen or QEMU.

  Paul


* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-05-12 12:39       ` Paul Durrant
@ 2016-05-12 13:01         ` Wei Liu
  2016-05-12 13:03           ` Paul Durrant
  2016-05-12 13:38           ` Stefano Stabellini
  0 siblings, 2 replies; 23+ messages in thread
From: Wei Liu @ 2016-05-12 13:01 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Olaf Hering, Stefano Stabellini, Wei Liu, Andrew Cooper,
	xen-devel, Anthony PERARD

On Thu, May 12, 2016 at 01:39:49PM +0100, Paul Durrant wrote:
> [snip]
> 
> Right. We either have to make QEMU accept that a VM can't support the ioreq server hypercalls, or have Xen make them work for old VMs. Nothing has to be done to the older Xen or QEMU.
> 

OK. Thanks for the explanation.

I'm neither the QEMU maintainer nor the Xen maintainer, so I've CC'ed
Anthony and Stefano for you.

If I were to choose, I would choose to patch QEMU to keep the hypervisor
as simple as possible.

From a release point of view, both ways require us to put a patch
in-tree, so it doesn't make much of a difference to me.

Wei.


* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-05-12 13:01         ` Wei Liu
@ 2016-05-12 13:03           ` Paul Durrant
  2016-05-12 13:18             ` Olaf Hering
  2016-05-12 14:10             ` Wei Liu
  2016-05-12 13:38           ` Stefano Stabellini
  1 sibling, 2 replies; 23+ messages in thread
From: Paul Durrant @ 2016-05-12 13:03 UTC (permalink / raw)
  Cc: Olaf Hering, Stefano Stabellini, Wei Liu, Andrew Cooper,
	xen-devel, Anthony Perard

> -----Original Message-----
> From: Wei Liu [mailto:wei.liu2@citrix.com]
> Sent: 12 May 2016 14:02
> To: Paul Durrant
> Cc: Wei Liu; Andrew Cooper; Olaf Hering; xen-devel@lists.xen.org; Anthony
> Perard; Stefano Stabellini
> Subject: Re: [Xen-devel] live migrating hvm from 4.4 to 4.7 fails in ioreq
> server
> 
> [snip]
> 
> OK. Thanks for the explanation.
> 
> I'm neither the QEMU maintainer nor the Xen maintainer, so I've CC'ed
> Anthony and Stefano for you.
> 
> If I were to choose, I would choose to patch QEMU to keep the hypervisor
> as simple as possible.
> 
> From a release point of view, both ways require us to put a patch
> in-tree, so it doesn't make much of a difference to me.
>

Ok. Do you regard this as a critical issue for 4.7?

  Paul
 

* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-05-12 13:03           ` Paul Durrant
@ 2016-05-12 13:18             ` Olaf Hering
  2016-05-12 14:10             ` Wei Liu
  1 sibling, 0 replies; 23+ messages in thread
From: Olaf Hering @ 2016-05-12 13:18 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Anthony Perard, Andrew Cooper, Stefano Stabellini, Wei Liu, xen-devel

On Thu, May 12, Paul Durrant wrote:

> Ok. Do you regard this as a critical issue for 4.7?

I do, coming from 4.4 ;-)

Olaf


* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-05-12 13:01         ` Wei Liu
  2016-05-12 13:03           ` Paul Durrant
@ 2016-05-12 13:38           ` Stefano Stabellini
  1 sibling, 0 replies; 23+ messages in thread
From: Stefano Stabellini @ 2016-05-12 13:38 UTC (permalink / raw)
  To: Wei Liu
  Cc: Olaf Hering, Stefano Stabellini, Andrew Cooper, xen-devel,
	Paul Durrant, Anthony PERARD

On Thu, 12 May 2016, Wei Liu wrote:
> [snip]
> 
> OK. Thanks for the explanation.
> 
> I'm neither the QEMU maintainer nor the Xen maintainer, so I've CC'ed
> Anthony and Stefano for you.
> 
> If I were to choose, I would choose to patch QEMU to keep the hypervisor
> as simple as possible.
> 
> From a release point of view, both ways require us to put a patch
> in-tree, so it doesn't make much of a difference to me.

I think it might be better to fix QEMU too.


* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-05-12 13:03           ` Paul Durrant
  2016-05-12 13:18             ` Olaf Hering
@ 2016-05-12 14:10             ` Wei Liu
  2016-05-12 14:13               ` Paul Durrant
  1 sibling, 1 reply; 23+ messages in thread
From: Wei Liu @ 2016-05-12 14:10 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Olaf Hering, Stefano Stabellini, Wei Liu, Andrew Cooper,
	xen-devel, Anthony Perard

On Thu, May 12, 2016 at 02:03:31PM +0100, Paul Durrant wrote:
> [snip]
> 
> Ok. Do you regard this as a critical issue for 4.7?
> 

Our general support statement is to support N->N+1 migration, so it is
not really critical for me. On the other hand, if the fix is not overly
complex, it would be nice to have for 4.7.

Note that the fix will need to be in upstream QEMU first before it can
be cherry-picked into our tree, so there is a risk that it might get
blocked on the QEMU side (I haven't checked their schedule). So I
wouldn't block the Xen release just for that.

If for some reason (either you don't have time or the patch is blocked
on the QEMU side) the fix doesn't make 4.7.0, I would suggest the QEMU
maintainer backport it for 4.7.1 etc.

Wei.


* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-05-12 14:10             ` Wei Liu
@ 2016-05-12 14:13               ` Paul Durrant
  2016-05-25 20:57                 ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 23+ messages in thread
From: Paul Durrant @ 2016-05-12 14:13 UTC (permalink / raw)
  Cc: Olaf Hering, Stefano Stabellini, Wei Liu, Andrew Cooper,
	xen-devel, Anthony Perard

> -----Original Message-----
[snip]
> >
> > Ok. Do you regard this as a critical issue for 4.7?
> >
> 
> Our general support statement is to support N->N+1 migration, so it is
> not really critical for me. On the other hand, if the fix is not overly
> complex, it would be nice to have for 4.7.
> 
> Note that the fix will need to be in upstream QEMU first before it can
> be cherry-picked to our tree, so there is risk that it might just be
> blocked on QEMU side (I haven't checked their schedule). So I wouldn't
> really block xen release just for that.
> 

Ok.

> If for some reason (either you don't have time or the patch is blocked
> on QEMU side) the fix doesn't make 4.7.0 I would suggest QEMU maintainer
> to backport to 4.7.1 etc.
> 

I'll try to get to it as soon as I can, but my guess is that it will miss 4.7.0.

  Paul


* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-05-12 14:13               ` Paul Durrant
@ 2016-05-25 20:57                 ` Konrad Rzeszutek Wilk
  2016-05-26  8:30                   ` Paul Durrant
  0 siblings, 1 reply; 23+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-05-25 20:57 UTC (permalink / raw)
  To: Paul Durrant, zhigang.x.wang
  Cc: Olaf Hering, Stefano Stabellini, Wei Liu, Andrew Cooper,
	xen-devel, Anthony Perard

On Thu, May 12, 2016 at 02:13:21PM +0000, Paul Durrant wrote:
> > -----Original Message-----
> [snip]
> > >
> > > Ok. Do you regard this as a critical issue for 4.7?
> > >
> > 
> > Our general support statement is to support N->N+1 migration, so it is
> > not really critical for me. On the other hand, if the fix is not overly
> > complex, it would be nice to have for 4.7.
> > 
> > Note that the fix will need to be in upstream QEMU first before it can
> > be cherry-picked to our tree, so there is risk that it might just be
> > blocked on QEMU side (I haven't checked their schedule). So I wouldn't
> > really block xen release just for that.
> > 
> 
> Ok.
> 
> > If for some reason (either you don't have time or the patch is blocked
> > on QEMU side) the fix doesn't make 4.7.0 I would suggest QEMU maintainer
> > to backport to 4.7.1 etc.
> > 
> 
> I'll try to get to it as soon as I can, but my guess is that it will miss 4.7.0.

+CC Zhigang.

Any ideas on the timeline for this fix? Thanks!
> 
>   Paul
> 

* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-05-25 20:57                 ` Konrad Rzeszutek Wilk
@ 2016-05-26  8:30                   ` Paul Durrant
  2016-06-03 20:07                     ` Konrad Rzeszutek Wilk
  2016-07-26 15:45                     ` Olaf Hering
  0 siblings, 2 replies; 23+ messages in thread
From: Paul Durrant @ 2016-05-26  8:30 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk, zhigang.x.wang
  Cc: Olaf Hering, Stefano Stabellini, Wei Liu, Andrew Cooper,
	xen-devel, Anthony Perard

> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: 25 May 2016 21:58
> To: Paul Durrant; zhigang.x.wang@oracle.com
> Cc: Wei Liu; Olaf Hering; Stefano Stabellini; Andrew Cooper; xen-
> devel@lists.xen.org; Anthony Perard
> Subject: Re: [Xen-devel] live migrating hvm from 4.4 to 4.7 fails in ioreq
> server
> 
> On Thu, May 12, 2016 at 02:13:21PM +0000, Paul Durrant wrote:
> > > -----Original Message-----
> > [snip]
> > > >
> > > > Ok. Do you regard this as a critical issue for 4.7?
> > > >
> > >
> > > Our general support statement is to support N->N+1 migration, so it is
> > > not really critical for me. On the other hand, if the fix is not overly
> > > complex, it would be nice to have for 4.7.
> > >
> > > Note that the fix will need to be in upstream QEMU first before it can
> > > be cherry-picked to our tree, so there is risk that it might just be
> > > blocked on QEMU side (I haven't checked their schedule). So I wouldn't
> > > really block xen release just for that.
> > >
> >
> > Ok.
> >
> > > If for some reason (either you don't have time or the patch is blocked
> > > on QEMU side) the fix doesn't make 4.7.0 I would suggest QEMU
> maintainer
> > > to backport to 4.7.1 etc.
> > >
> >
> > I'll try to get to it as soon as I can, but my guess is that it will miss 4.7.0.
> 
> +CC Zhigang.
> 
> Any ideas on the timeline for this fix? Thanks!

It's likely to be a while before I can find some time for this; a rough guess would be a month... It depends on how other stuff pans out.

  Paul

> >
> >   Paul
> >

* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-05-26  8:30                   ` Paul Durrant
@ 2016-06-03 20:07                     ` Konrad Rzeszutek Wilk
  2016-07-26 15:45                     ` Olaf Hering
  1 sibling, 0 replies; 23+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-06-03 20:07 UTC (permalink / raw)
  To: Paul Durrant, annie.li
  Cc: Olaf Hering, Stefano Stabellini, Wei Liu, Andrew Cooper,
	xen-devel, Anthony Perard, zhigang.x.wang

On Thu, May 26, 2016 at 08:30:43AM +0000, Paul Durrant wrote:
> > -----Original Message-----
> > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > Sent: 25 May 2016 21:58
> > To: Paul Durrant; zhigang.x.wang@oracle.com
> > Cc: Wei Liu; Olaf Hering; Stefano Stabellini; Andrew Cooper; xen-
> > devel@lists.xen.org; Anthony Perard
> > Subject: Re: [Xen-devel] live migrating hvm from 4.4 to 4.7 fails in ioreq
> > server
> > 
> > On Thu, May 12, 2016 at 02:13:21PM +0000, Paul Durrant wrote:
> > > > -----Original Message-----
> > > [snip]
> > > > >
> > > > > Ok. Do you regard this as a critical issue for 4.7?
> > > > >
> > > >
> > > > Our general support statement is to support N->N+1 migration, so it is
> > > > not really critical for me. On the other hand, if the fix is not overly
> > > > complex, it would be nice to have for 4.7.
> > > >
> > > > Note that the fix will need to be in upstream QEMU first before it can
> > > > be cherry-picked to our tree, so there is risk that it might just be
> > > > blocked on QEMU side (I haven't checked their schedule). So I wouldn't
> > > > really block xen release just for that.
> > > >
> > >
> > > Ok.
> > >
> > > > If for some reason (either you don't have time or the patch is blocked
> > > > on QEMU side) the fix doesn't make 4.7.0 I would suggest QEMU
> > maintainer
> > > > to backport to 4.7.1 etc.
> > > >
> > >
> > > I'll try to get to it as soon as I can, but my guess is that it will miss 4.7.0.
> > 
> > +CC Zhigang.
> > 
> > Any ideas on the timeline for this fix? Thanks!
> 
> It's likely to be a while before I could find some time for this; rough guess would be a month... It depends how other stuff pans out.

+CC Annie
> 
>   Paul
> 
> > >
> > >   Paul
> > >

* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-05-26  8:30                   ` Paul Durrant
  2016-06-03 20:07                     ` Konrad Rzeszutek Wilk
@ 2016-07-26 15:45                     ` Olaf Hering
  2016-07-26 15:48                       ` Paul Durrant
  1 sibling, 1 reply; 23+ messages in thread
From: Olaf Hering @ 2016-07-26 15:45 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, xen-devel,
	Anthony Perard, zhigang.x.wang



On Thu, May 26, Paul Durrant wrote:

> It's likely to be a while before I could find some time for this;
> rough guess would be a month... It depends how other stuff pans out.

Any news, Paul? Did you have a chance to compose a fix?

Olaf


* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-07-26 15:45                     ` Olaf Hering
@ 2016-07-26 15:48                       ` Paul Durrant
  2016-07-29 10:11                         ` Paul Durrant
  0 siblings, 1 reply; 23+ messages in thread
From: Paul Durrant @ 2016-07-26 15:48 UTC (permalink / raw)
  To: Olaf Hering
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, xen-devel,
	Anthony Perard, zhigang.x.wang

> -----Original Message-----
> From: Olaf Hering [mailto:olaf@aepfle.de]
> Sent: 26 July 2016 16:45
> To: Paul Durrant
> Cc: Konrad Rzeszutek Wilk; zhigang.x.wang@oracle.com; Wei Liu; Stefano
> Stabellini; Andrew Cooper; xen-devel@lists.xen.org; Anthony Perard
> Subject: Re: [Xen-devel] live migrating hvm from 4.4 to 4.7 fails in ioreq
> server
> 
> On Thu, May 26, Paul Durrant wrote:
> 
> > It's likely to be a while before I could find some time for this;
> > rough guess would be a month... It depends how other stuff pans out.
> 
> Any news, Paul? Did you have a chance to compose a fix?
> 

Nope, not yet. I may get some time this week now that other stuff has died down.

  Paul

> Olaf

* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-07-26 15:48                       ` Paul Durrant
@ 2016-07-29 10:11                         ` Paul Durrant
  2016-07-29 10:34                           ` Paul Durrant
  0 siblings, 1 reply; 23+ messages in thread
From: Paul Durrant @ 2016-07-29 10:11 UTC (permalink / raw)
  To: Paul Durrant, Olaf Hering
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, xen-devel,
	Anthony Perard, zhigang.x.wang


> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of
> Paul Durrant
> Sent: 26 July 2016 16:49
> To: Olaf Hering
> Cc: Stefano Stabellini; Wei Liu; Andrew Cooper; xen-devel@lists.xen.org;
> Anthony Perard; zhigang.x.wang@oracle.com
> Subject: Re: [Xen-devel] live migrating hvm from 4.4 to 4.7 fails in ioreq
> server
> 
> > -----Original Message-----
> > From: Olaf Hering [mailto:olaf@aepfle.de]
> > Sent: 26 July 2016 16:45
> > To: Paul Durrant
> > Cc: Konrad Rzeszutek Wilk; zhigang.x.wang@oracle.com; Wei Liu; Stefano
> > Stabellini; Andrew Cooper; xen-devel@lists.xen.org; Anthony Perard
> > Subject: Re: [Xen-devel] live migrating hvm from 4.4 to 4.7 fails in ioreq
> > server
> >
> > On Thu, May 26, Paul Durrant wrote:
> >
> > > It's likely to be a while before I could find some time for this;
> > > rough guess would be a month... It depends how other stuff pans out.
> >
> > Any news, Paul? Did you have a chance to compose a fix?
> >
> 
> Nope, not yet. I may get some time this week now that other stuff has died
> down.

Olaf,

  Could you give the attached patch a try? I believe it should solve the problem.

  Paul

> 
>   Paul
> 
> > Olaf

[-- Attachment #2: 0001-xen-handle-inbound-migration-of-VMs-without-ioreq-se.patch --]
[-- Type: application/octet-stream, Size: 10723 bytes --]

From 6d94fa9791b21a5af082d7c49da7aae9c4e6e81f Mon Sep 17 00:00:00 2001
From: Paul Durrant <paul.durrant@citrix.com>
Date: Fri, 29 Jul 2016 09:37:41 +0100
Subject: [PATCH] xen: handle inbound migration of VMs without ioreq server
 pages

VMs created on older versions of Xen will not have been provisioned with
pages to support creation of non-default ioreq servers. In this case
the ioreq server API is not supported and QEMU's only option is to fall
back to using the default ioreq server pages as it did prior to
commit 3996e85c ("Xen: Use the ioreq-server API when available").

This patch therefore changes the code in xen_common.h to stop considering
a failure of xc_hvm_create_ioreq_server() as a hard failure but simply
as an indication that the guest is too old to support the ioreq server
API. Instead a boolean is set to cause reversion to old behaviour such
that the default ioreq server is then used.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
 include/hw/xen/xen_common.h | 123 +++++++++++++++++++++++++++++++-------------
 trace-events                |   1 +
 xen-hvm.c                   |   6 +--
 3 files changed, 90 insertions(+), 40 deletions(-)

diff --git a/include/hw/xen/xen_common.h b/include/hw/xen/xen_common.h
index 640c31e..f2c008a 100644
--- a/include/hw/xen/xen_common.h
+++ b/include/hw/xen/xen_common.h
@@ -107,6 +107,42 @@ static inline int xen_get_vmport_regs_pfn(xc_interface *xc, domid_t dom,
 
 #endif
 
+static inline int xen_get_default_ioreq_server_info(xc_interface *xc, domid_t dom,
+                                                    xen_pfn_t *ioreq_pfn,
+                                                    xen_pfn_t *bufioreq_pfn,
+                                                    evtchn_port_t *bufioreq_evtchn)
+{
+    unsigned long param;
+    int rc;
+
+    rc = xc_get_hvm_param(xc, dom, HVM_PARAM_IOREQ_PFN, &param);
+    if (rc < 0) {
+        fprintf(stderr, "failed to get HVM_PARAM_IOREQ_PFN\n");
+        return -1;
+    }
+
+    *ioreq_pfn = param;
+
+    rc = xc_get_hvm_param(xc, dom, HVM_PARAM_BUFIOREQ_PFN, &param);
+    if (rc < 0) {
+        fprintf(stderr, "failed to get HVM_PARAM_BUFIOREQ_PFN\n");
+        return -1;
+    }
+
+    *bufioreq_pfn = param;
+
+    rc = xc_get_hvm_param(xc, dom, HVM_PARAM_BUFIOREQ_EVTCHN,
+                          &param);
+    if (rc < 0) {
+        fprintf(stderr, "failed to get HVM_PARAM_BUFIOREQ_EVTCHN\n");
+        return -1;
+    }
+
+    *bufioreq_evtchn = param;
+
+    return 0;
+}
+
 /* Xen before 4.5 */
 #if CONFIG_XEN_CTRL_INTERFACE_VERSION < 450
 
@@ -154,10 +190,9 @@ static inline void xen_unmap_pcidev(xc_interface *xc, domid_t dom,
 {
 }
 
-static inline int xen_create_ioreq_server(xc_interface *xc, domid_t dom,
-                                          ioservid_t *ioservid)
+static inline void xen_create_ioreq_server(xc_interface *xc, domid_t dom,
+                                           ioservid_t *ioservid)
 {
-    return 0;
 }
 
 static inline void xen_destroy_ioreq_server(xc_interface *xc, domid_t dom,
@@ -171,35 +206,8 @@ static inline int xen_get_ioreq_server_info(xc_interface *xc, domid_t dom,
                                             xen_pfn_t *bufioreq_pfn,
                                             evtchn_port_t *bufioreq_evtchn)
 {
-    unsigned long param;
-    int rc;
-
-    rc = xc_get_hvm_param(xc, dom, HVM_PARAM_IOREQ_PFN, &param);
-    if (rc < 0) {
-        fprintf(stderr, "failed to get HVM_PARAM_IOREQ_PFN\n");
-        return -1;
-    }
-
-    *ioreq_pfn = param;
-
-    rc = xc_get_hvm_param(xc, dom, HVM_PARAM_BUFIOREQ_PFN, &param);
-    if (rc < 0) {
-        fprintf(stderr, "failed to get HVM_PARAM_BUFIOREQ_PFN\n");
-        return -1;
-    }
-
-    *bufioreq_pfn = param;
-
-    rc = xc_get_hvm_param(xc, dom, HVM_PARAM_BUFIOREQ_EVTCHN,
-                          &param);
-    if (rc < 0) {
-        fprintf(stderr, "failed to get HVM_PARAM_BUFIOREQ_EVTCHN\n");
-        return -1;
-    }
-
-    *bufioreq_evtchn = param;
-
-    return 0;
+    return xen_get_default_ioreq_server_info(xc, dom, ioreq_pfn, bufioreq_pfn,
+                                             bufioreq_evtchn;
 }
 
 static inline int xen_set_ioreq_server_state(xc_interface *xc, domid_t dom,
@@ -212,6 +220,8 @@ static inline int xen_set_ioreq_server_state(xc_interface *xc, domid_t dom,
 /* Xen 4.5 */
 #else
 
+static bool use_default_ioreq_server;
+
 static inline void xen_map_memory_section(xc_interface *xc, domid_t dom,
                                           ioservid_t ioservid,
                                           MemoryRegionSection *section)
@@ -220,6 +230,10 @@ static inline void xen_map_memory_section(xc_interface *xc, domid_t dom,
     ram_addr_t size = int128_get64(section->size);
     hwaddr end_addr = start_addr + size - 1;
 
+    if (use_default_ioreq_server) {
+        return;
+    }
+
     trace_xen_map_mmio_range(ioservid, start_addr, end_addr);
     xc_hvm_map_io_range_to_ioreq_server(xc, dom, ioservid, 1,
                                         start_addr, end_addr);
@@ -233,6 +247,11 @@ static inline void xen_unmap_memory_section(xc_interface *xc, domid_t dom,
     ram_addr_t size = int128_get64(section->size);
     hwaddr end_addr = start_addr + size - 1;
 
+    if (use_default_ioreq_server) {
+        return;
+    }
+
+
     trace_xen_unmap_mmio_range(ioservid, start_addr, end_addr);
     xc_hvm_unmap_io_range_from_ioreq_server(xc, dom, ioservid, 1,
                                             start_addr, end_addr);
@@ -246,6 +265,11 @@ static inline void xen_map_io_section(xc_interface *xc, domid_t dom,
     ram_addr_t size = int128_get64(section->size);
     hwaddr end_addr = start_addr + size - 1;
 
+    if (use_default_ioreq_server) {
+        return;
+    }
+
+
     trace_xen_map_portio_range(ioservid, start_addr, end_addr);
     xc_hvm_map_io_range_to_ioreq_server(xc, dom, ioservid, 0,
                                         start_addr, end_addr);
@@ -259,6 +283,10 @@ static inline void xen_unmap_io_section(xc_interface *xc, domid_t dom,
     ram_addr_t size = int128_get64(section->size);
     hwaddr end_addr = start_addr + size - 1;
 
+    if (use_default_ioreq_server) {
+        return;
+    }
+
     trace_xen_unmap_portio_range(ioservid, start_addr, end_addr);
     xc_hvm_unmap_io_range_from_ioreq_server(xc, dom, ioservid, 0,
                                             start_addr, end_addr);
@@ -268,6 +296,10 @@ static inline void xen_map_pcidev(xc_interface *xc, domid_t dom,
                                   ioservid_t ioservid,
                                   PCIDevice *pci_dev)
 {
+    if (use_default_ioreq_server) {
+        return;
+    }
+
     trace_xen_map_pcidev(ioservid, pci_bus_num(pci_dev->bus),
                          PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn));
     xc_hvm_map_pcidev_to_ioreq_server(xc, dom, ioservid,
@@ -280,6 +312,10 @@ static inline void xen_unmap_pcidev(xc_interface *xc, domid_t dom,
                                     ioservid_t ioservid,
                                     PCIDevice *pci_dev)
 {
+    if (use_default_ioreq_server) {
+        return;
+    }
+
     trace_xen_unmap_pcidev(ioservid, pci_bus_num(pci_dev->bus),
                            PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn));
     xc_hvm_unmap_pcidev_from_ioreq_server(xc, dom, ioservid,
@@ -288,22 +324,29 @@ static inline void xen_unmap_pcidev(xc_interface *xc, domid_t dom,
                                           PCI_FUNC(pci_dev->devfn));
 }
 
-static inline int xen_create_ioreq_server(xc_interface *xc, domid_t dom,
-                                          ioservid_t *ioservid)
+static inline void xen_create_ioreq_server(xc_interface *xc, domid_t dom,
+                                           ioservid_t *ioservid)
 {
     int rc = xc_hvm_create_ioreq_server(xc, dom, HVM_IOREQSRV_BUFIOREQ_ATOMIC,
                                         ioservid);
 
     if (rc == 0) {
         trace_xen_ioreq_server_create(*ioservid);
+        return;
     }
 
-    return rc;
+    *ioservid = 0;
+    use_default_ioreq_server = true;
+    trace_xen_default_ioreq_server();
 }
 
 static inline void xen_destroy_ioreq_server(xc_interface *xc, domid_t dom,
                                             ioservid_t ioservid)
 {
+    if (use_default_ioreq_server) {
+        return;
+    }
+
     trace_xen_ioreq_server_destroy(ioservid);
     xc_hvm_destroy_ioreq_server(xc, dom, ioservid);
 }
@@ -314,6 +357,12 @@ static inline int xen_get_ioreq_server_info(xc_interface *xc, domid_t dom,
                                             xen_pfn_t *bufioreq_pfn,
                                             evtchn_port_t *bufioreq_evtchn)
 {
+    if (use_default_ioreq_server) {
+        return xen_get_default_ioreq_server_info(xc, dom, ioreq_pfn,
+                                                 bufioreq_pfn,
+                                                 bufioreq_evtchn);
+    }
+
     return xc_hvm_get_ioreq_server_info(xc, dom, ioservid,
                                         ioreq_pfn, bufioreq_pfn,
                                         bufioreq_evtchn);
@@ -323,6 +372,10 @@ static inline int xen_set_ioreq_server_state(xc_interface *xc, domid_t dom,
                                              ioservid_t ioservid,
                                              bool enable)
 {
+    if (use_default_ioreq_server) {
+        return 0;
+    }
+
     trace_xen_ioreq_server_state(ioservid, enable);
     return xc_hvm_set_ioreq_server_state(xc, dom, ioservid, enable);
 }
diff --git a/trace-events b/trace-events
index 52c6a6c..616cc52 100644
--- a/trace-events
+++ b/trace-events
@@ -60,6 +60,7 @@ spice_vmc_event(int event) "spice vmc event %d"
 # xen-hvm.c
 xen_ram_alloc(unsigned long ram_addr, unsigned long size) "requested: %#lx, size %#lx"
 xen_client_set_memory(uint64_t start_addr, unsigned long size, bool log_dirty) "%#"PRIx64" size %#lx, log_dirty %i"
+xen_default_ioreq_server(void) ""
 xen_ioreq_server_create(uint32_t id) "id: %u"
 xen_ioreq_server_destroy(uint32_t id) "id: %u"
 xen_ioreq_server_state(uint32_t id, bool enable) "id: %u: enable: %i"
diff --git a/xen-hvm.c b/xen-hvm.c
index eb57792..cc3d4b0 100644
--- a/xen-hvm.c
+++ b/xen-hvm.c
@@ -1203,11 +1203,7 @@ void xen_hvm_init(PCMachineState *pcms, MemoryRegion **ram_memory)
         goto err;
     }
 
-    rc = xen_create_ioreq_server(xen_xc, xen_domid, &state->ioservid);
-    if (rc < 0) {
-        perror("xen: ioreq server create");
-        goto err;
-    }
+    xen_create_ioreq_server(xen_xc, xen_domid, &state->ioservid);
 
     state->exit.notify = xen_exit_notifier;
     qemu_add_exit_notifier(&state->exit);
-- 
2.1.4



* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-07-29 10:11                         ` Paul Durrant
@ 2016-07-29 10:34                           ` Paul Durrant
  2016-07-29 13:10                             ` Olaf Hering
  0 siblings, 1 reply; 23+ messages in thread
From: Paul Durrant @ 2016-07-29 10:34 UTC (permalink / raw)
  To: Olaf Hering
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, xen-devel,
	Anthony Perard, zhigang.x.wang


> -----Original Message-----
> From: Paul Durrant
> Sent: 29 July 2016 11:12
> To: Paul Durrant; Olaf Hering
> Cc: Stefano Stabellini; Wei Liu; Andrew Cooper; xen-devel@lists.xen.org;
> Anthony Perard; zhigang.x.wang@oracle.com
> Subject: RE: [Xen-devel] live migrating hvm from 4.4 to 4.7 fails in ioreq
> server
> 
> > -----Original Message-----
> > From: Xen-devel [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of
> > Paul Durrant
> > Sent: 26 July 2016 16:49
> > To: Olaf Hering
> > Cc: Stefano Stabellini; Wei Liu; Andrew Cooper; xen-devel@lists.xen.org;
> > Anthony Perard; zhigang.x.wang@oracle.com
> > Subject: Re: [Xen-devel] live migrating hvm from 4.4 to 4.7 fails in ioreq
> > server
> >
> > > -----Original Message-----
> > > From: Olaf Hering [mailto:olaf@aepfle.de]
> > > Sent: 26 July 2016 16:45
> > > To: Paul Durrant
> > > Cc: Konrad Rzeszutek Wilk; zhigang.x.wang@oracle.com; Wei Liu; Stefano
> > > Stabellini; Andrew Cooper; xen-devel@lists.xen.org; Anthony Perard
> > > Subject: Re: [Xen-devel] live migrating hvm from 4.4 to 4.7 fails in ioreq
> > > server
> > >
> > > On Thu, May 26, Paul Durrant wrote:
> > >
> > > > It's likely to be a while before I could find some time for this;
> > > > rough guess would be a month... It depends how other stuff pans out.
> > >
> > > Any news, Paul? Did you have a chance to compose a fix?
> > >
> >
> > Nope, not yet. I may get some time this week now that other stuff has died
> > down.
> 
> Olaf,
> 
>   Could you give the attached patch a try? I believe it should solve the
> problem.
>

For some reason the attached patch was a previous version that did not compile. Here's the one I actually tested...

  Paul

 
>   Paul
> 
> >
> >   Paul
> >
> > > Olaf

[-- Attachment #2: 0001-xen-handle-inbound-migration-of-VMs-without-ioreq-se.patch --]
[-- Type: application/octet-stream, Size: 10724 bytes --]

From e7a9e44cdbbe796e36a04a82a21eaaa9708775f8 Mon Sep 17 00:00:00 2001
From: Paul Durrant <paul.durrant@citrix.com>
Date: Fri, 29 Jul 2016 09:37:41 +0100
Subject: [PATCH] xen: handle inbound migration of VMs without ioreq server
 pages

VMs created on older versions of Xen will not have been provisioned with
pages to support creation of non-default ioreq servers. In this case
the ioreq server API is not supported and QEMU's only option is to fall
back to using the default ioreq server pages as it did prior to
commit 3996e85c ("Xen: Use the ioreq-server API when available").

This patch therefore changes the code in xen_common.h to stop considering
a failure of xc_hvm_create_ioreq_server() as a hard failure but simply
as an indication that the guest is too old to support the ioreq server
API. Instead a boolean is set to cause reversion to old behaviour such
that the default ioreq server is then used.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
 include/hw/xen/xen_common.h | 123 +++++++++++++++++++++++++++++++-------------
 trace-events                |   1 +
 xen-hvm.c                   |   6 +--
 3 files changed, 90 insertions(+), 40 deletions(-)

diff --git a/include/hw/xen/xen_common.h b/include/hw/xen/xen_common.h
index 640c31e..8707adc 100644
--- a/include/hw/xen/xen_common.h
+++ b/include/hw/xen/xen_common.h
@@ -107,6 +107,42 @@ static inline int xen_get_vmport_regs_pfn(xc_interface *xc, domid_t dom,
 
 #endif
 
+static inline int xen_get_default_ioreq_server_info(xc_interface *xc, domid_t dom,
+                                                    xen_pfn_t *ioreq_pfn,
+                                                    xen_pfn_t *bufioreq_pfn,
+                                                    evtchn_port_t *bufioreq_evtchn)
+{
+    unsigned long param;
+    int rc;
+
+    rc = xc_get_hvm_param(xc, dom, HVM_PARAM_IOREQ_PFN, &param);
+    if (rc < 0) {
+        fprintf(stderr, "failed to get HVM_PARAM_IOREQ_PFN\n");
+        return -1;
+    }
+
+    *ioreq_pfn = param;
+
+    rc = xc_get_hvm_param(xc, dom, HVM_PARAM_BUFIOREQ_PFN, &param);
+    if (rc < 0) {
+        fprintf(stderr, "failed to get HVM_PARAM_BUFIOREQ_PFN\n");
+        return -1;
+    }
+
+    *bufioreq_pfn = param;
+
+    rc = xc_get_hvm_param(xc, dom, HVM_PARAM_BUFIOREQ_EVTCHN,
+                          &param);
+    if (rc < 0) {
+        fprintf(stderr, "failed to get HVM_PARAM_BUFIOREQ_EVTCHN\n");
+        return -1;
+    }
+
+    *bufioreq_evtchn = param;
+
+    return 0;
+}
+
 /* Xen before 4.5 */
 #if CONFIG_XEN_CTRL_INTERFACE_VERSION < 450
 
@@ -154,10 +190,9 @@ static inline void xen_unmap_pcidev(xc_interface *xc, domid_t dom,
 {
 }
 
-static inline int xen_create_ioreq_server(xc_interface *xc, domid_t dom,
-                                          ioservid_t *ioservid)
+static inline void xen_create_ioreq_server(xc_interface *xc, domid_t dom,
+                                           ioservid_t *ioservid)
 {
-    return 0;
 }
 
 static inline void xen_destroy_ioreq_server(xc_interface *xc, domid_t dom,
@@ -171,35 +206,8 @@ static inline int xen_get_ioreq_server_info(xc_interface *xc, domid_t dom,
                                             xen_pfn_t *bufioreq_pfn,
                                             evtchn_port_t *bufioreq_evtchn)
 {
-    unsigned long param;
-    int rc;
-
-    rc = xc_get_hvm_param(xc, dom, HVM_PARAM_IOREQ_PFN, &param);
-    if (rc < 0) {
-        fprintf(stderr, "failed to get HVM_PARAM_IOREQ_PFN\n");
-        return -1;
-    }
-
-    *ioreq_pfn = param;
-
-    rc = xc_get_hvm_param(xc, dom, HVM_PARAM_BUFIOREQ_PFN, &param);
-    if (rc < 0) {
-        fprintf(stderr, "failed to get HVM_PARAM_BUFIOREQ_PFN\n");
-        return -1;
-    }
-
-    *bufioreq_pfn = param;
-
-    rc = xc_get_hvm_param(xc, dom, HVM_PARAM_BUFIOREQ_EVTCHN,
-                          &param);
-    if (rc < 0) {
-        fprintf(stderr, "failed to get HVM_PARAM_BUFIOREQ_EVTCHN\n");
-        return -1;
-    }
-
-    *bufioreq_evtchn = param;
-
-    return 0;
+    return xen_get_default_ioreq_server_info(xc, dom, ioreq_pfn, bufioreq_pfn,
+                                             bufioreq_evtchn);
 }
 
 static inline int xen_set_ioreq_server_state(xc_interface *xc, domid_t dom,
@@ -212,6 +220,8 @@ static inline int xen_set_ioreq_server_state(xc_interface *xc, domid_t dom,
 /* Xen 4.5 */
 #else
 
+static bool use_default_ioreq_server;
+
 static inline void xen_map_memory_section(xc_interface *xc, domid_t dom,
                                           ioservid_t ioservid,
                                           MemoryRegionSection *section)
@@ -220,6 +230,10 @@ static inline void xen_map_memory_section(xc_interface *xc, domid_t dom,
     ram_addr_t size = int128_get64(section->size);
     hwaddr end_addr = start_addr + size - 1;
 
+    if (use_default_ioreq_server) {
+        return;
+    }
+
     trace_xen_map_mmio_range(ioservid, start_addr, end_addr);
     xc_hvm_map_io_range_to_ioreq_server(xc, dom, ioservid, 1,
                                         start_addr, end_addr);
@@ -233,6 +247,11 @@ static inline void xen_unmap_memory_section(xc_interface *xc, domid_t dom,
     ram_addr_t size = int128_get64(section->size);
     hwaddr end_addr = start_addr + size - 1;
 
+    if (use_default_ioreq_server) {
+        return;
+    }
+
+
     trace_xen_unmap_mmio_range(ioservid, start_addr, end_addr);
     xc_hvm_unmap_io_range_from_ioreq_server(xc, dom, ioservid, 1,
                                             start_addr, end_addr);
@@ -246,6 +265,11 @@ static inline void xen_map_io_section(xc_interface *xc, domid_t dom,
     ram_addr_t size = int128_get64(section->size);
     hwaddr end_addr = start_addr + size - 1;
 
+    if (use_default_ioreq_server) {
+        return;
+    }
+
+
     trace_xen_map_portio_range(ioservid, start_addr, end_addr);
     xc_hvm_map_io_range_to_ioreq_server(xc, dom, ioservid, 0,
                                         start_addr, end_addr);
@@ -259,6 +283,10 @@ static inline void xen_unmap_io_section(xc_interface *xc, domid_t dom,
     ram_addr_t size = int128_get64(section->size);
     hwaddr end_addr = start_addr + size - 1;
 
+    if (use_default_ioreq_server) {
+        return;
+    }
+
     trace_xen_unmap_portio_range(ioservid, start_addr, end_addr);
     xc_hvm_unmap_io_range_from_ioreq_server(xc, dom, ioservid, 0,
                                             start_addr, end_addr);
@@ -268,6 +296,10 @@ static inline void xen_map_pcidev(xc_interface *xc, domid_t dom,
                                   ioservid_t ioservid,
                                   PCIDevice *pci_dev)
 {
+    if (use_default_ioreq_server) {
+        return;
+    }
+
     trace_xen_map_pcidev(ioservid, pci_bus_num(pci_dev->bus),
                          PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn));
     xc_hvm_map_pcidev_to_ioreq_server(xc, dom, ioservid,
@@ -280,6 +312,10 @@ static inline void xen_unmap_pcidev(xc_interface *xc, domid_t dom,
                                     ioservid_t ioservid,
                                     PCIDevice *pci_dev)
 {
+    if (use_default_ioreq_server) {
+        return;
+    }
+
     trace_xen_unmap_pcidev(ioservid, pci_bus_num(pci_dev->bus),
                            PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn));
     xc_hvm_unmap_pcidev_from_ioreq_server(xc, dom, ioservid,
@@ -288,22 +324,29 @@ static inline void xen_unmap_pcidev(xc_interface *xc, domid_t dom,
                                           PCI_FUNC(pci_dev->devfn));
 }
 
-static inline int xen_create_ioreq_server(xc_interface *xc, domid_t dom,
-                                          ioservid_t *ioservid)
+static inline void xen_create_ioreq_server(xc_interface *xc, domid_t dom,
+                                           ioservid_t *ioservid)
 {
     int rc = xc_hvm_create_ioreq_server(xc, dom, HVM_IOREQSRV_BUFIOREQ_ATOMIC,
                                         ioservid);
 
     if (rc == 0) {
         trace_xen_ioreq_server_create(*ioservid);
+        return;
     }
 
-    return rc;
+    *ioservid = 0;
+    use_default_ioreq_server = true;
+    trace_xen_default_ioreq_server();
 }
 
 static inline void xen_destroy_ioreq_server(xc_interface *xc, domid_t dom,
                                             ioservid_t ioservid)
 {
+    if (use_default_ioreq_server) {
+        return;
+    }
+
     trace_xen_ioreq_server_destroy(ioservid);
     xc_hvm_destroy_ioreq_server(xc, dom, ioservid);
 }
@@ -314,6 +357,12 @@ static inline int xen_get_ioreq_server_info(xc_interface *xc, domid_t dom,
                                             xen_pfn_t *bufioreq_pfn,
                                             evtchn_port_t *bufioreq_evtchn)
 {
+    if (use_default_ioreq_server) {
+        return xen_get_default_ioreq_server_info(xc, dom, ioreq_pfn,
+                                                 bufioreq_pfn,
+                                                 bufioreq_evtchn);
+    }
+
     return xc_hvm_get_ioreq_server_info(xc, dom, ioservid,
                                         ioreq_pfn, bufioreq_pfn,
                                         bufioreq_evtchn);
@@ -323,6 +372,10 @@ static inline int xen_set_ioreq_server_state(xc_interface *xc, domid_t dom,
                                              ioservid_t ioservid,
                                              bool enable)
 {
+    if (use_default_ioreq_server) {
+        return 0;
+    }
+
     trace_xen_ioreq_server_state(ioservid, enable);
     return xc_hvm_set_ioreq_server_state(xc, dom, ioservid, enable);
 }
diff --git a/trace-events b/trace-events
index 52c6a6c..616cc52 100644
--- a/trace-events
+++ b/trace-events
@@ -60,6 +60,7 @@ spice_vmc_event(int event) "spice vmc event %d"
 # xen-hvm.c
 xen_ram_alloc(unsigned long ram_addr, unsigned long size) "requested: %#lx, size %#lx"
 xen_client_set_memory(uint64_t start_addr, unsigned long size, bool log_dirty) "%#"PRIx64" size %#lx, log_dirty %i"
+xen_default_ioreq_server(void) ""
 xen_ioreq_server_create(uint32_t id) "id: %u"
 xen_ioreq_server_destroy(uint32_t id) "id: %u"
 xen_ioreq_server_state(uint32_t id, bool enable) "id: %u: enable: %i"
diff --git a/xen-hvm.c b/xen-hvm.c
index eb57792..cc3d4b0 100644
--- a/xen-hvm.c
+++ b/xen-hvm.c
@@ -1203,11 +1203,7 @@ void xen_hvm_init(PCMachineState *pcms, MemoryRegion **ram_memory)
         goto err;
     }
 
-    rc = xen_create_ioreq_server(xen_xc, xen_domid, &state->ioservid);
-    if (rc < 0) {
-        perror("xen: ioreq server create");
-        goto err;
-    }
+    xen_create_ioreq_server(xen_xc, xen_domid, &state->ioservid);
 
     state->exit.notify = xen_exit_notifier;
     qemu_add_exit_notifier(&state->exit);
-- 
2.1.4



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-07-29 10:34                           ` Paul Durrant
@ 2016-07-29 13:10                             ` Olaf Hering
  2016-07-29 13:22                               ` Paul Durrant
  0 siblings, 1 reply; 23+ messages in thread
From: Olaf Hering @ 2016-07-29 13:10 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, xen-devel,
	Anthony Perard, zhigang.x.wang



On Fri, Jul 29, Paul Durrant wrote:

> >   Could you give the attached patch a try? I believe it should solve the
> > problem.

Thanks Paul. I tested it with 4.4.20160610T12110 dom0 as sender and
4.6.20160620T12082 as receiver.


Initially the result was this:

# cat /var/log/xen/qemu-dm-fv-x64-sles12sp1-clean--incoming.log
char device redirected to /dev/pts/3 (label serial0)
xen: ioreq server create: Cannot allocate memory
qemu-system-x86_64: xen hardware virtual machine initialisation failed


Now the result is this:
root@anonymi:~ # cat /var/log/xen/qemu-dm-fv-x64-sles12sp1-clean--incoming.log
char device redirected to /dev/pts/3 (label serial0)
xen_ram_alloc: do not alloc f800000 bytes of ram at 0 when runstate is INMIGRATE
xen_ram_alloc: do not alloc 800000 bytes of ram at f800000 when runstate is INMIGRATE
xen_ram_alloc: do not alloc 10000 bytes of ram at 10000000 when runstate is INMIGRATE
xen_ram_alloc: do not alloc 40000 bytes of ram at 10010000 when runstate is INMIGRATE
Unknown savevm section or instance 'kvm-tpr-opt' 0
qemu-system-i386: load of migration failed: Invalid argument

So apparently it got past the point of the ioreq server issue.


The "kvm-tpr-opt" thing was briefly discussed on 12 May 2016.

Olaf



* Re: live migrating hvm from 4.4 to 4.7 fails in ioreq server
  2016-07-29 13:10                             ` Olaf Hering
@ 2016-07-29 13:22                               ` Paul Durrant
  0 siblings, 0 replies; 23+ messages in thread
From: Paul Durrant @ 2016-07-29 13:22 UTC (permalink / raw)
  To: Olaf Hering
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, xen-devel,
	Anthony Perard, zhigang.x.wang

> -----Original Message-----
> From: Olaf Hering [mailto:olaf@aepfle.de]
> Sent: 29 July 2016 14:11
> To: Paul Durrant
> Cc: Stefano Stabellini; Wei Liu; Andrew Cooper; xen-devel@lists.xen.org;
> Anthony Perard; zhigang.x.wang@oracle.com
> Subject: Re: [Xen-devel] live migrating hvm from 4.4 to 4.7 fails in ioreq
> server
> 
> On Fri, Jul 29, Paul Durrant wrote:
> 
> > >   Could you give the attached patch a try? I believe it should solve the
> > > problem.
> 
> Thanks Paul. I tested it with 4.4.20160610T12110 dom0 as sender and
> 4.6.20160620T12082 as receiver.
> 
> 
> Initially the result was this:
> 
> # cat /var/log/xen/qemu-dm-fv-x64-sles12sp1-clean--incoming.log
> char device redirected to /dev/pts/3 (label serial0)
> xen: ioreq server create: Cannot allocate memory
> qemu-system-x86_64: xen hardware virtual machine initialisation failed
> 
> 
> Now the result is this:
> root@anonymi:~ # cat /var/log/xen/qemu-dm-fv-x64-sles12sp1-clean--
> incoming.log
> char device redirected to /dev/pts/3 (label serial0)
> xen_ram_alloc: do not alloc f800000 bytes of ram at 0 when runstate is
> INMIGRATE
> xen_ram_alloc: do not alloc 800000 bytes of ram at f800000 when runstate is
> INMIGRATE
> xen_ram_alloc: do not alloc 10000 bytes of ram at 10000000 when runstate is
> INMIGRATE
> xen_ram_alloc: do not alloc 40000 bytes of ram at 10010000 when runstate is
> INMIGRATE
> Unknown savevm section or instance 'kvm-tpr-opt' 0
> qemu-system-i386: load of migration failed: Invalid argument
> 
> So apparently it got past the point of the ioreq server issue.
> 

Yes, it certainly looks like it.

> 
> The "kvm-tpr-opt" thing was briefly discussed on 12 May 2016.
> 

I'll have a look for that.

Thanks,

  Paul

> Olaf


end of thread, other threads:[~2016-07-29 13:22 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-05-11 12:18 live migrating hvm from 4.4 to 4.7 fails in ioreq server Olaf Hering
2016-05-11 12:22 ` Andrew Cooper
2016-05-11 12:30   ` Olaf Hering
2016-05-11 13:07     ` Olaf Hering
2016-05-12 10:53       ` Wei Liu
2016-05-11 12:38   ` Paul Durrant
2016-05-12 10:55     ` Wei Liu
2016-05-12 12:39       ` Paul Durrant
2016-05-12 13:01         ` Wei Liu
2016-05-12 13:03           ` Paul Durrant
2016-05-12 13:18             ` Olaf Hering
2016-05-12 14:10             ` Wei Liu
2016-05-12 14:13               ` Paul Durrant
2016-05-25 20:57                 ` Konrad Rzeszutek Wilk
2016-05-26  8:30                   ` Paul Durrant
2016-06-03 20:07                     ` Konrad Rzeszutek Wilk
2016-07-26 15:45                     ` Olaf Hering
2016-07-26 15:48                       ` Paul Durrant
2016-07-29 10:11                         ` Paul Durrant
2016-07-29 10:34                           ` Paul Durrant
2016-07-29 13:10                             ` Olaf Hering
2016-07-29 13:22                               ` Paul Durrant
2016-05-12 13:38           ` Stefano Stabellini
