* Re: [Xen-users] 4.8.1 migration fails over 1st interface, works over 2nd
       [not found] <4f9fa7f8-7339-7122-8987-6e8a0dafcc8c@pse-consulting.de>
@ 2017-06-05  9:17 ` George Dunlap
  2017-06-05  9:33   ` Andrew Cooper
  0 siblings, 1 reply; 4+ messages in thread
From: George Dunlap @ 2017-06-05  9:17 UTC (permalink / raw)
  To: Andreas Pflug; +Cc: Xen-users, Andrew Cooper, Wei Liu, xen-devel

On Mon, May 29, 2017 at 10:04 AM, Andreas Pflug
<pgadmin@pse-consulting.de> wrote:
> I've set up a fresh Debian stretch with Xen 4.8.1 and shared storage via
> custom block scripts on two machines.
>
> Both machines have one main interface with some VLAN stuff, the VM
> bridges and the SAN interface connected to a switch, and another
> interface directly interconnecting both machines. To ensure packets
> don't take weird routes, arp_announce=2/arp_ignore=1 is configured.
> Everything on the primary interface seems to work flawlessly, e.g.
> ssh-ing from one machine to the other (no firewall or other filter
> involved).
>
> With xl migrate <testdom> <secondMachineDirectInterface>, migration
> works as expected, bringing up the test domain fully functional back again.
>
> With xl migrate --debug <testdom> <secondMachinePrimaryInterface>, I get
>     xc: info: Saving domain 17, type x86 PV
>     xc: info: Found x86 PV domain from Xen 4.8
>     xc: info: Restoring domain
>
> and migration will stop here. The target machine will show the incoming
> VM, but nothing more happens. I have to kill xl on the target, Ctrl-C xl
> on the source machine, and destroy the target VM (<testdom>--incoming).

Are you saying that migration works fine for you *unless* you add the
`--debug` option?

Andy / Wei, any ideas?

- George


* Re: [Xen-users] 4.8.1 migration fails over 1st interface, works over 2nd
  2017-06-05  9:17 ` [Xen-users] 4.8.1 migration fails over 1st interface, works over 2nd George Dunlap
@ 2017-06-05  9:33   ` Andrew Cooper
  2017-06-29 17:56     ` Andreas Pflug
  2017-06-30  9:26     ` SOLVED/no bug " Andreas Pflug
  0 siblings, 2 replies; 4+ messages in thread
From: Andrew Cooper @ 2017-06-05  9:33 UTC (permalink / raw)
  To: George Dunlap, Andreas Pflug; +Cc: Xen-users, Wei Liu, xen-devel

On 05/06/17 10:17, George Dunlap wrote:
> On Mon, May 29, 2017 at 10:04 AM, Andreas Pflug
> <pgadmin@pse-consulting.de> wrote:
>> I've set up a fresh Debian stretch with Xen 4.8.1 and shared storage via
>> custom block scripts on two machines.
>>
>> Both machines have one main interface with some VLAN stuff, the VM
>> bridges and the SAN interface connected to a switch, and another
>> interface directly interconnecting both machines. To ensure packets
>> don't take weird routes, arp_announce=2/arp_ignore=1 is configured.
>> Everything on the primary interface seems to work flawlessly, e.g.
>> ssh-ing from one machine to the other (no firewall or other filter
>> involved).
>>
>> With xl migrate <testdom> <secondMachineDirectInterface>, migration
>> works as expected, bringing up the test domain fully functional back again.
>>
>> With xl migrate --debug <testdom> <secondMachinePrimaryInterface>, I get
>>     xc: info: Saving domain 17, type x86 PV
>>     xc: info: Found x86 PV domain from Xen 4.8
>>     xc: info: Restoring domain
>>
>> and migration will stop here. The target machine will show the incoming
>> VM, but nothing more happens. I have to kill xl on the target, Ctrl-C xl
>> on the source machine, and destroy the target VM (<testdom>--incoming).
> Are you saying that migration works fine for you *unless* you add the
> `--debug` option?
>
> Andy / Wei, any ideas?

--debug adds an extra full memory copy, using memcmp() on the destination
side to spot if any memory got missed during the live phase.

It is only intended for development purposes, but I'd also expect it to
function normally in the way you've used it.
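
Conceptually (a minimal sketch only, not the actual libxc code -- the
helper name and page-size constant here are assumptions), the verify
pass amounts to re-sending every page after the live phase and
memcmp()ing it against the copy the destination has already restored:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define XC_PAGE_SIZE 4096

    /* Compare one re-sent page against the copy already written into the
     * restored guest image; report any page the live phase apparently
     * missed. */
    static int verify_page(unsigned long pfn,
                           const uint8_t *resent,
                           const uint8_t *restored)
    {
        if (memcmp(resent, restored, XC_PAGE_SIZE) == 0)
            return 0;

        fprintf(stderr, "verify: pfn 0x%lx differs from restored copy\n", pfn);
        return -1;
    }

    int main(void)
    {
        /* Toy stand-ins for a re-sent page and the restored local copy. */
        uint8_t resent[XC_PAGE_SIZE] = { 0 }, restored[XC_PAGE_SIZE] = { 0 };
        restored[42] = 1;   /* simulate a page missed during the live phase */

        return verify_page(0x1234, resent, restored) ? 1 : 0;
    }

A mismatch there means the live phase failed to transfer a dirtied page,
which is exactly what --debug is meant to catch.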

What does `xl -vvv migrate ...` say?

~Andrew


* Re: [Xen-users] 4.8.1 migration fails over 1st interface, works over 2nd
  2017-06-05  9:33   ` Andrew Cooper
@ 2017-06-29 17:56     ` Andreas Pflug
  2017-06-30  9:26     ` SOLVED/no bug " Andreas Pflug
  1 sibling, 0 replies; 4+ messages in thread
From: Andreas Pflug @ 2017-06-29 17:56 UTC (permalink / raw)
  To: Andrew Cooper, George Dunlap; +Cc: Xen-users, Wei Liu, xen-devel

My problem still persists, but the thread seems to have stalled.
Apparently, my reply didn't hit the list.



On 05.06.17 at 11:33, Andrew Cooper wrote:
> On 05/06/17 10:17, George Dunlap wrote:
>> On Mon, May 29, 2017 at 10:04 AM, Andreas Pflug
>> <pgadmin@pse-consulting.de> wrote:
>>> I've set up a fresh Debian stretch with Xen 4.8.1 and shared storage via
>>> custom block scripts on two machines.
>>>
>>> Both machines have one main interface with some VLAN stuff, the VM
>>> bridges and the SAN interface connected to a switch, and another
>>> interface directly interconnecting both machines. To ensure packets
>>> don't take weird routes, arp_announce=2/arp_ignore=1 is configured.
>>> Everything on the primary interface seems to work flawlessly, e.g.
>>> ssh-ing from one machine to the other (no firewall or other filter
>>> involved).
>>>
>>> With xl migrate <testdom> <secondMachineDirectInterface>, migration
>>> works as expected, bringing up the test domain fully functional back again.
>>>
>>> With xl migrate --debug <testdom> <secondMachinePrimaryInterface>, I get
>>>     xc: info: Saving domain 17, type x86 PV
>>>     xc: info: Found x86 PV domain from Xen 4.8
>>>     xc: info: Restoring domain
>>>
>>> and migration will stop here. The target machine will show the incoming
>>> VM, but nothing more happens. I have to kill xl on the target, Ctrl-C xl
>>> on the source machine, and destroy the target VM (<testdom>--incoming).
>> Are you saying that migration works fine for you *unless* you add the
>> `--debug` option?
>>
>> Andy / Wei, any ideas?
> --debug adds an extra full memory copy, using memcmp() on the destination
> side to spot if any memory got missed during the live phase.
>
> It is only intended for development purposes, but I'd also expect it to
> function normally in the way you've used it.
>
> What does `xl -vvv migrate ...` say?
>
> ~Andrew

xl -vvv gives

libxl: debug: libxl.c:6895:libxl_retrieve_domain_configuration: no vtpm from xenstore for domain 21
libxl: debug: libxl.c:6895:libxl_retrieve_domain_configuration: no usbctrl from xenstore for domain 21
libxl: debug: libxl.c:6895:libxl_retrieve_domain_configuration: no usbdev from xenstore for domain 21
libxl: debug: libxl.c:6895:libxl_retrieve_domain_configuration: no pci from xenstore for domain 21
migration target: Ready to receive domain.
Saving to migration stream new xl format (info 0x3/0x0/1773)
libxl: debug: libxl.c:932:libxl_domain_suspend: ao 0x55efd7b089d0: create: how=(nil) callback=(nil) poller=0x55efd7b08810
libxl: debug: libxl.c:6627:libxl__fd_flags_modify_save: fnctl F_GETFL flags for fd 9 are 0x1
libxl: debug: libxl.c:6635:libxl__fd_flags_modify_save: fnctl F_SETFL of fd 9 to 0x1
libxl: debug: libxl.c:960:libxl_domain_suspend: ao 0x55efd7b089d0: inprogress: poller=0x55efd7b08810, flags=iLoading new save file <incoming migration stream> (new xl fmt info 0x3/0x0/1773)
 Savefile contains xl domain config in JSON format
Parsing config from <saved>

libxl: debug: libxl_create.c:1614:do_domain_create: ao 0x55dc55cea670: create: how=(nil) callback=(nil) poller=0x55dc55cea410
libxl: debug: libxl.c:6627:libxl__fd_flags_modify_save: fnctl F_GETFL flags for fd 0 are 0x0
libxl: debug: libxl.c:6635:libxl__fd_flags_modify_save: fnctl F_SETFL of fd 0 to 0x0
libxl-save-helper: debug: starting save: Success
xc: detail: fd 9, dom 21, max_iters 0, max_factor 0, flags 1, hvm 0
xc: info: Saving domain 21, type x86 PV
xc: detail: 64 bits, 4 levels
xc: detail: max_mfn 0xc40000
xc: detail: p2m list from 0xffffc90000000000 to 0xffffc900001fffff, root at 0xc3e407
xc: detail: max_pfn 0x3ffff, p2m_frames 512
libxl: debug: libxl_device.c:361:libxl__device_disk_set_backend: Disk vdev=xvda1 spec.backend=unknown
libxl: debug: libxl_device.c:276:disk_try_backend: Disk vdev=xvda1, uses script=... assuming phy backend
libxl: debug: libxl_device.c:396:libxl__device_disk_set_backend: Disk vdev=xvda1, using backend phy
libxl: debug: libxl_create.c:967:initiate_domain_create: restoring, not running bootloader
libxl: debug: libxl.c:4983:libxl__set_vcpuaffinity: New hard affinity for vcpu 0 has unreachable cpus
libxl: debug: libxl_create.c:1640:do_domain_create: ao 0x55dc55cea670: inprogress: poller=0x55dc55cea410, flags=i
libxl: debug: libxl_stream_read.c:358:stream_header_done: Stream v2
libxl: debug: libxl_stream_read.c:574:process_record: Record: 1, length 0
libxl-save-helper: debug: starting restore: Success
xc: detail: fd 7, dom 15, hvm 0, pae 0, superpages 0, stream_type 0
xc: info: Found x86 PV domain from Xen 4.8
xc: info: Restoring domain
xc: detail: 64 bits, 4 levels
xc: detail: max_mfn 0xc40000
xc: detail: Changed max_pfn from 0 to 0x3ffff

And it stalls here; I need to Ctrl-C on the sender, destroy the incoming VM
on the receiver, and killall xl.

When using the working interface, the output looks identical up to this point,
but then continues with

libxl: debug: libxl_dom_suspend.c:193:domain_suspend_callback_common: issuing PV suspend request via XenBus control node

Regards

Andreas





* SOLVED/no bug 4.8.1 migration fails over 1st interface, works over 2nd
  2017-06-05  9:33   ` Andrew Cooper
  2017-06-29 17:56     ` Andreas Pflug
@ 2017-06-30  9:26     ` Andreas Pflug
  1 sibling, 0 replies; 4+ messages in thread
From: Andreas Pflug @ 2017-06-30  9:26 UTC (permalink / raw)
  To: Andrew Cooper, George Dunlap; +Cc: Xen-users, Wei Liu, xen-devel

OK, it turns out to be an MTU-related communication problem: the ethernet
interface and the switch were both configured for mtu=9216, but didn't
interpret this the same way. I needed to reduce the eth interface MTU by 18 bytes...
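
(For the record, the 18 bytes presumably correspond to the L2 framing the
switch counts towards its MTU but Linux does not: a 14-byte Ethernet header
plus a 4-byte 802.1Q VLAN tag. With the switch at 9216, that leaves
9216 - 18 = 9198 for the interface MTU; anything larger and the biggest
frames -- such as the bulk page data of a migration stream -- can be
silently dropped on that path.)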

Sorry for the noise!

Regards,
Andreas


