* [Qemu-devel] Postcopy+spice crash
From: Dr. David Alan Gilbert @ 2016-12-02 17:44 UTC
  To: kraxel; +Cc: qemu-devel

Hi Gerd,
  I've got a moderately repeatable crash with spice playing
a video + postcopy.  Some of the time I just get a warning
(that I also get in precopy) but sometimes it turns into
a backtrace;

This is:
  f24 guest playing youtube fullscreen.
  migration between 2.7.0<->current head (had crash both ways)

The warning I get with precopy most of the time is:
  (./x86_64-softmmu/qemu-system-x86_64:26921): Spice-Warning **: red_memslots.c:94:validate_virt: virtual address out of range
    virt=0x7f5397ed002a+0x2925ff31 slot_id=1 group_id=1
    slot=0x7f5397c00000-0x7f539bbfe000 delta=0x7f5397c00000

The crash I've had with postcopy is:
red_dispatcher_loadvm_commands:
id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
id 1, group 1, virt start 7fbe83c00000, virt end 7fbe87bfe000, generation 0, delta 7fbe83c00000
id 2, group 1, virt start 7fbe7fa00000, virt end 7fbe83a00000, generation 0, delta 7fbe7fa00000
(./x86_64-softmmu/qemu-system-x86_64:22376): Spice-CRITICAL **: red_memslots.c:123:get_virt: slot_id 128 too big, addr=8000000000000000

#0  0x00007fc0aa42f49d in read () from /lib64/libpthread.so.0
#1  0x00007fc0a8c36c01 in spice_backtrace_gstack () from /lib64/libspice-server.so.1
#2  0x00007fc0a8c3e4f7 in spice_logv () from /lib64/libspice-server.so.1
#3  0x00007fc0a8c3e655 in spice_log () from /lib64/libspice-server.so.1
#4  0x00007fc0a8bfc6de in get_virt () from /lib64/libspice-server.so.1
#5  0x00007fc0a8bfcb73 in red_get_data_chunks_ptr () from /lib64/libspice-server.so.1
#6  0x00007fc0a8bff3fa in red_get_cursor_cmd () from /lib64/libspice-server.so.1
#7  0x00007fc0a8c0fd79 in handle_dev_loadvm_commands () from /lib64/libspice-server.so.1
#8  0x00007fc0a8bf9523 in dispatcher_handle_recv_read () from /lib64/libspice-server.so.1
#9  0x00007fc0a8c1d5a5 in red_worker_main () from /lib64/libspice-server.so.1
#10 0x00007fc0aa428dc5 in start_thread () from /lib64/libpthread.so.0
#11 0x00007fc0a61786ed in clone () from /lib64/libc.so.6

and:

red_dispatcher_loadvm_commands:
id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
id 1, group 1, virt start 7f3b93800000, virt end 7f3b977fe000, generation 0, delta 7f3b93800000
id 2, group 1, virt start 7f3b8f400000, virt end 7f3b93400000, generation 0, delta 7f3b8f400000
(/opt/qemu/v2.7.0/bin/qemu-system-x86_64:41053): Spice-CRITICAL **: red_memslots.c:123:get_virt: slot_id 80 too big, addr=5000000000000000

I'm using:
spice-server-devel-0.12.4-19.el7.x86_64

Dave

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] Postcopy+spice crash
From: Gerd Hoffmann @ 2016-12-05  8:33 UTC
  To: Dr. David Alan Gilbert; +Cc: qemu-devel, spice-devel

On Fri, 2016-12-02 at 17:44 +0000, Dr. David Alan Gilbert wrote:
> Hi Gerd,
>   I've got a moderately repeatable crash with spice playing
> a video + postcopy.  Some of the time I just get a warning
> (that I also get in precopy) but sometimes it turns into
> a backtrace;
> 
> This is:
>   f24 guest playing youtube fullscreen.
>   migration between 2.7.0<->current head (had crash both ways)
> 
> The warning I get with precopy most of the time is:
>   (./x86_64-softmmu/qemu-system-x86_64:26921): Spice-Warning **: red_memslots.c:94:validate_virt: virtual address out of range

That is in spice-server.  Which version do you run?
Adding spice-devel to Cc:

>     virt=0x7f5397ed002a+0x2925ff31 slot_id=1 group_id=1
>     slot=0x7f5397c00000-0x7f539bbfe000 delta=0x7f5397c00000

Base address looks sane.
Size (0x2925ff31) is bogus.

On a quick glance I'd blame the guest for sending corrupted commands.
Strange though that it happens on migration only, so there could be
a host issue too.  Or a timing issue triggered by migration.

Which migration phase?

Do you have seamless spice migration enabled?
If so: Does it still reproduce with seamless migration turned off?

> The crash I've had with postcopy is:
> red_dispatcher_loadvm_commands:
> id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
> id 1, group 1, virt start 7fbe83c00000, virt end 7fbe87bfe000, generation 0, delta 7fbe83c00000
> id 2, group 1, virt start 7fbe7fa00000, virt end 7fbe83a00000, generation 0, delta 7fbe7fa00000
> (./x86_64-softmmu/qemu-system-x86_64:22376): Spice-CRITICAL **: red_memslots.c:123:get_virt: slot_id 128 too big, addr=8000000000000000
> 
> #0  0x00007fc0aa42f49d in read () from /lib64/libpthread.so.0
> #1  0x00007fc0a8c36c01 in spice_backtrace_gstack () from /lib64/libspice-server.so.1
> #2  0x00007fc0a8c3e4f7 in spice_logv () from /lib64/libspice-server.so.1
> #3  0x00007fc0a8c3e655 in spice_log () from /lib64/libspice-server.so.1
> #4  0x00007fc0a8bfc6de in get_virt () from /lib64/libspice-server.so.1
> #5  0x00007fc0a8bfcb73 in red_get_data_chunks_ptr () from /lib64/libspice-server.so.1
> #6  0x00007fc0a8bff3fa in red_get_cursor_cmd () from /lib64/libspice-server.so.1
> #7  0x00007fc0a8c0fd79 in handle_dev_loadvm_commands () from /lib64/libspice-server.so.1
> #8  0x00007fc0a8bf9523 in dispatcher_handle_recv_read () from /lib64/libspice-server.so.1
> #9  0x00007fc0a8c1d5a5 in red_worker_main () from /lib64/libspice-server.so.1
> #10 0x00007fc0aa428dc5 in start_thread () from /lib64/libpthread.so.0
> #11 0x00007fc0a61786ed in clone () from /lib64/libc.so.6

Spice worker thread ...

> red_dispatcher_loadvm_commands:
> id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
> id 1, group 1, virt start 7f3b93800000, virt end 7f3b977fe000, generation 0, delta 7f3b93800000
> id 2, group 1, virt start 7f3b8f400000, virt end 7f3b93400000, generation 0, delta 7f3b8f400000
> (/opt/qemu/v2.7.0/bin/qemu-system-x86_64:41053): Spice-CRITICAL **: red_memslots.c:123:get_virt: slot_id 80 too big, addr=5000000000000000


... trying to decode an invalid QXL address.

> I'm using:
> spice-server-devel-0.12.4-19.el7.x86_64

Ah, RHEL-7.3 host.

cheers,
  Gerd


* Re: [Qemu-devel] Postcopy+spice crash
From: Dr. David Alan Gilbert @ 2016-12-05  9:46 UTC
  To: Gerd Hoffmann; +Cc: qemu-devel, spice-devel

* Gerd Hoffmann (kraxel@redhat.com) wrote:
> On Fri, 2016-12-02 at 17:44 +0000, Dr. David Alan Gilbert wrote:
> > Hi Gerd,
> >   I've got a moderately repeatable crash with spice playing
> > a video + postcopy.  Some of the time I just get a warning
> > (that I also get in precopy) but sometimes it turns into
> > a backtrace;
> > 
> > This is:
> >   f24 guest playing youtube fullscreen.
> >   migration between 2.7.0<->current head (had crash both ways)
> > 
> > The warning I get with precopy most of the time is:
> >   (./x86_64-softmmu/qemu-system-x86_64:26921): Spice-Warning **: red_memslots.c:94:validate_virt: virtual address out of range
> 
> That is in spice-server.  Which version do you run?

From the bottom of the post: spice-server-devel-0.12.4-19.el7.x86_64 (RHEL 7)

> Adding spice-devel to Cc:
> 
> >     virt=0x7f5397ed002a+0x2925ff31 slot_id=1 group_id=1
> >     slot=0x7f5397c00000-0x7f539bbfe000 delta=0x7f5397c00000
> 
> Base address looks sane.
> Size (0x2925ff31) is bogus.
> 
> On a quick glance I'd blame the guest for sending corrupted commands.
> Strange though that it happens on migration only, so there could be
> a host issue too.  Or a timing issue triggered by migration.
> 
> Which migration phase?

This is the point at which it switches over in postcopy.

> Do you have seamless spice migration enabled?
> If so: Does it still reproduce with seamless migration turned off?

No, I don't think so; I think the command line I was running was:
./x86_64-softmmu/qemu-system-x86_64 -vnc :0 -M pc-i440fx-2.7,accel=kvm -monitor stdio -netdev user,id=unet,hostfwd=tcp::2022-:22,hostfwd=tcp::2023-:2022 -device virtio-net-pci,netdev=unet -drive file=/home/vms/f24.qcow2,cache=none,id=disk,if=none  -device virtio-blk-pci,drive=disk -device virtio-balloon -vga qxl -device ich9-usb-ehci1 -device usb-tablet,id=in0 -device virtio-rng-pci -device AC97 -m 8192 -smp 4 -drive file=/home/vms/Fedora-Server-netinst-x86_64-23.iso,cache=none,id=cd,if=scsi -incoming tcp::4444

> > The crash I've had with postcopy is:
> > red_dispatcher_loadvm_commands:
> > id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
> > id 1, group 1, virt start 7fbe83c00000, virt end 7fbe87bfe000, generation 0, delta 7fbe83c00000
> > id 2, group 1, virt start 7fbe7fa00000, virt end 7fbe83a00000, generation 0, delta 7fbe7fa00000
> > (./x86_64-softmmu/qemu-system-x86_64:22376): Spice-CRITICAL **: red_memslots.c:123:get_virt: slot_id 128 too big, addr=8000000000000000
> > 
> > #0  0x00007fc0aa42f49d in read () from /lib64/libpthread.so.0
> > #1  0x00007fc0a8c36c01 in spice_backtrace_gstack () from /lib64/libspice-server.so.1
> > #2  0x00007fc0a8c3e4f7 in spice_logv () from /lib64/libspice-server.so.1
> > #3  0x00007fc0a8c3e655 in spice_log () from /lib64/libspice-server.so.1
> > #4  0x00007fc0a8bfc6de in get_virt () from /lib64/libspice-server.so.1
> > #5  0x00007fc0a8bfcb73 in red_get_data_chunks_ptr () from /lib64/libspice-server.so.1
> > #6  0x00007fc0a8bff3fa in red_get_cursor_cmd () from /lib64/libspice-server.so.1
> > #7  0x00007fc0a8c0fd79 in handle_dev_loadvm_commands () from /lib64/libspice-server.so.1
> > #8  0x00007fc0a8bf9523 in dispatcher_handle_recv_read () from /lib64/libspice-server.so.1
> > #9  0x00007fc0a8c1d5a5 in red_worker_main () from /lib64/libspice-server.so.1
> > #10 0x00007fc0aa428dc5 in start_thread () from /lib64/libpthread.so.0
> > #11 0x00007fc0a61786ed in clone () from /lib64/libc.so.6
> 
> Spice worker thread ...
> 
> > red_dispatcher_loadvm_commands:
> > id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
> > id 1, group 1, virt start 7f3b93800000, virt end 7f3b977fe000, generation 0, delta 7f3b93800000
> > id 2, group 1, virt start 7f3b8f400000, virt end 7f3b93400000, generation 0, delta 7f3b8f400000
> > (/opt/qemu/v2.7.0/bin/qemu-system-x86_64:41053): Spice-CRITICAL **: red_memslots.c:123:get_virt: slot_id 80 too big, addr=5000000000000000
> 
> 
> ... trying to decode an invalid QXL address.

Yes, one observation: I think a few (all?) of the bad
addresses I've seen there have been a single nybble followed by
a lot of 0's.
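
That fits how spice-server decodes a QXLPHYSICAL address: the slot id
lives in the topmost bits, so a corrupted top nybble turns directly into
a huge slot id.  A standalone sketch of just the decode (assuming the
usual 8 slot-id bits, i.e. qemu's MEMSLOT_SLOT_BITS; the helper name is
mine, not the spice source):

#include <stdint.h>
#include <stdio.h>

/* Sketch of the slot-id extraction done in red_memslots.c:get_virt().
 * Assumption: 8 slot-id bits, matching qemu's MEMSLOT_SLOT_BITS. */
#define SLOT_ID_BITS 8

static unsigned slot_id(uint64_t qxl_phys)
{
    return (unsigned)(qxl_phys >> (64 - SLOT_ID_BITS));
}

int main(void)
{
    printf("%u\n", slot_id(0x8000000000000000ull)); /* 128, first log  */
    printf("%u\n", slot_id(0x5000000000000000ull)); /* 80, second log  */
    return 0;
}

Both bad addresses decode to exactly the "slot_id too big" values above;
a pointer field read before its page was filled in could look just like
that.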

> > I'm using:
> > spice-server-devel-0.12.4-19.el7.x86_64
> 
> Ah, RHEL-7.3 host.
> 
> cheers,
>   Gerd
> 

Dave

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [Spice-devel] Postcopy+spice crash
From: Uri Lublin @ 2016-12-05 12:06 UTC
  To: Dr. David Alan Gilbert, Gerd Hoffmann; +Cc: qemu-devel, spice-devel

On 12/05/2016 11:46 AM, Dr. David Alan Gilbert wrote:
> * Gerd Hoffmann (kraxel@redhat.com) wrote:
>> On Fri, 2016-12-02 at 17:44 +0000, Dr. David Alan Gilbert wrote:
>>> Hi Gerd,
>>>   I've got a moderately repeatable crash with spice playing
>>> a video + postcopy.  Some of the time I just get a warning
>>> (that I also get in precopy) but sometimes it turns into
>>> a backtrace;
>>>
>>> This is:
>>>   f24 guest playing youtube fullscreen.
>>>   migration between 2.7.0<->current head (had crash both ways)
>>>
>>> The warning I get with precopy most of the time is:
>>>   (./x86_64-softmmu/qemu-system-x86_64:26921): Spice-Warning **: red_memslots.c:94:validate_virt: virtual address out of range
>>
>> That is in spice-server.  Which version do you run?
>
> From the bottom of the post: spice-server-devel-0.12.4-19.el7.x86_64 (RHEL 7)
>
>> Adding spice-devel to Cc:
>>
>>>     virt=0x7f5397ed002a+0x2925ff31 slot_id=1 group_id=1
>>>     slot=0x7f5397c00000-0x7f539bbfe000 delta=0x7f5397c00000
>>
>> Base address looks sane.
>> Size (0x2925ff31) is bogus.
>>
>> On a quick glance I'd blame the guest for sending corrupted commands.
>> Strange though that it happens on migration only, so there could be
>> a host issue too.  Or a timing issue triggered by migration.
>>
>> Which migration phase?
>
> This is the point at which it switches over in postcopy.

It looks like it's the vmstate (post) load phase of the qxl device on
the destination host.
Maybe if you trace qxl device save/load-related functions
on both src and dst hosts you'll see a difference.
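
For example (exact syntax depends on the trace backend the build was
configured with; the enable=/file= suboptions and the 'qxl_*' glob are
my assumption for a 2.7-era build):

  ./x86_64-softmmu/qemu-system-x86_64 -trace enable='qxl_*',file=qxl-src.log ...
  ./x86_64-softmmu/qemu-system-x86_64 -trace enable='qxl_*',file=qxl-dst.log ... -incoming tcp::4444

then diff the two logs around the switchover point.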

>
>> Do you have seamless spice migration enabled?
>> If so: Does it still reproduce with seamless migration turned off?
>
> No, I don't think so; I think the command line I was running was:
> ./x86_64-softmmu/qemu-system-x86_64 -vnc :0 -M pc-i440fx-2.7,accel=kvm -monitor stdio -netdev user,id=unet,hostfwd=tcp::2022-:22,hostfwd=tcp::2023-:2022 -device virtio-net-pci,netdev=unet -drive file=/home/vms/f24.qcow2,cache=none,id=disk,if=none  -device virtio-blk-pci,drive=disk -device virtio-balloon -vga qxl -device ich9-usb-ehci1 -device usb-tablet,id=in0 -device virtio-rng-pci -device AC97 -m 8192 -smp 4 -drive file=/home/vms/Fedora-Server-netinst-x86_64-23.iso,cache=none,id=cd,if=scsi -incoming tcp::4444

Note that VNC is used (-vnc :0) rather than -spice on that command line.

Uri.

>
>>> The crash I've had with postcopy is:
>>> red_dispatcher_loadvm_commands:
>>> id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
>>> id 1, group 1, virt start 7fbe83c00000, virt end 7fbe87bfe000, generation 0, delta 7fbe83c00000
>>> id 2, group 1, virt start 7fbe7fa00000, virt end 7fbe83a00000, generation 0, delta 7fbe7fa00000
>>> (./x86_64-softmmu/qemu-system-x86_64:22376): Spice-CRITICAL **: red_memslots.c:123:get_virt: slot_id 128 too big, addr=8000000000000000
>>>
>>> #0  0x00007fc0aa42f49d in read () from /lib64/libpthread.so.0
>>> #1  0x00007fc0a8c36c01 in spice_backtrace_gstack () from /lib64/libspice-server.so.1
>>> #2  0x00007fc0a8c3e4f7 in spice_logv () from /lib64/libspice-server.so.1
>>> #3  0x00007fc0a8c3e655 in spice_log () from /lib64/libspice-server.so.1
>>> #4  0x00007fc0a8bfc6de in get_virt () from /lib64/libspice-server.so.1
>>> #5  0x00007fc0a8bfcb73 in red_get_data_chunks_ptr () from /lib64/libspice-server.so.1
>>> #6  0x00007fc0a8bff3fa in red_get_cursor_cmd () from /lib64/libspice-server.so.1
>>> #7  0x00007fc0a8c0fd79 in handle_dev_loadvm_commands () from /lib64/libspice-server.so.1
>>> #8  0x00007fc0a8bf9523 in dispatcher_handle_recv_read () from /lib64/libspice-server.so.1
>>> #9  0x00007fc0a8c1d5a5 in red_worker_main () from /lib64/libspice-server.so.1
>>> #10 0x00007fc0aa428dc5 in start_thread () from /lib64/libpthread.so.0
>>> #11 0x00007fc0a61786ed in clone () from /lib64/libc.so.6
>>
>> Spice worker thread ...
>>
>>> red_dispatcher_loadvm_commands:
>>> id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
>>> id 1, group 1, virt start 7f3b93800000, virt end 7f3b977fe000, generation 0, delta 7f3b93800000
>>> id 2, group 1, virt start 7f3b8f400000, virt end 7f3b93400000, generation 0, delta 7f3b8f400000
>>> (/opt/qemu/v2.7.0/bin/qemu-system-x86_64:41053): Spice-CRITICAL **: red_memslots.c:123:get_virt: slot_id 80 too big, addr=5000000000000000
>>
>>
>> ... trying to decode an invalid QXL address.
>
> Yes, one observation: I think a few (all?) of the bad
> addresses I've seen there have been a single nybble followed by
> a lot of 0's.
>
>>> I'm using:
>>> spice-server-devel-0.12.4-19.el7.x86_64
>>
>> Ah, RHEL-7.3 host.
>>
>> cheers,
>>   Gerd
>>
>
> Dave
>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [Spice-devel] Postcopy+spice crash
From: Gerd Hoffmann @ 2016-12-06  6:59 UTC
  To: uril; +Cc: Dr. David Alan Gilbert, qemu-devel, spice-devel

  Hi,

> >> On a quick glance I'd blame the guest for sending corrupted commands.
> >> Strange though that it happens on migration only, so there could be
> >> a host issue too.  Or a timing issue triggered by migration.
> >>
> >> Which migration phase?
> >
> > This is the point at which it switches over in postcopy.
> 
> It looks like it's the vmstate (post) load phase of the qxl device on
> the destination host.

Dave, can you try "thread apply all bt" so we see the other threads too?
That should show whether it happens in post_load.

> Maybe if you trace qxl device save/load-related functions
> on both src and dst hosts you'll see a difference.

qxl keeps references to certain commands (create surface for example) in
qxl device memory, so it can replay them in post_load.  That possibly
doesn't work correctly with postcopy.

cheers,
  Gerd


* Re: [Qemu-devel] [Spice-devel] Postcopy+spice crash
From: Dr. David Alan Gilbert @ 2016-12-06 10:53 UTC
  To: Gerd Hoffmann; +Cc: uril, qemu-devel, spice-devel

* Gerd Hoffmann (kraxel@redhat.com) wrote:
>   Hi,
> 
> > >> On a quick glance I'd blame the guest for sending corrupted commands.
> > >> Strange though that it happens on migration only, so there could be
> > >> a host issue too.  Or a timing issue triggered by migration.
> > >>
> > >> Which migration phase?
> > >
> > > This is the point at which it switches over in postcopy.
> > 
> > It looks like it's the vmstate (post) load phase of the qxl device on
> > destination host.
> 
> Dave, can you try "thread apply all bt" so we see the other threads too?
> That should show whether it happens in post_load.

Yes, I already have the full set of threads; you can see the qxl_post_load in
thread 1.

red_dispatcher_loadvm_commands: 
id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
id 1, group 1, virt start 7fbe83c00000, virt end 7fbe87bfe000, generation 0, delta 7fbe83c00000
id 2, group 1, virt start 7fbe7fa00000, virt end 7fbe83a00000, generation 0, delta 7fbe7fa00000
(./x86_64-softmmu/qemu-system-x86_64:22376): Spice-CRITICAL **: red_memslots.c:123:get_virt: slot_id 128 too big, addr=8000000000000000
Thread 12 (Thread 0x7fc0a0df2700 (LWP 22377)):
#0  0x00007fc0aa42f1bd in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fc0aa42ad02 in _L_lock_791 () from /lib64/libpthread.so.0
#2  0x00007fc0aa42ac08 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x0000556465736839 in qemu_mutex_lock (mutex=mutex@entry=0x556465d76120 <qemu_global_mutex>) at /root/git/qemu/util/qemu-thread-posix.c:64
#4  0x00005564653e69d6 in qemu_mutex_lock_iothread () at /root/git/qemu/cpus.c:1296
#5  0x000055646574596e in call_rcu_thread (opaque=<optimized out>) at /root/git/qemu/util/rcu.c:257
#6  0x00007fc0aa428dc5 in start_thread () from /lib64/libpthread.so.0
#7  0x00007fc0a61786ed in clone () from /lib64/libc.so.6
Thread 11 (Thread 0x7fc09f304700 (LWP 22379)):
#0  0x00007fc0aa42c6d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x0000556465736999 in qemu_cond_wait (cond=<optimized out>, mutex=mutex@entry=0x556465d76120 <qemu_global_mutex>) at /root/git/qemu/util/qemu-thread-posix.c:137
#2  0x00005564653e6fe3 in qemu_kvm_wait_io_event (cpu=<optimized out>) at /root/git/qemu/cpus.c:964
#3  qemu_kvm_cpu_thread_fn (arg=0x556466688740) at /root/git/qemu/cpus.c:1003
#4  0x00007fc0aa428dc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fc0a61786ed in clone () from /lib64/libc.so.6
Thread 10 (Thread 0x7fc09eb03700 (LWP 22380)):
#0  0x00007fc0aa42c6d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x0000556465736999 in qemu_cond_wait (cond=<optimized out>, mutex=mutex@entry=0x556465d76120 <qemu_global_mutex>) at /root/git/qemu/util/qemu-thread-posix.c:137
#2  0x00005564653e6fe3 in qemu_kvm_wait_io_event (cpu=<optimized out>) at /root/git/qemu/cpus.c:964
#3  qemu_kvm_cpu_thread_fn (arg=0x5564666ea960) at /root/git/qemu/cpus.c:1003
#4  0x00007fc0aa428dc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fc0a61786ed in clone () from /lib64/libc.so.6
Thread 9 (Thread 0x7fc09e302700 (LWP 22381)):
#0  0x00007fc0aa42c6d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x0000556465736999 in qemu_cond_wait (cond=<optimized out>, mutex=mutex@entry=0x556465d76120 <qemu_global_mutex>) at /root/git/qemu/util/qemu-thread-posix.c:137
#2  0x00005564653e6fe3 in qemu_kvm_wait_io_event (cpu=<optimized out>) at /root/git/qemu/cpus.c:964
#3  qemu_kvm_cpu_thread_fn (arg=0x55646670a120) at /root/git/qemu/cpus.c:1003
#4  0x00007fc0aa428dc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fc0a61786ed in clone () from /lib64/libc.so.6
Thread 8 (Thread 0x7fc09db01700 (LWP 22382)):
#0  0x00007fc0aa42c6d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x0000556465736999 in qemu_cond_wait (cond=<optimized out>, mutex=mutex@entry=0x556465d76120 <qemu_global_mutex>) at /root/git/qemu/util/qemu-thread-posix.c:137
#2  0x00005564653e6fe3 in qemu_kvm_wait_io_event (cpu=<optimized out>) at /root/git/qemu/cpus.c:964
#3  qemu_kvm_cpu_thread_fn (arg=0x5564667298d0) at /root/git/qemu/cpus.c:1003
#4  0x00007fc0aa428dc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fc0a61786ed in clone () from /lib64/libc.so.6
Thread 7 (Thread 0x7fbe7f9ff700 (LWP 22383)):
#0  0x00007fc0aa42f49d in read () from /lib64/libpthread.so.0
#1  0x00007fc0a8c36c01 in spice_backtrace_gstack () from /lib64/libspice-server.so.1
#2  0x00007fc0a8c3e4f7 in spice_logv () from /lib64/libspice-server.so.1
#3  0x00007fc0a8c3e655 in spice_log () from /lib64/libspice-server.so.1
#4  0x00007fc0a8bfc6de in get_virt () from /lib64/libspice-server.so.1
#5  0x00007fc0a8bfcb73 in red_get_data_chunks_ptr () from /lib64/libspice-server.so.1
#6  0x00007fc0a8bff3fa in red_get_cursor_cmd () from /lib64/libspice-server.so.1
#7  0x00007fc0a8c0fd79 in handle_dev_loadvm_commands () from /lib64/libspice-server.so.1
#8  0x00007fc0a8bf9523 in dispatcher_handle_recv_read () from /lib64/libspice-server.so.1
#9  0x00007fc0a8c1d5a5 in red_worker_main () from /lib64/libspice-server.so.1
#10 0x00007fc0aa428dc5 in start_thread () from /lib64/libpthread.so.0
#11 0x00007fc0a61786ed in clone () from /lib64/libc.so.6
Thread 6 (Thread 0x7fbe7efff700 (LWP 22385)):
#0  0x00007fc0aa42c6d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x0000556465736999 in qemu_cond_wait (cond=cond@entry=0x556466ecde00, mutex=mutex@entry=0x556466ecde30) at /root/git/qemu/util/qemu-thread-posix.c:137
#2  0x0000556465680c5b in vnc_worker_thread_loop (queue=queue@entry=0x556466ecde00) at /root/git/qemu/ui/vnc-jobs.c:228
#3  0x0000556465681198 in vnc_worker_thread (arg=0x556466ecde00) at /root/git/qemu/ui/vnc-jobs.c:335
#4  0x00007fc0aa428dc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fc0a61786ed in clone () from /lib64/libc.so.6
Thread 5 (Thread 0x7fc0a05f1700 (LWP 22958)):
#0  0x00007fc0aa42c6d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x0000556465736999 in qemu_cond_wait (cond=cond@entry=0x556467b89730, mutex=mutex@entry=0x556467b89708) at /root/git/qemu/util/qemu-thread-posix.c:137
#2  0x000055646540a643 in do_data_decompress (opaque=0x556467b89700) at /root/git/qemu/migration/ram.c:2284
#3  0x00007fc0aa428dc5 in start_thread () from /lib64/libpthread.so.0
#4  0x00007fc0a61786ed in clone () from /lib64/libc.so.6
Thread 4 (Thread 0x7fbe7dfff700 (LWP 22959)):
#0  0x00007fc0aa42c6d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x0000556465736999 in qemu_cond_wait (cond=cond@entry=0x556467b897a8, mutex=mutex@entry=0x556467b89780) at /root/git/qemu/util/qemu-thread-posix.c:137
#2  0x000055646540a643 in do_data_decompress (opaque=0x556467b89778) at /root/git/qemu/migration/ram.c:2284
#3  0x00007fc0aa428dc5 in start_thread () from /lib64/libpthread.so.0
#4  0x00007fc0a61786ed in clone () from /lib64/libc.so.6
Thread 3 (Thread 0x7fbe7d7fe700 (LWP 22967)):
#0  0x00007fc0a616ddad in poll () from /lib64/libc.so.6
#1  0x0000556465631868 in poll (__timeout=-1, __nfds=2, __fds=0x7fbe7d7fd990) at /usr/include/bits/poll2.h:46
#2  postcopy_ram_fault_thread (opaque=0x556466c93f10) at /root/git/qemu/migration/postcopy-ram.c:405
#3  0x00007fc0aa428dc5 in start_thread () from /lib64/libpthread.so.0
#4  0x00007fc0a61786ed in clone () from /lib64/libc.so.6
Thread 2 (Thread 0x7fbe7cffd700 (LWP 22968)):
#0  0x00007fc0a6172ba9 in syscall () from /lib64/libc.so.6
#1  0x0000556465736ca6 in futex_wait (val=<optimized out>, ev=<optimized out>) at /root/git/qemu/util/qemu-thread-posix.c:306
#2  qemu_event_wait (ev=ev@entry=0x556466c93f18) at /root/git/qemu/util/qemu-thread-posix.c:422
#3  0x000055646541044d in postcopy_ram_listen_thread (opaque=0x556467e45740) at /root/git/qemu/migration/savevm.c:1485
#4  0x00007fc0aa428dc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fc0a61786ed in clone () from /lib64/libc.so.6
Thread 1 (Thread 0x7fc0aead5c40 (LWP 22376)):
#0  0x00007fc0aa42f49d in read () from /lib64/libpthread.so.0
#1  0x00007fc0a8bf9264 in read_safe () from /lib64/libspice-server.so.1
#2  0x00007fc0a8bf9717 in dispatcher_send_message () from /lib64/libspice-server.so.1
#3  0x00007fc0a8bfa0c2 in red_dispatcher_loadvm_commands () from /lib64/libspice-server.so.1
#4  0x000055646556c03d in qxl_spice_loadvm_commands (qxl=qxl@entry=0x55646755b8c0, ext=ext@entry=0x556467a895a0, count=2) at /root/git/qemu/hw/display/qxl.c:219
#5  0x000055646556d15f in qxl_post_load (opaque=0x55646755b8c0, version=<optimized out>) at /root/git/qemu/hw/display/qxl.c:2212
#6  0x000055646562f1b8 in vmstate_load_state (f=f@entry=0x5564666347d0, vmsd=<optimized out>, opaque=0x55646755b8c0, version_id=version_id@entry=21) at /root/git/qemu/migration/vmstate.c:151
#7  0x000055646540f4a1 in vmstate_load (f=0x5564666347d0, se=0x5564676f90a0, version_id=21) at /root/git/qemu/migration/savevm.c:690
#8  0x000055646540f6db in qemu_loadvm_section_start_full (f=f@entry=0x5564666347d0, mis=mis@entry=0x556466c93f10) at /root/git/qemu/migration/savevm.c:1843
#9  0x000055646540f9ac in qemu_loadvm_state_main (f=f@entry=0x5564666347d0, mis=mis@entry=0x556466c93f10) at /root/git/qemu/migration/savevm.c:1900
#10 0x000055646540fd8f in loadvm_handle_cmd_packaged (mis=0x556466c93f10) at /root/git/qemu/migration/savevm.c:1660
#11 loadvm_process_command (f=0x556467e45740) at /root/git/qemu/migration/savevm.c:1723
#12 qemu_loadvm_state_main (f=f@entry=0x556467e45740, mis=mis@entry=0x556466c93f10) at /root/git/qemu/migration/savevm.c:1913
#13 0x0000556465412546 in qemu_loadvm_state (f=f@entry=0x556467e45740) at /root/git/qemu/migration/savevm.c:1973
#14 0x000055646562b4e8 in process_incoming_migration_co (opaque=0x556467e45740) at /root/git/qemu/migration/migration.c:394
#15 0x0000556465746ada in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at /root/git/qemu/util/coroutine-ucontext.c:79
#16 0x00007fc0a60c7cf0 in ?? () from /lib64/libc.so.6
#17 0x00007ffe14885180 in ?? ()
#18 0x0000000000000000 in ?? ()

> > Maybe if you trace qxl device save/load-related functions
> > on both src and dst hosts you'll see a difference.
> 
> qxl keeps references to certain commands (create surface for example) in
> qxl device memory, so it can replay them in post_load.  That possibly
> doesn't work correctly with postcopy.

It should; the device memory is just a RAMBlock that's migrated, so if it's
not arrived yet from the source the qxl code will block until postcopy
drags it across; assuming, that is, that the qxl code on the source isn't
still trying to write to its copy at the same time, which at this
point it shouldn't be.
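
For reference, a minimal sketch of the userfaultfd mechanism postcopy
relies on for that blocking (illustrative only -- the real code is in
migration/postcopy-ram.c, and error handling is elided):

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Register a region so that any thread touching a not-yet-arrived page
 * (e.g. the spice worker dereferencing QXL device memory) sleeps in the
 * kernel until the migration code fills that page in. */
static int watch_missing_pages(void *base, size_t len)
{
    int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
    struct uffdio_api api = { .api = UFFD_API };
    struct uffdio_register reg = {
        .range = { .start = (unsigned long)base, .len = len },
        .mode  = UFFDIO_REGISTER_MODE_MISSING,
    };

    if (uffd < 0 ||
        ioctl(uffd, UFFDIO_API, &api) < 0 ||
        ioctl(uffd, UFFDIO_REGISTER, &reg) < 0) {
        return -1;
    }
    /* A fault thread then read()s struct uffd_msg events from uffd,
     * fetches the missing page from the source, and UFFDIO_COPY both
     * installs it and wakes the faulting thread. */
    return uffd;
}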

Dave

> cheers,
>   Gerd
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [Spice-devel] Postcopy+spice crash
From: Gerd Hoffmann @ 2016-12-06 12:37 UTC
  To: Dr. David Alan Gilbert; +Cc: uril, qemu-devel, spice-devel

  Hi,

Yep, spice worker thread ...

> Thread 7 (Thread 0x7fbe7f9ff700 (LWP 22383)):
> #0  0x00007fc0aa42f49d in read () from /lib64/libpthread.so.0
> #1  0x00007fc0a8c36c01 in spice_backtrace_gstack () from /lib64/libspice-server.so.1
> #2  0x00007fc0a8c3e4f7 in spice_logv () from /lib64/libspice-server.so.1
> #3  0x00007fc0a8c3e655 in spice_log () from /lib64/libspice-server.so.1
> #4  0x00007fc0a8bfc6de in get_virt () from /lib64/libspice-server.so.1
> #5  0x00007fc0a8bfcb73 in red_get_data_chunks_ptr () from /lib64/libspice-server.so.1
> #6  0x00007fc0a8bff3fa in red_get_cursor_cmd () from /lib64/libspice-server.so.1
> #7  0x00007fc0a8c0fd79 in handle_dev_loadvm_commands () from /lib64/libspice-server.so.1
> #8  0x00007fc0a8bf9523 in dispatcher_handle_recv_read () from /lib64/libspice-server.so.1
> #9  0x00007fc0a8c1d5a5 in red_worker_main () from /lib64/libspice-server.so.1
> #10 0x00007fc0aa428dc5 in start_thread () from /lib64/libpthread.so.0
> #11 0x00007fc0a61786ed in clone () from /lib64/libc.so.6

... busy processing post_load request from main thread ...

> Thread 1 (Thread 0x7fc0aead5c40 (LWP 22376)):
> #0  0x00007fc0aa42f49d in read () from /lib64/libpthread.so.0
> #1  0x00007fc0a8bf9264 in read_safe () from /lib64/libspice-server.so.1
> #2  0x00007fc0a8bf9717 in dispatcher_send_message () from /lib64/libspice-server.so.1
> #3  0x00007fc0a8bfa0c2 in red_dispatcher_loadvm_commands () from /lib64/libspice-server.so.1
> #4  0x000055646556c03d in qxl_spice_loadvm_commands (qxl=qxl@entry=0x55646755b8c0, ext=ext@entry=0x556467a895a0, count=2) at /root/git/qemu/hw/display/qxl.c:219
> #5  0x000055646556d15f in qxl_post_load (opaque=0x55646755b8c0, version=<optimized out>) at /root/git/qemu/hw/display/qxl.c:2212
> #6  0x000055646562f1b8 in vmstate_load_state (f=f@entry=0x5564666347d0, vmsd=<optimized out>, opaque=0x55646755b8c0, version_id=version_id@entry=21) at /root/git/qemu/migration/vmstate.c:151
> #7  0x000055646540f4a1 in vmstate_load (f=0x5564666347d0, se=0x5564676f90a0, version_id=21) at /root/git/qemu/migration/savevm.c:690
> #8  0x000055646540f6db in qemu_loadvm_section_start_full (f=f@entry=0x5564666347d0, mis=mis@entry=0x556466c93f10) at /root/git/qemu/migration/savevm.c:1843
> #9  0x000055646540f9ac in qemu_loadvm_state_main (f=f@entry=0x5564666347d0, mis=mis@entry=0x556466c93f10) at /root/git/qemu/migration/savevm.c:1900
> #10 0x000055646540fd8f in loadvm_handle_cmd_packaged (mis=0x556466c93f10) at /root/git/qemu/migration/savevm.c:1660
> #11 loadvm_process_command (f=0x556467e45740) at /root/git/qemu/migration/savevm.c:1723
> #12 qemu_loadvm_state_main (f=f@entry=0x556467e45740, mis=mis@entry=0x556466c93f10) at /root/git/qemu/migration/savevm.c:1913
> #13 0x0000556465412546 in qemu_loadvm_state (f=f@entry=0x556467e45740) at /root/git/qemu/migration/savevm.c:1973
> #14 0x000055646562b4e8 in process_incoming_migration_co (opaque=0x556467e45740) at /root/git/qemu/migration/migration.c:394
> #15 0x0000556465746ada in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at /root/git/qemu/util/coroutine-ucontext.c:79
> #16 0x00007fc0a60c7cf0 in ?? () from /lib64/libc.so.6
> #17 0x00007ffe14885180 in ?? ()
> #18 0x0000000000000000 in ?? ()

> It should; the device memory is just a RAMBlock that's migrated, so if it's
> not arrived yet from the source the qxl code will block until postcopy
> drags it across; assuming, that is, that the qxl code on the source isn't
> still trying to write to its copy at the same time, which at this
> point it shouldn't be.

Seems it happens while restoring the cursor;
does this patch make a difference?

--- a/hw/display/qxl.c
+++ b/hw/display/qxl.c
@@ -2238,12 +2238,14 @@ static int qxl_post_load(void *opaque, int version)
             cmds[out].group_id = MEMSLOT_GROUP_GUEST;
             out++;
         }
+#if 0
         if (d->guest_cursor) {
             cmds[out].cmd.data = d->guest_cursor;
             cmds[out].cmd.type = QXL_CMD_CURSOR;
             cmds[out].group_id = MEMSLOT_GROUP_GUEST;
             out++;
         }
+#endif
         qxl_spice_loadvm_commands(d, cmds, out);
         g_free(cmds);
         if (d->guest_monitors_config) {

cheers,
  Gerd


* Re: [Qemu-devel] [Spice-devel] Postcopy+spice crash
From: Dr. David Alan Gilbert @ 2016-12-06 16:47 UTC
  To: Gerd Hoffmann; +Cc: uril, qemu-devel, spice-devel

* Gerd Hoffmann (kraxel@redhat.com) wrote:
>   Hi,
> 
> Yep, spice worker thread ...
> 
> > Thread 7 (Thread 0x7fbe7f9ff700 (LWP 22383)):
> > #0  0x00007fc0aa42f49d in read () from /lib64/libpthread.so.0
> > #1  0x00007fc0a8c36c01 in spice_backtrace_gstack () from /lib64/libspice-server.so.1
> > #2  0x00007fc0a8c3e4f7 in spice_logv () from /lib64/libspice-server.so.1
> > #3  0x00007fc0a8c3e655 in spice_log () from /lib64/libspice-server.so.1
> > #4  0x00007fc0a8bfc6de in get_virt () from /lib64/libspice-server.so.1
> > #5  0x00007fc0a8bfcb73 in red_get_data_chunks_ptr () from /lib64/libspice-server.so.1
> > #6  0x00007fc0a8bff3fa in red_get_cursor_cmd () from /lib64/libspice-server.so.1
> > #7  0x00007fc0a8c0fd79 in handle_dev_loadvm_commands () from /lib64/libspice-server.so.1
> > #8  0x00007fc0a8bf9523 in dispatcher_handle_recv_read () from /lib64/libspice-server.so.1
> > #9  0x00007fc0a8c1d5a5 in red_worker_main () from /lib64/libspice-server.so.1
> > #10 0x00007fc0aa428dc5 in start_thread () from /lib64/libpthread.so.0
> > #11 0x00007fc0a61786ed in clone () from /lib64/libc.so.6
> 
> ... busy processing post_load request from main thread ...
> 
> > Thread 1 (Thread 0x7fc0aead5c40 (LWP 22376)):
> > #0  0x00007fc0aa42f49d in read () from /lib64/libpthread.so.0
> > #1  0x00007fc0a8bf9264 in read_safe () from /lib64/libspice-server.so.1
> > #2  0x00007fc0a8bf9717 in dispatcher_send_message () from /lib64/libspice-server.so.1
> > #3  0x00007fc0a8bfa0c2 in red_dispatcher_loadvm_commands () from /lib64/libspice-server.so.1
> > #4  0x000055646556c03d in qxl_spice_loadvm_commands (qxl=qxl@entry=0x55646755b8c0, ext=ext@entry=0x556467a895a0, count=2) at /root/git/qemu/hw/display/qxl.c:219
> > #5  0x000055646556d15f in qxl_post_load (opaque=0x55646755b8c0, version=<optimized out>) at /root/git/qemu/hw/display/qxl.c:2212
> > #6  0x000055646562f1b8 in vmstate_load_state (f=f@entry=0x5564666347d0, vmsd=<optimized out>, opaque=0x55646755b8c0, version_id=version_id@entry=21) at /root/git/qemu/migration/vmstate.c:151
> > #7  0x000055646540f4a1 in vmstate_load (f=0x5564666347d0, se=0x5564676f90a0, version_id=21) at /root/git/qemu/migration/savevm.c:690
> > #8  0x000055646540f6db in qemu_loadvm_section_start_full (f=f@entry=0x5564666347d0, mis=mis@entry=0x556466c93f10) at /root/git/qemu/migration/savevm.c:1843
> > #9  0x000055646540f9ac in qemu_loadvm_state_main (f=f@entry=0x5564666347d0, mis=mis@entry=0x556466c93f10) at /root/git/qemu/migration/savevm.c:1900
> > #10 0x000055646540fd8f in loadvm_handle_cmd_packaged (mis=0x556466c93f10) at /root/git/qemu/migration/savevm.c:1660
> > #11 loadvm_process_command (f=0x556467e45740) at /root/git/qemu/migration/savevm.c:1723
> > #12 qemu_loadvm_state_main (f=f@entry=0x556467e45740, mis=mis@entry=0x556466c93f10) at /root/git/qemu/migration/savevm.c:1913
> > #13 0x0000556465412546 in qemu_loadvm_state (f=f@entry=0x556467e45740) at /root/git/qemu/migration/savevm.c:1973
> > #14 0x000055646562b4e8 in process_incoming_migration_co (opaque=0x556467e45740) at /root/git/qemu/migration/migration.c:394
> > #15 0x0000556465746ada in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at /root/git/qemu/util/coroutine-ucontext.c:79
> > #16 0x00007fc0a60c7cf0 in ?? () from /lib64/libc.so.6
> > #17 0x00007ffe14885180 in ?? ()
> > #18 0x0000000000000000 in ?? ()
> 
> > It should; the device memory is just a RAMBlock that's migrated, so if it's
> > not arrived yet from the source the qxl code will block until postcopy
> > drags it across; assuming that is that the qxl code on the source isn't
> > still trying to write to it's copy at the same time, which at this
> > point it shouldn't.
> 
> Seems it happens while restoring the cursor;
> does this patch make a difference?

Hmm, my test case doesn't want to fail today, so unfortunately I can't tell.
(I've done at least 10 postcopies)

Dave

> --- a/hw/display/qxl.c
> +++ b/hw/display/qxl.c
> @@ -2238,12 +2238,14 @@ static int qxl_post_load(void *opaque, int version)
>              cmds[out].group_id = MEMSLOT_GROUP_GUEST;
>              out++;
>          }
> +#if 0
>          if (d->guest_cursor) {
>              cmds[out].cmd.data = d->guest_cursor;
>              cmds[out].cmd.type = QXL_CMD_CURSOR;
>              cmds[out].group_id = MEMSLOT_GROUP_GUEST;
>              out++;
>          }
> +#endif
>          qxl_spice_loadvm_commands(d, cmds, out);
>          g_free(cmds);
>          if (d->guest_monitors_config) {
> 
> cheers,
>   Gerd
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

