* [BISECT] Regression: SEGV: 9156c5d dmeventd rework locking code
@ 2017-03-31 22:19 Eric Wheeler
  2017-04-01  9:25 ` Zdenek Kabelac
  0 siblings, 1 reply; 5+ messages in thread
From: Eric Wheeler @ 2017-03-31 22:19 UTC (permalink / raw)
  To: lvm-devel

Hello all,

After upgrading from el7.2 to el7.3, we started getting dmeventd segmentation 
faults immediately after the update. A bisect of the lvm2 git tree shows the 
first bad commit below. This bug prevents us from activating our logical 
volumes without disabling lvm2-monitor and setting activation{monitoring = 
0} in lvm.conf.
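
For reference, the workaround amounts to the following lvm.conf stanza plus 
stopping the monitoring service (assuming the stock el7 service name):

    activation {
        # disable dmeventd monitoring of thin pools, snapshots, mirrors, ...
        monitoring = 0
    }

    ]# systemctl stop lvm2-monitor.service
    ]# systemctl disable lvm2-monitor.service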

I was able to get a gdb backtrace from a core dump in case that is useful, 
also below.

Please let me know if you need additional information or have a patch that 
I can test with.

Thank you for your help!

-Eric

===== GDB =====
Mar 31 12:07:06 server1.localhost kernel: dmeventd[7885]: segfault at 7f753ae4c6a8 ip 00007f7537b69617 sp 00007f753ae4c6b0 error 7 in liblvm2cmd.so.2.02[7f7537ac8000+191000]

~]# gdb /usr/sbin/dmeventd /var/coredumps/core-dmeventd-sig11-user0-group0-pid20364-time1490987932 
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-94.el7

Reading symbols from /usr/sbin/dmeventd...Reading symbols from /usr/lib/debug/usr/sbin/dmeventd.debug...done.
done.

warning: core file may not match specified executable file.
[New LWP 20408]
[New LWP 20364]
[New LWP 20409]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `dmeventd'.
Program terminated with signal 11, Segmentation fault.
#0  _touch_memory (size=<optimized out>, mem=<optimized out>) at mm/memlock.c:141
141		size_t pagesize = lvm_getpagesize();
Missing separate debuginfos, use: debuginfo-install bzip2-libs-1.0.6-13.el7.x86_64 elfutils-libelf-0.166-2.el7.x86_64 elfutils-libs-0.166-2.el7.x86_64 glibc-2.17-157.el7_3.1.x86_64 libattr-2.4.46-12.el7.x86_64 libblkid-2.23.2-33.el7.x86_64 libcap-2.22-8.el7.x86_64 libgcc-4.8.5-11.el7.x86_64 libselinux-2.5-6.el7.x86_64 libsepol-2.5-6.el7.x86_64 libuuid-2.23.2-33.el7.x86_64 pcre-8.32-15.el7_2.1.x86_64 systemd-libs-219-30.el7_3.7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64
(gdb) bt
#0  _touch_memory (size=<optimized out>, mem=<optimized out>) at mm/memlock.c:141
#1  _allocate_memory () at mm/memlock.c:163
#2  0x00007f49b0574107 in _lock_mem (cmd=0x7f49ac004a30) at mm/memlock.c:472
#3  _lock_mem_if_needed (cmd=0x7f49ac004a30) at mm/memlock.c:555
#4  0x00007f49b05e7916 in lvm2_run (handle=0x7f49ac004a30, cmdline=<optimized out>, cmdline at entry=0x7f49b0895197 "_memlock_inc") at lvmcmdlib.c:83
#5  0x00007f49b0894e9a in dmeventd_lvm2_init () at dmeventd_lvm.c:82
#6  0x00007f49b0a990a5 in register_device (device=0x7f49ac004a10 "data-data--pool-tpool", uuid=<optimized out>, major=<optimized out>, minor=<optimized out>, user=0x56352a5a7008) at dmeventd_thin.c:446
#7  0x0000563529b1832f in _do_register_device (thread=0x56352a5a6f90) at dmeventd.c:899
#8  _monitor_thread (arg=0x56352a5a6f90) at dmeventd.c:989
#9  0x00007f49b3224dc5 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f49b28d073d in clone () from /lib64/libc.so.6
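
For context: the two innermost frames are lvm2's memlock preallocation, which 
reserves reserved_stack KiB on the current thread's own stack and then writes 
one word per page to fault it all in. A simplified paraphrase of that path 
(not the verbatim lvm2 source):

    /* Simplified paraphrase of mm/memlock.c; not the verbatim source. */
    static void _touch_memory(void *mem, size_t size)
    {
            size_t pagesize = lvm_getpagesize();
            char *pos = mem;
            char *end = pos + size - sizeof(long);

            while (pos < end) {
                    *(long *) pos = 1;      /* the write that faults here */
                    pos += pagesize;
            }
    }

    static void _allocate_memory(void)
    {
            /* _size_stack comes from activation/reserved_stack (KiB) in
             * lvm.conf. alloca() carves it out of the *current* thread's
             * stack, so if it exceeds the stack size the thread was created
             * with, the touch above runs past the guard page. */
            void *stack_mem = alloca(_size_stack);

            if (stack_mem)
                    _touch_memory(stack_mem, _size_stack);
    }

The "error 7" in the kernel log decodes, on x86_64, to a user-mode write to a 
protected page - consistent with walking off the end of a thread stack.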


Mar 31 12:06:42 server1.localhost kernel: dmeventd[7113]: segfault at 7fdaf759c6a8 ip 00007fdaf42b9617 sp 00007fdaf759c6b0 error 7 in liblvm2cmd.so.2.02[7fdaf4218000+191000]


===== git bisect =====
]# git bisect bad
9156c5d0888bf95b79d931682b51fc63c96ba236 is the first bad commit
commit 9156c5d0888bf95b79d931682b51fc63c96ba236
Author: Zdenek Kabelac <zkabelac@redhat.com>
Date:   Thu Oct 22 15:47:53 2015 +0200

    dmeventd: rework locking code
    
    Redesign threading code:
    
    - plugin registration runs within its new created thread for
      improved parallel usage.
    
    - wait task is created just once and used during whole plugin lifetime.
    
    - event thread is based over  'events' filter being set - when
      filter is 0, such thread is 'unused'.
    
    - event loop is  simplified.
    
    - timeout thread is never signaling 'processing' thread.
    
    - pending of events filter change is properly reported and
      running event thread is signalled when possible.
    
    - helgrind is not reporting problems.

:100644 100644 5f92657da584c0401363e6d5128abb770b53056a 8c271391dd7d2b27523313d1a6de1affa67d78c7 M	WHATS_NEW_DM
:040000 040000 ebc1e14428939d579b9a5f51f41052112354f6a0 d4edd549f18c6e530f2ff053003846eca8ce9c43 M	daemons

]# git bisect log
git bisect start
# good: [629398d0f275e6ee5abf7929bc80e671e9f141c7] pre-release
git bisect good 629398d0f275e6ee5abf7929bc80e671e9f141c7
# bad: [369bc264b0db8be18243a6f95e1f6c14fdd0db99] pre-release
git bisect bad 369bc264b0db8be18243a6f95e1f6c14fdd0db99
# bad: [76cff10a734a7c1e26b3835ff967dac0b7e46bcb] tests: avoid reading utils when skipping
git bisect bad 76cff10a734a7c1e26b3835ff967dac0b7e46bcb
# skip: [83f00e91567d387caf02da9cd3791c3fef85c80d] makefiles: drop explicit linking
git bisect skip 83f00e91567d387caf02da9cd3791c3fef85c80d
# bad: [ba41ee1dc94264f7ac8e61f8b1d56b10225b0d2f] thin: limit  no-flush using only for thin-pool
git bisect bad ba41ee1dc94264f7ac8e61f8b1d56b10225b0d2f
# skip: [256e432e78b720b929a285f5809b1d875b01861a] dmeventd: less locking mirror
git bisect skip 256e432e78b720b929a285f5809b1d875b01861a
# bad: [21748a86309443ddaefe1fcc0644f0b9a6ea138e] cleanup: gcc warning for old-style
git bisect bad 21748a86309443ddaefe1fcc0644f0b9a6ea138e
# skip: [842a7a17e3c0ffc0467ef23a3b59e4e8af2c8d74] cleanup: always set nsec
git bisect skip 842a7a17e3c0ffc0467ef23a3b59e4e8af2c8d74
# good: [e04424e87e66df22578d1e4d2488615cd3692873] report: identify LV hodling sanlock locks as 'private,lockd,sanlock' within lv_role report field
git bisect good e04424e87e66df22578d1e4d2488615cd3692873
# good: [a91fbe9d27a79b4be7fad72fc7a1ba2a976ecd41] makefiles: older gcc needs hint with rpath
git bisect good a91fbe9d27a79b4be7fad72fc7a1ba2a976ecd41
# good: [9c5c9e2355826ad3835f35e494dde9bb8b1e6356] dmeventd: raid plugin reporting
git bisect good 9c5c9e2355826ad3835f35e494dde9bb8b1e6356
# good: [466a1c72b7f24be4c932b503b3f8a3fb50a2eda5] cleanup: use enums
git bisect good 466a1c72b7f24be4c932b503b3f8a3fb50a2eda5
# bad: [8be60e6a65baf87b12862e07d24bd794608df2f2] cleanup: easier to read code
git bisect bad 8be60e6a65baf87b12862e07d24bd794608df2f2
# bad: [4284ba65ebe6401b2dbedc9abe850b650ed68f93] dmeventd: debug signals
git bisect bad 4284ba65ebe6401b2dbedc9abe850b650ed68f93
# bad: [12aa56d29867b962257d7d2789a661a22c649347] dmeventd: handle signal from plugin
git bisect bad 12aa56d29867b962257d7d2789a661a22c649347
# bad: [9156c5d0888bf95b79d931682b51fc63c96ba236] dmeventd: rework locking code
git bisect bad 9156c5d0888bf95b79d931682b51fc63c96ba236
# first bad commit: [9156c5d0888bf95b79d931682b51fc63c96ba236] dmeventd: rework locking code
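
Aside: a bisect log like the one above can be replayed verbatim on another 
clone of the lvm2 tree, which is handy for double-checking the result:

    ]# git bisect replay bisect.log     # bisect.log = the log printed above
    ]# git bisect reset                 # return to the original HEAD when done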



--
Eric Wheeler




* [BISECT] Regression: SEGV: 9156c5d dmeventd rework locking code
  2017-03-31 22:19 [BISECT] Regression: SEGV: 9156c5d dmeventd rework locking code Eric Wheeler
@ 2017-04-01  9:25 ` Zdenek Kabelac
  2017-04-04 23:22   ` Eric Wheeler
  0 siblings, 1 reply; 5+ messages in thread
From: Zdenek Kabelac @ 2017-04-01  9:25 UTC (permalink / raw)
  To: lvm-devel

On 1.4.2017 at 00:19, Eric Wheeler wrote:
> Hello all,
>
> After upgrading from el7.2 to el7.3, we started getting dmeventd segmentation
> faults immediately after the update. A bisect of the lvm2 git tree shows the
> first bad commit below. This bug prevents us from activating our logical
> volumes without disabling lvm2-monitor and setting activation{monitoring =
> 0} in lvm.conf.
>
> I was able to get a gdb backtrace from a core dump in case that is useful,
> also below.
>
> Please let me know if you need additional information or have a patch that
> I can test with.
>
> Thank you for your help!
>
> -Eric
>
> ===== GDB =====
> Mar 31 12:07:06 server1.localhost kernel: dmeventd[7885]: segfault at 7f753ae4c6a8 ip 00007f7537b69617 sp 00007f753ae4c6b0 error 7 in liblvm2cmd.so.2.02[7f7537ac8000+191000]
>
> ~]# gdb /usr/sbin/dmeventd /var/coredumps/core-dmeventd-sig11-user0-group0-pid20364-time1490987932
> GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-94.el7
>
> Reading symbols from /usr/sbin/dmeventd...Reading symbols from /usr/lib/debug/usr/sbin/dmeventd.debug...done.
> done.
>
> warning: core file may not match specified executable file.
> [New LWP 20408]
> [New LWP 20364]
> [New LWP 20409]
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> Core was generated by `dmeventd'.
> Program terminated with signal 11, Segmentation fault.
> #0  _touch_memory (size=<optimized out>, mem=<optimized out>) at mm/memlock.c:141
> 141		size_t pagesize = lvm_getpagesize();
> Missing separate debuginfos, use: debuginfo-install bzip2-libs-1.0.6-13.el7.x86_64 elfutils-libelf-0.166-2.el7.x86_64 elfutils-libs-0.166-2.el7.x86_64 glibc-2.17-157.el7_3.1.x86_64 libattr-2.4.46-12.el7.x86_64 libblkid-2.23.2-33.el7.x86_64 libcap-2.22-8.el7.x86_64 libgcc-4.8.5-11.el7.x86_64 libselinux-2.5-6.el7.x86_64 libsepol-2.5-6.el7.x86_64 libuuid-2.23.2-33.el7.x86_64 pcre-8.32-15.el7_2.1.x86_64 systemd-libs-219-30.el7_3.7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64
> (gdb) bt
> #0  _touch_memory (size=<optimized out>, mem=<optimized out>) at mm/memlock.c:141
> #1  _allocate_memory () at mm/memlock.c:163


Hi

Hmm, a few theories - your gdb backtrace suggests it failed in a libc 
call (getpagesize()) ??
So have you upgraded all related packages (device-mapper*, kernel*),
or is some 'mixture' in use ?

Also, don't you have some large/changed values of 'reserved_stack' or 
'reserved_memory' in your lvm.conf ?
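
One way to inspect the values actually in effect (lvmconfig ships with lvm2 
of this vintage; the defaults shown below are the stock values as best I 
recall, both in KiB):

    ]# lvmconfig activation/reserved_stack activation/reserved_memory
    reserved_stack=64
    reserved_memory=8192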

A recent version of lvm2 (169) added an 'extra page' for the 'stack guard' on 
PPC64le - but since your report suggests you use 'x86_64', it should not 
affect this arch.

Please open a BZ and attach the lvm.conf file in use and all other info 
(installed packages) - selinux enabled/disabled ?
Any non-standard kernel options in use ?

Regards

Zdenek




* [BISECT] Regression: SEGV: 9156c5d dmeventd rework locking code
  2017-04-01  9:25 ` Zdenek Kabelac
@ 2017-04-04 23:22   ` Eric Wheeler
  2017-04-05  8:05     ` Zdenek Kabelac
  0 siblings, 1 reply; 5+ messages in thread
From: Eric Wheeler @ 2017-04-04 23:22 UTC (permalink / raw)
  To: lvm-devel

On Sat, 1 Apr 2017, Zdenek Kabelac wrote:

> On 1.4.2017 at 00:19, Eric Wheeler wrote:
> > Hello all,
> >
> > After upgrading from el7.2 to el7.3, we started getting dmeventd segmentation
> > faults immediately after the update. A bisect of the lvm2 git tree shows the
> > first bad commit below. This bug prevents us from activating our logical
> > volumes without disabling lvm2-monitor and setting activation{monitoring =
> > 0} in lvm.conf.
> >
> > I was able to get a gdb backtrace from a core dump in case that is useful,
> > also below.
> >
> > Please let me know if you need additional information or have a patch that
> > I can test with.
> >
> > Thank you for your help!
> >
> > -Eric
> >
> > ===== GDB =====
> > Mar 31 12:07:06 server1.localhost kernel: dmeventd[7885]: segfault at
> > 7f753ae4c6a8 ip 00007f7537b69617 sp 00007f753ae4c6b0 error 7 in
> > liblvm2cmd.so.2.02[7f7537ac8000+191000]
> >
> > ~ ]# gdb /usr/sbin/dmeventd
> > ~ ]/var/coredumps/core-dmeventd-sig11-user0-group0-pid20364-time1490987932
> > GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-94.el7
> >
> > Reading symbols from /usr/sbin/dmeventd...Reading symbols from
> > /usr/lib/debug/usr/sbin/dmeventd.debug...done.
> > done.
> >
> > warning: core file may not match specified executable file.
> > [New LWP 20408]
> > [New LWP 20364]
> > [New LWP 20409]
> > [Thread debugging using libthread_db enabled]
> > Using host libthread_db library "/lib64/libthread_db.so.1".
> > Core was generated by `dmeventd'.
> > Program terminated with signal 11, Segmentation fault.
> > #0  _touch_memory (size=<optimized out>, mem=<optimized out>) at
> > mm/memlock.c:141
> > 141		size_t pagesize = lvm_getpagesize();
> > Missing separate debuginfos, use: debuginfo-install
> > bzip2-libs-1.0.6-13.el7.x86_64 elfutils-libelf-0.166-2.el7.x86_64
> > elfutils-libs-0.166-2.el7.x86_64 glibc-2.17-157.el7_3.1.x86_64
> > libattr-2.4.46-12.el7.x86_64 libblkid-2.23.2-33.el7.x86_64
> > libcap-2.22-8.el7.x86_64 libgcc-4.8.5-11.el7.x86_64
> > libselinux-2.5-6.el7.x86_64 libsepol-2.5-6.el7.x86_64
> > libuuid-2.23.2-33.el7.x86_64 pcre-8.32-15.el7_2.1.x86_64
> > systemd-libs-219-30.el7_3.7.x86_64 xz-libs-5.2.2-1.el7.x86_64
> > zlib-1.2.7-17.el7.x86_64
> > (gdb) bt
> > #0  _touch_memory (size=<optimized out>, mem=<optimized out>) at
> > mm/memlock.c:141
> > #1  _allocate_memory () at mm/memlock.c:163
> 
> 
> Hi
> 
> Hmm, a few theories - your gdb backtrace suggests it failed in a libc
> call (getpagesize()) ??
> So have you upgraded all related packages (device-mapper*, kernel*),
> or is some 'mixture' in use ?
> 
> Also, don't you have some large/changed values of 'reserved_stack' or
> 'reserved_memory' in your lvm.conf ?

Yes! Actually, reserved_stack was the problem. By trial and error we found 
that when reserved_stack is 290 or more, dmeventd will segfault. We tried 
on servers with far fewer logical volumes and did not have a problem, so 
while I am not going to try to figure out how many logical volumes it takes 
to hit this stack limit, this is the problem!

Is there some kind of hard limit to reserved_stack that should be 
enforced?

I seem to recall increasing these values because lvcreate (or lvchange or 
something) suggested that the values were too small.

Do you still want a bugzilla report?

-Eric
 

> A recent version of lvm2 (169) added an 'extra page' for the 'stack guard' on
> PPC64le - but since your report suggests you use 'x86_64', it should not
> affect this arch.
> 
> Please open a BZ and attach the lvm.conf file in use and all other info
> (installed packages) - selinux enabled/disabled ?
> Any non-standard kernel options in use ?
> 
> Regards
> 
> Zdenek
> 




* [BISECT] Regression: SEGV: 9156c5d dmeventd rework locking code
  2017-04-04 23:22   ` Eric Wheeler
@ 2017-04-05  8:05     ` Zdenek Kabelac
  2017-04-05 17:53       ` Eric Wheeler
  0 siblings, 1 reply; 5+ messages in thread
From: Zdenek Kabelac @ 2017-04-05  8:05 UTC (permalink / raw)
  To: lvm-devel

On 5.4.2017 at 01:22, Eric Wheeler wrote:
> On Sat, 1 Apr 2017, Zdenek Kabelac wrote:
>
>> On 1.4.2017 at 00:19, Eric Wheeler wrote:

>>> (gdb) bt
>>> #0  _touch_memory (size=<optimized out>, mem=<optimized out>) at
>>> mm/memlock.c:141
>>> #1  _allocate_memory () at mm/memlock.c:163
>>
>>
>> Hi
>>
>> Hmm, a few theories - your gdb backtrace suggests it failed in a libc
>> call (getpagesize()) ??
>> So have you upgraded all related packages (device-mapper*, kernel*),
>> or is some 'mixture' in use ?
>>
>> Also, don't you have some large/changed values of 'reserved_stack' or
>> 'reserved_memory' in your lvm.conf ?
>
> Yes! Actually, reserved_stack was the problem. By trial and error we found
> that when reserved_stack is 290 or more, dmeventd will segfault. We tried
> on servers with far fewer logical volumes and did not have a problem, so
> while I am not going to try to figure out how many logical volumes it takes
> to hit this stack limit, this is the problem!
>
> Is there some kind of hard limit to reserved_stack that should be
> enforced?
>
> I seem to recall increasing these values because lvcreate (or lvchange or
> something) suggested that the values were too small.
>
> Do you still want a bugzilla report?


Hi

Yep - so this explains it. It's not clear we need a BZ yet.
I'll explain the reason for the limitation.

Dmeventd uses threads - and to minimize the RAM usage of a memlocked process 
with many threads, we picked a 'relatively' low value for the pthread stack 
size, on the assumption that no one would ever need a bigger value :)

Now, lvm.conf does define a 'reserved_stack' amount - and this stack is then 
'mapped' in the dmeventd lvm plugin thread.  However, this happens after 
'dmeventd' has already created the thread with a 128K stack limit (dmeventd 
itself doesn't 'see/use' lvm.conf, so it can't create threads with different 
settings).
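
Illustrative sketch only - not the verbatim dmeventd source: every monitoring 
thread gets its fixed stack at pthread_create() time, before any plugin has a 
chance to consult lvm.conf (thread->thread is a hypothetical field name; 
_monitor_thread is the entry point seen in the backtrace above):

    pthread_attr_t attr;

    pthread_attr_init(&attr);
    /* Fixed, compile-time stack for each monitoring thread; dmeventd
     * never parses lvm.conf, so reserved_stack cannot raise this. */
    pthread_attr_setstacksize(&attr, 128 * 1024);
    pthread_create(&thread->thread, &attr, _monitor_thread, thread);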

So it's clearly a logical problem we can't really solve in any easy way.
We have limited the amount of stack used by the lvm2 code - so the current 
defaults should be good enough for almost every possible use-case.

So before we start to fix this 'catch-22' case - could you check and 
describe which use-case was not working well with the 'default' 
reserved_stack, so that you had to raise the value to 290 to make it work ?

Otherwise, I think the best solution would be to simply limit the 'accepted' 
value internally, ignore any higher setting, and just document it.


Regards

Zdenek




* [BISECT] Regression: SEGV: 9156c5d dmeventd rework locking code
  2017-04-05  8:05     ` Zdenek Kabelac
@ 2017-04-05 17:53       ` Eric Wheeler
  0 siblings, 0 replies; 5+ messages in thread
From: Eric Wheeler @ 2017-04-05 17:53 UTC (permalink / raw)
  To: lvm-devel

On Wed, 5 Apr 2017, Zdenek Kabelac wrote:

> On 5.4.2017 at 01:22, Eric Wheeler wrote:
> > On Sat, 1 Apr 2017, Zdenek Kabelac wrote:
> >
> > > On 1.4.2017 at 00:19, Eric Wheeler wrote:
> 
> > > > (gdb) bt
> > > > #0  _touch_memory (size=<optimized out>, mem=<optimized out>) at
> > > > mm/memlock.c:141
> > > > #1  _allocate_memory () at mm/memlock.c:163
> > >
> > >
> > > Hi
> > >
> > > Hmm, a few theories - your gdb backtrace suggests it failed in a
> > > libc call (getpagesize()) ??
> > > So have you upgraded all related packages (device-mapper*, kernel*),
> > > or is some 'mixture' in use ?
> > >
> > > Also, don't you have some large/changed values of 'reserved_stack' or
> > > 'reserved_memory' in your lvm.conf ?
> >
> > Yes! Actually, reserved_stack was the problem. By trial and error we found
> > that when reserved_stack is 290 or more, dmeventd will segfault. We tried
> > on servers with far fewer logical volumes and did not have a problem, so
> > while I am not going to try to figure out how many logical volumes it
> > takes to hit this stack limit, this is the problem!
> >
> > Is there some kind of hard limit to reserved_stack that should be
> > enforced?
> >
> > I seem to recall increasing these values because lvcreate (or lvchange or
> > something) suggested that the values were too small.
> >
> > Do you still want a bugzilla report?
> 
> 
> Hi
> 
> Yep - so this explains it. It's not clear we need a BZ yet.
> I'll explain the reason for the limitation.
> 
> Dmeventd uses threads - and to minimize the RAM usage of a memlocked process
> with many threads, we picked a 'relatively' low value for the pthread stack
> size, on the assumption that no one would ever need a bigger value :)
> 
> Now, lvm.conf does define a 'reserved_stack' amount - and this stack is then
> 'mapped' in the dmeventd lvm plugin thread.  However, this happens after
> 'dmeventd' has already created the thread with a 128K stack limit (dmeventd
> itself doesn't 'see/use' lvm.conf, so it can't create threads with different
> settings).
> 
> So it's clearly a logical problem we can't really solve in any easy way.
> We have limited the amount of stack used by the lvm2 code - so the current
> defaults should be good enough for almost every possible use-case.
> 
> So before we start to fix this 'catch-22' case - could you check and
> describe which use-case was not working well with the 'default'
> reserved_stack, so that you had to raise the value to 290 to make it work ?
> 
> Otherwise, I think the best solution would be to simply limit the 'accepted'
> value internally, ignore any higher setting, and just document it.

I don't recall the reason we needed to increase the defaults; that was a 
long time ago, probably in 7.0. Now it seems to work fine with the default 
values, so internally limiting the value is probably a good idea to prevent 
others from hitting such an issue.

It might be a good idea to gracefully accept and warn about any higher 
value but cap it to the internal maximum instead of giving an error, in 
case users have lvm.conf inside of initrds---we definitely don't want to 
break people's bootup processes.
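
A minimal sketch of that warn-and-cap behaviour; the function name and the 
256 KiB ceiling are hypothetical, not lvm2's actual API:

    #include <stdio.h>

    /* Hypothetical clamp for activation/reserved_stack (values in KiB). */
    static unsigned _clamp_reserved_stack(unsigned requested_kb)
    {
            const unsigned max_kb = 256;    /* assumed internal ceiling */

            if (requested_kb > max_kb) {
                    /* Warn and cap rather than hard-fail, so a stale
                     * lvm.conf baked into an initrd still boots. */
                    fprintf(stderr, "WARNING: reserved_stack %u KiB exceeds "
                            "internal limit; using %u KiB.\n",
                            requested_kb, max_kb);
                    return max_kb;
            }

            return requested_kb;
    }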

--
Eric Wheeler
 
 
> Regards
> 
> Zdenek
> 
> --
> lvm-devel mailing list
> lvm-devel at redhat.com
> https://www.redhat.com/mailman/listinfo/lvm-devel
> 
> 



