* [RFC] zodcache - auto-start dm-cache devices
@ 2015-12-18  6:16 Ian Pilcher
  2015-12-18  9:44 ` Joe Thornber
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Ian Pilcher @ 2015-12-18  6:16 UTC (permalink / raw)
  To: dm-devel

https://github.com/ipilcher/zodcache

I've been looking for a "simple" way to manage dm-cache devices for a
while now -- something that operates a bit more like bcache than LVM
cache does, while still using the dm-cache infrastructure.  Having not
found anything, I finally created "zodcache".  (The name is mostly due
to wanting something that could be used as a magic number -- 20DCAC8E.)

Hopefully the README does a decent job of explaining how it works, so I
won't belabor it here:

   https://github.com/ipilcher/zodcache/blob/master/readme.pdf

I'm very interested in any feedback on this.

  * Is this approach useful?
  * Is there a better way of doing this?
  * Is the libdevmapper stuff in zcstart.c correct/optimal?

And now to sleep ...

Thanks!

-- 
========================================================================
Ian Pilcher                                         arequipeno@gmail.com
-------- "I grew up before Mark Zuckerberg invented friendship" --------
========================================================================

* Re: [RFC] zodcache - auto-start dm-cache devices
  2015-12-18  6:16 [RFC] zodcache - auto-start dm-cache devices Ian Pilcher
@ 2015-12-18  9:44 ` Joe Thornber
  2015-12-18 16:07 ` John Stoffel
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 16+ messages in thread
From: Joe Thornber @ 2015-12-18  9:44 UTC (permalink / raw)
  To: device-mapper development

This is great timing -- we'd just been talking about the need for a
dm-cache tool separate from LVM.  I'll look through your code today.



On Fri, Dec 18, 2015 at 12:16:32AM -0600, Ian Pilcher wrote:
> https://github.com/ipilcher/zodcache
> 
> I've been looking for a "simple" way to manage dm-cache devices for a
> while now -- something that operates a bit more like bcache than LVM
> cache does, while still using the dm-cache infrastructure.  Having not
> found anything, I finally created "zodcache".  (The name is mostly due
> to wanting something that could be used as a magic number -- 20DCAC8E.)
> 
> Hopefully the README does a decent job of explaining how it works, so I
> won't belabor it here:
> 
>   https://github.com/ipilcher/zodcache/blob/master/readme.pdf
> 
> I'm very interested in any feedback on this.
> 
>  * Is this approach useful?
>  * Is there a better way of doing this?
>  * Is the libdevmapper stuff in zcstart.c correct/optimal?
> 
> And now to sleep ...
> 
> Thanks!
> 
> -- 
> ========================================================================
> Ian Pilcher                                         arequipeno@gmail.com
> -------- "I grew up before Mark Zuckerberg invented friendship" --------
> ========================================================================
> 
> --
> dm-devel mailing list
> dm-devel@redhat.com
> https://www.redhat.com/mailman/listinfo/dm-devel

* Re: [RFC] zodcache - auto-start dm-cache devices
  2015-12-18  6:16 [RFC] zodcache - auto-start dm-cache devices Ian Pilcher
  2015-12-18  9:44 ` Joe Thornber
@ 2015-12-18 16:07 ` John Stoffel
  2015-12-18 16:55   ` Ian Pilcher
  2015-12-18 18:51 ` Alasdair G Kergon
  2015-12-19  0:53 ` Never mind (was [RFC] zodcache - auto-start dm-cache devices) Ian Pilcher
  3 siblings, 1 reply; 16+ messages in thread
From: John Stoffel @ 2015-12-18 16:07 UTC (permalink / raw)
  To: device-mapper development


Ian> I've been looking for a "simple" way to manage dm-cache devices
Ian> for a while now -- something that operates a bit more like bcache
Ian> than LVM cache does, while still using the dm-cache
Ian> infrastructure.  Having not found anything, I finally created
Ian> "zodcache".  (The name is mostly due to wanting something that
Ian> could be used as a magic number -- 20DCAC8E.)

Ian> Hopefully the README does a decent job of explaining how it works, so I
Ian> won't belabor it here:

I just read this over because I've just deployed lvmcache on my home
system, and I was interested in how this would be different.  One
thing I would comment on is that a diagram comparing the three
options would be really useful.

From my reading of the docs, it's clear that zodcache is lower down
the stack than LVMcache, but higher than bcache.

For my setup, I have a pair of mirrored SSDs for root, /boot and
lvmcache partitions.  I'm using MD to mirror the partitions I've
created on the SSDs.

I then have another MD mirror composed of two 4TB disks, which is
turned into a PV in a VG with a bunch of LVs which are now cached.
Before, I had separate VGs for some of my data, but now I've coalesced
them so that I don't need multiple MD arrays for separate cache PVs in
each VG.
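
For reference, here is roughly what that layout looks like as commands -- a
hedged sketch only, with device names and sizes made up for illustration
(/dev/sda2 and /dev/sdb2 for the SSD cache partitions, /dev/sdc and /dev/sdd
for the 4TB drives):

  # Mirror the SSD cache partitions and the two big drives with MD
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

  # One VG containing both the slow mirror and the SSD mirror
  pvcreate /dev/md1 /dev/md2
  vgcreate vg0 /dev/md2 /dev/md1

  # Data LV on the slow mirror, cache pool on the SSD mirror, then attach
  lvcreate -n data -L 2T vg0 /dev/md2
  lvcreate --type cache-pool -n cpool -L 100G vg0 /dev/md1
  lvconvert --type cache --cachepool vg0/cpool vg0/data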

So describing how your setup can provide a central cache pool across
multiple VGs would be awesome, but it's not quite clear to me that you
can do this in reality without doing multiple layers of block devices.

And since I'm paranoid (to a degree!) about resiliency, mirroring the
cache devices is a critical part for me.

Also, I'm on debian, so that's another piece of documentation that's
kinda sorta missing.

John

* Re: [RFC] zodcache - auto-start dm-cache devices
  2015-12-18 16:07 ` John Stoffel
@ 2015-12-18 16:55   ` Ian Pilcher
  2015-12-18 17:39     ` Ian Pilcher
  2015-12-18 18:53     ` John Stoffel
  0 siblings, 2 replies; 16+ messages in thread
From: Ian Pilcher @ 2015-12-18 16:55 UTC (permalink / raw)
  To: dm-devel

On 12/18/2015 10:07 AM, John Stoffel wrote:
> From my reading of the docs, it's clear that zodcache is lower down
> the stack than LVMcache, but higher than bcache.

I would actually say that zodcache is at about the same level as
bcache.  The main conceptual difference is that zodcache does device
probing/recognition and setup in userspace, whereas bcache does them
in kernel space.  (I.e. you echo the device name to
/sys/fs/bcache/register, and the kernel looks at the device, decides
what it is, etc.)
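
For comparison, a minimal sketch of the two flows -- the device name is just
an example, and the zcstart call assumes it accepts a plain device path the
way the udev rule later in this thread invokes it:

  # bcache: hand the device to the kernel, which probes and assembles it
  echo /dev/sdb1 > /sys/fs/bcache/register

  # zodcache: a userspace helper probes the device and sets up the dm tables
  /usr/sbin/zcstart --udev /dev/sdb1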

> So describinng how your setup can provide a central cache pool across
> multiple VGs would be awesome, but it's not quite clear to me that you
> can do this in reality without doing multiple layers of block devices.

Currently you'll need multiple zodcache devices to create multiple VGs.
It should be possible to make these devices partitionable with a udev
rule that calls kpartx as appropriate.

> And since I'm paranoid (to a degree!) about resiliency, mirroring the
> cache devices is a critical part for me.

I haven't yet tested zodcache on top of MD RAID, but I fully expect it
to work.

> Also, I'm on debian, so that's another piece of documentation that's
> kinda sorta missing.

What do you need beyond what's in section III-B of the README?

-- 
========================================================================
Ian Pilcher                                         arequipeno@gmail.com
-------- "I grew up before Mark Zuckerberg invented friendship" --------
========================================================================

* Re: [RFC] zodcache - auto-start dm-cache devices
  2015-12-18 16:55   ` Ian Pilcher
@ 2015-12-18 17:39     ` Ian Pilcher
  2015-12-18 18:53     ` John Stoffel
  1 sibling, 0 replies; 16+ messages in thread
From: Ian Pilcher @ 2015-12-18 17:39 UTC (permalink / raw)
  To: dm-devel

On 12/18/2015 10:55 AM, Ian Pilcher wrote:
> Currently you'll need multiple zodcache devices to create multiple VGs.
> It should be possible to make these devices partitionable with a udev
> rule that calls kpartx as appropriate.

This seems to work:

# Skip non-block devices, remove events, devices that device-mapper has
# flagged for other rules to ignore, and floppy/CD-ROM devices
SUBSYSTEM!="block", GOTO="zodcache_end"
ACTION=="remove", GOTO="zodcache_end"
ENV{DM_UDEV_DISABLE_OTHER_RULES_FLAG}=="1", GOTO="zodcache_end"
KERNEL=="fd*|sr*", GOTO="zodcache_end"

# Look for a zodcache superblock on the device and start it if found
RUN+="/usr/sbin/zcstart --udev $tempnode"

# Process partition tables on zodcache devices (but not components)
ENV{DM_NAME}!="zodcache-*", GOTO="zodcache_end"
ENV{DM_NAME}=="*-origin", GOTO="zodcache_end"
ENV{DM_NAME}=="*-cache", GOTO="zodcache_end"
ENV{DM_NAME}=="*-metadata", GOTO="zodcache_end"
ENV{ID_PART_TABLE_TYPE}=="dos", RUN+="/usr/sbin/kpartx -a $tempnode"

LABEL="zodcache_end"

-- 
========================================================================
Ian Pilcher                                         arequipeno@gmail.com
-------- "I grew up before Mark Zuckerberg invented friendship" --------
========================================================================

* Re: [RFC] zodcache - auto-start dm-cache devices
  2015-12-18  6:16 [RFC] zodcache - auto-start dm-cache devices Ian Pilcher
  2015-12-18  9:44 ` Joe Thornber
  2015-12-18 16:07 ` John Stoffel
@ 2015-12-18 18:51 ` Alasdair G Kergon
  2015-12-18 19:44   ` John Stoffel
  2015-12-18 20:50   ` Ian Pilcher
  2015-12-19  0:53 ` Never mind (was [RFC] zodcache - auto-start dm-cache devices) Ian Pilcher
  3 siblings, 2 replies; 16+ messages in thread
From: Alasdair G Kergon @ 2015-12-18 18:51 UTC (permalink / raw)
  To: Ian Pilcher; +Cc: dm-devel

On Fri, Dec 18, 2015 at 12:16:32AM -0600, Ian Pilcher wrote:
>  * Is there a better way of doing this?

Did you try stacking LVM on top of itself first?

I.e. Create one cached LV consuming all your space, then create a new PV
and VG on top of that LV, and carve that stacked VG up into whatever LVs
you need?
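
A minimal sketch of that stacking, assuming made-up VG names, a slow MD
mirror (/dev/md2), and a fast SSD device (/dev/md1); the sizes and
percentages are only illustrative:

  # Lower VG: one big cached LV spanning the slow device
  pvcreate /dev/md1 /dev/md2
  vgcreate vg_lower /dev/md1 /dev/md2
  lvcreate -n lv_all -l 100%PVS vg_lower /dev/md2
  lvcreate --type cache-pool -n cpool -l 90%PVS vg_lower /dev/md1
  lvconvert --type cache --cachepool vg_lower/cpool vg_lower/lv_all

  # Upper VG: stack a new PV/VG on the cached LV and carve it into LVs
  pvcreate /dev/vg_lower/lv_all
  vgcreate vg_upper /dev/vg_lower/lv_all
  lvcreate -n home -L 500G vg_upper
  lvcreate -n vm   -L 200G vg_upper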

Alasdair

* Re: [RFC] zodcache - auto-start dm-cache devices
  2015-12-18 16:55   ` Ian Pilcher
  2015-12-18 17:39     ` Ian Pilcher
@ 2015-12-18 18:53     ` John Stoffel
  1 sibling, 0 replies; 16+ messages in thread
From: John Stoffel @ 2015-12-18 18:53 UTC (permalink / raw)
  To: device-mapper development

>>>>> "Ian" == Ian Pilcher <arequipeno@gmail.com> writes:

Ian> On 12/18/2015 10:07 AM, John Stoffel wrote:
>> From my reading of the docs, it's clear that zodcache is lower down
>> the stack than LVMcache, but higher than bcache.

Ian> I would actually say that zodcache is at about the same level as
Ian> bcache.  The main conceptual difference is that zodcache does device
Ian> probing/recognition and setup in userspace, whereas bcache does them
Ian> in kernel space.  (I.e. you echo the device name to
Ian> /sys/fs/bcache/register, and the kernel looks at the device, decides
Ian> what it is, etc.)

I think a picture would help show this all more clearly, but I do
appreciate your clarifications.  I had looked at bcache myself, but
the inflexibility of it didn't make me happy.

I like that lvmcache allows me to dynamically remove the cache without
impacting the system, except for performance.  :-)

>> So describinng how your setup can provide a central cache pool across
>> multiple VGs would be awesome, but it's not quite clear to me that you
>> can do this in reality without doing multiple layers of block devices.

Ian> Currently you'll need multiple zodcache devices to create
Ian> multiple VGs.  It should be possible to make these devices
Ian> partitionable with a udev rule that calls kpartx as appropriate.

Ok, that makes sense.  One other issue I have with lvmcache is that
the cache LVs are in the VG, so if you're not careful, you could
expand other LVs onto those PVs.  I think this is a bad design myself.

>> And since I'm paranoid (to a degree!) about resiliency, mirroring the
>> cache devices is a critical part for me.

Ian> I haven't yet tested zodcache on top of MD RAID, but I fully expect it
Ian> to work.

>> Also, I'm on debian, so that's another piece of documentation that's
>> kinda sorta missing.

Ian> What do you need beyond what's in section III-B of the README?

Sorry, I must have skipped over that too quickly and didn't see it.
I'll try to take a look and see if I can spin something up.

John

* Re: [RFC] zodcache - auto-start dm-cache devices
  2015-12-18 18:51 ` Alasdair G Kergon
@ 2015-12-18 19:44   ` John Stoffel
  2015-12-18 20:01     ` Alasdair G Kergon
  2015-12-18 20:50   ` Ian Pilcher
  1 sibling, 1 reply; 16+ messages in thread
From: John Stoffel @ 2015-12-18 19:44 UTC (permalink / raw)
  To: device-mapper development; +Cc: Ian Pilcher

>>>>> "Alasdair" == Alasdair G Kergon <agk@redhat.com> writes:

Alasdair> On Fri, Dec 18, 2015 at 12:16:32AM -0600, Ian Pilcher wrote:
>> * Is there a better way of doing this?

Alasdair> Did you try stacking LVM on top of itself first?

Alasdair> I.e. Create one cached LV consuming all your space, then create a new PV
Alasdair> and VG on top of that LV, and carve that stacked VG up into whatever LVs
Alasdair> you need?

How deep can you stack things like this before you run into problems?
And it does seem like it would put a bunch of stress on the udev setup
to assemble it all reliably.

John

* Re: [RFC] zodcache - auto-start dm-cache devices
  2015-12-18 19:44   ` John Stoffel
@ 2015-12-18 20:01     ` Alasdair G Kergon
  0 siblings, 0 replies; 16+ messages in thread
From: Alasdair G Kergon @ 2015-12-18 20:01 UTC (permalink / raw)
  To: John Stoffel; +Cc: device-mapper development, Ian Pilcher

On Fri, Dec 18, 2015 at 02:44:47PM -0500, John Stoffel wrote:
> How deep can you stack things like this before you run into problems?
> And it does seem like it would put a bunch of stress on the udev setup
> to assemble it all reliably.

It should all work recursively to any reasonable depth, including udev.

If such use became more common, we'd need to look at ways of optimising it:
we've considered allowing stacking within a single VG before, but
there hasn't really been enough demand to give it any priority.

If, as suggested here, there are now going to be common situations
where it makes sense, we can look again at introducing a more streamlined
approach.

Alasdair

* Re: [RFC] zodcache - auto-start dm-cache devices
  2015-12-18 18:51 ` Alasdair G Kergon
  2015-12-18 19:44   ` John Stoffel
@ 2015-12-18 20:50   ` Ian Pilcher
  2015-12-18 21:06     ` Zdenek Kabelac
  1 sibling, 1 reply; 16+ messages in thread
From: Ian Pilcher @ 2015-12-18 20:50 UTC (permalink / raw)
  To: dm-devel

On 12/18/2015 12:51 PM, Alasdair G Kergon wrote:
> Did you try stacking LVM on top of itself first?

Honestly, no (although my initial testing of zodcache actually used
logical volumes as the origin and cache devices FWIW).

This exercise is as much about the "optics" as it is about the actual
technical merits of the solution.  The goal is to provide something that
(at least on the surface) looks "simple".

With just a couple of obvious enhancements to the mkzc utility (clear
the metadata region and start the new device), creating a cached device
would literally take just a single command.
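
Something along these lines, perhaps -- the option names below are purely
hypothetical and are not mkzc's actual interface:

  # Hypothetical one-step invocation; the real mkzc options may differ
  mkzc --origin /dev/md2 --cache /dev/md1 zc0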

-- 
========================================================================
Ian Pilcher                                         arequipeno@gmail.com
-------- "I grew up before Mark Zuckerberg invented friendship" --------
========================================================================

* Re: [RFC] zodcache - auto-start dm-cache devices
  2015-12-18 20:50   ` Ian Pilcher
@ 2015-12-18 21:06     ` Zdenek Kabelac
  2015-12-19  0:01       ` Ian Pilcher
  0 siblings, 1 reply; 16+ messages in thread
From: Zdenek Kabelac @ 2015-12-18 21:06 UTC (permalink / raw)
  To: device-mapper development

Dne 18.12.2015 v 21:50 Ian Pilcher napsal(a):
> On 12/18/2015 12:51 PM, Alasdair G Kergon wrote:
>> Did you try stacking LVM on top of itself first?
>
> Honestly, no (although my initial testing of zodcache actually used
> logical volumes as the origin and cache devices FWIW).
>
> This exercise is as much about the "optics" as it is about the actual
> technical merits of the solution.  The goal is to provide something that
> (at least on the surface) looks "simple".
>
> With just a couple of obvious enhancements to the mkzc utility (clear
> the metadata region and start the new device), creating a cached device
> would literally take just a single command.
>

Sure - as long as you do not need to solve any 'security/safety' points,
you can do things in a simple way.

But how do you plan to support power failures and recoveries?

When someone moves a drive to another system - what is going to happen?

Are you going to write your own 'lvm2-like' tools to do the job?

Are you going to maintain this over a long time period?

You really should have started with an RFE for lvm2 enhancements to better
suit your needs - since IMHO there are no SIMPLE things when we talk about
devices...

Regards

Zdenek

* Re: [RFC] zodcache - auto-start dm-cache devices
  2015-12-18 21:06     ` Zdenek Kabelac
@ 2015-12-19  0:01       ` Ian Pilcher
  0 siblings, 0 replies; 16+ messages in thread
From: Ian Pilcher @ 2015-12-19  0:01 UTC (permalink / raw)
  To: dm-devel

On 12/18/2015 03:06 PM, Zdenek Kabelac wrote:
> Sure - as long as you do not need to solve any 'security/safety' points,
> you can do things in a simple way.

What security/safety concerns do you have with this approach?

> But how do you plan to support power failures and recoveries?

A zodcache device is a dm-cache device built from several dm-linear
devices.  AFAICT, that's exactly what an LVM cache LV is, so I would
expect that they would handle such events in exactly the same way.

> When someone moves a drive to another system - what is going to happen?

If the system already has zodcache support then things should pretty
much "just work".  If not, it takes exactly one program to manually
start a device, and the only dependencies are dm-cache and libdevmapper.

> Are you going to write your own 'lvm2-like' tools to do the job?

Absolutely not.  This is not intended to compete with LVM.  In fact, I
would expect using zodcache devices as LVM PVs to be the most common
use case.  (It's certainly what I'm doing.)

> Are you going to maintain this over a long time period?

If people use it, then yes.

> You really should have started with an RFE for lvm2 enhancements to better
> suit your needs - since IMHO there are no SIMPLE things when we talk about
> devices...

But isn't that the beauty of device mapper?  The hard work has already
been done, and I'm just adding the bits to set things up.  (I'm not
doing anything that couldn't be done with dmsetup.)
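
As a rough illustration of the dmsetup equivalent -- the device names, sizes,
offsets, and dm names below are invented for the sketch, not what zcstart
actually creates:

  # Carve metadata and cache areas out of the fast device with dm-linear
  dmsetup create zc-meta   --table '0 65536 linear /dev/sdb1 0'
  dmsetup create zc-cache  --table '0 41943040 linear /dev/sdb1 65536'
  dmsetup create zc-origin --table '0 1953125376 linear /dev/sdc1 0'

  # Bind them together with the dm-cache target
  # (256 KiB cache blocks, writethrough, default policy)
  table='0 1953125376 cache /dev/mapper/zc-meta /dev/mapper/zc-cache'
  table="$table /dev/mapper/zc-origin 512 1 writethrough default 0"
  dmsetup create zc0 --table "$table"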

-- 
========================================================================
Ian Pilcher                                         arequipeno@gmail.com
-------- "I grew up before Mark Zuckerberg invented friendship" --------
========================================================================

* Never mind (was [RFC] zodcache - auto-start dm-cache devices)
  2015-12-18  6:16 [RFC] zodcache - auto-start dm-cache devices Ian Pilcher
                   ` (2 preceding siblings ...)
  2015-12-18 18:51 ` Alasdair G Kergon
@ 2015-12-19  0:53 ` Ian Pilcher
  2015-12-19  1:53   ` John Stoffel
  3 siblings, 1 reply; 16+ messages in thread
From: Ian Pilcher @ 2015-12-19  0:53 UTC (permalink / raw)
  To: dm-devel

repo deleted due to feedback.

-- 
========================================================================
Ian Pilcher                                         arequipeno@gmail.com
-------- "I grew up before Mark Zuckerberg invented friendship" --------
========================================================================

* Re: Never mind (was [RFC] zodcache - auto-start dm-cache devices)
  2015-12-19  0:53 ` Never mind (was [RFC] zodcache - auto-start dm-cache devices) Ian Pilcher
@ 2015-12-19  1:53   ` John Stoffel
  2016-01-04 15:52     ` Joe Thornber
  0 siblings, 1 reply; 16+ messages in thread
From: John Stoffel @ 2015-12-19  1:53 UTC (permalink / raw)
  To: device-mapper development

>>>>> "Ian" == Ian Pilcher <arequipeno@gmail.com> writes:

Ian> repo deleted due to feedback.

Umm... why?  Just because people were asking questions and poking at
what you've written?  Hopefully not.  I like the ideas you were
pushing here -- anything to make dmsetup easier to use.

If the tool could complement lvmcache/bcache, why not keep it around?
Esp. since YOU find it useful.

I haven't even had a chance to play with it; I should have cloned the
repo first, I guess.

Oh well...

* Re: Never mind (was [RFC] zodcache - auto-start dm-cache devices)
  2015-12-19  1:53   ` John Stoffel
@ 2016-01-04 15:52     ` Joe Thornber
  2016-01-06 15:49       ` Ian Pilcher
  0 siblings, 1 reply; 16+ messages in thread
From: Joe Thornber @ 2016-01-04 15:52 UTC (permalink / raw)
  To: device-mapper development

On Fri, Dec 18, 2015 at 08:53:06PM -0500, John Stoffel wrote:
> >>>>> "Ian" == Ian Pilcher <arequipeno@gmail.com> writes:
> 
> Ian> repo deleted due to feedback.
> 
> Umm... why?  Just because people were asking questions and poking at
> what you've written?  Hopefully not.  I like the ideas you were
> pushing here -- anything to make dmsetup easier to use.
> 
> If the tool could complement lvmcache/bcache, why not keep it around?
> Esp. since YOU find it useful.
> 
> I haven't even had a chance to play with it; I should have cloned the
> repo first, I guess.

I cloned the repo here:

https://github.com/jthornber/zodcache

I looked through the code and thought this was a good tool that does
something that LVM currently doesn't support very well.

- Joe

* Re: Never mind (was [RFC] zodcache - auto-start dm-cache devices)
  2016-01-04 15:52     ` Joe Thornber
@ 2016-01-06 15:49       ` Ian Pilcher
  0 siblings, 0 replies; 16+ messages in thread
From: Ian Pilcher @ 2016-01-06 15:49 UTC (permalink / raw)
  To: dm-devel

On 01/04/2016 09:52 AM, Joe Thornber wrote:
> I looked through the code and thought this was a good tool that does
> something that LVM currently doesn't support very well.

Well now I'm totally confused.

The feedback I received when I initially posted this led me to believe
that there was zero (or less) interest in a non-LVM interface to
dm-cache.  That was the reason that I initially pulled the repo, since
I didn't want someone to stumble upon it and unknowingly trust their
data to something that is essentially an unsupported dead end.

(It's back up by the way, with warnings that I hope are sufficiently
strong -- https://github.com/ipilcher/zodcache.)

So I'm back to the question that I asked in my original note ... Is this
approach (i.e. not LVM cache but more "bcache-like" for lack of a
better term) something that is worth pursuing?

If so, I'm happy to fix/enhance/maintain this code going forward, and I
hope that this community will be willing to provide suggestions, answer
the odd question, etc.

-- 
========================================================================
Ian Pilcher                                         arequipeno@gmail.com
-------- "I grew up before Mark Zuckerberg invented friendship" --------
========================================================================
