* How does one enable SCTERC on an NVMe drive (and other install questions)
@ 2021-06-21  5:00 Edward Kuns
  2021-06-21 10:37 ` Wols Lists
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Edward Kuns @ 2021-06-21  5:00 UTC (permalink / raw)
  To: Linux-RAID; +Cc: Edward Kuns

1) Topic one - SCTERC on NVMe

I'm in the middle of installing Linux on a new PC.  I have a pair of
1TB NVMe drives.  All of my non-NVMe drives support "smartctl -l
scterc,70,70" but the NVMe drives do not seem to.  How can one ensure
that SCTERC is configured properly in an NVMe drive that is part of a
software RAID constructed using mdadm?  Is this an issue that has been
solved or asked or addressed before?  The searching I did didn't bring
anything up.
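
For what it's worth, here is roughly what I'm doing now (device names
are just examples from my box):

    # works on the SATA drives: read the current setting, then set a
    # 7-second recovery limit for reads and writes
    smartctl -l scterc /dev/sda
    smartctl -l scterc,70,70 /dev/sda

    # the same thing against an NVMe drive doesn't seem to take
    smartctl -l scterc /dev/nvme0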

2) Topic two - RAID on /boot and /boot/efi

It looks like RHEL 8 and clones don't support the installer building
LVM on top of RAID as they used to.  I suspect that if I want LVM,
the installer would prefer that I use the RAID built into LVM at this
point.  But it looks to me like the mdadm tools are
still much more mature than the RAID built into LVM.  (Even though it
appears that this is built on top of the same code base?)

This means I have to do that work before running the installer:
boot it in rescue mode and build everything by hand, then run the
installer and "reformat" the partitions I created.  I haven't gone all
the way through this process but it looks like it works.  It also
looks like maybe I cannot use the installer to set up RAID mirroring
for /boot or /boot/efi.  I may have to set that up after the fact.  It
looks like I have to use metadata format 1.0 for that?  I'm going to
go through a couple experimental installs to see how it all goes
(using wipefs, and so on, between attempts).  I've made a script to do
all the work for me so I can experiment.

The good thing about this is it gets me more familiar with the
command-line tools before I have an issue, and it forces me to
document what I'm doing in order to set it up.  One of my goals for
this install is that any single disk can fail, including a disk
containing / or /boot or /boot/efi, with a simple recovery process of
replacing the failed disk and rebuilding an array and no unscheduled
downtime.  I'm not sure it's possible (with /boot and /boot/efi in
particular), but I'm going to find out.  All I can tell from research
so far is that metadata 1.1 or 1.2 won't work for such partitions.
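
To make the question concrete, the pre-install part of my script does
something along these lines (the partition layout and names are just
an example, and I may have details wrong; that's what the test
installs are for):

    # mirror the ESP and /boot, superblock at the end of the partition
    mdadm --create /dev/md/efi  --level=1 --raid-devices=2 \
        --metadata=1.0 /dev/nvme0n1p1 /dev/nvme1n1p1
    mdadm --create /dev/md/boot --level=1 --raid-devices=2 \
        --metadata=1.0 /dev/nvme0n1p2 /dev/nvme1n1p2
    mkfs.vfat -F 32 /dev/md/efi
    mkfs.ext4 /dev/md/boot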

3) Topic three - WD Red vs Red Plus vs Red Pro

In the Wiki, it might be worth mentioning that while WD Red are
currently shingled, the Red Plus and Red Pro are not.  I can search
again and provide links to that information if it would help.  I
thought I had bought bad drives (Red Pro) but then discovered that the
Red Pro is CMR, not SMR.  Whew.

While this page is clear about the difference between Red, Red Pro,
and Red Plus:

https://raid.wiki.kernel.org/index.php/Timeout_Mismatch

this page is not:

https://raid.wiki.kernel.org/index.php/Drive_Data_Sheets

I would be happy to propose some text changes if that would help.

4) Topic four -- Wiki

Would it be worth it if I documented some of the work I've gone
through to get this set up?  I'm just an enthusiast who works with
RHEL at my employer and has been running Red Hat in some form or
another at home since 1996, but I'm not a sysadmin.  I'm sure I'm
going overboard in trying to ensure that every single filesystem is
ultimately on some form of mdadm RAID, but I just don't want to deal
with unscheduled downtime any longer.

        Thanks

                 Eddie


* Re: How does one enable SCTERC on an NVMe drive (and other install questions)
  2021-06-21  5:00 How does one enable SCTERC on an NVMe drive (and other install questions) Edward Kuns
@ 2021-06-21 10:37 ` Wols Lists
  2021-06-21 12:17 ` Phil Turmel
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 10+ messages in thread
From: Wols Lists @ 2021-06-21 10:37 UTC (permalink / raw)
  To: Edward Kuns, Linux-RAID

On 21/06/21 06:00, Edward Kuns wrote:
> While this page is clear about the difference between Red, Red Pro,
> and Red Plus:
> 
> https://raid.wiki.kernel.org/index.php/Timeout_Mismatch
> 
> this page is not:
> 
> https://raid.wiki.kernel.org/index.php/Drive_Data_Sheets
> 
> I would be happy to propose some text changes if that would help.

Whoops. Thanks for pointing that out.

I'll get that page fixed.

Cheers,
Wol


* Re: How does one enable SCTERC on an NVMe drive (and other install questions)
  2021-06-21  5:00 How does one enable SCTERC on an NVMe drive (and other install questions) Edward Kuns
  2021-06-21 10:37 ` Wols Lists
@ 2021-06-21 12:17 ` Phil Turmel
  2021-06-21 16:28 ` Chris Murphy
  2021-06-25 22:08 ` Redundant EFI System Partitions (Was Re: How does one enable SCTERC on an NVMe drive (and other install questions)) Andy Smith
  3 siblings, 0 replies; 10+ messages in thread
From: Phil Turmel @ 2021-06-21 12:17 UTC (permalink / raw)
  To: Edward Kuns, Linux-RAID

Hi Eddie,

On 6/21/21 1:00 AM, Edward Kuns wrote:
> 1) Topic one - SCTERC on NVMe
> 
> I'm in the middle of installing Linux on a new PC.  I have a pair of
> 1TB NVMe drives.  All of my non-NVMe drives support "smartctl -l
> scterc,70,70" but the NVMe drives do not seem to.  How can one ensure
> that SCTERC is configured properly in an NVMe drive that is part of a
> software RAID constructed using mdadm?  Is this an issue that has been
> solved or asked or addressed before?  The searching I did didn't bring
> anything up.

You can't, if the firmware doesn't allow it.  Do try reading the values 
before writing them, though.  I've seen SSDs that start up with "40,40" 
and refuse changes.  40,40 is fine.

Please report here any brands/models that either don't support SCTERC
at all or are stuck on disabled, so the rest of us can avoid them.
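
For drives that won't take the setting at all, the usual fallback
(per the Timeout_Mismatch wiki page) is to raise the kernel's command
timeout so it outlasts the drive's internal error recovery, something
like (substitute the real device for sdX):

    # SATA/SAS only; the default is 30 seconds
    echo 180 > /sys/block/sdX/device/timeout

For NVMe I believe the closest equivalent is the global
nvme_core.io_timeout module parameter, not a per-device knob.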

> 2) Topic two - RAID on /boot and /boot/efi
> 
> It looks like RHEL 8 and clones don't support the installer building
> LVM on top of RAID as they used to.  I kind of suspect that the
> installer would prefer that if I want LVM, that I use the RAID built
> into LVM at this point.  But it looks to me like the mdadm tools are
> still much more mature than the RAID built into LVM.  (Even though it
> appears that this is built on top of the same code base?)
> 
> This means I have to do that work before running the installer, by
> running the installer in rescue mode, then run the installer and
> "reformat' the partitions I have created by hand.  I haven't gone all
> the way through this process but it looks like it works.  It also
> looks like maybe I cannot use the installer to set up RAID mirroring
> for /boot or /boot/efi.  I may have to set that up after the fact.  It
> looks like I have to use metadata format 1.0 for that?  I'm going to
> go through a couple experimental installs to see how it all goes
> (using wipefs, and so on, between attempts).  I've made a script to do
> all the work for me so I can experiment.
> 
> The good thing about this is it gets me more familiar with the
> command-line tools before I have an issue, and it forces me to
> document what I'm doing in order to set it up.  One of my goals for
> this install is that any single disk can fail, including a disk
> containing / or /boot or /boot/efi, with a simple recovery process of
> replacing the failed disk and rebuilding an array and no unscheduled
> downtime.  I'm not sure it's possible (with /boot and /boot/efi in
> particular)  but I'm going to find out.  All I can tell from research
> so far is that metadata 1.1 or 1.2 won't work for such partitions.

I don't use CentOS, but you seem to be headed down the same path I would 
follow.  And yes, use metadata v1.0 so the BIOS can treat /boot/efi as a 
normal partition.  You may be able to use other metadata on /boot itself 
if the grub shim in /boot/efi supports it.  (Not sure on that.)
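
A quick sanity check after building such an array (device name made
up):

    # should report "Version : 1.0" for an end-of-device superblock
    mdadm --examine /dev/nvme0n1p1 | grep Version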

[trimmed what Wol responded to]

Regards,

Phil


* Re: How does one enable SCTERC on an NVMe drive (and other install questions)
  2021-06-21  5:00 How does one enable SCTERC on an NVMe drive (and other install questions) Edward Kuns
  2021-06-21 10:37 ` Wols Lists
  2021-06-21 12:17 ` Phil Turmel
@ 2021-06-21 16:28 ` Chris Murphy
  2021-06-25 22:08 ` Redundant EFI System Partitions (Was Re: How does one enable SCTERC on an NVMe drive (and other install questions)) Andy Smith
  3 siblings, 0 replies; 10+ messages in thread
From: Chris Murphy @ 2021-06-21 16:28 UTC (permalink / raw)
  To: Linux-RAID

> 2) Topic two - RAID on /boot and /boot/efi
>
> It looks like RHEL 8 and clones don't support the installer building
> LVM on top of RAID as they used to.

I haven't tried it in Fedora more recently than a couple of years
ago, but back then it was putting LVM on mdadm raid, so I'd expect
RHEL 8 does too.  I'm not sure how favoring LVM raid (LVM management
and metadata, but still using the md kernel driver) is exposed in the
installer GUI; maybe it's only distinguished in kickstart?
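
For what it's worth, the classic kickstart way of putting LVM on md
raid looks roughly like this (just a sketch; disks, sizes, and names
are made up):

    part raid.11 --size=20480 --ondisk=sda
    part raid.12 --size=20480 --ondisk=sdb
    raid pv.01 --level=1 --device=md1 raid.11 raid.12
    volgroup vg0 pv.01
    logvol / --vgname=vg0 --name=root --size=10240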

>It also
> looks like maybe I cannot use the installer to set up RAID mirroring
> for /boot or /boot/efi.

I'm virtually certain it will create an ESP on mdadm metadata v0.90
for /boot/efi, or at least it used to.  It was discussed on this list
at the time as not a good idea, because the partitions are strictly
mdadm member devices; they aren't ESPs until the raid is assembled.
So at best it leads to a discontinuity.  You either lie with a
partition type GUID claiming it's an ESP when it's really an mdadm
member, or you tell the truth and mark it as an mdadm member, in
which case the firmware may refuse to read from it because it has the
wrong type GUID.  Some firmware cares about that and some doesn't.

Further, it's possible for the firmware itself to write to the ESP,
which in this case means it modifies a member device outside of
kernel code, and now the members mismatch.  As long as no further
writes happen to either drive, separately or via md, you could do a
scrub repair and force the second drive to match the first.  If it
was the first drive the firmware modified, the scrub repair leaves
both drives properly in sync rather than regressing to a prior state.
But I think this is just damn sloppy all the way around.  The UEFI
spec doesn't address syncing ESPs at all.  So we're left with bad
hacks like software raid rather than a proper utility that determines
which drive has the most recent changes and syncs them to the
other(s), which could be as simple as an rsync service unit for this
task.  Possibly better still would be for the canonical contents of
the distro directory that belongs on the ESP to live in /usr, and to
sync from there, as the primary source, to all the ESPs.

Neither /boot nor /boot/efi needs to be persistently mounted.  It's
only for lack of resources and a good design that we keep them
mounted all the time.  They only need to be mounted when they're
updated, and they don't need to be mounted during boot because
they're used strictly by the firmware to achieve boot.
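
The rsync service unit really would be trivial; a sketch, with
made-up paths (/boot/efi as the primary, /boot/efi2 as the copy):

    # /etc/systemd/system/sync-esp.service (sketch)
    [Unit]
    Description=Sync the primary ESP to the second ESP
    RequiresMountsFor=/boot/efi /boot/efi2

    [Service]
    Type=oneshot
    ExecStart=/usr/bin/rsync -a --delete /boot/efi/ /boot/efi2/

Hook that to whatever installs bootloader updates, or to a timer, and
at least the direction of sync is deterministic.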

A better idea would be using x-systemd.automount and
x-systemd.idle-timeout so the ESP is mounted on demand and unmounted
when idle.  Perhaps even better would be an exclusive owner that
handles such things.  One idea proposed is bootupd
(https://github.com/coreos/bootupd), which could be enhanced to sync
all the ESPs as well in multiple-disk scenarios.
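
In fstab terms that's a single line (UUID made up):

    # mounted on first access, unmounted after a minute of idle time
    UUID=ABCD-1234  /boot/efi  vfat  noauto,x-systemd.automount,x-systemd.idle-timeout=1min,umask=0077  0 2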




-- 
Chris Murphy


* Redundant EFI System Partitions (Was Re: How does one enable SCTERC on an NVMe drive (and other install questions))
  2021-06-21  5:00 How does one enable SCTERC on an NVMe drive (and other install questions) Edward Kuns
                   ` (2 preceding siblings ...)
  2021-06-21 16:28 ` Chris Murphy
@ 2021-06-25 22:08 ` Andy Smith
  2021-06-26  0:23   ` Chris Murphy
  2021-06-26 14:36   ` Phil Turmel
  3 siblings, 2 replies; 10+ messages in thread
From: Andy Smith @ 2021-06-25 22:08 UTC (permalink / raw)
  To: Linux-RAID

Hello,

On Mon, Jun 21, 2021 at 12:00:13AM -0500, Edward Kuns wrote:
> looks like maybe I cannot use the installer to set up RAID mirroring
> for /boot or /boot/efi.  I may have to set that up after the fact.

In November 2020 I had this discussion on debian-user:

    https://www.mail-archive.com/debian-user@lists.debian.org/msg762784.html

The summary was that the ESP is for the firmware and the firmware
doesn't know about MD RAID, so is only ever going to see the member
devices.

You could lie to the firmware and tell it that each MD member device
is an ESP, but it isn't. This will probably work as long as you use
the correct metadata format (so the MD metadata is at the end and
the firmware is fooled that the member device is just a normal
partition). BUT it is in theory possible for the firmware to write
to the ESP and that would cause a broken array when you boot, which
you'd then recover by randomly choosing one of the member devices as
the "correct" one.

Some people (myself included, after discovering all that) decided
that putting ESP on an MD device was too complicated due to these
issues and that it would be better to have one ESP on each bootable
device and be able to boot from any of them. The primary one is
synced to all the others any time there is a system update.

Ubuntu have patched grub to detect multiple ESPs and install grub on
all of them.

In theory it would be possible to write an EFI firmware module that
understands MD devices and then you could put the ESP on an MD array
in the same way that grub can boot off of an MD array.

Cheers,
Andy


* Re: Redundant EFI System Partitions (Was Re: How does one enable SCTERC on an NVMe drive (and other install questions))
  2021-06-25 22:08 ` Redundant EFI System Partitions (Was Re: How does one enable SCTERC on an NVMe drive (and other install questions)) Andy Smith
@ 2021-06-26  0:23   ` Chris Murphy
  2021-06-26 14:36   ` Phil Turmel
  1 sibling, 0 replies; 10+ messages in thread
From: Chris Murphy @ 2021-06-26  0:23 UTC (permalink / raw)
  To: Linux-RAID

On Fri, Jun 25, 2021 at 4:08 PM Andy Smith <andy@strugglers.net> wrote:
>
> Hello,
>
> On Mon, Jun 21, 2021 at 12:00:13AM -0500, Edward Kuns wrote:
> > looks like maybe I cannot use the installer to set up RAID mirroring
> > for /boot or /boot/efi.  I may have to set that up after the fact.
>
> In November 2020 I had this discussion on debian-user:
>
>     https://www.mail-archive.com/debian-user@lists.debian.org/msg762784.html
>
> The summary was that the ESP is for the firmware and the firmware
> doesn't know about MD RAID, so is only ever going to see the member
> devices.
>
> You could lie to the firmware and tell it that each MD member device
> is an ESP, but it isn't. This will probably work as long as you use
> the correct metadata format (so the MD metadata is at the end and
> the firmware is fooled that the member device is just a normal
> partition). BUT it is in theory possible for the firmware to write
> to the ESP and that would cause a broken array when you boot, which
> you'd then recover by randomly choosing one of the member devices as
> the "correct" one.
>
> Some people (myself included, after discovering all that) decided
> that putting ESP on an MD device was too complicated due to these
> issues and that it would be better to have one ESP on each bootable
> device and be able to boot from any of them. The primary one is
> synced to all the others any time there is a system update.
>
> Ubuntu have patched grub to detect multiple ESP and install grub on
> all of them.
>
> In theory it would be possible to write an EFI firmware module that
> understands MD devices and then you could put the ESP on an MD array
> in the same way that grub can boot off of an MD array.

Yeah, efifs might have it
https://github.com/pbatard/efifs

One solution is making the ESP static, other than OS loader updates.
A "stub" grub.cfg points to $BOOT/grub/grub.cfg or
$BOOT/grub2/grub.cfg, where $BOOT is typically mounted at /boot, and
you then follow the Boot Loader Spec to add drop-in configuration
files for each menu entry, typically one drop-in file per kernel.
This is how Fedora has worked for several releases now.  It lets both
grub.cfg files remain static and is less prone to problems if
replacing or modifying them is interrupted by a crash.  And the
snippets are a more user-friendly format for editing, should that be
necessary, while being less fragile.
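
A drop-in entry is just a few lines, one file per kernel under
$BOOT/loader/entries/ (everything below is made up):

    title   Fedora Linux 34
    version 5.12.9-300.fc34.x86_64
    linux   /vmlinuz-5.12.9-300.fc34.x86_64
    initrd  /initramfs-5.12.9-300.fc34.x86_64.img
    options root=/dev/mapper/vg0-root ro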

A canonical copy of the ESP contents that resides in /usr is, I
think, another way of doing all of this: use a service unit to sync
from /usr, as the primary source, to each ESP, mounting them one at a
time somewhere in /run, thereby containing any problems with
interruptions.

https://systemd.io/BOOT_LOADER_INTERFACE/
https://systemd.io/BOOT_LOADER_SPECIFICATION/

While some platforms are currently left out of the specs, I think it's
better to grow the spec to make booting more reliable and a lower
maintenance burden rather than continuing to do things in a rather ad
hoc manner.


-- 
Chris Murphy


* Re: Redundant EFI System Partitions (Was Re: How does one enable SCTERC on an NVMe drive (and other install questions))
  2021-06-25 22:08 ` Redundant EFI System Partitions (Was Re: How does one enable SCTERC on an NVMe drive (and other install questions)) Andy Smith
  2021-06-26  0:23   ` Chris Murphy
@ 2021-06-26 14:36   ` Phil Turmel
  2021-06-27  3:13     ` Chris Murphy
  1 sibling, 1 reply; 10+ messages in thread
From: Phil Turmel @ 2021-06-26 14:36 UTC (permalink / raw)
  To: Linux-RAID

Good morning Andy,

On 6/25/21 6:08 PM, Andy Smith wrote:
> Hello,
> 
> On Mon, Jun 21, 2021 at 12:00:13AM -0500, Edward Kuns wrote:
>> looks like maybe I cannot use the installer to set up RAID mirroring
>> for /boot or /boot/efi.  I may have to set that up after the fact.
> 
> In November 2020 I had this discussion on debian-user:
> 
>      https://www.mail-archive.com/debian-user@lists.debian.org/msg762784.html
> 
> The summary was that the ESP is for the firmware and the firmware
> doesn't know about MD RAID, so is only ever going to see the member
> devices.

Indeed.

> You could lie to the firmware and tell it that each MD member device
> is an ESP, but it isn't. This will probably work as long as you use
> the correct metadata format (so the MD metadata is at the end and
> the firmware is fooled that the member device is just a normal
> partition). BUT it is in theory possible for the firmware to write
> to the ESP and that would cause a broken array when you boot, which
> you'd then recover by randomly choosing one of the member devices as
> the "correct" one.

Pretty low risk, I think, but yes.  If you construct the raid with
what the firmware considers the "first" bootable ESP as member role
0, mdadm will sync in the right direction.  Fragile, but it generally
works.

> Some people (myself included, after discovering all that) decided
> that putting ESP on an MD device was too complicated due to these
> issues and that it would be better to have one ESP on each bootable
> device and be able to boot from any of them. The primary one is
> synced to all the others any time there is a system update.

I started doing this with my work server.  I wrote a hook script for 
initramfs updates to ensure everything was in place.

> Ubuntu have patched grub to detect multiple ESP and install grub on
> all of them.

Didn't know this.  I will experiment to see if I can retire my hook.

> In theory it would be possible to write an EFI firmware module that
> understands MD devices and then you could put the ESP on an MD array
> in the same way that grub can boot off of an MD array.

Yeah, not holding my breath for this.

> Cheers,
> Andy

Thanks!  Learned something new today.

Phil



* Re: Redundant EFI System Partitions (Was Re: How does one enable SCTERC on an NVMe drive (and other install questions))
  2021-06-26 14:36   ` Phil Turmel
@ 2021-06-27  3:13     ` Chris Murphy
  2021-06-28  2:20       ` Edward Kuns
  0 siblings, 1 reply; 10+ messages in thread
From: Chris Murphy @ 2021-06-27  3:13 UTC (permalink / raw)
  To: Phil Turmel; +Cc: Linux-RAID

On Sat, Jun 26, 2021 at 8:36 AM Phil Turmel <philip@turmel.org> wrote:

> > You could lie to the firmware and tell it that each MD member device
> > is an ESP, but it isn't. This will probably work as long as you use
> > the correct metadata format (so the MD metadata is at the end and
> > the firmware is fooled that the member device is just a normal
> > partition). BUT it is in theory possible for the firmware to write
> > to the ESP and that would cause a broken array when you boot, which
> > you'd then recover by randomly choosing one of the member devices as
> > the "correct" one.
>
> Pretty low risk, I think, but yes.  If you construct the raid with what
> the EFI system thinks as the "first" bootable ESP as member role 0,
> mdadm will sync correctly.  Fragile, but generally works.

I think it's unreliable. GRUB can write to the ESP when grubenv is on
it. And sd-boot likewise can write to the ESP as part of
https://systemd.io/AUTOMATIC_BOOT_ASSESSMENT/

And the firmware itself can write to the ESP for any reason but most
commonly when cleaning up after firmware updates. Any of these events
would write to just one of the members, and involve file system
writes. So now what happens when they're assembled by mdadm as a raid,
and the two member devices have the same event count, and yet now
completely different file system states? I think it's a train wreck.



-- 
Chris Murphy


* Re: Redundant EFI System Partitions (Was Re: How does one enable SCTERC on an NVMe drive (and other install questions))
  2021-06-27  3:13     ` Chris Murphy
@ 2021-06-28  2:20       ` Edward Kuns
  2021-06-28  3:59         ` Chris Murphy
  0 siblings, 1 reply; 10+ messages in thread
From: Edward Kuns @ 2021-06-28  2:20 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Phil Turmel, Linux-RAID

On Sat, Jun 26, 2021 at 10:14 PM Chris Murphy <lists@colorremedies.com> wrote:
> I think it's unreliable. GRUB can write to the ESP when grubenv is on
> it. And sd-boot likewise can write to the ESP as part of
> https://systemd.io/AUTOMATIC_BOOT_ASSESSMENT/
>
> And the firmware itself can write to the ESP for any reason but most
> commonly when cleaning up after firmware updates. Any of these events
> would write to just one of the members, and involve file system
> writes. So now what happens when they're assembled by mdadm as a raid,
> and the two member devices have the same event count, and yet now
> completely different file system states? I think it's a train wreck.

It sounds like the least risky option is just manually creating more
than one ESP and manually syncing them periodically as Andy Smith
mentioned.  (Or automatically syncing them upon every boot.)

                Eddie


* Re: Redundant EFI System Partitions (Was Re: How does one enable SCTERC on an NVMe drive (and other install questions))
  2021-06-28  2:20       ` Edward Kuns
@ 2021-06-28  3:59         ` Chris Murphy
  0 siblings, 0 replies; 10+ messages in thread
From: Chris Murphy @ 2021-06-28  3:59 UTC (permalink / raw)
  To: Edward Kuns; +Cc: Chris Murphy, Phil Turmel, Linux-RAID

On Sun, Jun 27, 2021 at 8:21 PM Edward Kuns <eddie.kuns@gmail.com> wrote:
>
> On Sat, Jun 26, 2021 at 10:14 PM Chris Murphy <lists@colorremedies.com> wrote:
> > I think it's unreliable. GRUB can write to the ESP when grubenv is on
> > it. And sd-boot likewise can write to the ESP as part of
> > https://systemd.io/AUTOMATIC_BOOT_ASSESSMENT/
> >
> > And the firmware itself can write to the ESP for any reason but most
> > commonly when cleaning up after firmware updates. Any of these events
> > would write to just one of the members, and involve file system
> > writes. So now what happens when they're assembled by mdadm as a raid,
> > and the two member devices have the same event count, and yet now
> > completely different file system states? I think it's a train wreck.
>
> It sounds like the least risky option is just manually creating more
> than one ESP and manually syncing them periodically as Andy Smith
> mentioned.  (Or automatically syncing them upon every boot.)
>
>                 Eddie

I'd like to say we are definitely better off with stale ESPs
occasionally being used than with corrupt file systems.  That's
probably almost always true.  But since fallback to another ESP can
be silent, with no information from the pre-boot environment ending
up in the system journal to tell us which ESP actually booted the
system, it might be false comfort.




-- 
Chris Murphy

