* [PATCH 0/2] NPT virtualization follow-up
@ 2010-09-14 15:46 Joerg Roedel
2010-09-14 15:46 ` [PATCH 1/2] KVM: MMU: Don't track nested fault info in error-code Joerg Roedel
2010-09-14 15:46 ` [PATCH 2/2] KVM: MMU: Use base_role.nxe for mmu.nx Joerg Roedel
0 siblings, 2 replies; 8+ messages in thread
From: Joerg Roedel @ 2010-09-14 15:46 UTC (permalink / raw)
To: Avi Kivity, Marcelo Tosatti; +Cc: kvm, linux-kernel
Hi Avi, Marcelo,
this patch-set includes two follow-on patches to the recently merged NPT
virtualization patch-set. These are the patches Avi requested in his
review of the v4 NPT virtualization patch-set.
Joerg
* [PATCH 1/2] KVM: MMU: Don't track nested fault info in error-code
2010-09-14 15:46 [PATCH 0/2] NPT virtualization follow-up Joerg Roedel
@ 2010-09-14 15:46 ` Joerg Roedel
2010-09-16 14:01 ` Avi Kivity
2010-09-14 15:46 ` [PATCH 2/2] KVM: MMU: Use base_role.nxe for mmu.nx Joerg Roedel
1 sibling, 1 reply; 8+ messages in thread
From: Joerg Roedel @ 2010-09-14 15:46 UTC (permalink / raw)
To: Avi Kivity, Marcelo Tosatti; +Cc: kvm, linux-kernel, Joerg Roedel
This patch moves the information about whether a page-fault
was nested out of the error code and into a separate
variable in the fault struct.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/mmu.h | 1 -
arch/x86/kvm/x86.c | 14 ++++----------
3 files changed, 5 insertions(+), 11 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3a00741..8a83177 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -322,6 +322,7 @@ struct kvm_vcpu_arch {
struct {
u64 address;
unsigned error_code;
+ bool nested;
} fault;
/* only needed in kvm_pv_mmu_op() path, but it's hot so
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 513abbb..7086ca8 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -47,7 +47,6 @@
#define PFERR_USER_MASK (1U << 2)
#define PFERR_RSVD_MASK (1U << 3)
#define PFERR_FETCH_MASK (1U << 4)
-#define PFERR_NESTED_MASK (1U << 31)
int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 sptes[4]);
int kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *context);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3ff0a8f..335519f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -340,18 +340,12 @@ void kvm_inject_page_fault(struct kvm_vcpu *vcpu)
void kvm_propagate_fault(struct kvm_vcpu *vcpu)
{
- u32 nested, error;
-
- error = vcpu->arch.fault.error_code;
- nested = error & PFERR_NESTED_MASK;
- error = error & ~PFERR_NESTED_MASK;
-
- vcpu->arch.fault.error_code = error;
-
- if (mmu_is_nested(vcpu) && !nested)
+ if (mmu_is_nested(vcpu) && !vcpu->arch.fault.nested)
vcpu->arch.nested_mmu.inject_page_fault(vcpu);
else
vcpu->arch.mmu.inject_page_fault(vcpu);
+
+ vcpu->arch.fault.nested = false;
}
void kvm_inject_nmi(struct kvm_vcpu *vcpu)
@@ -3518,7 +3512,7 @@ static gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access)
access |= PFERR_USER_MASK;
t_gpa = vcpu->arch.mmu.gva_to_gpa(vcpu, gpa, access, &error);
if (t_gpa == UNMAPPED_GVA)
- vcpu->arch.fault.error_code |= PFERR_NESTED_MASK;
+ vcpu->arch.fault.nested = true;
return t_gpa;
}
--
1.7.0.4
* [PATCH 2/2] KVM: MMU: Use base_role.nxe for mmu.nx
2010-09-14 15:46 [PATCH 0/2] NPT virtualization follow-up Joerg Roedel
2010-09-14 15:46 ` [PATCH 1/2] KVM: MMU: Don't track nested fault info in error-code Joerg Roedel
@ 2010-09-14 15:46 ` Joerg Roedel
2010-09-14 22:08 ` Marcelo Tosatti
1 sibling, 1 reply; 8+ messages in thread
From: Joerg Roedel @ 2010-09-14 15:46 UTC (permalink / raw)
To: Avi Kivity, Marcelo Tosatti; +Cc: kvm, linux-kernel, Joerg Roedel
This patch removes the mmu.nx field and uses the equivalent
field mmu.base_role.nxe instead.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
arch/x86/include/asm/kvm_host.h | 2 --
arch/x86/kvm/mmu.c | 27 +++++++++++++--------------
arch/x86/kvm/paging_tmpl.h | 4 ++--
arch/x86/kvm/x86.c | 3 ---
4 files changed, 15 insertions(+), 21 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 8a83177..50506be 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -259,8 +259,6 @@ struct kvm_mmu {
u64 *lm_root;
u64 rsvd_bits_mask[2][4];
- bool nx;
-
u64 pdptrs[4]; /* pae */
};
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3ce56bf..21d2983 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -238,7 +238,7 @@ static int is_cpuid_PSE36(void)
static int is_nx(struct kvm_vcpu *vcpu)
{
- return vcpu->arch.efer & EFER_NX;
+ return !!(vcpu->arch.efer & EFER_NX);
}
static int is_shadow_present_pte(u64 pte)
@@ -2634,7 +2634,7 @@ static int nonpaging_init_context(struct kvm_vcpu *vcpu,
context->shadow_root_level = PT32E_ROOT_LEVEL;
context->root_hpa = INVALID_PAGE;
context->direct_map = true;
- context->nx = false;
+ context->base_role.nxe = 0;
return 0;
}
@@ -2688,7 +2688,7 @@ static void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu,
int maxphyaddr = cpuid_maxphyaddr(vcpu);
u64 exb_bit_rsvd = 0;
- if (!context->nx)
+ if (!context->base_role.nxe)
exb_bit_rsvd = rsvd_bits(63, 63);
switch (level) {
case PT32_ROOT_LEVEL:
@@ -2747,7 +2747,7 @@ static int paging64_init_context_common(struct kvm_vcpu *vcpu,
struct kvm_mmu *context,
int level)
{
- context->nx = is_nx(vcpu);
+ context->base_role.nxe = is_nx(vcpu);
reset_rsvds_bits_mask(vcpu, context, level);
@@ -2775,7 +2775,7 @@ static int paging64_init_context(struct kvm_vcpu *vcpu,
static int paging32_init_context(struct kvm_vcpu *vcpu,
struct kvm_mmu *context)
{
- context->nx = false;
+ context->base_role.nxe = 0;
reset_rsvds_bits_mask(vcpu, context, PT32_ROOT_LEVEL);
@@ -2815,24 +2815,23 @@ static int init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
context->set_cr3 = kvm_x86_ops->set_tdp_cr3;
context->get_cr3 = get_cr3;
context->inject_page_fault = kvm_inject_page_fault;
- context->nx = is_nx(vcpu);
if (!is_paging(vcpu)) {
- context->nx = false;
+ context->base_role.nxe = 0;
context->gva_to_gpa = nonpaging_gva_to_gpa;
context->root_level = 0;
} else if (is_long_mode(vcpu)) {
- context->nx = is_nx(vcpu);
+ context->base_role.nxe = is_nx(vcpu);
reset_rsvds_bits_mask(vcpu, context, PT64_ROOT_LEVEL);
context->gva_to_gpa = paging64_gva_to_gpa;
context->root_level = PT64_ROOT_LEVEL;
} else if (is_pae(vcpu)) {
- context->nx = is_nx(vcpu);
+ context->base_role.nxe = is_nx(vcpu);
reset_rsvds_bits_mask(vcpu, context, PT32E_ROOT_LEVEL);
context->gva_to_gpa = paging64_gva_to_gpa;
context->root_level = PT32E_ROOT_LEVEL;
} else {
- context->nx = false;
+ context->base_role.nxe = 0;
reset_rsvds_bits_mask(vcpu, context, PT32_ROOT_LEVEL);
context->gva_to_gpa = paging32_gva_to_gpa;
context->root_level = PT32_ROOT_LEVEL;
@@ -2888,21 +2887,21 @@ static int init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
* functions between mmu and nested_mmu are swapped.
*/
if (!is_paging(vcpu)) {
- g_context->nx = false;
+ g_context->base_role.nxe = 0;
g_context->root_level = 0;
g_context->gva_to_gpa = nonpaging_gva_to_gpa_nested;
} else if (is_long_mode(vcpu)) {
- g_context->nx = is_nx(vcpu);
+ g_context->base_role.nxe = is_nx(vcpu);
reset_rsvds_bits_mask(vcpu, g_context, PT64_ROOT_LEVEL);
g_context->root_level = PT64_ROOT_LEVEL;
g_context->gva_to_gpa = paging64_gva_to_gpa_nested;
} else if (is_pae(vcpu)) {
- g_context->nx = is_nx(vcpu);
+ g_context->base_role.nxe = is_nx(vcpu);
reset_rsvds_bits_mask(vcpu, g_context, PT32E_ROOT_LEVEL);
g_context->root_level = PT32E_ROOT_LEVEL;
g_context->gva_to_gpa = paging64_gva_to_gpa_nested;
} else {
- g_context->nx = false;
+ g_context->base_role.nxe = 0;
reset_rsvds_bits_mask(vcpu, g_context, PT32_ROOT_LEVEL);
g_context->root_level = PT32_ROOT_LEVEL;
g_context->gva_to_gpa = paging32_gva_to_gpa_nested;
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 2bdd843..9e85736 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -105,7 +105,7 @@ static unsigned FNAME(gpte_access)(struct kvm_vcpu *vcpu, pt_element_t gpte)
access = (gpte & (PT_WRITABLE_MASK | PT_USER_MASK)) | ACC_EXEC_MASK;
#if PTTYPE == 64
- if (vcpu->arch.mmu.nx)
+ if (vcpu->arch.mmu.base_role.nxe)
access &= ~(gpte >> PT64_NX_SHIFT);
#endif
return access;
@@ -272,7 +272,7 @@ error:
walker->error_code |= PFERR_WRITE_MASK;
if (user_fault)
walker->error_code |= PFERR_USER_MASK;
- if (fetch_fault && mmu->nx)
+ if (fetch_fault && mmu->base_role.nxe)
walker->error_code |= PFERR_FETCH_MASK;
if (rsvd_fault)
walker->error_code |= PFERR_RSVD_MASK;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 335519f..5464f31 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -831,9 +831,6 @@ static int set_efer(struct kvm_vcpu *vcpu, u64 efer)
kvm_x86_ops->set_efer(vcpu, efer);
- vcpu->arch.mmu.base_role.nxe = (efer & EFER_NX) && !tdp_enabled;
- kvm_mmu_reset_context(vcpu);
-
/* Update reserved bits */
if ((efer ^ old_efer) & EFER_NX)
kvm_mmu_reset_context(vcpu);
--
1.7.0.4
* Re: [PATCH 2/2] KVM: MMU: Use base_role.nxe for mmu.nx
2010-09-14 15:46 ` [PATCH 2/2] KVM: MMU: Use base_role.nxe for mmu.nx Joerg Roedel
@ 2010-09-14 22:08 ` Marcelo Tosatti
2010-09-15 8:48 ` Roedel, Joerg
0 siblings, 1 reply; 8+ messages in thread
From: Marcelo Tosatti @ 2010-09-14 22:08 UTC (permalink / raw)
To: Joerg Roedel; +Cc: Avi Kivity, kvm, linux-kernel
On Tue, Sep 14, 2010 at 05:46:13PM +0200, Joerg Roedel wrote:
> This patch removes the mmu.nx field and uses the equivalent
> field mmu.base_role.nxe instead.
>
> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
> ---
> arch/x86/include/asm/kvm_host.h | 2 --
> arch/x86/kvm/mmu.c | 27 +++++++++++++--------------
> arch/x86/kvm/paging_tmpl.h | 4 ++--
> arch/x86/kvm/x86.c | 3 ---
> 4 files changed, 15 insertions(+), 21 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 8a83177..50506be 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -259,8 +259,6 @@ struct kvm_mmu {
> u64 *lm_root;
> u64 rsvd_bits_mask[2][4];
>
> - bool nx;
> -
> u64 pdptrs[4]; /* pae */
> };
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 3ce56bf..21d2983 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -238,7 +238,7 @@ static int is_cpuid_PSE36(void)
>
> static int is_nx(struct kvm_vcpu *vcpu)
> {
> - return vcpu->arch.efer & EFER_NX;
> + return !!(vcpu->arch.efer & EFER_NX);
> }
>
> static int is_shadow_present_pte(u64 pte)
> @@ -2634,7 +2634,7 @@ static int nonpaging_init_context(struct kvm_vcpu *vcpu,
> context->shadow_root_level = PT32E_ROOT_LEVEL;
> context->root_hpa = INVALID_PAGE;
> context->direct_map = true;
> - context->nx = false;
> + context->base_role.nxe = 0;
> return 0;
> }
>
> @@ -2688,7 +2688,7 @@ static void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu,
> int maxphyaddr = cpuid_maxphyaddr(vcpu);
> u64 exb_bit_rsvd = 0;
>
> - if (!context->nx)
> + if (!context->base_role.nxe)
> exb_bit_rsvd = rsvd_bits(63, 63);
> switch (level) {
> case PT32_ROOT_LEVEL:
> @@ -2747,7 +2747,7 @@ static int paging64_init_context_common(struct kvm_vcpu *vcpu,
> struct kvm_mmu *context,
> int level)
> {
> - context->nx = is_nx(vcpu);
> + context->base_role.nxe = is_nx(vcpu);
>
> reset_rsvds_bits_mask(vcpu, context, level);
>
> @@ -2775,7 +2775,7 @@ static int paging64_init_context(struct kvm_vcpu *vcpu,
> static int paging32_init_context(struct kvm_vcpu *vcpu,
> struct kvm_mmu *context)
> {
> - context->nx = false;
> + context->base_role.nxe = 0;
>
> reset_rsvds_bits_mask(vcpu, context, PT32_ROOT_LEVEL);
>
> @@ -2815,24 +2815,23 @@ static int init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
> context->set_cr3 = kvm_x86_ops->set_tdp_cr3;
> context->get_cr3 = get_cr3;
> context->inject_page_fault = kvm_inject_page_fault;
> - context->nx = is_nx(vcpu);
>
> if (!is_paging(vcpu)) {
> - context->nx = false;
> + context->base_role.nxe = 0;
> context->gva_to_gpa = nonpaging_gva_to_gpa;
> context->root_level = 0;
> } else if (is_long_mode(vcpu)) {
> - context->nx = is_nx(vcpu);
> + context->base_role.nxe = is_nx(vcpu);
> reset_rsvds_bits_mask(vcpu, context, PT64_ROOT_LEVEL);
> context->gva_to_gpa = paging64_gva_to_gpa;
> context->root_level = PT64_ROOT_LEVEL;
> } else if (is_pae(vcpu)) {
> - context->nx = is_nx(vcpu);
> + context->base_role.nxe = is_nx(vcpu);
> reset_rsvds_bits_mask(vcpu, context, PT32E_ROOT_LEVEL);
> context->gva_to_gpa = paging64_gva_to_gpa;
> context->root_level = PT32E_ROOT_LEVEL;
> } else {
> - context->nx = false;
> + context->base_role.nxe = 0;
> reset_rsvds_bits_mask(vcpu, context, PT32_ROOT_LEVEL);
> context->gva_to_gpa = paging32_gva_to_gpa;
> context->root_level = PT32_ROOT_LEVEL;
For tdp better set base_role.nxe to zero, otherwise duplicate tdp
pagetables can be created if the guest switches between nx/non-nx.
* Re: [PATCH 2/2] KVM: MMU: Use base_role.nxe for mmu.nx
2010-09-14 22:08 ` Marcelo Tosatti
@ 2010-09-15 8:48 ` Roedel, Joerg
2010-09-15 14:48 ` Marcelo Tosatti
2010-09-16 14:00 ` Avi Kivity
0 siblings, 2 replies; 8+ messages in thread
From: Roedel, Joerg @ 2010-09-15 8:48 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: Avi Kivity, kvm, linux-kernel
On Tue, Sep 14, 2010 at 06:08:37PM -0400, Marcelo Tosatti wrote:
>
> For tdp better set base_role.nxe to zero, otherwise duplicate tdp
> pagetables can be created if the guest switches between nx/non-nx.
>
This does not work because bit 63 is marked as reserved if base_role.nxe
is 0. If the walk_addr_generic function then runs with tdp enabled it
would report a set nx bit as a rsvd fault.
We also can't use is_nx() in those paths because it does not distinguish
between l1 and l2 nx. Are there guests that switch the efer.nx bit
regularly enough that it matters? If so I would suggest dropping this
patch and keeping mmu.nx.
Joerg
--
AMD Operating System Research Center
Advanced Micro Devices GmbH Einsteinring 24 85609 Dornach
General Managers: Alberto Bozzo, Andrew Bowd
Registration: Dornach, Landkr. Muenchen; Registerger. Muenchen, HRB Nr. 43632
* Re: [PATCH 2/2] KVM: MMU: Use base_role.nxe for mmu.nx
2010-09-15 8:48 ` Roedel, Joerg
@ 2010-09-15 14:48 ` Marcelo Tosatti
2010-09-16 14:00 ` Avi Kivity
1 sibling, 0 replies; 8+ messages in thread
From: Marcelo Tosatti @ 2010-09-15 14:48 UTC (permalink / raw)
To: Roedel, Joerg; +Cc: Avi Kivity, kvm, linux-kernel
On Wed, Sep 15, 2010 at 10:48:33AM +0200, Roedel, Joerg wrote:
> On Tue, Sep 14, 2010 at 06:08:37PM -0400, Marcelo Tosatti wrote:
> >
> > For tdp better set base_role.nxe to zero, otherwise duplicate tdp
> > pagetables can be created if the guest switches between nx/non-nx.
> >
>
> This does not work because bit 63 is marked as reserved if base_role.nxe
> is 0. If the walk_addr_generic function then runs with tdp enabled it
> would report a set nx bit as a rsvd fault.
Ah, OK.
> We also can't use is_nx() in those paths because it does not distinguish
> between l1 and l2 nx. Are there guests that switch the efer.nx bit
> regularly enough that it matters? If so I would suggest dropping this
> patch and keeping mmu.nx.
Well, I don't think it would be a common scenario. Ignore me.
* Re: [PATCH 2/2] KVM: MMU: Use base_role.nxe for mmu.nx
2010-09-15 8:48 ` Roedel, Joerg
2010-09-15 14:48 ` Marcelo Tosatti
@ 2010-09-16 14:00 ` Avi Kivity
1 sibling, 0 replies; 8+ messages in thread
From: Avi Kivity @ 2010-09-16 14:00 UTC (permalink / raw)
To: Roedel, Joerg; +Cc: Marcelo Tosatti, kvm, linux-kernel
On 09/15/2010 10:48 AM, Roedel, Joerg wrote:
> On Tue, Sep 14, 2010 at 06:08:37PM -0400, Marcelo Tosatti wrote:
> >
> > For tdp better set base_role.nxe to zero, otherwise duplicate tdp
> > pagetables can be created if the guest switches between nx/non-nx.
> >
>
> This does not work because bit 63 is marked as reserved if base_role.nxe
> is 0. If the walk_addr_generic function then runs with tdp enabled it
> would report a set nx bit as a rsvd fault.
> We also can't use is_nx() in those paths because it does not distinguish
> between l1 and l2 nx. Are there guests that switch the efer.nx bit
> regularly enough that it matters? If so I would suggest dropping this
> patch and keeping mmu.nx.
>
>
I'll do that then. I'm still unhappy about the duplication.
--
error compiling committee.c: too many arguments to function
* Re: [PATCH 1/2] KVM: MMU: Don't track nested fault info in error-code
2010-09-14 15:46 ` [PATCH 1/2] KVM: MMU: Don't track nested fault info in error-code Joerg Roedel
@ 2010-09-16 14:01 ` Avi Kivity
0 siblings, 0 replies; 8+ messages in thread
From: Avi Kivity @ 2010-09-16 14:01 UTC (permalink / raw)
To: Joerg Roedel; +Cc: Marcelo Tosatti, kvm, linux-kernel
On 09/14/2010 05:46 PM, Joerg Roedel wrote:
> This patch moves the information about whether a page-fault
> was nested out of the error code and into a separate
> variable in the fault struct.
>
Applied, thanks.
--
error compiling committee.c: too many arguments to function