From: Quentin Perret <qperret@google.com>
To: Yanan Wang <wangyanan55@huawei.com>
Cc: Marc Zyngier <maz@kernel.org>, Will Deacon <will@kernel.org>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	James Morse <james.morse@arm.com>,
	Julien Thierry <julien.thierry.kdev@gmail.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Gavin Shan <gshan@redhat.com>,
	wanghaibin.wang@huawei.com, zhukeqian1@huawei.com,
	yuzenghui@huawei.com
Subject: Re: [PATCH v5 5/6] KVM: arm64: Move I-cache flush to the fault handlers
Date: Wed, 2 Jun 2021 10:58:28 +0000	[thread overview]
Message-ID: <YLdkVH0G2Lq9vPc5@google.com> (raw)
In-Reply-To: <20210415115032.35760-6-wangyanan55@huawei.com>

On Thursday 15 Apr 2021 at 19:50:31 (+0800), Yanan Wang wrote:
> In this patch, we move invalidation of I-cache to the fault handlers to

Nit: please avoid using 'This patch' in commit messages, see
Documentation/process/submitting-patches.rst.

> avoid unnecessary I-cache maintenance. On the map path, invalidate the
> I-cache if we are going to create an executable stage-2 mapping for the
> guest. And on the permission path, invalidate the I-cache if we are
> going to add executable permission to the existing guest stage-2 mapping.
> 
> Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
> ---
>  arch/arm64/include/asm/kvm_mmu.h | 15 --------------
>  arch/arm64/kvm/hyp/pgtable.c     | 35 +++++++++++++++++++++++++++++++-
>  arch/arm64/kvm/mmu.c             |  9 +-------
>  3 files changed, 35 insertions(+), 24 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index e9b163c5f023..155492fe5b15 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -187,21 +187,6 @@ static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
>  	return (vcpu_read_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
>  }
>  
> -static inline void __invalidate_icache_guest_page(kvm_pfn_t pfn,
> -						  unsigned long size)
> -{
> -	if (icache_is_aliasing()) {
> -		/* any kind of VIPT cache */
> -		__flush_icache_all();
> -	} else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
> -		/* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
> -		void *va = page_address(pfn_to_page(pfn));
> -
> -		invalidate_icache_range((unsigned long)va,
> -					(unsigned long)va + size);
> -	}
> -}
> -
>  void kvm_set_way_flush(struct kvm_vcpu *vcpu);
>  void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);
>  
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index b480f6d1171e..9f4429d80df0 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -568,6 +568,26 @@ static bool stage2_pte_cacheable(struct kvm_pgtable *pgt, kvm_pte_t pte)
>  	return memattr == KVM_S2_MEMATTR(pgt, NORMAL);
>  }
>  
> +static bool stage2_pte_executable(kvm_pte_t pte)
> +{
> +	return !(pte & KVM_PTE_LEAF_ATTR_HI_S2_XN);
> +}
> +
> +static void stage2_invalidate_icache(void *addr, u64 size)
> +{
> +	if (icache_is_aliasing()) {
> +		/* Any kind of VIPT cache */
> +		__flush_icache_all();
> +	} else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
> +		/*
> +		 * See comment in __kvm_tlb_flush_vmid_ipa().
> +		 * Invalidate PIPT, or VPIPT at EL2.
> +		 */
> +		invalidate_icache_range((unsigned long)addr,
> +					(unsigned long)addr + size);
> +	}
> +}
> +
>  static void stage2_put_pte(kvm_pte_t *ptep, struct kvm_s2_mmu *mmu, u64 addr,
>  			   u32 level, struct kvm_pgtable_mm_ops *mm_ops)
>  {
> @@ -618,6 +638,10 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
>  		if (stage2_pte_cacheable(pgt, new) && !stage2_has_fwb(pgt))
>  			__flush_dcache_area(mm_ops->phys_to_virt(phys),
>  					    granule);
> +
> +		if (stage2_pte_executable(new))
> +			stage2_invalidate_icache(mm_ops->phys_to_virt(phys),
> +						 granule);
>  	}
>  
>  	smp_store_release(ptep, new);
> @@ -896,8 +920,17 @@ static int stage2_attr_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
>  	 * but worst-case the access flag update gets lost and will be
>  	 * set on the next access instead.
>  	 */
> -	if (data->pte != pte)
> +	if (data->pte != pte) {
> +		/*
> +		 * Invalidate the instruction cache before updating
> +		 * if we are going to add the executable permission
> +		 * for the guest stage-2 PTE.
> +		 */
> +		if (!stage2_pte_executable(*ptep) && stage2_pte_executable(pte))
> +			stage2_invalidate_icache(kvm_pte_follow(pte, data->mm_ops),
> +						 kvm_granule_size(level));
>  		WRITE_ONCE(*ptep, pte);
> +	}

As with the dcache maintenance, it seems like this would be best placed
behind an optional mm_ops callback, with the kernel providing the
implementation.
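
Something along these lines, purely as a sketch -- the member name and
signature below are placeholders, not an existing interface:

	/* kvm_pgtable.h: optional CMO hook, owned by the page-table user */
	struct kvm_pgtable_mm_ops {
		/* ... existing callbacks ... */
		void (*icache_inval_pou)(void *addr, size_t size);
	};

	/* pgtable.c: only call it when an implementation was provided */
	if (stage2_pte_executable(new) && mm_ops->icache_inval_pou)
		mm_ops->icache_inval_pou(mm_ops->phys_to_virt(phys),
					 granule);

The kernel could then back the callback with the existing
__invalidate_icache_guest_page() logic, and hyp code would be free to
provide its own implementation (or none at all).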

Thanks,
Quentin
