* [PATCH 0/3 v3] KVM: nVMX: nSVM: Add more statistics to KVM debugfs
@ 2021-06-09  1:19 Krish Sadhukhan
  2021-06-09  1:19 ` [PATCH 1/3 v3] KVM: nVMX: nSVM: 'nested_run' should count guest-entry attempts that make it to guest code Krish Sadhukhan
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Krish Sadhukhan @ 2021-06-09  1:19 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, vkuznets, wanpengli, joro

v2 -> v3:
	* Patch "KVM: nVMX: Reset 'nested_run_pending' only in guest
	  mode" from v2 has been dropped.
	* Patch "KVM: nVMX: nSVM: Add a new debugfs statistic to show how
	  many VCPUs have run nested guests" from v2 has been reverted back
	  to what it was in v1 where the statistic tracks if the VCPU
	  is currently executing in guest mode. This modifiled patch is
	  patch# 2 in v3. The name of the statistic has been changed to
	  'guest_mode' to better reflect what it is for.
	* The name of the statistic in patch# 3 has been changed to 'vcpus'.

[PATCH 1/3 v3] KVM: nVMX: nSVM: 'nested_run' should count guest-entry attempts that make it to guest code
[PATCH 2/3 v3] KVM: nVMX: nSVM: Add a new VCPU statistic to show if VCPU is in guest mode
[PATCH 3/3 v3] KVM: x86: Add a new VM statistic to show number of VCPUs created in a given VM

 arch/x86/include/asm/kvm_host.h |  4 +++-
 arch/x86/kvm/debugfs.c          | 11 +++++++++++
 arch/x86/kvm/kvm_cache_regs.h   |  3 +++
 arch/x86/kvm/svm/nested.c       |  2 --
 arch/x86/kvm/svm/svm.c          |  6 ++++++
 arch/x86/kvm/vmx/nested.c       |  2 --
 arch/x86/kvm/vmx/vmx.c          | 13 ++++++++++++-
 arch/x86/kvm/x86.c              |  4 +++-
 virt/kvm/kvm_main.c             |  2 ++
 9 files changed, 40 insertions(+), 7 deletions(-)

Krish Sadhukhan (3):
      KVM: nVMX: nSVM: 'nested_run' should count guest-entry attempts that make it to guest code
      KVM: nVMX: nSVM: Add a new VCPU statistic to show if VCPU is in guest mode
      KVM: x86: Add a new VM statistic to show number of VCPUs created in a given VM
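
For reference, with this series applied the new counters should show up in
the KVM debugfs hierarchy roughly as follows (illustrative paths; the
<pid>-<fd> directory name depends on the VM, and the per-VM files for
VCPU statistics aggregate the counters across all of the VM's VCPUs):

	/sys/kernel/debug/kvm/<pid>-<fd>/vcpus                (patch 3, per VM)
	/sys/kernel/debug/kvm/<pid>-<fd>/nested_runs          (patch 1, summed over VCPUs)
	/sys/kernel/debug/kvm/<pid>-<fd>/vcpu<N>/guest_mode   (patch 2, per VCPU)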



* [PATCH 1/3 v3] KVM: nVMX: nSVM: 'nested_run' should count guest-entry attempts that make it to guest code
  2021-06-09  1:19 [PATCH 0/3 v3] KVM: nVMX: nSVM: Add more statistics to KVM debugfs Krish Sadhukhan
@ 2021-06-09  1:19 ` Krish Sadhukhan
  2021-06-09  1:19 ` [PATCH 2/3 v3] KVM: nVMX: nSVM: Add a new VCPU statistic to show if VCPU is in guest mode Krish Sadhukhan
  2021-06-09  1:19 ` [PATCH 3/3 v3] KVM: x86: Add a new VM statistic to show number of VCPUs created in a given VM Krish Sadhukhan
  2 siblings, 0 replies; 8+ messages in thread
From: Krish Sadhukhan @ 2021-06-09  1:19 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, vkuznets, wanpengli, joro

Currently, the 'nested_run' statistic counts all guest-entry attempts,
including those that fail during vmentry checks on Intel and during
consistency checks on AMD. Convert this statistic to count only the
guest entries that pass these state checks and reach guest code, so that
it reflects the number of guest entries that actually executed, or
attempted to execute, guest code.

Also, rename this statistic to 'nested_runs' since it is a count.

Signed-off-by: Krish Sadhukhan <Krish.Sadhukhan@oracle.com>
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/svm/nested.c       |  2 --
 arch/x86/kvm/svm/svm.c          |  6 ++++++
 arch/x86/kvm/vmx/nested.c       |  2 --
 arch/x86/kvm/vmx/vmx.c          | 13 ++++++++++++-
 arch/x86/kvm/x86.c              |  2 +-
 6 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 55efbacfc244..cf8557b2b90f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1170,7 +1170,7 @@ struct kvm_vcpu_stat {
 	u64 req_event;
 	u64 halt_poll_success_ns;
 	u64 halt_poll_fail_ns;
-	u64 nested_run;
+	u64 nested_runs;
 	u64 directed_yield_attempted;
 	u64 directed_yield_successful;
 };
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 5e8d8443154e..34fc74b0d58a 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -596,8 +596,6 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 	struct kvm_host_map map;
 	u64 vmcb12_gpa;
 
-	++vcpu->stat.nested_run;
-
 	if (is_smm(vcpu)) {
 		kvm_queue_exception(vcpu, UD_VECTOR);
 		return 1;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4dd9b7856e5b..31646b5c4877 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3872,6 +3872,12 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 	svm->next_rip = 0;
 	if (is_guest_mode(vcpu)) {
 		nested_sync_control_from_vmcb02(svm);
+
+		/* Track VMRUNs that have made it past consistency checking */
+		if (svm->nested.nested_run_pending &&
+		    svm->vmcb->control.exit_code != SVM_EXIT_ERR)
+			++vcpu->stat.nested_runs;
+
 		svm->nested.nested_run_pending = 0;
 	}
 
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 6058a65a6ede..94f70c0af4a4 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3454,8 +3454,6 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 	u32 interrupt_shadow = vmx_get_interrupt_shadow(vcpu);
 	enum nested_evmptrld_status evmptrld_status;
 
-	++vcpu->stat.nested_run;
-
 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f2fd447eed45..fa8df7ab2756 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6839,7 +6839,18 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	kvm_load_host_xsave_state(vcpu);
 
-	vmx->nested.nested_run_pending = 0;
+	if (is_guest_mode(vcpu)) {
+		/*
+		 * Track VMLAUNCH/VMRESUME that have made it past guest state
+		 * checking.
+		 */
+		if (vmx->nested.nested_run_pending &&
+		    !vmx->exit_reason.failed_vmentry)
+			++vcpu->stat.nested_runs;
+
+		vmx->nested.nested_run_pending = 0;
+	}
+
 	vmx->idt_vectoring_info = 0;
 
 	if (unlikely(vmx->fail)) {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5bd550eaf683..6d1f51f6c344 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -243,7 +243,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 	VCPU_STAT("l1d_flush", l1d_flush),
 	VCPU_STAT("halt_poll_success_ns", halt_poll_success_ns),
 	VCPU_STAT("halt_poll_fail_ns", halt_poll_fail_ns),
-	VCPU_STAT("nested_run", nested_run),
+	VCPU_STAT("nested_runs", nested_runs),
 	VCPU_STAT("directed_yield_attempted", directed_yield_attempted),
 	VCPU_STAT("directed_yield_successful", directed_yield_successful),
 	VM_STAT("mmu_shadow_zapped", mmu_shadow_zapped),
-- 
2.27.0



* [PATCH 2/3 v3] KVM: nVMX: nSVM: Add a new VCPU statistic to show if VCPU is in guest mode
  2021-06-09  1:19 [PATCH 0/3 v3] KVM: nVMX: nSVM: Add more statistics to KVM debugfs Krish Sadhukhan
  2021-06-09  1:19 ` [PATCH 1/3 v3] KVM: nVMX: nSVM: 'nested_run' should count guest-entry attempts that make it to guest code Krish Sadhukhan
@ 2021-06-09  1:19 ` Krish Sadhukhan
  2021-06-09  1:19 ` [PATCH 3/3 v3] KVM: x86: Add a new VM statistic to show number of VCPUs created in a given VM Krish Sadhukhan
  2 siblings, 0 replies; 8+ messages in thread
From: Krish Sadhukhan @ 2021-06-09  1:19 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, vkuznets, wanpengli, joro

Add the following per-VCPU statistic to KVM debugfs to show whether a given
VCPU is currently in guest mode, i.e. running a nested guest:

	guest_mode

Because KVM debugfs also presents each VCPU statistic as a per-VM file that
sums the counter across all VCPUs of the VM, the same entry serves as a
per-VM statistic showing the total number of VCPUs that are in guest mode
in a given VM.

Signed-off-by: Krish Sadhukhan <Krish.Sadhukhan@oracle.com>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/debugfs.c          | 11 +++++++++++
 arch/x86/kvm/kvm_cache_regs.h   |  3 +++
 arch/x86/kvm/x86.c              |  1 +
 4 files changed, 16 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index cf8557b2b90f..f6d5387bb88f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1173,6 +1173,7 @@ struct kvm_vcpu_stat {
 	u64 nested_runs;
 	u64 directed_yield_attempted;
 	u64 directed_yield_successful;
+	u64 guest_mode;
 };
 
 struct x86_instruction_info;
diff --git a/arch/x86/kvm/debugfs.c b/arch/x86/kvm/debugfs.c
index 7e818d64bb4d..95a98413dc32 100644
--- a/arch/x86/kvm/debugfs.c
+++ b/arch/x86/kvm/debugfs.c
@@ -17,6 +17,15 @@ static int vcpu_get_timer_advance_ns(void *data, u64 *val)
 
 DEFINE_SIMPLE_ATTRIBUTE(vcpu_timer_advance_ns_fops, vcpu_get_timer_advance_ns, NULL, "%llu\n");
 
+static int vcpu_get_guest_mode(void *data, u64 *val)
+{
+	struct kvm_vcpu *vcpu = (struct kvm_vcpu *) data;
+	*val = vcpu->stat.guest_mode;
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(vcpu_guest_mode_fops, vcpu_get_guest_mode, NULL, "%llu\n");
+
 static int vcpu_get_tsc_offset(void *data, u64 *val)
 {
 	struct kvm_vcpu *vcpu = (struct kvm_vcpu *) data;
@@ -45,6 +54,8 @@ DEFINE_SIMPLE_ATTRIBUTE(vcpu_tsc_scaling_frac_fops, vcpu_get_tsc_scaling_frac_bi
 
 void kvm_arch_create_vcpu_debugfs(struct kvm_vcpu *vcpu, struct dentry *debugfs_dentry)
 {
+	debugfs_create_file("guest_mode", 0444, debugfs_dentry, vcpu,
+			    &vcpu_guest_mode_fops);
 	debugfs_create_file("tsc-offset", 0444, debugfs_dentry, vcpu,
 			    &vcpu_tsc_offset_fops);
 
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 3db5c42c9ecd..ebddbd37a0bf 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -162,6 +162,7 @@ static inline u64 kvm_read_edx_eax(struct kvm_vcpu *vcpu)
 static inline void enter_guest_mode(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.hflags |= HF_GUEST_MASK;
+	vcpu->stat.guest_mode = 1;
 }
 
 static inline void leave_guest_mode(struct kvm_vcpu *vcpu)
@@ -172,6 +173,8 @@ static inline void leave_guest_mode(struct kvm_vcpu *vcpu)
 		vcpu->arch.load_eoi_exitmap_pending = false;
 		kvm_make_request(KVM_REQ_LOAD_EOI_EXITMAP, vcpu);
 	}
+
+	vcpu->stat.guest_mode = 0;
 }
 
 static inline bool is_guest_mode(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6d1f51f6c344..baa953757911 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -246,6 +246,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 	VCPU_STAT("nested_runs", nested_runs),
 	VCPU_STAT("directed_yield_attempted", directed_yield_attempted),
 	VCPU_STAT("directed_yield_successful", directed_yield_successful),
+	VCPU_STAT("guest_mode", guest_mode),
 	VM_STAT("mmu_shadow_zapped", mmu_shadow_zapped),
 	VM_STAT("mmu_pte_write", mmu_pte_write),
 	VM_STAT("mmu_pde_zapped", mmu_pde_zapped),
-- 
2.27.0



* [PATCH 3/3 v3] KVM: x86: Add a new VM statistic to show number of VCPUs created in a given VM
  2021-06-09  1:19 [PATCH 0/3 v3] KVM: nVMX: nSVM: Add more statistics to KVM debugfs Krish Sadhukhan
  2021-06-09  1:19 ` [PATCH 1/3 v3] KVM: nVMX: nSVM: 'nested_run' should count guest-entry attempts that make it to guest code Krish Sadhukhan
  2021-06-09  1:19 ` [PATCH 2/3 v3] KVM: nVMX: nSVM: Add a new VCPU statistic to show if VCPU is in guest mode Krish Sadhukhan
@ 2021-06-09  1:19 ` Krish Sadhukhan
  2021-06-09  5:08     ` kernel test robot
  2021-06-09  7:08     ` kernel test robot
  2 siblings, 2 replies; 8+ messages in thread
From: Krish Sadhukhan @ 2021-06-09  1:19 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, vkuznets, wanpengli, joro

'struct kvm' already has a member, 'created_vcpus', that tracks the number
of VCPUs created in a given VM. Expose it as a new VM statistic in KVM
debugfs. This statistic can be a useful metric for tracking VCPU usage on
a host running customer VMs.
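
As a usage illustration (hypothetical tooling, not part of this series),
the counter can then be read from the VM's debugfs directory like any
other stat; the '1234-11' directory name below is made up:

	/* Minimal sketch: print the per-VM 'vcpus' counter. */
	#include <stdio.h>

	int main(void)
	{
		unsigned long long vcpus;
		FILE *f = fopen("/sys/kernel/debug/kvm/1234-11/vcpus", "r");

		if (!f)
			return 1;
		if (fscanf(f, "%llu", &vcpus) == 1)
			printf("vcpus created: %llu\n", vcpus);
		fclose(f);
		return 0;
	}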

Signed-off-by: Krish Sadhukhan <Krish.Sadhukhan@oracle.com>
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/x86.c              | 1 +
 virt/kvm/kvm_main.c             | 2 ++
 3 files changed, 4 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f6d5387bb88f..8f61a3fc3d39 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1138,6 +1138,7 @@ struct kvm_vm_stat {
 	ulong lpages;
 	ulong nx_lpage_splits;
 	ulong max_mmu_page_hash_collisions;
+	ulong vcpus;
 };
 
 struct kvm_vcpu_stat {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index baa953757911..7a1ff3052488 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -258,6 +258,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 	VM_STAT("largepages", lpages, .mode = 0444),
 	VM_STAT("nx_largepages_splitted", nx_lpage_splits, .mode = 0444),
 	VM_STAT("max_mmu_page_hash_collisions", max_mmu_page_hash_collisions),
+	VM_STAT("vcpus", vcpus),
 	{ NULL }
 };
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6b4feb92dc79..d910e4020a43 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3318,6 +3318,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
 	}
 
 	kvm->created_vcpus++;
+	kvm->stat.vcpus++;
 	mutex_unlock(&kvm->lock);
 
 	r = kvm_arch_vcpu_precreate(kvm, id);
@@ -3394,6 +3395,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
 vcpu_decrement:
 	mutex_lock(&kvm->lock);
 	kvm->created_vcpus--;
+	kvm->stat.vcpus--;
 	mutex_unlock(&kvm->lock);
 	return r;
 }
-- 
2.27.0



* Re: [PATCH 3/3 v3] KVM: x86: Add a new VM statistic to show number of VCPUs created in a given VM
  2021-06-09  1:19 ` [PATCH 3/3 v3] KVM: x86: Add a new VM statistic to show number of VCPUs created in a given VM Krish Sadhukhan
@ 2021-06-09  5:08     ` kernel test robot
  2021-06-09  7:08     ` kernel test robot
  1 sibling, 0 replies; 8+ messages in thread
From: kernel test robot @ 2021-06-09  5:08 UTC (permalink / raw)
  To: Krish Sadhukhan, kvm
  Cc: kbuild-all, pbonzini, jmattson, seanjc, vkuznets, wanpengli, joro


Hi Krish,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on kvm/queue]
[also build test ERROR on v5.13-rc5 next-20210608]
[cannot apply to vhost/linux-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Krish-Sadhukhan/KVM-nVMX-nSVM-Add-more-statistics-to-KVM-debugfs/20210609-101158
base:   https://git.kernel.org/pub/scm/virt/kvm/kvm.git queue
config: s390-randconfig-r034-20210608 (attached as .config)
compiler: s390-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/8b558261089468777eaf3ec89ca30eb954242e4e
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Krish-Sadhukhan/KVM-nVMX-nSVM-Add-more-statistics-to-KVM-debugfs/20210609-101158
        git checkout 8b558261089468777eaf3ec89ca30eb954242e4e
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=s390 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   arch/s390/kvm/../../../virt/kvm/kvm_main.c: In function 'kvm_vm_ioctl_create_vcpu':
>> arch/s390/kvm/../../../virt/kvm/kvm_main.c:3321:11: error: 'struct kvm_vm_stat' has no member named 'vcpus'
    3321 |  kvm->stat.vcpus++;
         |           ^
   arch/s390/kvm/../../../virt/kvm/kvm_main.c:3398:11: error: 'struct kvm_vm_stat' has no member named 'vcpus'
    3398 |  kvm->stat.vcpus--;
         |           ^
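
For context, the error is an architecture-coverage problem:
kvm_vm_ioctl_create_vcpu() in virt/kvm/kvm_main.c is generic code built
for every architecture, while the new 'vcpus' field was added only to
x86's struct kvm_vm_stat, so every non-x86 build breaks. One possible
direction (a hypothetical sketch, not something proposed in this thread)
would be to keep the accounting entirely in x86 arch code, e.g.:

	/*
	 * Hypothetical sketch: bump the stat from x86 arch code so that
	 * generic kvm_main.c never touches the x86-only field. Note this
	 * would count fully created VCPUs rather than reservations. For
	 * example, on the success path of kvm_arch_vcpu_create() in
	 * arch/x86/kvm/x86.c:
	 */
	vcpu->kvm->stat.vcpus++;

	/* ...with the matching decrement in kvm_arch_vcpu_destroy(): */
	vcpu->kvm->stat.vcpus--;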


vim +3321 arch/s390/kvm/../../../virt/kvm/kvm_main.c

  3301	
  3302	/*
  3303	 * Creates some virtual cpus.  Good luck creating more than one.
  3304	 */
  3305	static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
  3306	{
  3307		int r;
  3308		struct kvm_vcpu *vcpu;
  3309		struct page *page;
  3310	
  3311		if (id >= KVM_MAX_VCPU_ID)
  3312			return -EINVAL;
  3313	
  3314		mutex_lock(&kvm->lock);
  3315		if (kvm->created_vcpus == KVM_MAX_VCPUS) {
  3316			mutex_unlock(&kvm->lock);
  3317			return -EINVAL;
  3318		}
  3319	
  3320		kvm->created_vcpus++;
> 3321		kvm->stat.vcpus++;
  3322		mutex_unlock(&kvm->lock);
  3323	
  3324		r = kvm_arch_vcpu_precreate(kvm, id);
  3325		if (r)
  3326			goto vcpu_decrement;
  3327	
  3328		vcpu = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL_ACCOUNT);
  3329		if (!vcpu) {
  3330			r = -ENOMEM;
  3331			goto vcpu_decrement;
  3332		}
  3333	
  3334		BUILD_BUG_ON(sizeof(struct kvm_run) > PAGE_SIZE);
  3335		page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
  3336		if (!page) {
  3337			r = -ENOMEM;
  3338			goto vcpu_free;
  3339		}
  3340		vcpu->run = page_address(page);
  3341	
  3342		kvm_vcpu_init(vcpu, kvm, id);
  3343	
  3344		r = kvm_arch_vcpu_create(vcpu);
  3345		if (r)
  3346			goto vcpu_free_run_page;
  3347	
  3348		if (kvm->dirty_ring_size) {
  3349			r = kvm_dirty_ring_alloc(&vcpu->dirty_ring,
  3350						 id, kvm->dirty_ring_size);
  3351			if (r)
  3352				goto arch_vcpu_destroy;
  3353		}
  3354	
  3355		mutex_lock(&kvm->lock);
  3356		if (kvm_get_vcpu_by_id(kvm, id)) {
  3357			r = -EEXIST;
  3358			goto unlock_vcpu_destroy;
  3359		}
  3360	
  3361		vcpu->vcpu_idx = atomic_read(&kvm->online_vcpus);
  3362		BUG_ON(kvm->vcpus[vcpu->vcpu_idx]);
  3363	
  3364		/* Now it's all set up, let userspace reach it */
  3365		kvm_get_kvm(kvm);
  3366		r = create_vcpu_fd(vcpu);
  3367		if (r < 0) {
  3368			kvm_put_kvm_no_destroy(kvm);
  3369			goto unlock_vcpu_destroy;
  3370		}
  3371	
  3372		kvm->vcpus[vcpu->vcpu_idx] = vcpu;
  3373	
  3374		/*
  3375		 * Pairs with smp_rmb() in kvm_get_vcpu.  Write kvm->vcpus
  3376		 * before kvm->online_vcpu's incremented value.
  3377		 */
  3378		smp_wmb();
  3379		atomic_inc(&kvm->online_vcpus);
  3380	
  3381		mutex_unlock(&kvm->lock);
  3382		kvm_arch_vcpu_postcreate(vcpu);
  3383		kvm_create_vcpu_debugfs(vcpu);
  3384		return r;
  3385	
  3386	unlock_vcpu_destroy:
  3387		mutex_unlock(&kvm->lock);
  3388		kvm_dirty_ring_free(&vcpu->dirty_ring);
  3389	arch_vcpu_destroy:
  3390		kvm_arch_vcpu_destroy(vcpu);
  3391	vcpu_free_run_page:
  3392		free_page((unsigned long)vcpu->run);
  3393	vcpu_free:
  3394		kmem_cache_free(kvm_vcpu_cache, vcpu);
  3395	vcpu_decrement:
  3396		mutex_lock(&kvm->lock);
  3397		kvm->created_vcpus--;
  3398		kvm->stat.vcpus--;
  3399		mutex_unlock(&kvm->lock);
  3400		return r;
  3401	}
  3402	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org


* Re: [PATCH 3/3 v3] KVM: x86: Add a new VM statistic to show number of VCPUs created in a given VM
  2021-06-09  1:19 ` [PATCH 3/3 v3] KVM: x86: Add a new VM statistic to show number of VCPUs created in a given VM Krish Sadhukhan
@ 2021-06-09  7:08     ` kernel test robot
  2021-06-09  7:08     ` kernel test robot
  1 sibling, 0 replies; 8+ messages in thread
From: kernel test robot @ 2021-06-09  7:08 UTC (permalink / raw)
  To: Krish Sadhukhan, kvm
  Cc: kbuild-all, pbonzini, jmattson, seanjc, vkuznets, wanpengli, joro


Hi Krish,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on kvm/queue]
[also build test ERROR on v5.13-rc5 next-20210608]
[cannot apply to vhost/linux-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Krish-Sadhukhan/KVM-nVMX-nSVM-Add-more-statistics-to-KVM-debugfs/20210609-101158
base:   https://git.kernel.org/pub/scm/virt/kvm/kvm.git queue
config: powerpc-pseries_defconfig (attached as .config)
compiler: powerpc64-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/8b558261089468777eaf3ec89ca30eb954242e4e
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Krish-Sadhukhan/KVM-nVMX-nSVM-Add-more-statistics-to-KVM-debugfs/20210609-101158
        git checkout 8b558261089468777eaf3ec89ca30eb954242e4e
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=powerpc 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   arch/powerpc/kvm/../../../virt/kvm/kvm_main.c: In function 'kvm_vm_ioctl_create_vcpu':
>> arch/powerpc/kvm/../../../virt/kvm/kvm_main.c:3321:11: error: 'struct kvm_vm_stat' has no member named 'vcpus'
    3321 |  kvm->stat.vcpus++;
         |           ^
   arch/powerpc/kvm/../../../virt/kvm/kvm_main.c:3398:11: error: 'struct kvm_vm_stat' has no member named 'vcpus'
    3398 |  kvm->stat.vcpus--;
         |           ^


vim +3321 arch/powerpc/kvm/../../../virt/kvm/kvm_main.c

  3301	
  3302	/*
  3303	 * Creates some virtual cpus.  Good luck creating more than one.
  3304	 */
  3305	static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
  3306	{
  3307		int r;
  3308		struct kvm_vcpu *vcpu;
  3309		struct page *page;
  3310	
  3311		if (id >= KVM_MAX_VCPU_ID)
  3312			return -EINVAL;
  3313	
  3314		mutex_lock(&kvm->lock);
  3315		if (kvm->created_vcpus == KVM_MAX_VCPUS) {
  3316			mutex_unlock(&kvm->lock);
  3317			return -EINVAL;
  3318		}
  3319	
  3320		kvm->created_vcpus++;
> 3321		kvm->stat.vcpus++;
  3322		mutex_unlock(&kvm->lock);
  3323	
  3324		r = kvm_arch_vcpu_precreate(kvm, id);
  3325		if (r)
  3326			goto vcpu_decrement;
  3327	
  3328		vcpu = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL_ACCOUNT);
  3329		if (!vcpu) {
  3330			r = -ENOMEM;
  3331			goto vcpu_decrement;
  3332		}
  3333	
  3334		BUILD_BUG_ON(sizeof(struct kvm_run) > PAGE_SIZE);
  3335		page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
  3336		if (!page) {
  3337			r = -ENOMEM;
  3338			goto vcpu_free;
  3339		}
  3340		vcpu->run = page_address(page);
  3341	
  3342		kvm_vcpu_init(vcpu, kvm, id);
  3343	
  3344		r = kvm_arch_vcpu_create(vcpu);
  3345		if (r)
  3346			goto vcpu_free_run_page;
  3347	
  3348		if (kvm->dirty_ring_size) {
  3349			r = kvm_dirty_ring_alloc(&vcpu->dirty_ring,
  3350						 id, kvm->dirty_ring_size);
  3351			if (r)
  3352				goto arch_vcpu_destroy;
  3353		}
  3354	
  3355		mutex_lock(&kvm->lock);
  3356		if (kvm_get_vcpu_by_id(kvm, id)) {
  3357			r = -EEXIST;
  3358			goto unlock_vcpu_destroy;
  3359		}
  3360	
  3361		vcpu->vcpu_idx = atomic_read(&kvm->online_vcpus);
  3362		BUG_ON(kvm->vcpus[vcpu->vcpu_idx]);
  3363	
  3364		/* Now it's all set up, let userspace reach it */
  3365		kvm_get_kvm(kvm);
  3366		r = create_vcpu_fd(vcpu);
  3367		if (r < 0) {
  3368			kvm_put_kvm_no_destroy(kvm);
  3369			goto unlock_vcpu_destroy;
  3370		}
  3371	
  3372		kvm->vcpus[vcpu->vcpu_idx] = vcpu;
  3373	
  3374		/*
  3375		 * Pairs with smp_rmb() in kvm_get_vcpu.  Write kvm->vcpus
  3376		 * before kvm->online_vcpu's incremented value.
  3377		 */
  3378		smp_wmb();
  3379		atomic_inc(&kvm->online_vcpus);
  3380	
  3381		mutex_unlock(&kvm->lock);
  3382		kvm_arch_vcpu_postcreate(vcpu);
  3383		kvm_create_vcpu_debugfs(vcpu);
  3384		return r;
  3385	
  3386	unlock_vcpu_destroy:
  3387		mutex_unlock(&kvm->lock);
  3388		kvm_dirty_ring_free(&vcpu->dirty_ring);
  3389	arch_vcpu_destroy:
  3390		kvm_arch_vcpu_destroy(vcpu);
  3391	vcpu_free_run_page:
  3392		free_page((unsigned long)vcpu->run);
  3393	vcpu_free:
  3394		kmem_cache_free(kvm_vcpu_cache, vcpu);
  3395	vcpu_decrement:
  3396		mutex_lock(&kvm->lock);
  3397		kvm->created_vcpus--;
  3398		kvm->stat.vcpus--;
  3399		mutex_unlock(&kvm->lock);
  3400		return r;
  3401	}
  3402	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org


Thread overview:
2021-06-09  1:19 [PATCH 0/3 v3] KVM: nVMX: nSVM: Add more statistics to KVM debugfs Krish Sadhukhan
2021-06-09  1:19 ` [PATCH 1/3 v3] KVM: nVMX: nSVM: 'nested_run' should count guest-entry attempts that make it to guest code Krish Sadhukhan
2021-06-09  1:19 ` [PATCH 2/3 v3] KVM: nVMX: nSVM: Add a new VCPU statistic to show if VCPU is in guest mode Krish Sadhukhan
2021-06-09  1:19 ` [PATCH 3/3 v3] KVM: x86: Add a new VM statistic to show number of VCPUs created in a given VM Krish Sadhukhan
2021-06-09  5:08   ` kernel test robot
2021-06-09  7:08   ` kernel test robot
