kvm.vger.kernel.org archive mirror
* [kvm-unit-tests PATCH v3 0/4] Zero allocated pages
@ 2019-05-09 20:05 Nadav Amit
  2019-05-09 20:05 ` [kvm-unit-tests PATCH v3 1/4] lib/alloc_page: " Nadav Amit
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Nadav Amit @ 2019-05-09 20:05 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, Krish Sadhukhan, Andrew Jones, Alexandru Elisei, Nadav Amit

For reproducibility, it is best to zero pages before they are used.
There are hidden assumptions on memory being zeroed (by BIOS/KVM), which
might be broken at any given moment. The full argument appears in the
first patch commit log.

Following the first patch, which zeros allocated memory, the rest of the
patch-set removes the explicit zeroing that is now redundant.

This patch-set has only been tested on x86.

v2->v3:
* Typos [Alexandru]

v1->v2:
* Change alloc_pages() as well
* Remove redundant page zeroing [Andrew]

Nadav Amit (4):
  lib/alloc_page: Zero allocated pages
  x86: Remove redundant page zeroing
  lib: Remove redundant page zeroing
  arm: Remove redundant page zeroing

 lib/alloc_page.c         |  4 ++++
 lib/arm/asm/pgtable.h    |  2 --
 lib/arm/mmu.c            |  1 -
 lib/arm64/asm/pgtable.h  |  1 -
 lib/virtio-mmio.c        |  1 -
 lib/x86/intel-iommu.c    |  5 -----
 x86/eventinj.c           |  1 -
 x86/hyperv_connections.c |  4 ----
 x86/vmx.c                | 10 ----------
 x86/vmx_tests.c          | 11 -----------
 10 files changed, 4 insertions(+), 36 deletions(-)

-- 
2.17.1



* [kvm-unit-tests PATCH v3 1/4] lib/alloc_page: Zero allocated pages
  2019-05-09 20:05 [kvm-unit-tests PATCH v3 0/4] Zero allocated pages Nadav Amit
@ 2019-05-09 20:05 ` Nadav Amit
  2019-05-15  9:18   ` Andrew Jones
  2019-05-09 20:05 ` [kvm-unit-tests PATCH v3 2/4] x86: Remove redundant page zeroing Nadav Amit
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Nadav Amit @ 2019-05-09 20:05 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, Krish Sadhukhan, Andrew Jones, Alexandru Elisei, Nadav Amit

One of the most important properties of tests is reproducibility. For
tests to be reproducible, the same environment should be set on each
test invocation.

When it comes to memory content, this is not exactly the case in
kvm-unit-tests. The tests might, mistakenly or intentionally, assume
that memory is zeroed (by the BIOS or KVM).  However, failures might not
be reproducible if this assumption is broken.

As an example, consider x86 do_iret(), which mistakenly does not push
SS:RSP onto the stack in 64-bit mode, even though they are popped
unconditionally on iret.

Do not assume that memory is zeroed. Clear it once it is allocated so
that tests can be reproduced easily.

Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Signed-off-by: Nadav Amit <nadav.amit@gmail.com>
---
 lib/alloc_page.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 730f2b5..97d1339 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -65,6 +65,8 @@ void *alloc_page()
 	freelist = *(void **)freelist;
 	spin_unlock(&lock);
 
+	if (p)
+		memset(p, 0, PAGE_SIZE);
 	return p;
 }
 
@@ -126,6 +128,8 @@ void *alloc_pages(unsigned long order)
 		}
 	}
 	spin_unlock(&lock);
+	if (run_start)
+		memset(run_start, 0, n * PAGE_SIZE);
 	return run_start;
 }
 
-- 
2.17.1



* [kvm-unit-tests PATCH v3 2/4] x86: Remove redundant page zeroing
  2019-05-09 20:05 [kvm-unit-tests PATCH v3 0/4] Zero allocated pages Nadav Amit
  2019-05-09 20:05 ` [kvm-unit-tests PATCH v3 1/4] lib/alloc_page: " Nadav Amit
@ 2019-05-09 20:05 ` Nadav Amit
  2019-05-09 20:05 ` [kvm-unit-tests PATCH v3 3/4] lib: " Nadav Amit
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Nadav Amit @ 2019-05-09 20:05 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, Krish Sadhukhan, Andrew Jones, Alexandru Elisei, Nadav Amit

Now that alloc_page() zeros the page, remove the redundant page zeroing.

Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Nadav Amit <nadav.amit@gmail.com>
---
 lib/x86/intel-iommu.c    |  5 -----
 x86/eventinj.c           |  1 -
 x86/hyperv_connections.c |  4 ----
 x86/vmx.c                | 10 ----------
 x86/vmx_tests.c          | 11 -----------
 5 files changed, 31 deletions(-)

diff --git a/lib/x86/intel-iommu.c b/lib/x86/intel-iommu.c
index 3f3f211..c811ba5 100644
--- a/lib/x86/intel-iommu.c
+++ b/lib/x86/intel-iommu.c
@@ -125,7 +125,6 @@ static void vtd_setup_root_table(void)
 {
 	void *root = alloc_page();
 
-	memset(root, 0, PAGE_SIZE);
 	vtd_writeq(DMAR_RTADDR_REG, virt_to_phys(root));
 	vtd_gcmd_or(VTD_GCMD_ROOT);
 	printf("DMAR table address: %#018lx\n", vtd_root_table());
@@ -135,7 +134,6 @@ static void vtd_setup_ir_table(void)
 {
 	void *root = alloc_page();
 
-	memset(root, 0, PAGE_SIZE);
 	/* 0xf stands for table size (2^(0xf+1) == 65536) */
 	vtd_writeq(DMAR_IRTA_REG, virt_to_phys(root) | 0xf);
 	vtd_gcmd_or(VTD_GCMD_IR_TABLE);
@@ -153,7 +151,6 @@ static void vtd_install_pte(vtd_pte_t *root, iova_t iova,
 		offset = PGDIR_OFFSET(iova, level);
 		if (!(root[offset] & VTD_PTE_RW)) {
 			page = alloc_page();
-			memset(page, 0, PAGE_SIZE);
 			root[offset] = virt_to_phys(page) | VTD_PTE_RW;
 		}
 		root = (uint64_t *)(phys_to_virt(root[offset] &
@@ -195,7 +192,6 @@ void vtd_map_range(uint16_t sid, iova_t iova, phys_addr_t pa, size_t size)
 
 	if (!re->present) {
 		ce = alloc_page();
-		memset(ce, 0, PAGE_SIZE);
 		memset(re, 0, sizeof(*re));
 		re->context_table_p = virt_to_phys(ce) >> VTD_PAGE_SHIFT;
 		re->present = 1;
@@ -209,7 +205,6 @@ void vtd_map_range(uint16_t sid, iova_t iova, phys_addr_t pa, size_t size)
 
 	if (!ce->present) {
 		slptptr = alloc_page();
-		memset(slptptr, 0, PAGE_SIZE);
 		memset(ce, 0, sizeof(*ce));
 		/* To make it simple, domain ID is the same as SID */
 		ce->domain_id = sid;
diff --git a/x86/eventinj.c b/x86/eventinj.c
index 8064eb9..d2dfc40 100644
--- a/x86/eventinj.c
+++ b/x86/eventinj.c
@@ -398,7 +398,6 @@ int main(void)
 
 	pt = alloc_page();
 	cr3 = (void*)read_cr3();
-	memset(pt, 0, 4096);
 	/* use shadowed stack during interrupt delivery */
 	for (i = 0; i < 4096/sizeof(ulong); i++) {
 		if (!cr3[i]) {
diff --git a/x86/hyperv_connections.c b/x86/hyperv_connections.c
index 5d541c9..8eade41 100644
--- a/x86/hyperv_connections.c
+++ b/x86/hyperv_connections.c
@@ -47,7 +47,6 @@ static void setup_hypercall(void)
 	hypercall_page = alloc_page();
 	if (!hypercall_page)
 		report_abort("failed to allocate hypercall page");
-	memset(hypercall_page, 0, PAGE_SIZE);
 
 	wrmsr(HV_X64_MSR_GUEST_OS_ID, guestid);
 
@@ -105,9 +104,6 @@ static void setup_cpu(void *ctx)
 	hv->post_msg = alloc_page();
 	if (!hv->msg_page || !hv->evt_page || !hv->post_msg)
 		report_abort("failed to allocate synic pages for vcpu");
-	memset(hv->msg_page, 0, sizeof(*hv->msg_page));
-	memset(hv->evt_page, 0, sizeof(*hv->evt_page));
-	memset(hv->post_msg, 0, sizeof(*hv->post_msg));
 	hv->msg_conn = MSG_CONN_BASE + vcpu;
 	hv->evt_conn = EVT_CONN_BASE + vcpu;
 
diff --git a/x86/vmx.c b/x86/vmx.c
index be47800..962ec0f 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -353,7 +353,6 @@ static void test_vmwrite_vmread(void)
 	struct vmcs *vmcs = alloc_page();
 	u32 vmcs_enum_max, max_index = 0;
 
-	memset(vmcs, 0, PAGE_SIZE);
 	vmcs->hdr.revision_id = basic.revision;
 	assert(!vmcs_clear(vmcs));
 	assert(!make_vmcs_current(vmcs));
@@ -373,7 +372,6 @@ static void test_vmcs_high(void)
 {
 	struct vmcs *vmcs = alloc_page();
 
-	memset(vmcs, 0, PAGE_SIZE);
 	vmcs->hdr.revision_id = basic.revision;
 	assert(!vmcs_clear(vmcs));
 	assert(!make_vmcs_current(vmcs));
@@ -400,7 +398,6 @@ static void test_vmcs_lifecycle(void)
 
 	for (i = 0; i < ARRAY_SIZE(vmcs); i++) {
 		vmcs[i] = alloc_page();
-		memset(vmcs[i], 0, PAGE_SIZE);
 		vmcs[i]->hdr.revision_id = basic.revision;
 	}
 
@@ -647,7 +644,6 @@ static void test_vmclear_flushing(void)
 
 	for (i = 0; i < ARRAY_SIZE(vmcs); i++) {
 		vmcs[i] = alloc_page();
-		memset(vmcs[i], 0, PAGE_SIZE);
 	}
 
 	vmcs[0]->hdr.revision_id = basic.revision;
@@ -745,7 +741,6 @@ static void split_large_ept_entry(unsigned long *ptep, int level)
 
 	new_pt = alloc_page();
 	assert(new_pt);
-	memset(new_pt, 0, PAGE_SIZE);
 
 	prototype = pte & ~EPT_ADDR_MASK;
 	if (level == 2)
@@ -1220,7 +1215,6 @@ static void init_vmcs_guest(void)
 static int init_vmcs(struct vmcs **vmcs)
 {
 	*vmcs = alloc_page();
-	memset(*vmcs, 0, PAGE_SIZE);
 	(*vmcs)->hdr.revision_id = basic.revision;
 	/* vmclear first to init vmcs */
 	if (vmcs_clear(*vmcs)) {
@@ -1259,7 +1253,6 @@ static void init_vmx(void)
 	ulong fix_cr4_set, fix_cr4_clr;
 
 	vmxon_region = alloc_page();
-	memset(vmxon_region, 0, PAGE_SIZE);
 
 	vmcs_root = alloc_page();
 
@@ -1291,9 +1284,7 @@ static void init_vmx(void)
 	*vmxon_region = basic.revision;
 
 	guest_stack = alloc_page();
-	memset(guest_stack, 0, PAGE_SIZE);
 	guest_syscall_stack = alloc_page();
-	memset(guest_syscall_stack, 0, PAGE_SIZE);
 }
 
 static void do_vmxon_off(void *data)
@@ -1420,7 +1411,6 @@ static void test_vmptrst(void)
 	struct vmcs *vmcs1, *vmcs2;
 
 	vmcs1 = alloc_page();
-	memset(vmcs1, 0, PAGE_SIZE);
 	init_vmcs(&vmcs1);
 	ret = vmcs_save(&vmcs2);
 	report("test vmptrst", (!ret) && (vmcs1 == vmcs2));
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 2d6b12d..c52ebc6 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -253,7 +253,6 @@ static void msr_bmp_init(void)
 	u32 ctrl_cpu0;
 
 	msr_bitmap = alloc_page();
-	memset(msr_bitmap, 0x0, PAGE_SIZE);
 	ctrl_cpu0 = vmcs_read(CPU_EXEC_CTRL0);
 	ctrl_cpu0 |= CPU_MSR_BITMAP;
 	vmcs_write(CPU_EXEC_CTRL0, ctrl_cpu0);
@@ -627,8 +626,6 @@ static int iobmp_init(struct vmcs *vmcs)
 
 	io_bitmap_a = alloc_page();
 	io_bitmap_b = alloc_page();
-	memset(io_bitmap_a, 0x0, PAGE_SIZE);
-	memset(io_bitmap_b, 0x0, PAGE_SIZE);
 	ctrl_cpu0 = vmcs_read(CPU_EXEC_CTRL0);
 	ctrl_cpu0 |= CPU_IO_BITMAP;
 	ctrl_cpu0 &= (~CPU_IO);
@@ -1062,8 +1059,6 @@ static int setup_ept(bool enable_ad)
 	if (__setup_ept(virt_to_phys(pml4), enable_ad))
 		return 1;
 
-	memset(pml4, 0, PAGE_SIZE);
-
 	end_of_memory = fwcfg_get_u64(FW_CFG_RAM_SIZE);
 	if (end_of_memory < (1ul << 32))
 		end_of_memory = (1ul << 32);
@@ -1135,8 +1130,6 @@ static int ept_init_common(bool have_ad)
 		return VMX_TEST_EXIT;
 	data_page1 = alloc_page();
 	data_page2 = alloc_page();
-	memset(data_page1, 0x0, PAGE_SIZE);
-	memset(data_page2, 0x0, PAGE_SIZE);
 	*((u32 *)data_page1) = MAGIC_VAL_1;
 	*((u32 *)data_page2) = MAGIC_VAL_2;
 	install_ept(pml4, (unsigned long)data_page1, (unsigned long)data_page2,
@@ -1483,7 +1476,6 @@ static int pml_init(struct vmcs *vmcs)
 	}
 
 	pml_log = alloc_page();
-	memset(pml_log, 0x0, PAGE_SIZE);
 	vmcs_write(PMLADDR, (u64)pml_log);
 	vmcs_write(GUEST_PML_INDEX, PML_INDEX - 1);
 
@@ -1908,9 +1900,6 @@ static int msr_switch_init(struct vmcs *vmcs)
 	exit_msr_store = alloc_page();
 	exit_msr_load = alloc_page();
 	entry_msr_load = alloc_page();
-	memset(exit_msr_store, 0, PAGE_SIZE);
-	memset(exit_msr_load, 0, PAGE_SIZE);
-	memset(entry_msr_load, 0, PAGE_SIZE);
 	entry_msr_load[0].index = MSR_KERNEL_GS_BASE;
 	entry_msr_load[0].value = MSR_MAGIC;
 
-- 
2.17.1



* [kvm-unit-tests PATCH v3 3/4] lib: Remove redundant page zeroing
  2019-05-09 20:05 [kvm-unit-tests PATCH v3 0/4] Zero allocated pages Nadav Amit
  2019-05-09 20:05 ` [kvm-unit-tests PATCH v3 1/4] lib/alloc_page: " Nadav Amit
  2019-05-09 20:05 ` [kvm-unit-tests PATCH v3 2/4] x86: Remove redundant page zeroing Nadav Amit
@ 2019-05-09 20:05 ` Nadav Amit
  2019-05-15  9:19   ` Andrew Jones
  2019-05-09 20:05 ` [kvm-unit-tests PATCH v3 4/4] arm: " Nadav Amit
  2019-05-10 12:55 ` [kvm-unit-tests PATCH v3 0/4] Zero allocated pages Konrad Rzeszutek Wilk
  4 siblings, 1 reply; 9+ messages in thread
From: Nadav Amit @ 2019-05-09 20:05 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, Krish Sadhukhan, Andrew Jones, Alexandru Elisei, Nadav Amit

Now that alloc_page() zeros the page, remove the redundant page zeroing.

Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Nadav Amit <nadav.amit@gmail.com>
---
 lib/virtio-mmio.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/lib/virtio-mmio.c b/lib/virtio-mmio.c
index 57fe78e..e5e8f66 100644
--- a/lib/virtio-mmio.c
+++ b/lib/virtio-mmio.c
@@ -56,7 +56,6 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev,
 	vq = calloc(1, sizeof(*vq));
 	assert(VIRTIO_MMIO_QUEUE_SIZE_MIN <= 2*PAGE_SIZE);
 	queue = alloc_pages(1);
-	memset(queue, 0, 2*PAGE_SIZE);
 	assert(vq && queue);
 
 	writel(index, vm_dev->base + VIRTIO_MMIO_QUEUE_SEL);
-- 
2.17.1



* [kvm-unit-tests PATCH v3 4/4] arm: Remove redundant page zeroing
  2019-05-09 20:05 [kvm-unit-tests PATCH v3 0/4] Zero allocated pages Nadav Amit
                   ` (2 preceding siblings ...)
  2019-05-09 20:05 ` [kvm-unit-tests PATCH v3 3/4] lib: " Nadav Amit
@ 2019-05-09 20:05 ` Nadav Amit
  2019-05-15  9:27   ` Andrew Jones
  2019-05-10 12:55 ` [kvm-unit-tests PATCH v3 0/4] Zero allocated pages Konrad Rzeszutek Wilk
  4 siblings, 1 reply; 9+ messages in thread
From: Nadav Amit @ 2019-05-09 20:05 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, Krish Sadhukhan, Andrew Jones, Alexandru Elisei, Nadav Amit

Now that alloc_page() zeros the page, remove the redundant page zeroing.

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Nadav Amit <nadav.amit@gmail.com>
---
 lib/arm/asm/pgtable.h   | 2 --
 lib/arm/mmu.c           | 1 -
 lib/arm64/asm/pgtable.h | 1 -
 3 files changed, 4 deletions(-)

diff --git a/lib/arm/asm/pgtable.h b/lib/arm/asm/pgtable.h
index b614bce..241dff6 100644
--- a/lib/arm/asm/pgtable.h
+++ b/lib/arm/asm/pgtable.h
@@ -53,7 +53,6 @@ static inline pmd_t *pmd_alloc_one(void)
 {
 	assert(PTRS_PER_PMD * sizeof(pmd_t) == PAGE_SIZE);
 	pmd_t *pmd = alloc_page();
-	memset(pmd, 0, PTRS_PER_PMD * sizeof(pmd_t));
 	return pmd;
 }
 static inline pmd_t *pmd_alloc(pgd_t *pgd, unsigned long addr)
@@ -80,7 +79,6 @@ static inline pte_t *pte_alloc_one(void)
 {
 	assert(PTRS_PER_PTE * sizeof(pte_t) == PAGE_SIZE);
 	pte_t *pte = alloc_page();
-	memset(pte, 0, PTRS_PER_PTE * sizeof(pte_t));
 	return pte;
 }
 static inline pte_t *pte_alloc(pmd_t *pmd, unsigned long addr)
diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
index 03f6622..3d38c83 100644
--- a/lib/arm/mmu.c
+++ b/lib/arm/mmu.c
@@ -166,7 +166,6 @@ void *setup_mmu(phys_addr_t phys_end)
 #endif
 
 	mmu_idmap = alloc_page();
-	memset(mmu_idmap, 0, PAGE_SIZE);
 
 	/*
 	 * mach-virt I/O regions:
diff --git a/lib/arm64/asm/pgtable.h b/lib/arm64/asm/pgtable.h
index 5860abe..ee0a2c8 100644
--- a/lib/arm64/asm/pgtable.h
+++ b/lib/arm64/asm/pgtable.h
@@ -61,7 +61,6 @@ static inline pte_t *pte_alloc_one(void)
 {
 	assert(PTRS_PER_PTE * sizeof(pte_t) == PAGE_SIZE);
 	pte_t *pte = alloc_page();
-	memset(pte, 0, PTRS_PER_PTE * sizeof(pte_t));
 	return pte;
 }
 static inline pte_t *pte_alloc(pmd_t *pmd, unsigned long addr)
-- 
2.17.1



* Re: [kvm-unit-tests PATCH v3 0/4] Zero allocated pages
  2019-05-09 20:05 [kvm-unit-tests PATCH v3 0/4] Zero allocated pages Nadav Amit
                   ` (3 preceding siblings ...)
  2019-05-09 20:05 ` [kvm-unit-tests PATCH v3 4/4] arm: " Nadav Amit
@ 2019-05-10 12:55 ` Konrad Rzeszutek Wilk
  4 siblings, 0 replies; 9+ messages in thread
From: Konrad Rzeszutek Wilk @ 2019-05-10 12:55 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Paolo Bonzini, kvm, Krish Sadhukhan, Andrew Jones, Alexandru Elisei

On Thu, May 09, 2019 at 01:05:54PM -0700, Nadav Amit wrote:
> For reproducibility, it is best to zero pages before they are used.
> There are hidden assumptions on memory being zeroed (by BIOS/KVM), which
> might be broken at any given moment. The full argument appears in the
> first patch commit log.
> 
> Following the first patch, which zeros allocated memory, the rest of the
> patch-set removes the explicit zeroing that is now redundant.
> 
> This patch-set has only been tested on x86.

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

for all of them please.
> 
> v2->v3:
> * Typos [Alexandru]
> 
> v1->v2:
> * Change alloc_pages() as well
> * Remove redundant page zeroing [Andrew]
> 
> Nadav Amit (4):
>   lib/alloc_page: Zero allocated pages
>   x86: Remove redundant page zeroing
>   lib: Remove redundant page zeroing
>   arm: Remove redundant page zeroing
> 
>  lib/alloc_page.c         |  4 ++++
>  lib/arm/asm/pgtable.h    |  2 --
>  lib/arm/mmu.c            |  1 -
>  lib/arm64/asm/pgtable.h  |  1 -
>  lib/virtio-mmio.c        |  1 -
>  lib/x86/intel-iommu.c    |  5 -----
>  x86/eventinj.c           |  1 -
>  x86/hyperv_connections.c |  4 ----
>  x86/vmx.c                | 10 ----------
>  x86/vmx_tests.c          | 11 -----------
>  10 files changed, 4 insertions(+), 36 deletions(-)
> 
> -- 
> 2.17.1
> 


* Re: [kvm-unit-tests PATCH v3 1/4] lib/alloc_page: Zero allocated pages
  2019-05-09 20:05 ` [kvm-unit-tests PATCH v3 1/4] lib/alloc_page: " Nadav Amit
@ 2019-05-15  9:18   ` Andrew Jones
  0 siblings, 0 replies; 9+ messages in thread
From: Andrew Jones @ 2019-05-15  9:18 UTC (permalink / raw)
  To: Nadav Amit; +Cc: Paolo Bonzini, kvm, Krish Sadhukhan, Alexandru Elisei

On Thu, May 09, 2019 at 01:05:55PM -0700, Nadav Amit wrote:
> One of the most important properties of tests is reproducibility. For
> tests to be reproducible, the same environment should be set on each
> test invocation.
> 
> When it comes to memory content, this is not exactly the case in
> kvm-unit-tests. The tests might, mistakenly or intentionally, assume
> that memory is zeroed (by the BIOS or KVM).  However, failures might not
> be reproducible if this assumption is broken.
> 
> As an example, consider x86 do_iret(), which mistakenly does not push
> SS:RSP onto the stack in 64-bit mode, even though they are popped
> unconditionally on iret.
> 
> Do not assume that memory is zeroed. Clear it once it is allocated so
> that tests can be reproduced easily.
> 
> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
> Signed-off-by: Nadav Amit <nadav.amit@gmail.com>
> ---
>  lib/alloc_page.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/lib/alloc_page.c b/lib/alloc_page.c
> index 730f2b5..97d1339 100644
> --- a/lib/alloc_page.c
> +++ b/lib/alloc_page.c
> @@ -65,6 +65,8 @@ void *alloc_page()
>  	freelist = *(void **)freelist;
>  	spin_unlock(&lock);
>  
> +	if (p)
> +		memset(p, 0, PAGE_SIZE);
>  	return p;
>  }
>  
> @@ -126,6 +128,8 @@ void *alloc_pages(unsigned long order)
>  		}
>  	}
>  	spin_unlock(&lock);
> +	if (run_start)
> +		memset(run_start, 0, n * PAGE_SIZE);
>  	return run_start;
>  }
>  
> -- 
> 2.17.1
>

Reviewed-by: Andrew Jones <drjones@redhat.com>


* Re: [kvm-unit-tests PATCH v3 3/4] lib: Remove redundant page zeroing
  2019-05-09 20:05 ` [kvm-unit-tests PATCH v3 3/4] lib: " Nadav Amit
@ 2019-05-15  9:19   ` Andrew Jones
  0 siblings, 0 replies; 9+ messages in thread
From: Andrew Jones @ 2019-05-15  9:19 UTC (permalink / raw)
  To: Nadav Amit; +Cc: Paolo Bonzini, kvm, Krish Sadhukhan, Alexandru Elisei

On Thu, May 09, 2019 at 01:05:57PM -0700, Nadav Amit wrote:
> Now that alloc_page() zeros the page, remove the redundant page zeroing.
> 
> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
> Suggested-by: Andrew Jones <drjones@redhat.com>
> Signed-off-by: Nadav Amit <nadav.amit@gmail.com>
> ---
>  lib/virtio-mmio.c | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/lib/virtio-mmio.c b/lib/virtio-mmio.c
> index 57fe78e..e5e8f66 100644
> --- a/lib/virtio-mmio.c
> +++ b/lib/virtio-mmio.c
> @@ -56,7 +56,6 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev,
>  	vq = calloc(1, sizeof(*vq));
>  	assert(VIRTIO_MMIO_QUEUE_SIZE_MIN <= 2*PAGE_SIZE);
>  	queue = alloc_pages(1);
> -	memset(queue, 0, 2*PAGE_SIZE);
>  	assert(vq && queue);
>  
>  	writel(index, vm_dev->base + VIRTIO_MMIO_QUEUE_SEL);
> -- 
> 2.17.1
>

Reviewed-by: Andrew Jones <drjones@redhat.com>


* Re: [kvm-unit-tests PATCH v3 4/4] arm: Remove redundant page zeroing
  2019-05-09 20:05 ` [kvm-unit-tests PATCH v3 4/4] arm: " Nadav Amit
@ 2019-05-15  9:27   ` Andrew Jones
  0 siblings, 0 replies; 9+ messages in thread
From: Andrew Jones @ 2019-05-15  9:27 UTC (permalink / raw)
  To: Nadav Amit; +Cc: Paolo Bonzini, kvm, Krish Sadhukhan, Alexandru Elisei

On Thu, May 09, 2019 at 01:05:58PM -0700, Nadav Amit wrote:
> Now that alloc_page() zeros the page, remove the redundant page zeroing.
> 
> Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
> Suggested-by: Andrew Jones <drjones@redhat.com>
> Signed-off-by: Nadav Amit <nadav.amit@gmail.com>
> ---
>  lib/arm/asm/pgtable.h   | 2 --
>  lib/arm/mmu.c           | 1 -
>  lib/arm64/asm/pgtable.h | 1 -
>  3 files changed, 4 deletions(-)
> 
> diff --git a/lib/arm/asm/pgtable.h b/lib/arm/asm/pgtable.h
> index b614bce..241dff6 100644
> --- a/lib/arm/asm/pgtable.h
> +++ b/lib/arm/asm/pgtable.h
> @@ -53,7 +53,6 @@ static inline pmd_t *pmd_alloc_one(void)
>  {
>  	assert(PTRS_PER_PMD * sizeof(pmd_t) == PAGE_SIZE);
>  	pmd_t *pmd = alloc_page();
> -	memset(pmd, 0, PTRS_PER_PMD * sizeof(pmd_t));
>  	return pmd;
>  }
>  static inline pmd_t *pmd_alloc(pgd_t *pgd, unsigned long addr)
> @@ -80,7 +79,6 @@ static inline pte_t *pte_alloc_one(void)
>  {
>  	assert(PTRS_PER_PTE * sizeof(pte_t) == PAGE_SIZE);
>  	pte_t *pte = alloc_page();
> -	memset(pte, 0, PTRS_PER_PTE * sizeof(pte_t));
>  	return pte;
>  }
>  static inline pte_t *pte_alloc(pmd_t *pmd, unsigned long addr)
> diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
> index 03f6622..3d38c83 100644
> --- a/lib/arm/mmu.c
> +++ b/lib/arm/mmu.c
> @@ -166,7 +166,6 @@ void *setup_mmu(phys_addr_t phys_end)
>  #endif
>  
>  	mmu_idmap = alloc_page();
> -	memset(mmu_idmap, 0, PAGE_SIZE);
>  
>  	/*
>  	 * mach-virt I/O regions:
> diff --git a/lib/arm64/asm/pgtable.h b/lib/arm64/asm/pgtable.h
> index 5860abe..ee0a2c8 100644
> --- a/lib/arm64/asm/pgtable.h
> +++ b/lib/arm64/asm/pgtable.h
> @@ -61,7 +61,6 @@ static inline pte_t *pte_alloc_one(void)
>  {
>  	assert(PTRS_PER_PTE * sizeof(pte_t) == PAGE_SIZE);
>  	pte_t *pte = alloc_page();
> -	memset(pte, 0, PTRS_PER_PTE * sizeof(pte_t));
>  	return pte;
>  }
>  static inline pte_t *pte_alloc(pmd_t *pmd, unsigned long addr)
> -- 
> 2.17.1
>

Reviewed-by: Andrew Jones <drjones@redhat.com>


end of thread, other threads:[~2019-05-15  9:27 UTC | newest]

Thread overview: 9+ messages
2019-05-09 20:05 [kvm-unit-tests PATCH v3 0/4] Zero allocated pages Nadav Amit
2019-05-09 20:05 ` [kvm-unit-tests PATCH v3 1/4] lib/alloc_page: " Nadav Amit
2019-05-15  9:18   ` Andrew Jones
2019-05-09 20:05 ` [kvm-unit-tests PATCH v3 2/4] x86: Remove redundant page zeroing Nadav Amit
2019-05-09 20:05 ` [kvm-unit-tests PATCH v3 3/4] lib: " Nadav Amit
2019-05-15  9:19   ` Andrew Jones
2019-05-09 20:05 ` [kvm-unit-tests PATCH v3 4/4] arm: " Nadav Amit
2019-05-15  9:27   ` Andrew Jones
2019-05-10 12:55 ` [kvm-unit-tests PATCH v3 0/4] Zero allocated pages Konrad Rzeszutek Wilk
