kvm.vger.kernel.org archive mirror
* [kvm-unit-tests PATCH v2 00/11] Fix and improve the page allocator
@ 2021-01-15 12:37 Claudio Imbrenda
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 01/11] lib/x86: fix page.h to include the generic header Claudio Imbrenda
                   ` (10 more replies)
  0 siblings, 11 replies; 20+ messages in thread
From: Claudio Imbrenda @ 2021-01-15 12:37 UTC (permalink / raw)
  To: kvm
  Cc: frankja, david, thuth, pbonzini, cohuck, lvivier, nadav.amit,
	krish.sadhukhan

My previous patch series was rushed and not polished enough; furthermore,
it introduced some regressions.

This series hopefully fixes all the reported issues, and introduces some
new features.

It also simplifies the code and hopefully makes it more readable.

Fixed:
* allocated memory is now zeroed by default

New features:
* per-allocation flags to specify not just the area (as before) but also
  other parameters
  - DONTZERO flag: the allocated memory is not zeroed
  - FRESH flag: the returned memory has never been read or written to
    since boot
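
A minimal usage sketch of the resulting API (the variable names are only
illustrative):

  /* one zeroed page from any area (the default behaviour) */
  void *p = alloc_page();
  /* four never-touched pages (order 2), contents left as-is */
  void *q = alloc_pages_flags(2, FLAG_FRESH | FLAG_DONTZERO);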

I would appreciate it if people could test these patches, especially on
strange, unusual or exotic hardware (I have only tested on s390x).


GitLab:
  https://gitlab.com/imbrenda/kvm-unit-tests/-/tree/page_allocator_fixes
CI:
  https://gitlab.com/imbrenda/kvm-unit-tests/-/pipelines/241819726


v1->v2
* switched to a DONTZERO flag instead of a ZERO flag, so that there is
  no need for a default
* dropped the last patch, since there is no need for a default now
* fixed a pre-existing bug that caused wrong allocations
* renamed alloc_pages_special to reserve_pages, to make it clear that it
  is not a normal allocation; the function now returns 0 on success and
  -1 on failure
* added a NULL check in vm_free; freeing a NULL pointer is now a no-op
* fixed and improved some comments
* removed a spurious return

Claudio Imbrenda (11):
  lib/x86: fix page.h to include the generic header
  lib/list.h: add list_add_tail
  lib/vmalloc: add some asserts and improvements
  lib/asm: Fix definitions of memory areas
  lib/alloc_page: fix and improve the page allocator
  lib/alloc.h: remove align_min from struct alloc_ops
  lib/alloc_page: Optimization to skip known empty freelists
  lib/alloc_page: rework metadata format
  lib/alloc: replace areas with more generic flags
  lib/alloc_page: Wire up FLAG_DONTZERO
  lib/alloc_page: Properly handle requests for fresh blocks

 lib/asm-generic/memory_areas.h |   9 +-
 lib/arm/asm/memory_areas.h     |  11 +-
 lib/arm64/asm/memory_areas.h   |  11 +-
 lib/powerpc/asm/memory_areas.h |  11 +-
 lib/ppc64/asm/memory_areas.h   |  11 +-
 lib/s390x/asm/memory_areas.h   |  13 +-
 lib/x86/asm/memory_areas.h     |  27 ++--
 lib/x86/asm/page.h             |   4 +-
 lib/alloc.h                    |   1 -
 lib/alloc_page.h               |  76 ++++++---
 lib/list.h                     |   9 ++
 lib/alloc_page.c               | 283 +++++++++++++++++++--------------
 lib/alloc_phys.c               |   9 +-
 lib/s390x/smp.c                |   2 +-
 lib/vmalloc.c                  |  23 +--
 15 files changed, 284 insertions(+), 216 deletions(-)

-- 
2.26.2



* [kvm-unit-tests PATCH v2 01/11] lib/x86: fix page.h to include the generic header
  2021-01-15 12:37 [kvm-unit-tests PATCH v2 00/11] Fix and improve the page allocator Claudio Imbrenda
@ 2021-01-15 12:37 ` Claudio Imbrenda
  2021-01-19 15:12   ` Janosch Frank
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 02/11] lib/list.h: add list_add_tail Claudio Imbrenda
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 20+ messages in thread
From: Claudio Imbrenda @ 2021-01-15 12:37 UTC (permalink / raw)
  To: kvm
  Cc: frankja, david, thuth, pbonzini, cohuck, lvivier, nadav.amit,
	krish.sadhukhan

Bring x86 in line with the other architectures by including the generic
header asm-generic/page.h.
This provides the macros PAGE_SHIFT, PAGE_SIZE, PAGE_MASK, virt_to_pfn,
and pfn_to_virt.
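
For reference, the generic definitions keep the same values as the x86
macros removed below (PAGE_SHIFT is 12), e.g.:

  PAGE_SIZE == (1ul << PAGE_SHIFT)      /* 4096 */
  (addr & PAGE_MASK)                    /* rounds addr down to a page */
  virt_to_pfn(pfn_to_virt(pfn)) == pfn  /* round trip is the identity */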

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 lib/x86/asm/page.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/lib/x86/asm/page.h b/lib/x86/asm/page.h
index 1359eb7..2cf8881 100644
--- a/lib/x86/asm/page.h
+++ b/lib/x86/asm/page.h
@@ -13,9 +13,7 @@
 typedef unsigned long pteval_t;
 typedef unsigned long pgd_t;
 
-#define PAGE_SHIFT	12
-#define PAGE_SIZE	(_AC(1,UL) << PAGE_SHIFT)
-#define PAGE_MASK	(~(PAGE_SIZE-1))
+#include <asm-generic/page.h>
 
 #ifndef __ASSEMBLY__
 
-- 
2.26.2



* [kvm-unit-tests PATCH v2 02/11] lib/list.h: add list_add_tail
  2021-01-15 12:37 [kvm-unit-tests PATCH v2 00/11] Fix and improve the page allocator Claudio Imbrenda
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 01/11] lib/x86: fix page.h to include the generic header Claudio Imbrenda
@ 2021-01-15 12:37 ` Claudio Imbrenda
  2021-01-19 15:18   ` Janosch Frank
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 03/11] lib/vmalloc: add some asserts and improvements Claudio Imbrenda
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 20+ messages in thread
From: Claudio Imbrenda @ 2021-01-15 12:37 UTC (permalink / raw)
  To: kvm
  Cc: frankja, david, thuth, pbonzini, cohuck, lvivier, nadav.amit,
	krish.sadhukhan

Add a list_add_tail wrapper function to allow adding elements to the end
of a list.
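
For example, on a hypothetical list "head" with hypothetical elements a,
b and c (the list is circular, so adding before the head appends at the
tail):

  list_add(&head, &a);       /* head -> a           */
  list_add(&head, &b);       /* head -> b -> a      */
  list_add_tail(&head, &c);  /* head -> b -> a -> c */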

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 lib/list.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/lib/list.h b/lib/list.h
index 18d9516..7f9717e 100644
--- a/lib/list.h
+++ b/lib/list.h
@@ -50,4 +50,13 @@ static inline void list_add(struct linked_list *head, struct linked_list *li)
 	head->next = li;
 }
 
+/*
+ * Add the given element before the given list head.
+ */
+static inline void list_add_tail(struct linked_list *head, struct linked_list *li)
+{
+	assert(head);
+	list_add(head->prev, li);
+}
+
 #endif
-- 
2.26.2



* [kvm-unit-tests PATCH v2 03/11] lib/vmalloc: add some asserts and improvements
  2021-01-15 12:37 [kvm-unit-tests PATCH v2 00/11] Fix and improve the page allocator Claudio Imbrenda
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 01/11] lib/x86: fix page.h to include the generic header Claudio Imbrenda
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 02/11] lib/list.h: add list_add_tail Claudio Imbrenda
@ 2021-01-15 12:37 ` Claudio Imbrenda
  2021-01-19 15:26   ` Janosch Frank
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 04/11] lib/asm: Fix definitions of memory areas Claudio Imbrenda
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 20+ messages in thread
From: Claudio Imbrenda @ 2021-01-15 12:37 UTC (permalink / raw)
  To: kvm
  Cc: frankja, david, thuth, pbonzini, cohuck, lvivier, nadav.amit,
	krish.sadhukhan

Add some asserts to make sure the state is consistent.

Simplify vm_free and improve its readability.

Freeing a NULL pointer is now a no-op.
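
For example:

  vm_free(NULL);  /* now a no-op, matching standard free() semantics */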

Fixes: 3f6fee0d4da4 ("lib/vmalloc: vmalloc support for handling allocation metadata")

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 lib/vmalloc.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/lib/vmalloc.c b/lib/vmalloc.c
index 986a34c..6b52790 100644
--- a/lib/vmalloc.c
+++ b/lib/vmalloc.c
@@ -162,13 +162,16 @@ static void *vm_memalign(size_t alignment, size_t size)
 static void vm_free(void *mem)
 {
 	struct metadata *m;
-	uintptr_t ptr, end;
+	uintptr_t ptr, page, i;
 
+	if (!mem)
+		return;
 	/* the pointer is not page-aligned, it was a single-page allocation */
 	if (!IS_ALIGNED((uintptr_t)mem, PAGE_SIZE)) {
 		assert(GET_MAGIC(mem) == VM_MAGIC);
-		ptr = virt_to_pte_phys(page_root, mem) & PAGE_MASK;
-		free_page(phys_to_virt(ptr));
+		page = virt_to_pte_phys(page_root, mem) & PAGE_MASK;
+		assert(page);
+		free_page(phys_to_virt(page));
 		return;
 	}
 
@@ -176,13 +179,14 @@ static void vm_free(void *mem)
 	m = GET_METADATA(mem);
 	assert(m->magic == VM_MAGIC);
 	assert(m->npages > 0);
+	assert(m->npages < BIT_ULL(BITS_PER_LONG - PAGE_SHIFT));
 	/* free all the pages including the metadata page */
-	ptr = (uintptr_t)mem - PAGE_SIZE;
-	end = ptr + m->npages * PAGE_SIZE;
-	for ( ; ptr < end; ptr += PAGE_SIZE)
-		free_page(phys_to_virt(virt_to_pte_phys(page_root, (void *)ptr)));
-	/* free the last one separately to avoid overflow issues */
-	free_page(phys_to_virt(virt_to_pte_phys(page_root, (void *)ptr)));
+	ptr = (uintptr_t)m & PAGE_MASK;
+	for (i = 0 ; i < m->npages + 1; i++, ptr += PAGE_SIZE) {
+		page = virt_to_pte_phys(page_root, (void *)ptr) & PAGE_MASK;
+		assert(page);
+		free_page(phys_to_virt(page));
+	}
 }
 
 static struct alloc_ops vmalloc_ops = {
-- 
2.26.2



* [kvm-unit-tests PATCH v2 04/11] lib/asm: Fix definitions of memory areas
  2021-01-15 12:37 [kvm-unit-tests PATCH v2 00/11] Fix and improve the page allocator Claudio Imbrenda
                   ` (2 preceding siblings ...)
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 03/11] lib/vmalloc: add some asserts and improvements Claudio Imbrenda
@ 2021-01-15 12:37 ` Claudio Imbrenda
  2021-01-19 15:33   ` Janosch Frank
  2021-01-21  1:23   ` David Matlack
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 05/11] lib/alloc_page: fix and improve the page allocator Claudio Imbrenda
                   ` (6 subsequent siblings)
  10 siblings, 2 replies; 20+ messages in thread
From: Claudio Imbrenda @ 2021-01-15 12:37 UTC (permalink / raw)
  To: kvm
  Cc: frankja, david, thuth, pbonzini, cohuck, lvivier, nadav.amit,
	krish.sadhukhan

Fix the definitions of the memory areas.

Bring the headers in line with the rest of the asm headers by guarding
each of them with the appropriate #ifndef _ASM$ARCH_ include guard.
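
With the fixed definitions, each area has a sequential number and a
selector bit derived from it, so selectors can be combined; for example,
on x86 (see below):

  AREA_LOW   == (1 << AREA_LOW_NUMBER)
  AREA_DMA32 == (AREA_LOWEST | AREA_LOW)  /* any page below 4 GiB */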

Fixes: d74708246bd9 ("lib/asm: Add definitions of memory areas")

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 lib/asm-generic/memory_areas.h |  9 ++++-----
 lib/arm/asm/memory_areas.h     | 11 +++--------
 lib/arm64/asm/memory_areas.h   | 11 +++--------
 lib/powerpc/asm/memory_areas.h | 11 +++--------
 lib/ppc64/asm/memory_areas.h   | 11 +++--------
 lib/s390x/asm/memory_areas.h   | 13 ++++++-------
 lib/x86/asm/memory_areas.h     | 27 ++++++++++++++++-----------
 lib/alloc_page.h               |  3 +++
 lib/alloc_page.c               |  4 +---
 9 files changed, 42 insertions(+), 58 deletions(-)

diff --git a/lib/asm-generic/memory_areas.h b/lib/asm-generic/memory_areas.h
index 927baa7..3074afe 100644
--- a/lib/asm-generic/memory_areas.h
+++ b/lib/asm-generic/memory_areas.h
@@ -1,11 +1,10 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef __ASM_GENERIC_MEMORY_AREAS_H__
+#define __ASM_GENERIC_MEMORY_AREAS_H__
 
 #define AREA_NORMAL_PFN 0
 #define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
+#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
 
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#define MAX_AREAS 1
 
 #endif
diff --git a/lib/arm/asm/memory_areas.h b/lib/arm/asm/memory_areas.h
index 927baa7..c723310 100644
--- a/lib/arm/asm/memory_areas.h
+++ b/lib/arm/asm/memory_areas.h
@@ -1,11 +1,6 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef _ASMARM_MEMORY_AREAS_H_
+#define _ASMARM_MEMORY_AREAS_H_
 
-#define AREA_NORMAL_PFN 0
-#define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
-
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#include <asm-generic/memory_areas.h>
 
 #endif
diff --git a/lib/arm64/asm/memory_areas.h b/lib/arm64/asm/memory_areas.h
index 927baa7..18e8ca8 100644
--- a/lib/arm64/asm/memory_areas.h
+++ b/lib/arm64/asm/memory_areas.h
@@ -1,11 +1,6 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef _ASMARM64_MEMORY_AREAS_H_
+#define _ASMARM64_MEMORY_AREAS_H_
 
-#define AREA_NORMAL_PFN 0
-#define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
-
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#include <asm-generic/memory_areas.h>
 
 #endif
diff --git a/lib/powerpc/asm/memory_areas.h b/lib/powerpc/asm/memory_areas.h
index 927baa7..76d1738 100644
--- a/lib/powerpc/asm/memory_areas.h
+++ b/lib/powerpc/asm/memory_areas.h
@@ -1,11 +1,6 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef _ASMPOWERPC_MEMORY_AREAS_H_
+#define _ASMPOWERPC_MEMORY_AREAS_H_
 
-#define AREA_NORMAL_PFN 0
-#define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
-
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#include <asm-generic/memory_areas.h>
 
 #endif
diff --git a/lib/ppc64/asm/memory_areas.h b/lib/ppc64/asm/memory_areas.h
index 927baa7..b9fd46b 100644
--- a/lib/ppc64/asm/memory_areas.h
+++ b/lib/ppc64/asm/memory_areas.h
@@ -1,11 +1,6 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef _ASMPPC64_MEMORY_AREAS_H_
+#define _ASMPPC64_MEMORY_AREAS_H_
 
-#define AREA_NORMAL_PFN 0
-#define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
-
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#include <asm-generic/memory_areas.h>
 
 #endif
diff --git a/lib/s390x/asm/memory_areas.h b/lib/s390x/asm/memory_areas.h
index 4856a27..827bfb3 100644
--- a/lib/s390x/asm/memory_areas.h
+++ b/lib/s390x/asm/memory_areas.h
@@ -1,16 +1,15 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef _ASMS390X_MEMORY_AREAS_H_
+#define _ASMS390X_MEMORY_AREAS_H_
 
-#define AREA_NORMAL_PFN BIT(31-12)
+#define AREA_NORMAL_PFN (1 << 19)
 #define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
+#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
 
 #define AREA_LOW_PFN 0
 #define AREA_LOW_NUMBER 1
-#define AREA_LOW 2
+#define AREA_LOW (1 << AREA_LOW_NUMBER)
 
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#define MAX_AREAS 2
 
 #define AREA_DMA31 AREA_LOW
 
diff --git a/lib/x86/asm/memory_areas.h b/lib/x86/asm/memory_areas.h
index 952f5bd..e84016f 100644
--- a/lib/x86/asm/memory_areas.h
+++ b/lib/x86/asm/memory_areas.h
@@ -1,21 +1,26 @@
-#ifndef MEMORY_AREAS_H
-#define MEMORY_AREAS_H
+#ifndef _ASM_X86_MEMORY_AREAS_H_
+#define _ASM_X86_MEMORY_AREAS_H_
 
 #define AREA_NORMAL_PFN BIT(36-12)
 #define AREA_NORMAL_NUMBER 0
-#define AREA_NORMAL 1
+#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
 
-#define AREA_PAE_HIGH_PFN BIT(32-12)
-#define AREA_PAE_HIGH_NUMBER 1
-#define AREA_PAE_HIGH 2
+#define AREA_HIGH_PFN BIT(32-12)
+#define AREA_HIGH_NUMBER 1
+#define AREA_HIGH (1 << AREA_HIGH_NUMBER)
 
-#define AREA_LOW_PFN 0
+#define AREA_LOW_PFN BIT(24-12)
 #define AREA_LOW_NUMBER 2
-#define AREA_LOW 4
+#define AREA_LOW (1 << AREA_LOW_NUMBER)
 
-#define AREA_PAE (AREA_PAE | AREA_LOW)
+#define AREA_LOWEST_PFN 0
+#define AREA_LOWEST_NUMBER 3
+#define AREA_LOWEST (1 << AREA_LOWEST_NUMBER)
 
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#define MAX_AREAS 4
+
+#define AREA_DMA24 AREA_LOWEST
+#define AREA_DMA32 (AREA_LOWEST | AREA_LOW)
+#define AREA_PAE36 (AREA_LOWEST | AREA_LOW | AREA_HIGH)
 
 #endif
diff --git a/lib/alloc_page.h b/lib/alloc_page.h
index 816ff5d..b6aace5 100644
--- a/lib/alloc_page.h
+++ b/lib/alloc_page.h
@@ -10,6 +10,9 @@
 
 #include <asm/memory_areas.h>
 
+#define AREA_ANY -1
+#define AREA_ANY_NUMBER 0xff
+
 /* Returns true if the page allocator has been initialized */
 bool page_alloc_initialized(void);
 
diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 685ab1e..ed0ff02 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -19,8 +19,6 @@
 #define NLISTS ((BITS_PER_LONG) - (PAGE_SHIFT))
 #define PFN(x) ((uintptr_t)(x) >> PAGE_SHIFT)
 
-#define MAX_AREAS	6
-
 #define ORDER_MASK	0x3f
 #define ALLOC_MASK	0x40
 #define SPECIAL_MASK	0x80
@@ -509,7 +507,7 @@ void page_alloc_init_area(u8 n, uintptr_t base_pfn, uintptr_t top_pfn)
 		return;
 	}
 #ifdef AREA_HIGH_PFN
-	__page_alloc_init_area(AREA_HIGH_NUMBER, AREA_HIGH_PFN), base_pfn, &top_pfn);
+	__page_alloc_init_area(AREA_HIGH_NUMBER, AREA_HIGH_PFN, base_pfn, &top_pfn);
 #endif
 	__page_alloc_init_area(AREA_NORMAL_NUMBER, AREA_NORMAL_PFN, base_pfn, &top_pfn);
 #ifdef AREA_LOW_PFN
-- 
2.26.2



* [kvm-unit-tests PATCH v2 05/11] lib/alloc_page: fix and improve the page allocator
  2021-01-15 12:37 [kvm-unit-tests PATCH v2 00/11] Fix and improve the page allocator Claudio Imbrenda
                   ` (3 preceding siblings ...)
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 04/11] lib/asm: Fix definitions of memory areas Claudio Imbrenda
@ 2021-01-15 12:37 ` Claudio Imbrenda
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 06/11] lib/alloc.h: remove align_min from struct alloc_ops Claudio Imbrenda
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Claudio Imbrenda @ 2021-01-15 12:37 UTC (permalink / raw)
  To: kvm
  Cc: frankja, david, thuth, pbonzini, cohuck, lvivier, nadav.amit,
	krish.sadhukhan

This patch introduces some improvements to the code, mostly for
readability, but it also adjusts some semantic details and improves the
documentation.

* introduce and use pfn_t to semantically tag parameters as PFNs
* remove the PFN macro, use virt_to_pfn instead
* rename area_or_metadata_contains and area_contains to area_contains_pfn
  and usable_area_contains_pfn respectively (see the layout sketch after
  this list)
* fix/improve comments in lib/alloc_page.h
* move some wrapper functions to the header
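
The distinction between the two renamed helpers follows from the layout
of a memory area (a rough sketch, PFNs growing to the right):

  start_pfn               a->base                           a->top
      |  page_states[]       |  usable pages ...               |
      |<----- metadata ----->|<-- usable_area_contains_pfn --->|
      |<---------------- area_contains_pfn ------------------->|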

Fixes: 8131e91a4b61 ("lib/alloc_page: complete rewrite of the page allocator")
Fixes: 34c950651861 ("lib/alloc_page: allow reserving arbitrary memory ranges")

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 lib/alloc_page.h |  52 ++++++++++-----
 lib/alloc_page.c | 165 +++++++++++++++++++++++------------------------
 2 files changed, 118 insertions(+), 99 deletions(-)

diff --git a/lib/alloc_page.h b/lib/alloc_page.h
index b6aace5..6fd2ff0 100644
--- a/lib/alloc_page.h
+++ b/lib/alloc_page.h
@@ -8,6 +8,7 @@
 #ifndef ALLOC_PAGE_H
 #define ALLOC_PAGE_H 1
 
+#include <stdbool.h>
 #include <asm/memory_areas.h>
 
 #define AREA_ANY -1
@@ -23,7 +24,7 @@ bool page_alloc_initialized(void);
  * top_pfn is the physical frame number of the first page immediately after
  * the end of the area to initialize
  */
-void page_alloc_init_area(u8 n, uintptr_t base_pfn, uintptr_t top_pfn);
+void page_alloc_init_area(u8 n, phys_addr_t base_pfn, phys_addr_t top_pfn);
 
 /* Enables the page allocator. At least one area must have been initialized */
 void page_alloc_ops_enable(void);
@@ -37,9 +38,12 @@ void *memalign_pages_area(unsigned int areas, size_t alignment, size_t size);
 
 /*
  * Allocate aligned memory from any area.
- * Equivalent to memalign_pages_area(~0, alignment, size).
+ * Equivalent to memalign_pages_area(AREA_ANY, alignment, size).
  */
-void *memalign_pages(size_t alignment, size_t size);
+static inline void *memalign_pages(size_t alignment, size_t size)
+{
+	return memalign_pages_area(AREA_ANY, alignment, size);
+}
 
 /*
  * Allocate naturally aligned memory from the specified areas.
@@ -48,16 +52,23 @@ void *memalign_pages(size_t alignment, size_t size);
 void *alloc_pages_area(unsigned int areas, unsigned int order);
 
 /*
- * Allocate one page from any area.
- * Equivalent to alloc_pages(0);
+ * Allocate naturally aligned pages from any area; the number of allocated
+ * pages is 1 << order.
+ * Equivalent to alloc_pages_area(AREA_ANY, order);
  */
-void *alloc_page(void);
+static inline void *alloc_pages(unsigned int order)
+{
+	return alloc_pages_area(AREA_ANY, order);
+}
 
 /*
- * Allocate naturally aligned memory from any area.
- * Equivalent to alloc_pages_area(~0, order);
+ * Allocate one page from any area.
+ * Equivalent to alloc_pages(0);
  */
-void *alloc_pages(unsigned int order);
+static inline void *alloc_page(void)
+{
+	return alloc_pages(0);
+}
 
 /*
  * Frees a memory block allocated with any of the memalign_pages* or
@@ -66,31 +77,40 @@ void *alloc_pages(unsigned int order);
  */
 void free_pages(void *mem);
 
-/* For backwards compatibility */
+/*
+ * Free one page.
+ * Equivalent to free_pages(mem).
+ */
 static inline void free_page(void *mem)
 {
 	return free_pages(mem);
 }
 
-/* For backwards compatibility */
+/*
+ * Free pages by order.
+ * Equivalent to free_pages(mem).
+ */
 static inline void free_pages_by_order(void *mem, unsigned int order)
 {
 	free_pages(mem);
 }
 
 /*
- * Allocates and reserves the specified memory range if possible.
- * Returns NULL in case of failure.
+ * Reserves the specified physical memory range if possible.
+ * If the specified range cannot be reserved in its entirety, no action is
+ * performed and -1 is returned.
+ *
+ * Returns 0 in case of success, -1 otherwise.
  */
-void *alloc_pages_special(uintptr_t addr, size_t npages);
+int reserve_pages(phys_addr_t addr, size_t npages);
 
 /*
  * Frees a reserved memory range that had been reserved with
- * alloc_pages_special.
+ * reserve_pages.
  * The memory range does not need to match a previous allocation
  * exactly, it can also be a subset, in which case only the specified
  * pages will be freed and unreserved.
  */
-void free_pages_special(uintptr_t addr, size_t npages);
+void unreserve_pages(phys_addr_t addr, size_t npages);
 
 #endif
diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index ed0ff02..337a4e0 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -17,25 +17,29 @@
 
 #define IS_ALIGNED_ORDER(x,order) IS_ALIGNED((x),BIT_ULL(order))
 #define NLISTS ((BITS_PER_LONG) - (PAGE_SHIFT))
-#define PFN(x) ((uintptr_t)(x) >> PAGE_SHIFT)
 
 #define ORDER_MASK	0x3f
 #define ALLOC_MASK	0x40
 #define SPECIAL_MASK	0x80
 
+typedef phys_addr_t pfn_t;
+
 struct mem_area {
 	/* Physical frame number of the first usable frame in the area */
-	uintptr_t base;
+	pfn_t base;
 	/* Physical frame number of the first frame outside the area */
-	uintptr_t top;
-	/* Combination of SPECIAL_MASK, ALLOC_MASK, and order */
+	pfn_t top;
+	/* Per page metadata, each entry is a combination *_MASK and order */
 	u8 *page_states;
 	/* One freelist for each possible block size, up to NLISTS */
 	struct linked_list freelists[NLISTS];
 };
 
+/* Descriptors for each possible area */
 static struct mem_area areas[MAX_AREAS];
+/* Mask of initialized areas */
 static unsigned int areas_mask;
+/* Protects areas and areas mask */
 static struct spinlock lock;
 
 bool page_alloc_initialized(void)
@@ -43,12 +47,24 @@ bool page_alloc_initialized(void)
 	return areas_mask != 0;
 }
 
-static inline bool area_or_metadata_contains(struct mem_area *a, uintptr_t pfn)
+/*
+ * Each memory area contains an array of metadata entries at the very
+ * beginning. The usable memory follows immediately afterwards.
+ * This function returns true if the given pfn falls anywhere within the
+ * memory area, including the metadata area.
+ */
+static inline bool area_contains_pfn(struct mem_area *a, pfn_t pfn)
 {
-	return (pfn >= PFN(a->page_states)) && (pfn < a->top);
+	return (pfn >= virt_to_pfn(a->page_states)) && (pfn < a->top);
 }
 
-static inline bool area_contains(struct mem_area *a, uintptr_t pfn)
+/*
+ * Each memory area contains an array of metadata entries at the very
+ * beginning. The usable memory follows immediately afterwards.
+ * This function returns true if the given pfn falls in the usable range of
+ * the given memory area.
+ */
+static inline bool usable_area_contains_pfn(struct mem_area *a, pfn_t pfn)
 {
 	return (pfn >= a->base) && (pfn < a->top);
 }
@@ -69,21 +85,19 @@ static inline bool area_contains(struct mem_area *a, uintptr_t pfn)
  */
 static void split(struct mem_area *a, void *addr)
 {
-	uintptr_t pfn = PFN(addr);
-	struct linked_list *p;
-	uintptr_t i, idx;
+	pfn_t pfn = virt_to_pfn(addr);
+	pfn_t i, idx;
 	u8 order;
 
-	assert(a && area_contains(a, pfn));
+	assert(a && usable_area_contains_pfn(a, pfn));
 	idx = pfn - a->base;
 	order = a->page_states[idx];
 	assert(!(order & ~ORDER_MASK) && order && (order < NLISTS));
 	assert(IS_ALIGNED_ORDER(pfn, order));
-	assert(area_contains(a, pfn + BIT(order) - 1));
+	assert(usable_area_contains_pfn(a, pfn + BIT(order) - 1));
 
 	/* Remove the block from its free list */
-	p = list_remove(addr);
-	assert(p);
+	list_remove(addr);
 
 	/* update the block size for each page in the block */
 	for (i = 0; i < BIT(order); i++) {
@@ -92,9 +106,9 @@ static void split(struct mem_area *a, void *addr)
 	}
 	order--;
 	/* add the first half block to the appropriate free list */
-	list_add(a->freelists + order, p);
+	list_add(a->freelists + order, addr);
 	/* add the second half block to the appropriate free list */
-	list_add(a->freelists + order, (void *)((pfn + BIT(order)) * PAGE_SIZE));
+	list_add(a->freelists + order, pfn_to_virt(pfn + BIT(order)));
 }
 
 /*
@@ -105,7 +119,7 @@ static void split(struct mem_area *a, void *addr)
  */
 static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz)
 {
-	struct linked_list *p, *res = NULL;
+	struct linked_list *p;
 	u8 order;
 
 	assert((al < NLISTS) && (sz < NLISTS));
@@ -130,17 +144,17 @@ static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz)
 	for (; order > sz; order--)
 		split(a, p);
 
-	res = list_remove(p);
-	memset(a->page_states + (PFN(res) - a->base), ALLOC_MASK | order, BIT(order));
-	return res;
+	list_remove(p);
+	memset(a->page_states + (virt_to_pfn(p) - a->base), ALLOC_MASK | order, BIT(order));
+	return p;
 }
 
-static struct mem_area *get_area(uintptr_t pfn)
+static struct mem_area *get_area(pfn_t pfn)
 {
 	uintptr_t i;
 
 	for (i = 0; i < MAX_AREAS; i++)
-		if ((areas_mask & BIT(i)) && area_contains(areas + i, pfn))
+		if ((areas_mask & BIT(i)) && usable_area_contains_pfn(areas + i, pfn))
 			return areas + i;
 	return NULL;
 }
@@ -160,17 +174,16 @@ static struct mem_area *get_area(uintptr_t pfn)
  * - all of the pages of the two blocks must have the same block size
  * - the function is called with the lock held
  */
-static bool coalesce(struct mem_area *a, u8 order, uintptr_t pfn, uintptr_t pfn2)
+static bool coalesce(struct mem_area *a, u8 order, pfn_t pfn, pfn_t pfn2)
 {
-	uintptr_t first, second, i;
-	struct linked_list *li;
+	pfn_t first, second, i;
 
 	assert(IS_ALIGNED_ORDER(pfn, order) && IS_ALIGNED_ORDER(pfn2, order));
 	assert(pfn2 == pfn + BIT(order));
 	assert(a);
 
 	/* attempting to coalesce two blocks that belong to different areas */
-	if (!area_contains(a, pfn) || !area_contains(a, pfn2 + BIT(order) - 1))
+	if (!usable_area_contains_pfn(a, pfn) || !usable_area_contains_pfn(a, pfn2 + BIT(order) - 1))
 		return false;
 	first = pfn - a->base;
 	second = pfn2 - a->base;
@@ -179,17 +192,15 @@ static bool coalesce(struct mem_area *a, u8 order, uintptr_t pfn, uintptr_t pfn2
 		return false;
 
 	/* we can coalesce, remove both blocks from their freelists */
-	li = list_remove((void *)(pfn2 << PAGE_SHIFT));
-	assert(li);
-	li = list_remove((void *)(pfn << PAGE_SHIFT));
-	assert(li);
+	list_remove(pfn_to_virt(pfn2));
+	list_remove(pfn_to_virt(pfn));
 	/* check the metadata entries and update with the new size */
 	for (i = 0; i < (2ull << order); i++) {
 		assert(a->page_states[first + i] == order);
 		a->page_states[first + i] = order + 1;
 	}
 	/* finally add the newly coalesced block to the appropriate freelist */
-	list_add(a->freelists + order + 1, li);
+	list_add(a->freelists + order + 1, pfn_to_virt(pfn));
 	return true;
 }
 
@@ -209,7 +220,7 @@ static bool coalesce(struct mem_area *a, u8 order, uintptr_t pfn, uintptr_t pfn2
  */
 static void _free_pages(void *mem)
 {
-	uintptr_t pfn2, pfn = PFN(mem);
+	pfn_t pfn2, pfn = virt_to_pfn(mem);
 	struct mem_area *a = NULL;
 	uintptr_t i, p;
 	u8 order;
@@ -232,7 +243,7 @@ static void _free_pages(void *mem)
 	/* ensure that the block is aligned properly for its size */
 	assert(IS_ALIGNED_ORDER(pfn, order));
 	/* ensure that the area can contain the whole block */
-	assert(area_contains(a, pfn + BIT(order) - 1));
+	assert(usable_area_contains_pfn(a, pfn + BIT(order) - 1));
 
 	for (i = 0; i < BIT(order); i++) {
 		/* check that all pages of the block have consistent metadata */
@@ -268,63 +279,68 @@ void free_pages(void *mem)
 	spin_unlock(&lock);
 }
 
-static void *_alloc_page_special(uintptr_t addr)
+static int _reserve_one_page(pfn_t pfn)
 {
 	struct mem_area *a;
-	uintptr_t mask, i;
+	pfn_t mask, i;
 
-	a = get_area(PFN(addr));
-	assert(a);
-	i = PFN(addr) - a->base;
+	a = get_area(pfn);
+	if (!a)
+		return -1;
+	i = pfn - a->base;
 	if (a->page_states[i] & (ALLOC_MASK | SPECIAL_MASK))
-		return NULL;
+		return -1;
 	while (a->page_states[i]) {
-		mask = GENMASK_ULL(63, PAGE_SHIFT + a->page_states[i]);
-		split(a, (void *)(addr & mask));
+		mask = GENMASK_ULL(63, a->page_states[i]);
+		split(a, pfn_to_virt(pfn & mask));
 	}
 	a->page_states[i] = SPECIAL_MASK;
-	return (void *)addr;
+	return 0;
 }
 
-static void _free_page_special(uintptr_t addr)
+static void _unreserve_one_page(pfn_t pfn)
 {
 	struct mem_area *a;
-	uintptr_t i;
+	pfn_t i;
 
-	a = get_area(PFN(addr));
+	a = get_area(pfn);
 	assert(a);
-	i = PFN(addr) - a->base;
+	i = pfn - a->base;
 	assert(a->page_states[i] == SPECIAL_MASK);
 	a->page_states[i] = ALLOC_MASK;
-	_free_pages((void *)addr);
+	_free_pages(pfn_to_virt(pfn));
 }
 
-void *alloc_pages_special(uintptr_t addr, size_t n)
+int reserve_pages(phys_addr_t addr, size_t n)
 {
-	uintptr_t i;
+	pfn_t pfn;
+	size_t i;
 
 	assert(IS_ALIGNED(addr, PAGE_SIZE));
+	pfn = addr >> PAGE_SHIFT;
 	spin_lock(&lock);
 	for (i = 0; i < n; i++)
-		if (!_alloc_page_special(addr + i * PAGE_SIZE))
+		if (_reserve_one_page(pfn + i))
 			break;
 	if (i < n) {
 		for (n = 0 ; n < i; n++)
-			_free_page_special(addr + n * PAGE_SIZE);
-		addr = 0;
+			_unreserve_one_page(pfn + n);
+		n = 0;
 	}
 	spin_unlock(&lock);
-	return (void *)addr;
+	return -!n;
 }
 
-void free_pages_special(uintptr_t addr, size_t n)
+void unreserve_pages(phys_addr_t addr, size_t n)
 {
-	uintptr_t i;
+	pfn_t pfn;
+	size_t i;
 
 	assert(IS_ALIGNED(addr, PAGE_SIZE));
+	pfn = addr >> PAGE_SHIFT;
 	spin_lock(&lock);
 	for (i = 0; i < n; i++)
-		_free_page_special(addr + i * PAGE_SIZE);
+		_unreserve_one_page(pfn + i);
 	spin_unlock(&lock);
 }
 
@@ -351,11 +367,6 @@ void *alloc_pages_area(unsigned int area, unsigned int order)
 	return page_memalign_order_area(area, order, order);
 }
 
-void *alloc_pages(unsigned int order)
-{
-	return alloc_pages_area(AREA_ANY, order);
-}
-
 /*
  * Allocates (1 << order) physically contiguous aligned pages.
  * Returns NULL if the allocation was not possible.
@@ -370,18 +381,6 @@ void *memalign_pages_area(unsigned int area, size_t alignment, size_t size)
 	return page_memalign_order_area(area, size, alignment);
 }
 
-void *memalign_pages(size_t alignment, size_t size)
-{
-	return memalign_pages_area(AREA_ANY, alignment, size);
-}
-
-/*
- * Allocates one page
- */
-void *alloc_page()
-{
-	return alloc_pages(0);
-}
 
 static struct alloc_ops page_alloc_ops = {
 	.memalign = memalign_pages,
@@ -416,7 +415,7 @@ void page_alloc_ops_enable(void)
  * - the memory area to add does not overlap with existing areas
  * - the memory area to add has at least 5 pages available
  */
-static void _page_alloc_init_area(u8 n, uintptr_t start_pfn, uintptr_t top_pfn)
+static void _page_alloc_init_area(u8 n, pfn_t start_pfn, pfn_t top_pfn)
 {
 	size_t table_size, npages, i;
 	struct mem_area *a;
@@ -437,7 +436,7 @@ static void _page_alloc_init_area(u8 n, uintptr_t start_pfn, uintptr_t top_pfn)
 
 	/* fill in the values of the new area */
 	a = areas + n;
-	a->page_states = (void *)(start_pfn << PAGE_SHIFT);
+	a->page_states = pfn_to_virt(start_pfn);
 	a->base = start_pfn + table_size;
 	a->top = top_pfn;
 	npages = top_pfn - a->base;
@@ -447,14 +446,14 @@ static void _page_alloc_init_area(u8 n, uintptr_t start_pfn, uintptr_t top_pfn)
 	for (i = 0; i < MAX_AREAS; i++) {
 		if (!(areas_mask & BIT(i)))
 			continue;
-		assert(!area_or_metadata_contains(areas + i, start_pfn));
-		assert(!area_or_metadata_contains(areas + i, top_pfn - 1));
-		assert(!area_or_metadata_contains(a, PFN(areas[i].page_states)));
-		assert(!area_or_metadata_contains(a, areas[i].top - 1));
+		assert(!area_contains_pfn(areas + i, start_pfn));
+		assert(!area_contains_pfn(areas + i, top_pfn - 1));
+		assert(!area_contains_pfn(a, virt_to_pfn(areas[i].page_states)));
+		assert(!area_contains_pfn(a, areas[i].top - 1));
 	}
 	/* initialize all freelists for the new area */
 	for (i = 0; i < NLISTS; i++)
-		a->freelists[i].next = a->freelists[i].prev = a->freelists + i;
+		a->freelists[i].prev = a->freelists[i].next = a->freelists + i;
 
 	/* initialize the metadata for the available memory */
 	for (i = a->base; i < a->top; i += 1ull << order) {
@@ -473,13 +472,13 @@ static void _page_alloc_init_area(u8 n, uintptr_t start_pfn, uintptr_t top_pfn)
 		assert(order < NLISTS);
 		/* initialize the metadata and add to the freelist */
 		memset(a->page_states + (i - a->base), order, BIT(order));
-		list_add(a->freelists + order, (void *)(i << PAGE_SHIFT));
+		list_add(a->freelists + order, pfn_to_virt(i));
 	}
 	/* finally mark the area as present */
 	areas_mask |= BIT(n);
 }
 
-static void __page_alloc_init_area(u8 n, uintptr_t cutoff, uintptr_t base_pfn, uintptr_t *top_pfn)
+static void __page_alloc_init_area(u8 n, pfn_t cutoff, pfn_t base_pfn, pfn_t *top_pfn)
 {
 	if (*top_pfn > cutoff) {
 		spin_lock(&lock);
@@ -500,7 +499,7 @@ static void __page_alloc_init_area(u8 n, uintptr_t cutoff, uintptr_t base_pfn, u
  * Prerequisites:
  * see _page_alloc_init_area
  */
-void page_alloc_init_area(u8 n, uintptr_t base_pfn, uintptr_t top_pfn)
+void page_alloc_init_area(u8 n, phys_addr_t base_pfn, phys_addr_t top_pfn)
 {
 	if (n != AREA_ANY_NUMBER) {
 		__page_alloc_init_area(n, 0, base_pfn, &top_pfn);
-- 
2.26.2



* [kvm-unit-tests PATCH v2 06/11] lib/alloc.h: remove align_min from struct alloc_ops
  2021-01-15 12:37 [kvm-unit-tests PATCH v2 00/11] Fix and improve the page allocator Claudio Imbrenda
                   ` (4 preceding siblings ...)
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 05/11] lib/alloc_page: fix and improve the page allocator Claudio Imbrenda
@ 2021-01-15 12:37 ` Claudio Imbrenda
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 07/11] lib/alloc_page: Optimization to skip known empty freelists Claudio Imbrenda
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Claudio Imbrenda @ 2021-01-15 12:37 UTC (permalink / raw)
  To: kvm
  Cc: frankja, david, thuth, pbonzini, cohuck, lvivier, nadav.amit,
	krish.sadhukhan

Remove align_min from struct alloc_ops, since it is no longer used.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 lib/alloc.h      | 1 -
 lib/alloc_page.c | 1 -
 lib/alloc_phys.c | 9 +++++----
 lib/vmalloc.c    | 1 -
 4 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/lib/alloc.h b/lib/alloc.h
index 9b4b634..db90b01 100644
--- a/lib/alloc.h
+++ b/lib/alloc.h
@@ -25,7 +25,6 @@
 struct alloc_ops {
 	void *(*memalign)(size_t alignment, size_t size);
 	void (*free)(void *ptr);
-	size_t align_min;
 };
 
 extern struct alloc_ops *alloc_ops;
diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 337a4e0..7d1fa85 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -385,7 +385,6 @@ void *memalign_pages_area(unsigned int area, size_t alignment, size_t size)
 static struct alloc_ops page_alloc_ops = {
 	.memalign = memalign_pages,
 	.free = free_pages,
-	.align_min = PAGE_SIZE,
 };
 
 /*
diff --git a/lib/alloc_phys.c b/lib/alloc_phys.c
index 72e20f7..a4d2bf2 100644
--- a/lib/alloc_phys.c
+++ b/lib/alloc_phys.c
@@ -29,8 +29,8 @@ static phys_addr_t base, top;
 static void *early_memalign(size_t alignment, size_t size);
 static struct alloc_ops early_alloc_ops = {
 	.memalign = early_memalign,
-	.align_min = DEFAULT_MINIMUM_ALIGNMENT
 };
+static size_t align_min;
 
 struct alloc_ops *alloc_ops = &early_alloc_ops;
 
@@ -39,8 +39,7 @@ void phys_alloc_show(void)
 	int i;
 
 	spin_lock(&lock);
-	printf("phys_alloc minimum alignment: %#" PRIx64 "\n",
-		(u64)early_alloc_ops.align_min);
+	printf("phys_alloc minimum alignment: %#" PRIx64 "\n", (u64)align_min);
 	for (i = 0; i < nr_regions; ++i)
 		printf("%016" PRIx64 "-%016" PRIx64 " [%s]\n",
 			(u64)regions[i].base,
@@ -64,7 +63,7 @@ void phys_alloc_set_minimum_alignment(phys_addr_t align)
 {
 	assert(align && !(align & (align - 1)));
 	spin_lock(&lock);
-	early_alloc_ops.align_min = align;
+	align_min = align;
 	spin_unlock(&lock);
 }
 
@@ -83,6 +82,8 @@ static phys_addr_t phys_alloc_aligned_safe(phys_addr_t size,
 		top_safe = MIN(top_safe, 1ULL << 32);
 
 	assert(base < top_safe);
+	if (align < align_min)
+		align = align_min;
 
 	addr = ALIGN(base, align);
 	size += addr - base;
diff --git a/lib/vmalloc.c b/lib/vmalloc.c
index 6b52790..aa7cc41 100644
--- a/lib/vmalloc.c
+++ b/lib/vmalloc.c
@@ -192,7 +192,6 @@ static void vm_free(void *mem)
 static struct alloc_ops vmalloc_ops = {
 	.memalign = vm_memalign,
 	.free = vm_free,
-	.align_min = PAGE_SIZE,
 };
 
 void __attribute__((__weak__)) find_highmem(void)
-- 
2.26.2



* [kvm-unit-tests PATCH v2 07/11] lib/alloc_page: Optimization to skip known empty freelists
  2021-01-15 12:37 [kvm-unit-tests PATCH v2 00/11] Fix and improve the page allocator Claudio Imbrenda
                   ` (5 preceding siblings ...)
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 06/11] lib/alloc.h: remove align_min from struct alloc_ops Claudio Imbrenda
@ 2021-01-15 12:37 ` Claudio Imbrenda
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 08/11] lib/alloc_page: rework metadata format Claudio Imbrenda
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Claudio Imbrenda @ 2021-01-15 12:37 UTC (permalink / raw)
  To: kvm
  Cc: frankja, david, thuth, pbonzini, cohuck, lvivier, nadav.amit,
	krish.sadhukhan

Keep track of the largest block order available in each area, and do not
search past it when looking for free memory.

This avoids needlessly scanning the freelists for the largest block
orders, which are empty in most cases. For example, if the largest free
block in an area has order 9, a request for an order-10 block now fails
immediately instead of scanning the empty freelists for orders 10 and
above.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 lib/alloc_page.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 7d1fa85..37f28ce 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -31,6 +31,8 @@ struct mem_area {
 	pfn_t top;
 	/* Per page metadata, each entry is a combination *_MASK and order */
 	u8 *page_states;
+	/* Highest block order available in this area */
+	u8 max_order;
 	/* One freelist for each possible block size, up to NLISTS */
 	struct linked_list freelists[NLISTS];
 };
@@ -104,6 +106,8 @@ static void split(struct mem_area *a, void *addr)
 		assert(a->page_states[idx + i] == order);
 		a->page_states[idx + i] = order - 1;
 	}
+	if ((order == a->max_order) && (is_list_empty(a->freelists + order)))
+		a->max_order--;
 	order--;
 	/* add the first half block to the appropriate free list */
 	list_add(a->freelists + order, addr);
@@ -127,13 +131,13 @@ static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz)
 	order = sz > al ? sz : al;
 
 	/* search all free lists for some memory */
-	for ( ; order < NLISTS; order++) {
+	for ( ; order <= a->max_order; order++) {
 		p = a->freelists[order].next;
 		if (!is_list_empty(p))
 			break;
 	}
 	/* out of memory */
-	if (order >= NLISTS)
+	if (order > a->max_order)
 		return NULL;
 
 	/*
@@ -201,6 +205,8 @@ static bool coalesce(struct mem_area *a, u8 order, pfn_t pfn, pfn_t pfn2)
 	}
 	/* finally add the newly coalesced block to the appropriate freelist */
 	list_add(a->freelists + order + 1, pfn_to_virt(pfn));
+	if (order + 1 > a->max_order)
+		a->max_order = order + 1;
 	return true;
 }
 
@@ -438,6 +444,7 @@ static void _page_alloc_init_area(u8 n, pfn_t start_pfn, pfn_t top_pfn)
 	a->page_states = pfn_to_virt(start_pfn);
 	a->base = start_pfn + table_size;
 	a->top = top_pfn;
+	a->max_order = 0;
 	npages = top_pfn - a->base;
 	assert((a->base - start_pfn) * PAGE_SIZE >= npages);
 
@@ -472,6 +479,8 @@ static void _page_alloc_init_area(u8 n, pfn_t start_pfn, pfn_t top_pfn)
 		/* initialize the metadata and add to the freelist */
 		memset(a->page_states + (i - a->base), order, BIT(order));
 		list_add(a->freelists + order, pfn_to_virt(i));
+		if (order > a->max_order)
+			a->max_order = order;
 	}
 	/* finally mark the area as present */
 	areas_mask |= BIT(n);
-- 
2.26.2



* [kvm-unit-tests PATCH v2 08/11] lib/alloc_page: rework metadata format
  2021-01-15 12:37 [kvm-unit-tests PATCH v2 00/11] Fix and improve the page allocator Claudio Imbrenda
                   ` (6 preceding siblings ...)
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 07/11] lib/alloc_page: Optimization to skip known empty freelists Claudio Imbrenda
@ 2021-01-15 12:37 ` Claudio Imbrenda
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 09/11] lib/alloc: replace areas with more generic flags Claudio Imbrenda
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Claudio Imbrenda @ 2021-01-15 12:37 UTC (permalink / raw)
  To: kvm
  Cc: frankja, david, thuth, pbonzini, cohuck, lvivier, nadav.amit,
	krish.sadhukhan

This patch changes the metadata format so that the page status is now a
2-bit field instead of two separate flag bits.

This allows four different states for memory:

STATUS_FRESH: the memory is free and has not been touched at all since
              boot (not even read from!)
STATUS_FREE: the memory is free, but it is probably not fresh any more
STATUS_ALLOCATED: the memory has been allocated and is in use
STATUS_SPECIAL: the memory has been removed from the pool of allocated
                memory for some kind of special purpose according to
                the needs of the caller

Some macros are also introduced to test the status of a specific
metadata item.
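
The resulting metadata byte layout is (see the masks below):

  bits 0-5 (ORDER_MASK):  order of the block the page belongs to
  bits 6-7 (STATUS_MASK): one of the four states above

For example, a free page belonging to an order-3 block has the state
STATUS_FREE | 3 == 0x43.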

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 lib/alloc_page.c | 49 +++++++++++++++++++++++++++++-------------------
 1 file changed, 30 insertions(+), 19 deletions(-)

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 37f28ce..d8b2758 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -18,9 +18,20 @@
 #define IS_ALIGNED_ORDER(x,order) IS_ALIGNED((x),BIT_ULL(order))
 #define NLISTS ((BITS_PER_LONG) - (PAGE_SHIFT))
 
-#define ORDER_MASK	0x3f
-#define ALLOC_MASK	0x40
-#define SPECIAL_MASK	0x80
+#define ORDER_MASK		0x3f
+#define STATUS_MASK		0xc0
+
+#define STATUS_FRESH		0x00
+#define STATUS_FREE		0x40
+#define STATUS_ALLOCATED	0x80
+#define STATUS_SPECIAL		0xc0
+
+#define IS_FRESH(x)	(((x) & STATUS_MASK) == STATUS_FRESH)
+#define IS_FREE(x)	(((x) & STATUS_MASK) == STATUS_FREE)
+#define IS_ALLOCATED(x)	(((x) & STATUS_MASK) == STATUS_ALLOCATED)
+#define IS_SPECIAL(x)	(((x) & STATUS_MASK) == STATUS_SPECIAL)
+
+#define IS_USABLE(x)	(IS_FREE(x) || IS_FRESH(x))
 
 typedef phys_addr_t pfn_t;
 
@@ -87,14 +98,14 @@ static inline bool usable_area_contains_pfn(struct mem_area *a, pfn_t pfn)
  */
 static void split(struct mem_area *a, void *addr)
 {
-	pfn_t pfn = virt_to_pfn(addr);
-	pfn_t i, idx;
-	u8 order;
+	pfn_t i, idx, pfn = virt_to_pfn(addr);
+	u8 metadata, order;
 
 	assert(a && usable_area_contains_pfn(a, pfn));
 	idx = pfn - a->base;
-	order = a->page_states[idx];
-	assert(!(order & ~ORDER_MASK) && order && (order < NLISTS));
+	metadata = a->page_states[idx];
+	order = metadata & ORDER_MASK;
+	assert(IS_USABLE(metadata) && order && (order < NLISTS));
 	assert(IS_ALIGNED_ORDER(pfn, order));
 	assert(usable_area_contains_pfn(a, pfn + BIT(order) - 1));
 
@@ -103,8 +114,8 @@ static void split(struct mem_area *a, void *addr)
 
 	/* update the block size for each page in the block */
 	for (i = 0; i < BIT(order); i++) {
-		assert(a->page_states[idx + i] == order);
-		a->page_states[idx + i] = order - 1;
+		assert(a->page_states[idx + i] == metadata);
+		a->page_states[idx + i] = metadata - 1;
 	}
 	if ((order == a->max_order) && (is_list_empty(a->freelists + order)))
 		a->max_order--;
@@ -149,7 +160,7 @@ static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz)
 		split(a, p);
 
 	list_remove(p);
-	memset(a->page_states + (virt_to_pfn(p) - a->base), ALLOC_MASK | order, BIT(order));
+	memset(a->page_states + (virt_to_pfn(p) - a->base), STATUS_ALLOCATED | order, BIT(order));
 	return p;
 }
 
@@ -243,7 +254,7 @@ static void _free_pages(void *mem)
 	order = a->page_states[p] & ORDER_MASK;
 
 	/* ensure that the first page is allocated and not special */
-	assert(a->page_states[p] == (order | ALLOC_MASK));
+	assert(IS_ALLOCATED(a->page_states[p]));
 	/* ensure that the order has a sane value */
 	assert(order < NLISTS);
 	/* ensure that the block is aligned properly for its size */
@@ -253,9 +264,9 @@ static void _free_pages(void *mem)
 
 	for (i = 0; i < BIT(order); i++) {
 		/* check that all pages of the block have consistent metadata */
-		assert(a->page_states[p + i] == (ALLOC_MASK | order));
+		assert(a->page_states[p + i] == (STATUS_ALLOCATED | order));
 		/* set the page as free */
-		a->page_states[p + i] &= ~ALLOC_MASK;
+		a->page_states[p + i] = STATUS_FREE | order;
 	}
 	/* provisionally add the block to the appropriate free list */
 	list_add(a->freelists + order, mem);
@@ -294,13 +305,13 @@ static int _reserve_one_page(pfn_t pfn)
 	if (!a)
 		return -1;
 	i = pfn - a->base;
-	if (a->page_states[i] & (ALLOC_MASK | SPECIAL_MASK))
+	if (!IS_USABLE(a->page_states[i]))
 		return -1;
 	while (a->page_states[i]) {
 		mask = GENMASK_ULL(63, a->page_states[i]);
 		split(a, pfn_to_virt(pfn & mask));
 	}
-	a->page_states[i] = SPECIAL_MASK;
+	a->page_states[i] = STATUS_SPECIAL;
 	return 0;
 }
 
@@ -312,8 +323,8 @@ static void _unreserve_one_page(pfn_t pfn)
 	a = get_area(pfn);
 	assert(a);
 	i = pfn - a->base;
-	assert(a->page_states[i] == SPECIAL_MASK);
-	a->page_states[i] = ALLOC_MASK;
+	assert(a->page_states[i] == STATUS_SPECIAL);
+	a->page_states[i] = STATUS_ALLOCATED;
 	_free_pages(pfn_to_virt(pfn));
 }
 
@@ -477,7 +488,7 @@ static void _page_alloc_init_area(u8 n, pfn_t start_pfn, pfn_t top_pfn)
 			order++;
 		assert(order < NLISTS);
 		/* initialize the metadata and add to the freelist */
-		memset(a->page_states + (i - a->base), order, BIT(order));
+		memset(a->page_states + (i - a->base), STATUS_FRESH | order, BIT(order));
 		list_add(a->freelists + order, pfn_to_virt(i));
 		if (order > a->max_order)
 			a->max_order = order;
-- 
2.26.2



* [kvm-unit-tests PATCH v2 09/11] lib/alloc: replace areas with more generic flags
  2021-01-15 12:37 [kvm-unit-tests PATCH v2 00/11] Fix and improve the page allocator Claudio Imbrenda
                   ` (7 preceding siblings ...)
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 08/11] lib/alloc_page: rework metadata format Claudio Imbrenda
@ 2021-01-15 12:37 ` Claudio Imbrenda
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 10/11] lib/alloc_page: Wire up FLAG_DONTZERO Claudio Imbrenda
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 11/11] lib/alloc_page: Properly handle requests for fresh blocks Claudio Imbrenda
  10 siblings, 0 replies; 20+ messages in thread
From: Claudio Imbrenda @ 2021-01-15 12:37 UTC (permalink / raw)
  To: kvm
  Cc: frankja, david, thuth, pbonzini, cohuck, lvivier, nadav.amit,
	krish.sadhukhan

Replace the areas parameter with a more generic flags parameter. This
allows for up to 16 allocation areas and 16 allocation flags.

This patch introduces the flags and renames the functions; subsequent
patches will actually wire up the flags to do something.

The first two flags introduced are:
- FLAG_DONTZERO to ask that the allocated memory not be zeroed
- FLAG_FRESH to indicate that the allocated memory should not have been
  touched (read from or written to) in any way since boot.

This patch also fixes the order of arguments to consistently have alignment
first and then size, thereby fixing a bug where the two values would get
swapped.
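
For example, with the renamed function and the fixed argument order (the
flags themselves only take effect in the following patches):

  /* 8 contiguous pages, aligned to 8 pages, not zeroed once
     FLAG_DONTZERO is wired up: */
  void *p = memalign_pages_flags(8 * PAGE_SIZE, 8 * PAGE_SIZE,
                                 FLAG_DONTZERO);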

Fixes: 8131e91a4b61 ("lib/alloc_page: complete rewrite of the page allocator")

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 lib/alloc_page.h | 39 ++++++++++++++++++++++-----------------
 lib/alloc_page.c | 16 ++++++++--------
 lib/s390x/smp.c  |  2 +-
 3 files changed, 31 insertions(+), 26 deletions(-)

diff --git a/lib/alloc_page.h b/lib/alloc_page.h
index 6fd2ff0..1af1419 100644
--- a/lib/alloc_page.h
+++ b/lib/alloc_page.h
@@ -11,8 +11,13 @@
 #include <stdbool.h>
 #include <asm/memory_areas.h>
 
-#define AREA_ANY -1
-#define AREA_ANY_NUMBER 0xff
+#define AREA_ANY_NUMBER	0xff
+
+#define AREA_ANY	0x00000
+#define AREA_MASK	0x0ffff
+
+#define FLAG_DONTZERO	0x10000
+#define FLAG_FRESH	0x20000
 
 /* Returns true if the page allocator has been initialized */
 bool page_alloc_initialized(void);
@@ -30,39 +35,39 @@ void page_alloc_init_area(u8 n, phys_addr_t base_pfn, phys_addr_t top_pfn);
 void page_alloc_ops_enable(void);
 
 /*
- * Allocate aligned memory from the specified areas.
- * areas is a bitmap of allowed areas
+ * Allocate aligned memory with the specified flags.
+ * flags is a bitmap of allowed areas and flags.
  * alignment must be a power of 2
  */
-void *memalign_pages_area(unsigned int areas, size_t alignment, size_t size);
+void *memalign_pages_flags(size_t alignment, size_t size, unsigned int flags);
 
 /*
- * Allocate aligned memory from any area.
- * Equivalent to memalign_pages_area(AREA_ANY, alignment, size).
+ * Allocate aligned memory from any area and with default flags.
+ * Equivalent to memalign_pages_flags(alignment, size, AREA_ANY).
  */
 static inline void *memalign_pages(size_t alignment, size_t size)
 {
-	return memalign_pages_area(AREA_ANY, alignment, size);
+	return memalign_pages_flags(alignment, size, AREA_ANY);
 }
 
 /*
- * Allocate naturally aligned memory from the specified areas.
- * Equivalent to memalign_pages_area(areas, 1ull << order, 1ull << order).
+ * Allocate 1ull << order naturally aligned pages with the specified flags.
+ * Equivalent to memalign_pages_flags(1ull << order, 1ull << order, flags).
  */
-void *alloc_pages_area(unsigned int areas, unsigned int order);
+void *alloc_pages_flags(unsigned int order, unsigned int flags);
 
 /*
- * Allocate naturally aligned pages from any area; the number of allocated
- * pages is 1 << order.
- * Equivalent to alloc_pages_area(AREA_ANY, order);
+ * Allocate 1ull << order naturally aligned pages from any area and with
+ * default flags.
+ * Equivalent to alloc_pages_flags(order, AREA_ANY);
  */
 static inline void *alloc_pages(unsigned int order)
 {
-	return alloc_pages_area(AREA_ANY, order);
+	return alloc_pages_flags(order, AREA_ANY);
 }
 
 /*
- * Allocate one page from any area.
+ * Allocate one page from any area and with default flags.
  * Equivalent to alloc_pages(0);
  */
 static inline void *alloc_page(void)
@@ -83,7 +88,7 @@ void free_pages(void *mem);
  */
 static inline void free_page(void *mem)
 {
-	return free_pages(mem);
+	free_pages(mem);
 }
 
 /*
diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index d8b2758..47e2981 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -361,16 +361,16 @@ void unreserve_pages(phys_addr_t addr, size_t n)
 	spin_unlock(&lock);
 }
 
-static void *page_memalign_order_area(unsigned area, u8 ord, u8 al)
+static void *page_memalign_order_flags(u8 al, u8 ord, u32 flags)
 {
 	void *res = NULL;
-	int i;
+	int i, area;
 
 	spin_lock(&lock);
-	area &= areas_mask;
+	area = (flags & AREA_MASK) ? flags & areas_mask : areas_mask;
 	for (i = 0; !res && (i < MAX_AREAS); i++)
 		if (area & BIT(i))
-			res = page_memalign_order(areas + i, ord, al);
+			res = page_memalign_order(areas + i, al, ord);
 	spin_unlock(&lock);
 	return res;
 }
@@ -379,23 +379,23 @@ static void *page_memalign_order_area(unsigned area, u8 ord, u8 al)
  * Allocates (1 << order) physically contiguous and naturally aligned pages.
  * Returns NULL if the allocation was not possible.
  */
-void *alloc_pages_area(unsigned int area, unsigned int order)
+void *alloc_pages_flags(unsigned int order, unsigned int flags)
 {
-	return page_memalign_order_area(area, order, order);
+	return page_memalign_order_flags(order, order, flags);
 }
 
 /*
  * Allocates (1 << order) physically contiguous aligned pages.
  * Returns NULL if the allocation was not possible.
  */
-void *memalign_pages_area(unsigned int area, size_t alignment, size_t size)
+void *memalign_pages_flags(size_t alignment, size_t size, unsigned int flags)
 {
 	assert(is_power_of_2(alignment));
 	alignment = get_order(PAGE_ALIGN(alignment) >> PAGE_SHIFT);
 	size = get_order(PAGE_ALIGN(size) >> PAGE_SHIFT);
 	assert(alignment < NLISTS);
 	assert(size < NLISTS);
-	return page_memalign_order_area(area, size, alignment);
+	return page_memalign_order_flags(alignment, size, flags);
 }
 
 
diff --git a/lib/s390x/smp.c b/lib/s390x/smp.c
index 77d80ca..44b2eb4 100644
--- a/lib/s390x/smp.c
+++ b/lib/s390x/smp.c
@@ -190,7 +190,7 @@ int smp_cpu_setup(uint16_t addr, struct psw psw)
 
 	sigp_retry(cpu->addr, SIGP_INITIAL_CPU_RESET, 0, NULL);
 
-	lc = alloc_pages_area(AREA_DMA31, 1);
+	lc = alloc_pages_flags(1, AREA_DMA31);
 	cpu->lowcore = lc;
 	memset(lc, 0, PAGE_SIZE * 2);
 	sigp_retry(cpu->addr, SIGP_SET_PREFIX, (unsigned long )lc, NULL);
-- 
2.26.2



* [kvm-unit-tests PATCH v2 10/11] lib/alloc_page: Wire up FLAG_DONTZERO
  2021-01-15 12:37 [kvm-unit-tests PATCH v2 00/11] Fix and improve the page allocator Claudio Imbrenda
                   ` (8 preceding siblings ...)
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 09/11] lib/alloc: replace areas with more generic flags Claudio Imbrenda
@ 2021-01-15 12:37 ` Claudio Imbrenda
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 11/11] lib/alloc_page: Properly handle requests for fresh blocks Claudio Imbrenda
  10 siblings, 0 replies; 20+ messages in thread
From: Claudio Imbrenda @ 2021-01-15 12:37 UTC (permalink / raw)
  To: kvm
  Cc: frankja, david, thuth, pbonzini, cohuck, lvivier, nadav.amit,
	krish.sadhukhan

Memory allocated without FLAG_DONTZERO will now be zeroed before being
returned to the caller.

This means that by default all allocated memory is now zeroed, restoring the
default behaviour that had been accidentally removed by a previous commit.
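
For example:

  p = alloc_page();                         /* returned zeroed     */
  q = alloc_pages_flags(0, FLAG_DONTZERO);  /* contents left as-is */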

Fixes: 8131e91a4b61 ("lib/alloc_page: complete rewrite of the page allocator")
Reported-by: Nadav Amit <nadav.amit@gmail.com>

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 lib/alloc_page.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 47e2981..95d957b 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -372,6 +372,8 @@ static void *page_memalign_order_flags(u8 al, u8 ord, u32 flags)
 		if (area & BIT(i))
 			res = page_memalign_order(areas + i, al, ord);
 	spin_unlock(&lock);
+	if (res && !(flags & FLAG_DONTZERO))
+		memset(res, 0, BIT(ord) * PAGE_SIZE);
 	return res;
 }
 
-- 
2.26.2



* [kvm-unit-tests PATCH v2 11/11] lib/alloc_page: Properly handle requests for fresh blocks
  2021-01-15 12:37 [kvm-unit-tests PATCH v2 00/11] Fix and improve the page allocator Claudio Imbrenda
                   ` (9 preceding siblings ...)
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 10/11] lib/alloc_page: Wire up FLAG_DONTZERO Claudio Imbrenda
@ 2021-01-15 12:37 ` Claudio Imbrenda
  10 siblings, 0 replies; 20+ messages in thread
From: Claudio Imbrenda @ 2021-01-15 12:37 UTC (permalink / raw)
  To: kvm
  Cc: frankja, david, thuth, pbonzini, cohuck, lvivier, nadav.amit,
	krish.sadhukhan

Upon initialization, all memory in an area is marked as fresh. Once
memory is used and freed, the freed memory is marked as free.

Freed memory is always added to the front of a freelist, so fresh memory
accumulates at the tail.

When a block of fresh memory is split, the two halves are put at the
tail of the appropriate freelist, so they can still be found when
needed.

When a fresh block is requested, a fresh block one order bigger is
taken; the first half is put back in the free pool (at the tail), and
the second half is returned. The reason is that the first page of every
free block holds the freelist pointers, so the first page of a fresh
block is not actually fresh and cannot be returned when a fresh
allocation is requested.
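
A sketch of a fresh allocation of order k (assuming a fresh block of
order k+1 is found at the tail of a freelist):

  |<---------------- fresh block, order k+1 ---------------->|
  [ first half: page 0 holds freelist ptrs ][ second half    ]
    -> put back at the tail of the freelist  -> returned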

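A caller-side sketch (FLAG_FRESH goes through the same
alloc_pages_flags() entry point as the other flags; the order is an
arbitrary example):

	/* request pages that have never been read or written */
	void *p = alloc_pages_flags(0, FLAG_FRESH);

Internally, this order-0 request is served from a fresh order-1 block:
the first half, whose leading page holds the freelist pointers and is
therefore dirty, goes back to the tail of the order-0 freelist, and
the untouched second half is returned.
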
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 lib/alloc_page.c | 51 +++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 40 insertions(+), 11 deletions(-)

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 95d957b..84f01e1 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -120,10 +120,17 @@ static void split(struct mem_area *a, void *addr)
 	if ((order == a->max_order) && (is_list_empty(a->freelists + order)))
 		a->max_order--;
 	order--;
-	/* add the first half block to the appropriate free list */
-	list_add(a->freelists + order, addr);
-	/* add the second half block to the appropriate free list */
-	list_add(a->freelists + order, pfn_to_virt(pfn + BIT(order)));
+
+	/* add the two half blocks to the appropriate free list */
+	if (IS_FRESH(metadata)) {
+		/* add to the tail if the blocks are fresh */
+		list_add_tail(a->freelists + order, addr);
+		list_add_tail(a->freelists + order, pfn_to_virt(pfn + BIT(order)));
+	} else {
+		/* add to the front if the blocks are dirty */
+		list_add(a->freelists + order, addr);
+		list_add(a->freelists + order, pfn_to_virt(pfn + BIT(order)));
+	}
 }
 
 /*
@@ -132,21 +139,33 @@ static void split(struct mem_area *a, void *addr)
  *
  * Both parameters must be not larger than the largest allowed order
  */
-static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz)
+static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz, bool fresh)
 {
 	struct linked_list *p;
+	pfn_t idx;
 	u8 order;
 
 	assert((al < NLISTS) && (sz < NLISTS));
 	/* we need the bigger of the two as starting point */
 	order = sz > al ? sz : al;
+	/*
+	 * we need to go one order up if we want a completely fresh block,
+	 * since the first page contains the freelist pointers, and
+	 * therefore it is always dirty
+	 */
+	order += fresh;
 
 	/* search all free lists for some memory */
 	for ( ; order <= a->max_order; order++) {
-		p = a->freelists[order].next;
-		if (!is_list_empty(p))
-			break;
+		p = fresh ? a->freelists[order].prev : a->freelists[order].next;
+		if (is_list_empty(p))
+			continue;
+		idx = virt_to_pfn(p) - a->base;
+		if (fresh && !IS_FRESH(a->page_states[idx]))
+			continue;
+		break;
 	}
+
 	/* out of memory */
 	if (order > a->max_order)
 		return NULL;
@@ -160,7 +179,16 @@ static void *page_memalign_order(struct mem_area *a, u8 al, u8 sz)
 		split(a, p);
 
 	list_remove(p);
-	memset(a->page_states + (virt_to_pfn(p) - a->base), STATUS_ALLOCATED | order, BIT(order));
+	/* We now have a block twice the size, but the first page is dirty. */
+	if (fresh) {
+		order--;
+		/* Put back the first (partially dirty) half of the block */
+		memset(a->page_states + idx, STATUS_FRESH | order, BIT(order));
+		list_add_tail(a->freelists + order, p);
+		idx += BIT(order);
+		p = pfn_to_virt(a->base + idx);
+	}
+	memset(a->page_states + idx, STATUS_ALLOCATED | order, BIT(order));
 	return p;
 }
 
@@ -364,13 +392,14 @@ void unreserve_pages(phys_addr_t addr, size_t n)
 static void *page_memalign_order_flags(u8 al, u8 ord, u32 flags)
 {
 	void *res = NULL;
-	int i, area;
+	int i, area, fresh;
 
+	fresh = !!(flags & FLAG_FRESH);
 	spin_lock(&lock);
 	area = (flags & AREA_MASK) ? flags & areas_mask : areas_mask;
 	for (i = 0; !res && (i < MAX_AREAS); i++)
 		if (area & BIT(i))
-			res = page_memalign_order(areas + i, al, ord);
+			res = page_memalign_order(areas + i, al, ord, fresh);
 	spin_unlock(&lock);
 	if (res && !(flags & FLAG_DONTZERO))
 		memset(res, 0, BIT(ord) * PAGE_SIZE);
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [kvm-unit-tests PATCH v2 01/11] lib/x86: fix page.h to include the generic header
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 01/11] lib/x86: fix page.h to include the generic header Claudio Imbrenda
@ 2021-01-19 15:12   ` Janosch Frank
  0 siblings, 0 replies; 20+ messages in thread
From: Janosch Frank @ 2021-01-19 15:12 UTC (permalink / raw)
  To: Claudio Imbrenda, kvm
  Cc: david, thuth, pbonzini, cohuck, lvivier, nadav.amit, krish.sadhukhan

On 1/15/21 1:37 PM, Claudio Imbrenda wrote:
> Bring x86 in line with the other architectures and include the generic header
> at asm-generic/page.h .
> This provides the macros PAGE_SHIFT, PAGE_SIZE, PAGE_MASK, virt_to_pfn, and
> pfn_to_virt.
> 
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>

Reviewed-by: Janosch Frank <frankja@linux.ibm.com>

> ---
>  lib/x86/asm/page.h | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/lib/x86/asm/page.h b/lib/x86/asm/page.h
> index 1359eb7..2cf8881 100644
> --- a/lib/x86/asm/page.h
> +++ b/lib/x86/asm/page.h
> @@ -13,9 +13,7 @@
>  typedef unsigned long pteval_t;
>  typedef unsigned long pgd_t;
>  
> -#define PAGE_SHIFT	12
> -#define PAGE_SIZE	(_AC(1,UL) << PAGE_SHIFT)
> -#define PAGE_MASK	(~(PAGE_SIZE-1))
> +#include <asm-generic/page.h>
>  
>  #ifndef __ASSEMBLY__
>  
> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [kvm-unit-tests PATCH v2 02/11] lib/list.h: add list_add_tail
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 02/11] lib/list.h: add list_add_tail Claudio Imbrenda
@ 2021-01-19 15:18   ` Janosch Frank
  0 siblings, 0 replies; 20+ messages in thread
From: Janosch Frank @ 2021-01-19 15:18 UTC (permalink / raw)
  To: Claudio Imbrenda, kvm
  Cc: david, thuth, pbonzini, cohuck, lvivier, nadav.amit, krish.sadhukhan

On 1/15/21 1:37 PM, Claudio Imbrenda wrote:
> Add a list_add_tail wrapper function to allow adding elements to the end
> of a list.
> 
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>

Reviewed-by: Janosch Frank <frankja@de.ibm.com>

> ---
>  lib/list.h | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/lib/list.h b/lib/list.h
> index 18d9516..7f9717e 100644
> --- a/lib/list.h
> +++ b/lib/list.h
> @@ -50,4 +50,13 @@ static inline void list_add(struct linked_list *head, struct linked_list *li)
>  	head->next = li;
>  }
>  
> +/*
> + * Add the given element before the given list head.
> + */
> +static inline void list_add_tail(struct linked_list *head, struct linked_list *li)
> +{
> +	assert(head);
> +	list_add(head->prev, li);
> +}
> +
>  #endif
> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [kvm-unit-tests PATCH v2 03/11] lib/vmalloc: add some asserts and improvements
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 03/11] lib/vmalloc: add some asserts and improvements Claudio Imbrenda
@ 2021-01-19 15:26   ` Janosch Frank
  0 siblings, 0 replies; 20+ messages in thread
From: Janosch Frank @ 2021-01-19 15:26 UTC (permalink / raw)
  To: Claudio Imbrenda, kvm
  Cc: david, thuth, pbonzini, cohuck, lvivier, nadav.amit, krish.sadhukhan

On 1/15/21 1:37 PM, Claudio Imbrenda wrote:
> Add some asserts to make sure the state is consistent.
> 
> Simplify and improve the readability of vm_free.
> 
> If a NULL pointer is freed, no operation is performed.
> 
> Fixes: 3f6fee0d4da4 ("lib/vmalloc: vmalloc support for handling allocation metadata")
> 
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>

Acked-by: Janosch Frank <frankja@de.ibm.com>

> ---
>  lib/vmalloc.c | 22 +++++++++++++---------
>  1 file changed, 13 insertions(+), 9 deletions(-)
> 
> diff --git a/lib/vmalloc.c b/lib/vmalloc.c
> index 986a34c..6b52790 100644
> --- a/lib/vmalloc.c
> +++ b/lib/vmalloc.c
> @@ -162,13 +162,16 @@ static void *vm_memalign(size_t alignment, size_t size)
>  static void vm_free(void *mem)
>  {
>  	struct metadata *m;
> -	uintptr_t ptr, end;
> +	uintptr_t ptr, page, i;
>  
> +	if (!mem)
> +		return;
>  	/* the pointer is not page-aligned, it was a single-page allocation */
>  	if (!IS_ALIGNED((uintptr_t)mem, PAGE_SIZE)) {
>  		assert(GET_MAGIC(mem) == VM_MAGIC);
> -		ptr = virt_to_pte_phys(page_root, mem) & PAGE_MASK;
> -		free_page(phys_to_virt(ptr));
> +		page = virt_to_pte_phys(page_root, mem) & PAGE_MASK;
> +		assert(page);
> +		free_page(phys_to_virt(page));
>  		return;
>  	}
>  
> @@ -176,13 +179,14 @@ static void vm_free(void *mem)
>  	m = GET_METADATA(mem);
>  	assert(m->magic == VM_MAGIC);
>  	assert(m->npages > 0);
> +	assert(m->npages < BIT_ULL(BITS_PER_LONG - PAGE_SHIFT));
>  	/* free all the pages including the metadata page */
> -	ptr = (uintptr_t)mem - PAGE_SIZE;
> -	end = ptr + m->npages * PAGE_SIZE;
> -	for ( ; ptr < end; ptr += PAGE_SIZE)
> -		free_page(phys_to_virt(virt_to_pte_phys(page_root, (void *)ptr)));
> -	/* free the last one separately to avoid overflow issues */
> -	free_page(phys_to_virt(virt_to_pte_phys(page_root, (void *)ptr)));
> +	ptr = (uintptr_t)m & PAGE_MASK;
> +	for (i = 0 ; i < m->npages + 1; i++, ptr += PAGE_SIZE) {

s/i < m->npages + 1/i <= m->npages/ ?

> +		page = virt_to_pte_phys(page_root, (void *)ptr) & PAGE_MASK;
> +		assert(page);
> +		free_page(phys_to_virt(page));
> +	}
>  }
>  
>  static struct alloc_ops vmalloc_ops = {
> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [kvm-unit-tests PATCH v2 04/11] lib/asm: Fix definitions of memory areas
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 04/11] lib/asm: Fix definitions of memory areas Claudio Imbrenda
@ 2021-01-19 15:33   ` Janosch Frank
  2021-01-19 17:05     ` Claudio Imbrenda
  2021-01-21  1:23   ` David Matlack
  1 sibling, 1 reply; 20+ messages in thread
From: Janosch Frank @ 2021-01-19 15:33 UTC (permalink / raw)
  To: Claudio Imbrenda, kvm
  Cc: david, thuth, pbonzini, cohuck, lvivier, nadav.amit, krish.sadhukhan

On 1/15/21 1:37 PM, Claudio Imbrenda wrote:
> Fix the definitions of the memory areas.
> 
> Bring the headers in line with the rest of the asm headers, by having the
> appropriate #ifdef _ASM$ARCH_ guarding the headers.
> 
> Fixes: d74708246bd9 ("lib/asm: Add definitions of memory areas")
> 
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
> ---
>  lib/asm-generic/memory_areas.h |  9 ++++-----
>  lib/arm/asm/memory_areas.h     | 11 +++--------
>  lib/arm64/asm/memory_areas.h   | 11 +++--------
>  lib/powerpc/asm/memory_areas.h | 11 +++--------
>  lib/ppc64/asm/memory_areas.h   | 11 +++--------
>  lib/s390x/asm/memory_areas.h   | 13 ++++++-------
>  lib/x86/asm/memory_areas.h     | 27 ++++++++++++++++-----------
>  lib/alloc_page.h               |  3 +++
>  lib/alloc_page.c               |  4 +---
>  9 files changed, 42 insertions(+), 58 deletions(-)
> 
> diff --git a/lib/asm-generic/memory_areas.h b/lib/asm-generic/memory_areas.h
> index 927baa7..3074afe 100644
> --- a/lib/asm-generic/memory_areas.h
> +++ b/lib/asm-generic/memory_areas.h
> @@ -1,11 +1,10 @@
> -#ifndef MEMORY_AREAS_H
> -#define MEMORY_AREAS_H
> +#ifndef __ASM_GENERIC_MEMORY_AREAS_H__
> +#define __ASM_GENERIC_MEMORY_AREAS_H__
>  
>  #define AREA_NORMAL_PFN 0
>  #define AREA_NORMAL_NUMBER 0
> -#define AREA_NORMAL 1
> +#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
>  
> -#define AREA_ANY -1
> -#define AREA_ANY_NUMBER 0xff
> +#define MAX_AREAS 1
>  
>  #endif
> diff --git a/lib/arm/asm/memory_areas.h b/lib/arm/asm/memory_areas.h
> index 927baa7..c723310 100644
> --- a/lib/arm/asm/memory_areas.h
> +++ b/lib/arm/asm/memory_areas.h
> @@ -1,11 +1,6 @@
> -#ifndef MEMORY_AREAS_H
> -#define MEMORY_AREAS_H
> +#ifndef _ASMARM_MEMORY_AREAS_H_
> +#define _ASMARM_MEMORY_AREAS_H_
>  
> -#define AREA_NORMAL_PFN 0
> -#define AREA_NORMAL_NUMBER 0
> -#define AREA_NORMAL 1
> -
> -#define AREA_ANY -1
> -#define AREA_ANY_NUMBER 0xff
> +#include <asm-generic/memory_areas.h>
>  
>  #endif
> diff --git a/lib/arm64/asm/memory_areas.h b/lib/arm64/asm/memory_areas.h
> index 927baa7..18e8ca8 100644
> --- a/lib/arm64/asm/memory_areas.h
> +++ b/lib/arm64/asm/memory_areas.h
> @@ -1,11 +1,6 @@
> -#ifndef MEMORY_AREAS_H
> -#define MEMORY_AREAS_H
> +#ifndef _ASMARM64_MEMORY_AREAS_H_
> +#define _ASMARM64_MEMORY_AREAS_H_
>  
> -#define AREA_NORMAL_PFN 0
> -#define AREA_NORMAL_NUMBER 0
> -#define AREA_NORMAL 1
> -
> -#define AREA_ANY -1
> -#define AREA_ANY_NUMBER 0xff
> +#include <asm-generic/memory_areas.h>
>  
>  #endif
> diff --git a/lib/powerpc/asm/memory_areas.h b/lib/powerpc/asm/memory_areas.h
> index 927baa7..76d1738 100644
> --- a/lib/powerpc/asm/memory_areas.h
> +++ b/lib/powerpc/asm/memory_areas.h
> @@ -1,11 +1,6 @@
> -#ifndef MEMORY_AREAS_H
> -#define MEMORY_AREAS_H
> +#ifndef _ASMPOWERPC_MEMORY_AREAS_H_
> +#define _ASMPOWERPC_MEMORY_AREAS_H_
>  
> -#define AREA_NORMAL_PFN 0
> -#define AREA_NORMAL_NUMBER 0
> -#define AREA_NORMAL 1
> -
> -#define AREA_ANY -1
> -#define AREA_ANY_NUMBER 0xff
> +#include <asm-generic/memory_areas.h>
>  
>  #endif
> diff --git a/lib/ppc64/asm/memory_areas.h b/lib/ppc64/asm/memory_areas.h
> index 927baa7..b9fd46b 100644
> --- a/lib/ppc64/asm/memory_areas.h
> +++ b/lib/ppc64/asm/memory_areas.h
> @@ -1,11 +1,6 @@
> -#ifndef MEMORY_AREAS_H
> -#define MEMORY_AREAS_H
> +#ifndef _ASMPPC64_MEMORY_AREAS_H_
> +#define _ASMPPC64_MEMORY_AREAS_H_
>  
> -#define AREA_NORMAL_PFN 0
> -#define AREA_NORMAL_NUMBER 0
> -#define AREA_NORMAL 1
> -
> -#define AREA_ANY -1
> -#define AREA_ANY_NUMBER 0xff
> +#include <asm-generic/memory_areas.h>
>  
>  #endif
> diff --git a/lib/s390x/asm/memory_areas.h b/lib/s390x/asm/memory_areas.h
> index 4856a27..827bfb3 100644
> --- a/lib/s390x/asm/memory_areas.h
> +++ b/lib/s390x/asm/memory_areas.h
> @@ -1,16 +1,15 @@
> -#ifndef MEMORY_AREAS_H
> -#define MEMORY_AREAS_H
> +#ifndef _ASMS390X_MEMORY_AREAS_H_
> +#define _ASMS390X_MEMORY_AREAS_H_
>  
> -#define AREA_NORMAL_PFN BIT(31-12)
> +#define AREA_NORMAL_PFN (1 << 19)
>  #define AREA_NORMAL_NUMBER 0
> -#define AREA_NORMAL 1
> +#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
>  
>  #define AREA_LOW_PFN 0
>  #define AREA_LOW_NUMBER 1
> -#define AREA_LOW 2
> +#define AREA_LOW (1 << AREA_LOW_NUMBER)
>  
> -#define AREA_ANY -1
> -#define AREA_ANY_NUMBER 0xff
> +#define MAX_AREAS 2
>  
>  #define AREA_DMA31 AREA_LOW
>  
> diff --git a/lib/x86/asm/memory_areas.h b/lib/x86/asm/memory_areas.h
> index 952f5bd..e84016f 100644
> --- a/lib/x86/asm/memory_areas.h
> +++ b/lib/x86/asm/memory_areas.h
> @@ -1,21 +1,26 @@
> -#ifndef MEMORY_AREAS_H
> -#define MEMORY_AREAS_H
> +#ifndef _ASM_X86_MEMORY_AREAS_H_
> +#define _ASM_X86_MEMORY_AREAS_H_
>  
>  #define AREA_NORMAL_PFN BIT(36-12)
>  #define AREA_NORMAL_NUMBER 0
> -#define AREA_NORMAL 1
> +#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
>  
> -#define AREA_PAE_HIGH_PFN BIT(32-12)
> -#define AREA_PAE_HIGH_NUMBER 1
> -#define AREA_PAE_HIGH 2
> +#define AREA_HIGH_PFN BIT(32-12)
> +#define AREA_HIGH_NUMBER 1
> +#define AREA_HIGH (1 << AREA_HIGH_NUMBER)
>  
> -#define AREA_LOW_PFN 0
> +#define AREA_LOW_PFN BIT(24-12)
>  #define AREA_LOW_NUMBER 2
> -#define AREA_LOW 4
> +#define AREA_LOW (1 << AREA_LOW_NUMBER)
>  
> -#define AREA_PAE (AREA_PAE | AREA_LOW)
> +#define AREA_LOWEST_PFN 0
> +#define AREA_LOWEST_NUMBER 3
> +#define AREA_LOWEST (1 << AREA_LOWEST_NUMBER)
>  
> -#define AREA_ANY -1
> -#define AREA_ANY_NUMBER 0xff
> +#define MAX_AREAS 4
> +
> +#define AREA_DMA24 AREA_LOWEST
> +#define AREA_DMA32 (AREA_LOWEST | AREA_LOW)
> +#define AREA_PAE36 (AREA_LOWEST | AREA_LOW | AREA_HIGH)

This confuses the heck out of me.
The other things look ok to me though.

>  
>  #endif
> diff --git a/lib/alloc_page.h b/lib/alloc_page.h
> index 816ff5d..b6aace5 100644
> --- a/lib/alloc_page.h
> +++ b/lib/alloc_page.h
> @@ -10,6 +10,9 @@
>  
>  #include <asm/memory_areas.h>
>  
> +#define AREA_ANY -1
> +#define AREA_ANY_NUMBER 0xff
> +
>  /* Returns true if the page allocator has been initialized */
>  bool page_alloc_initialized(void);
>  
> diff --git a/lib/alloc_page.c b/lib/alloc_page.c
> index 685ab1e..ed0ff02 100644
> --- a/lib/alloc_page.c
> +++ b/lib/alloc_page.c
> @@ -19,8 +19,6 @@
>  #define NLISTS ((BITS_PER_LONG) - (PAGE_SHIFT))
>  #define PFN(x) ((uintptr_t)(x) >> PAGE_SHIFT)
>  
> -#define MAX_AREAS	6
> -
>  #define ORDER_MASK	0x3f
>  #define ALLOC_MASK	0x40
>  #define SPECIAL_MASK	0x80
> @@ -509,7 +507,7 @@ void page_alloc_init_area(u8 n, uintptr_t base_pfn, uintptr_t top_pfn)
>  		return;
>  	}
>  #ifdef AREA_HIGH_PFN
> -	__page_alloc_init_area(AREA_HIGH_NUMBER, AREA_HIGH_PFN), base_pfn, &top_pfn);
> +	__page_alloc_init_area(AREA_HIGH_NUMBER, AREA_HIGH_PFN, base_pfn, &top_pfn);
>  #endif
>  	__page_alloc_init_area(AREA_NORMAL_NUMBER, AREA_NORMAL_PFN, base_pfn, &top_pfn);
>  #ifdef AREA_LOW_PFN
> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [kvm-unit-tests PATCH v2 04/11] lib/asm: Fix definitions of memory areas
  2021-01-19 15:33   ` Janosch Frank
@ 2021-01-19 17:05     ` Claudio Imbrenda
  0 siblings, 0 replies; 20+ messages in thread
From: Claudio Imbrenda @ 2021-01-19 17:05 UTC (permalink / raw)
  To: Janosch Frank
  Cc: kvm, david, thuth, pbonzini, cohuck, lvivier, nadav.amit,
	krish.sadhukhan

On Tue, 19 Jan 2021 16:33:29 +0100
Janosch Frank <frankja@linux.ibm.com> wrote:

> On 1/15/21 1:37 PM, Claudio Imbrenda wrote:
> > Fix the definitions of the memory areas.
> > 
> > Bring the headers in line with the rest of the asm headers, by
> > having the appropriate #ifdef _ASM$ARCH_ guarding the headers.
> > 
> > Fixes: d74708246bd9 ("lib/asm: Add definitions of memory areas")
> > 
> > Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> > Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
> > ---
> >  lib/asm-generic/memory_areas.h |  9 ++++-----
> >  lib/arm/asm/memory_areas.h     | 11 +++--------
> >  lib/arm64/asm/memory_areas.h   | 11 +++--------
> >  lib/powerpc/asm/memory_areas.h | 11 +++--------
> >  lib/ppc64/asm/memory_areas.h   | 11 +++--------
> >  lib/s390x/asm/memory_areas.h   | 13 ++++++-------
> >  lib/x86/asm/memory_areas.h     | 27 ++++++++++++++++-----------
> >  lib/alloc_page.h               |  3 +++
> >  lib/alloc_page.c               |  4 +---
> >  9 files changed, 42 insertions(+), 58 deletions(-)
> > 
> > diff --git a/lib/asm-generic/memory_areas.h
> > b/lib/asm-generic/memory_areas.h index 927baa7..3074afe 100644
> > --- a/lib/asm-generic/memory_areas.h
> > +++ b/lib/asm-generic/memory_areas.h
> > @@ -1,11 +1,10 @@
> > -#ifndef MEMORY_AREAS_H
> > -#define MEMORY_AREAS_H
> > +#ifndef __ASM_GENERIC_MEMORY_AREAS_H__
> > +#define __ASM_GENERIC_MEMORY_AREAS_H__
> >  
> >  #define AREA_NORMAL_PFN 0
> >  #define AREA_NORMAL_NUMBER 0
> > -#define AREA_NORMAL 1
> > +#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
> >  
> > -#define AREA_ANY -1
> > -#define AREA_ANY_NUMBER 0xff
> > +#define MAX_AREAS 1
> >  
> >  #endif
> > diff --git a/lib/arm/asm/memory_areas.h b/lib/arm/asm/memory_areas.h
> > index 927baa7..c723310 100644
> > --- a/lib/arm/asm/memory_areas.h
> > +++ b/lib/arm/asm/memory_areas.h
> > @@ -1,11 +1,6 @@
> > -#ifndef MEMORY_AREAS_H
> > -#define MEMORY_AREAS_H
> > +#ifndef _ASMARM_MEMORY_AREAS_H_
> > +#define _ASMARM_MEMORY_AREAS_H_
> >  
> > -#define AREA_NORMAL_PFN 0
> > -#define AREA_NORMAL_NUMBER 0
> > -#define AREA_NORMAL 1
> > -
> > -#define AREA_ANY -1
> > -#define AREA_ANY_NUMBER 0xff
> > +#include <asm-generic/memory_areas.h>
> >  
> >  #endif
> > diff --git a/lib/arm64/asm/memory_areas.h
> > b/lib/arm64/asm/memory_areas.h index 927baa7..18e8ca8 100644
> > --- a/lib/arm64/asm/memory_areas.h
> > +++ b/lib/arm64/asm/memory_areas.h
> > @@ -1,11 +1,6 @@
> > -#ifndef MEMORY_AREAS_H
> > -#define MEMORY_AREAS_H
> > +#ifndef _ASMARM64_MEMORY_AREAS_H_
> > +#define _ASMARM64_MEMORY_AREAS_H_
> >  
> > -#define AREA_NORMAL_PFN 0
> > -#define AREA_NORMAL_NUMBER 0
> > -#define AREA_NORMAL 1
> > -
> > -#define AREA_ANY -1
> > -#define AREA_ANY_NUMBER 0xff
> > +#include <asm-generic/memory_areas.h>
> >  
> >  #endif
> > diff --git a/lib/powerpc/asm/memory_areas.h
> > b/lib/powerpc/asm/memory_areas.h index 927baa7..76d1738 100644
> > --- a/lib/powerpc/asm/memory_areas.h
> > +++ b/lib/powerpc/asm/memory_areas.h
> > @@ -1,11 +1,6 @@
> > -#ifndef MEMORY_AREAS_H
> > -#define MEMORY_AREAS_H
> > +#ifndef _ASMPOWERPC_MEMORY_AREAS_H_
> > +#define _ASMPOWERPC_MEMORY_AREAS_H_
> >  
> > -#define AREA_NORMAL_PFN 0
> > -#define AREA_NORMAL_NUMBER 0
> > -#define AREA_NORMAL 1
> > -
> > -#define AREA_ANY -1
> > -#define AREA_ANY_NUMBER 0xff
> > +#include <asm-generic/memory_areas.h>
> >  
> >  #endif
> > diff --git a/lib/ppc64/asm/memory_areas.h
> > b/lib/ppc64/asm/memory_areas.h index 927baa7..b9fd46b 100644
> > --- a/lib/ppc64/asm/memory_areas.h
> > +++ b/lib/ppc64/asm/memory_areas.h
> > @@ -1,11 +1,6 @@
> > -#ifndef MEMORY_AREAS_H
> > -#define MEMORY_AREAS_H
> > +#ifndef _ASMPPC64_MEMORY_AREAS_H_
> > +#define _ASMPPC64_MEMORY_AREAS_H_
> >  
> > -#define AREA_NORMAL_PFN 0
> > -#define AREA_NORMAL_NUMBER 0
> > -#define AREA_NORMAL 1
> > -
> > -#define AREA_ANY -1
> > -#define AREA_ANY_NUMBER 0xff
> > +#include <asm-generic/memory_areas.h>
> >  
> >  #endif
> > diff --git a/lib/s390x/asm/memory_areas.h
> > b/lib/s390x/asm/memory_areas.h index 4856a27..827bfb3 100644
> > --- a/lib/s390x/asm/memory_areas.h
> > +++ b/lib/s390x/asm/memory_areas.h
> > @@ -1,16 +1,15 @@
> > -#ifndef MEMORY_AREAS_H
> > -#define MEMORY_AREAS_H
> > +#ifndef _ASMS390X_MEMORY_AREAS_H_
> > +#define _ASMS390X_MEMORY_AREAS_H_
> >  
> > -#define AREA_NORMAL_PFN BIT(31-12)
> > +#define AREA_NORMAL_PFN (1 << 19)
> >  #define AREA_NORMAL_NUMBER 0
> > -#define AREA_NORMAL 1
> > +#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
> >  
> >  #define AREA_LOW_PFN 0
> >  #define AREA_LOW_NUMBER 1
> > -#define AREA_LOW 2
> > +#define AREA_LOW (1 << AREA_LOW_NUMBER)
> >  
> > -#define AREA_ANY -1
> > -#define AREA_ANY_NUMBER 0xff
> > +#define MAX_AREAS 2
> >  
> >  #define AREA_DMA31 AREA_LOW
> >  
> > diff --git a/lib/x86/asm/memory_areas.h b/lib/x86/asm/memory_areas.h
> > index 952f5bd..e84016f 100644
> > --- a/lib/x86/asm/memory_areas.h
> > +++ b/lib/x86/asm/memory_areas.h
> > @@ -1,21 +1,26 @@
> > -#ifndef MEMORY_AREAS_H
> > -#define MEMORY_AREAS_H
> > +#ifndef _ASM_X86_MEMORY_AREAS_H_
> > +#define _ASM_X86_MEMORY_AREAS_H_
> >  
> >  #define AREA_NORMAL_PFN BIT(36-12)
> >  #define AREA_NORMAL_NUMBER 0
> > -#define AREA_NORMAL 1
> > +#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
> >  
> > -#define AREA_PAE_HIGH_PFN BIT(32-12)
> > -#define AREA_PAE_HIGH_NUMBER 1
> > -#define AREA_PAE_HIGH 2
> > +#define AREA_HIGH_PFN BIT(32-12)
> > +#define AREA_HIGH_NUMBER 1
> > +#define AREA_HIGH (1 << AREA_HIGH_NUMBER)
> >  
> > -#define AREA_LOW_PFN 0
> > +#define AREA_LOW_PFN BIT(24-12)
> >  #define AREA_LOW_NUMBER 2
> > -#define AREA_LOW 4
> > +#define AREA_LOW (1 << AREA_LOW_NUMBER)
> >  
> > -#define AREA_PAE (AREA_PAE | AREA_LOW)
> > +#define AREA_LOWEST_PFN 0
> > +#define AREA_LOWEST_NUMBER 3
> > +#define AREA_LOWEST (1 << AREA_LOWEST_NUMBER)
> >  
> > -#define AREA_ANY -1
> > -#define AREA_ANY_NUMBER 0xff
> > +#define MAX_AREAS 4
> > +
> > +#define AREA_DMA24 AREA_LOWEST
> > +#define AREA_DMA32 (AREA_LOWEST | AREA_LOW)
> > +#define AREA_PAE36 (AREA_LOWEST | AREA_LOW | AREA_HIGH)  
> 
> This confuses the heck out of me.
> The other things look ok to me though.

If you want a page below 4G, you don't want to restrict yourself to
the area between 16M and 4G; the area below 16M is good too.

The same goes for pages below 64G (PAE36).

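To illustrate with the flags from this patch (a sketch; the order is
just an example):

	/* any page below 4G will do: both 0-16M and 16M-4G qualify */
	void *p = alloc_pages_flags(0, AREA_DMA32);

Each set area bit lets the allocator try one more area, so leaving out
AREA_LOWEST would needlessly exclude the memory below 16M.
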
> >  
> >  #endif
> > diff --git a/lib/alloc_page.h b/lib/alloc_page.h
> > index 816ff5d..b6aace5 100644
> > --- a/lib/alloc_page.h
> > +++ b/lib/alloc_page.h
> > @@ -10,6 +10,9 @@
> >  
> >  #include <asm/memory_areas.h>
> >  
> > +#define AREA_ANY -1
> > +#define AREA_ANY_NUMBER 0xff
> > +
> >  /* Returns true if the page allocator has been initialized */
> >  bool page_alloc_initialized(void);
> >  
> > diff --git a/lib/alloc_page.c b/lib/alloc_page.c
> > index 685ab1e..ed0ff02 100644
> > --- a/lib/alloc_page.c
> > +++ b/lib/alloc_page.c
> > @@ -19,8 +19,6 @@
> >  #define NLISTS ((BITS_PER_LONG) - (PAGE_SHIFT))
> >  #define PFN(x) ((uintptr_t)(x) >> PAGE_SHIFT)
> >  
> > -#define MAX_AREAS	6
> > -
> >  #define ORDER_MASK	0x3f
> >  #define ALLOC_MASK	0x40
> >  #define SPECIAL_MASK	0x80
> > @@ -509,7 +507,7 @@ void page_alloc_init_area(u8 n, uintptr_t
> > base_pfn, uintptr_t top_pfn) return;
> >  	}
> >  #ifdef AREA_HIGH_PFN
> > -	__page_alloc_init_area(AREA_HIGH_NUMBER, AREA_HIGH_PFN),
> > base_pfn, &top_pfn);
> > +	__page_alloc_init_area(AREA_HIGH_NUMBER, AREA_HIGH_PFN,
> > base_pfn, &top_pfn); #endif
> >  	__page_alloc_init_area(AREA_NORMAL_NUMBER,
> > AREA_NORMAL_PFN, base_pfn, &top_pfn); #ifdef AREA_LOW_PFN
> >   
> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [kvm-unit-tests PATCH v2 04/11] lib/asm: Fix definitions of memory areas
  2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 04/11] lib/asm: Fix definitions of memory areas Claudio Imbrenda
  2021-01-19 15:33   ` Janosch Frank
@ 2021-01-21  1:23   ` David Matlack
  2021-01-21  5:32     ` Thomas Huth
  2021-01-21  9:28     ` Claudio Imbrenda
  1 sibling, 2 replies; 20+ messages in thread
From: David Matlack @ 2021-01-21  1:23 UTC (permalink / raw)
  To: Claudio Imbrenda
  Cc: kvm list, frankja, David Hildenbrand, Thomas Huth, Paolo Bonzini,
	cohuck, Laurent Vivier, nadav.amit, krish.sadhukhan

Hi Claudio,

On Fri, Jan 15, 2021 at 8:07 AM Claudio Imbrenda <imbrenda@linux.ibm.com> wrote:
>
> Fix the definitions of the memory areas.

The test x86/smap.flat started failing for me at this commit. I'm
testing on Linux 5.7.17.

Here are the logs:

timeout -k 1s --foreground 90s /usr/bin/qemu-system-x86_64 --no-reboot
-nodefaults -device pc-testdev -device
isa-debug-exit,iobase=0xf4,iosize=0x4 -vnc none -serial stdio -device
pci-testdev -machine accel=kvm -kernel x86/smap.flat -smp 1 -cpu host
# -initrd /tmp/tmp.s4WKgsHkOh
enabling apic
paging enabled
cr0 = 80010011
cr3 = 1007000
cr4 = 20
testing without INVLPG
PASS: write to supervisor page
PASS: read from user page with AC=1
PASS: read from user page with AC=0
FAIL: write to user page with AC=1
FAIL: read from user page with AC=0
FAIL: write to user stack with AC=1
FAIL: write to user stack with AC=0
Unhandled exception 6 #UD at ip 0000000001800003
error_code=0000      rflags=00010082      cs=00000008
rax=000000000000000a rcx=00000000000003fd rdx=00000000000003f8
rbx=0000000000000000
rbp=0000000000517700 rsi=0000000000416422 rdi=0000000000000000
 r8=0000000000416422  r9=00000000000003f8 r10=000000000000000d
r11=000ffffffffff000
r12=0000000000000000 r13=0000000001418700 r14=0000000000000000
r15=0000000000000000
cr0=0000000080010011 cr2=00000000015176d8 cr3=0000000001007000
cr4=0000000000200020
cr8=0000000000000000
STACK: @1800003 400368
b'0x0000000001800003: ?? ??:0'



>
> Bring the headers in line with the rest of the asm headers, by having the
> appropriate #ifdef _ASM$ARCH_ guarding the headers.
>
> Fixes: d74708246bd9 ("lib/asm: Add definitions of memory areas")
>
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
> ---
>  lib/asm-generic/memory_areas.h |  9 ++++-----
>  lib/arm/asm/memory_areas.h     | 11 +++--------
>  lib/arm64/asm/memory_areas.h   | 11 +++--------
>  lib/powerpc/asm/memory_areas.h | 11 +++--------
>  lib/ppc64/asm/memory_areas.h   | 11 +++--------
>  lib/s390x/asm/memory_areas.h   | 13 ++++++-------
>  lib/x86/asm/memory_areas.h     | 27 ++++++++++++++++-----------
>  lib/alloc_page.h               |  3 +++
>  lib/alloc_page.c               |  4 +---
>  9 files changed, 42 insertions(+), 58 deletions(-)
>
> diff --git a/lib/asm-generic/memory_areas.h b/lib/asm-generic/memory_areas.h
> index 927baa7..3074afe 100644
> --- a/lib/asm-generic/memory_areas.h
> +++ b/lib/asm-generic/memory_areas.h
> @@ -1,11 +1,10 @@
> -#ifndef MEMORY_AREAS_H
> -#define MEMORY_AREAS_H
> +#ifndef __ASM_GENERIC_MEMORY_AREAS_H__
> +#define __ASM_GENERIC_MEMORY_AREAS_H__
>
>  #define AREA_NORMAL_PFN 0
>  #define AREA_NORMAL_NUMBER 0
> -#define AREA_NORMAL 1
> +#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
>
> -#define AREA_ANY -1
> -#define AREA_ANY_NUMBER 0xff
> +#define MAX_AREAS 1
>
>  #endif
> diff --git a/lib/arm/asm/memory_areas.h b/lib/arm/asm/memory_areas.h
> index 927baa7..c723310 100644
> --- a/lib/arm/asm/memory_areas.h
> +++ b/lib/arm/asm/memory_areas.h
> @@ -1,11 +1,6 @@
> -#ifndef MEMORY_AREAS_H
> -#define MEMORY_AREAS_H
> +#ifndef _ASMARM_MEMORY_AREAS_H_
> +#define _ASMARM_MEMORY_AREAS_H_
>
> -#define AREA_NORMAL_PFN 0
> -#define AREA_NORMAL_NUMBER 0
> -#define AREA_NORMAL 1
> -
> -#define AREA_ANY -1
> -#define AREA_ANY_NUMBER 0xff
> +#include <asm-generic/memory_areas.h>
>
>  #endif
> diff --git a/lib/arm64/asm/memory_areas.h b/lib/arm64/asm/memory_areas.h
> index 927baa7..18e8ca8 100644
> --- a/lib/arm64/asm/memory_areas.h
> +++ b/lib/arm64/asm/memory_areas.h
> @@ -1,11 +1,6 @@
> -#ifndef MEMORY_AREAS_H
> -#define MEMORY_AREAS_H
> +#ifndef _ASMARM64_MEMORY_AREAS_H_
> +#define _ASMARM64_MEMORY_AREAS_H_
>
> -#define AREA_NORMAL_PFN 0
> -#define AREA_NORMAL_NUMBER 0
> -#define AREA_NORMAL 1
> -
> -#define AREA_ANY -1
> -#define AREA_ANY_NUMBER 0xff
> +#include <asm-generic/memory_areas.h>
>
>  #endif
> diff --git a/lib/powerpc/asm/memory_areas.h b/lib/powerpc/asm/memory_areas.h
> index 927baa7..76d1738 100644
> --- a/lib/powerpc/asm/memory_areas.h
> +++ b/lib/powerpc/asm/memory_areas.h
> @@ -1,11 +1,6 @@
> -#ifndef MEMORY_AREAS_H
> -#define MEMORY_AREAS_H
> +#ifndef _ASMPOWERPC_MEMORY_AREAS_H_
> +#define _ASMPOWERPC_MEMORY_AREAS_H_
>
> -#define AREA_NORMAL_PFN 0
> -#define AREA_NORMAL_NUMBER 0
> -#define AREA_NORMAL 1
> -
> -#define AREA_ANY -1
> -#define AREA_ANY_NUMBER 0xff
> +#include <asm-generic/memory_areas.h>
>
>  #endif
> diff --git a/lib/ppc64/asm/memory_areas.h b/lib/ppc64/asm/memory_areas.h
> index 927baa7..b9fd46b 100644
> --- a/lib/ppc64/asm/memory_areas.h
> +++ b/lib/ppc64/asm/memory_areas.h
> @@ -1,11 +1,6 @@
> -#ifndef MEMORY_AREAS_H
> -#define MEMORY_AREAS_H
> +#ifndef _ASMPPC64_MEMORY_AREAS_H_
> +#define _ASMPPC64_MEMORY_AREAS_H_
>
> -#define AREA_NORMAL_PFN 0
> -#define AREA_NORMAL_NUMBER 0
> -#define AREA_NORMAL 1
> -
> -#define AREA_ANY -1
> -#define AREA_ANY_NUMBER 0xff
> +#include <asm-generic/memory_areas.h>
>
>  #endif
> diff --git a/lib/s390x/asm/memory_areas.h b/lib/s390x/asm/memory_areas.h
> index 4856a27..827bfb3 100644
> --- a/lib/s390x/asm/memory_areas.h
> +++ b/lib/s390x/asm/memory_areas.h
> @@ -1,16 +1,15 @@
> -#ifndef MEMORY_AREAS_H
> -#define MEMORY_AREAS_H
> +#ifndef _ASMS390X_MEMORY_AREAS_H_
> +#define _ASMS390X_MEMORY_AREAS_H_
>
> -#define AREA_NORMAL_PFN BIT(31-12)
> +#define AREA_NORMAL_PFN (1 << 19)
>  #define AREA_NORMAL_NUMBER 0
> -#define AREA_NORMAL 1
> +#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
>
>  #define AREA_LOW_PFN 0
>  #define AREA_LOW_NUMBER 1
> -#define AREA_LOW 2
> +#define AREA_LOW (1 << AREA_LOW_NUMBER)
>
> -#define AREA_ANY -1
> -#define AREA_ANY_NUMBER 0xff
> +#define MAX_AREAS 2
>
>  #define AREA_DMA31 AREA_LOW
>
> diff --git a/lib/x86/asm/memory_areas.h b/lib/x86/asm/memory_areas.h
> index 952f5bd..e84016f 100644
> --- a/lib/x86/asm/memory_areas.h
> +++ b/lib/x86/asm/memory_areas.h
> @@ -1,21 +1,26 @@
> -#ifndef MEMORY_AREAS_H
> -#define MEMORY_AREAS_H
> +#ifndef _ASM_X86_MEMORY_AREAS_H_
> +#define _ASM_X86_MEMORY_AREAS_H_
>
>  #define AREA_NORMAL_PFN BIT(36-12)
>  #define AREA_NORMAL_NUMBER 0
> -#define AREA_NORMAL 1
> +#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
>
> -#define AREA_PAE_HIGH_PFN BIT(32-12)
> -#define AREA_PAE_HIGH_NUMBER 1
> -#define AREA_PAE_HIGH 2
> +#define AREA_HIGH_PFN BIT(32-12)
> +#define AREA_HIGH_NUMBER 1
> +#define AREA_HIGH (1 << AREA_HIGH_NUMBER)
>
> -#define AREA_LOW_PFN 0
> +#define AREA_LOW_PFN BIT(24-12)
>  #define AREA_LOW_NUMBER 2
> -#define AREA_LOW 4
> +#define AREA_LOW (1 << AREA_LOW_NUMBER)
>
> -#define AREA_PAE (AREA_PAE | AREA_LOW)
> +#define AREA_LOWEST_PFN 0
> +#define AREA_LOWEST_NUMBER 3
> +#define AREA_LOWEST (1 << AREA_LOWEST_NUMBER)
>
> -#define AREA_ANY -1
> -#define AREA_ANY_NUMBER 0xff
> +#define MAX_AREAS 4
> +
> +#define AREA_DMA24 AREA_LOWEST
> +#define AREA_DMA32 (AREA_LOWEST | AREA_LOW)
> +#define AREA_PAE36 (AREA_LOWEST | AREA_LOW | AREA_HIGH)
>
>  #endif
> diff --git a/lib/alloc_page.h b/lib/alloc_page.h
> index 816ff5d..b6aace5 100644
> --- a/lib/alloc_page.h
> +++ b/lib/alloc_page.h
> @@ -10,6 +10,9 @@
>
>  #include <asm/memory_areas.h>
>
> +#define AREA_ANY -1
> +#define AREA_ANY_NUMBER 0xff
> +
>  /* Returns true if the page allocator has been initialized */
>  bool page_alloc_initialized(void);
>
> diff --git a/lib/alloc_page.c b/lib/alloc_page.c
> index 685ab1e..ed0ff02 100644
> --- a/lib/alloc_page.c
> +++ b/lib/alloc_page.c
> @@ -19,8 +19,6 @@
>  #define NLISTS ((BITS_PER_LONG) - (PAGE_SHIFT))
>  #define PFN(x) ((uintptr_t)(x) >> PAGE_SHIFT)
>
> -#define MAX_AREAS      6
> -
>  #define ORDER_MASK     0x3f
>  #define ALLOC_MASK     0x40
>  #define SPECIAL_MASK   0x80
> @@ -509,7 +507,7 @@ void page_alloc_init_area(u8 n, uintptr_t base_pfn, uintptr_t top_pfn)
>                 return;
>         }
>  #ifdef AREA_HIGH_PFN
> -       __page_alloc_init_area(AREA_HIGH_NUMBER, AREA_HIGH_PFN), base_pfn, &top_pfn);
> +       __page_alloc_init_area(AREA_HIGH_NUMBER, AREA_HIGH_PFN, base_pfn, &top_pfn);
>  #endif
>         __page_alloc_init_area(AREA_NORMAL_NUMBER, AREA_NORMAL_PFN, base_pfn, &top_pfn);
>  #ifdef AREA_LOW_PFN
> --
> 2.26.2
>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [kvm-unit-tests PATCH v2 04/11] lib/asm: Fix definitions of memory areas
  2021-01-21  1:23   ` David Matlack
@ 2021-01-21  5:32     ` Thomas Huth
  2021-01-21  9:28     ` Claudio Imbrenda
  1 sibling, 0 replies; 20+ messages in thread
From: Thomas Huth @ 2021-01-21  5:32 UTC (permalink / raw)
  To: David Matlack, Claudio Imbrenda
  Cc: kvm list, frankja, David Hildenbrand, Paolo Bonzini, cohuck,
	Laurent Vivier, nadav.amit, krish.sadhukhan

On 21/01/2021 02.23, David Matlack wrote:
> Hi Claudio,
> 
> On Fri, Jan 15, 2021 at 8:07 AM Claudio Imbrenda <imbrenda@linux.ibm.com> wrote:
>>
>> Fix the definitions of the memory areas.
> 
> The test x86/smap.flat started failing for me at this commit. I'm
> testing on Linux 5.7.17.

FWIW, seems like this was also happening on Travis, just before it became 
useless. Build #15 is still fine:

  https://travis-ci.com/gitlab/kvm-unit-tests/kvm-unit-tests/builds/213551621

But build #18 reported a broken smap test:

  https://travis-ci.com/gitlab/kvm-unit-tests/kvm-unit-tests/builds/213564672

The job logs say that the builders there use kernel version 5.4.0-1034-gcp

  Thomas


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [kvm-unit-tests PATCH v2 04/11] lib/asm: Fix definitions of memory areas
  2021-01-21  1:23   ` David Matlack
  2021-01-21  5:32     ` Thomas Huth
@ 2021-01-21  9:28     ` Claudio Imbrenda
  1 sibling, 0 replies; 20+ messages in thread
From: Claudio Imbrenda @ 2021-01-21  9:28 UTC (permalink / raw)
  To: David Matlack
  Cc: kvm list, frankja, David Hildenbrand, Thomas Huth, Paolo Bonzini,
	cohuck, Laurent Vivier, nadav.amit, krish.sadhukhan

On Wed, 20 Jan 2021 17:23:33 -0800
David Matlack <dmatlack@google.com> wrote:

> Hi Claudio,
> 
> On Fri, Jan 15, 2021 at 8:07 AM Claudio Imbrenda
> <imbrenda@linux.ibm.com> wrote:
> >
> > Fix the definitions of the memory areas.  
> 
> The test x86/smap.flat started failing for me at this commit. I'm
> testing on Linux 5.7.17.
> 
> Here are the logs:

The test does not fail with the default configuration; maybe the
default configuration should be changed?

I would have noticed the issue otherwise.

After a quick look at the testcase itself, it's quite obvious to me
that it is broken: it completely relied on the page allocator never
touching memory above 16M.

This is not the case any more.

I have fixed the testcase; I'll post a patch.


> timeout -k 1s --foreground 90s /usr/bin/qemu-system-x86_64 --no-reboot
> -nodefaults -device pc-testdev -device
> isa-debug-exit,iobase=0xf4,iosize=0x4 -vnc none -serial stdio -device
> pci-testdev -machine accel=kvm -kernel x86/smap.flat -smp 1 -cpu host
> # -initrd /tmp/tmp.s4WKgsHkOh
> enabling apic
> paging enabled
> cr0 = 80010011
> cr3 = 1007000
> cr4 = 20
> testing without INVLPG
> PASS: write to supervisor page
> PASS: read from user page with AC=1
> PASS: read from user page with AC=0
> FAIL: write to user page with AC=1
> FAIL: read from user page with AC=0
> FAIL: write to user stack with AC=1
> FAIL: write to user stack with AC=0
> Unhandled exception 6 #UD at ip 0000000001800003
> error_code=0000      rflags=00010082      cs=00000008
> rax=000000000000000a rcx=00000000000003fd rdx=00000000000003f8
> rbx=0000000000000000
> rbp=0000000000517700 rsi=0000000000416422 rdi=0000000000000000
>  r8=0000000000416422  r9=00000000000003f8 r10=000000000000000d
> r11=000ffffffffff000
> r12=0000000000000000 r13=0000000001418700 r14=0000000000000000
> r15=0000000000000000
> cr0=0000000080010011 cr2=00000000015176d8 cr3=0000000001007000
> cr4=0000000000200020
> cr8=0000000000000000
> STACK: @1800003 400368
> b'0x0000000001800003: ?? ??:0'
> 
> 
> 
> >
> > Bring the headers in line with the rest of the asm headers, by
> > having the appropriate #ifdef _ASM$ARCH_ guarding the headers.
> >
> > Fixes: d74708246bd9 ("lib/asm: Add definitions of memory areas")
> >
> > Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> > Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
> > ---
> >  lib/asm-generic/memory_areas.h |  9 ++++-----
> >  lib/arm/asm/memory_areas.h     | 11 +++--------
> >  lib/arm64/asm/memory_areas.h   | 11 +++--------
> >  lib/powerpc/asm/memory_areas.h | 11 +++--------
> >  lib/ppc64/asm/memory_areas.h   | 11 +++--------
> >  lib/s390x/asm/memory_areas.h   | 13 ++++++-------
> >  lib/x86/asm/memory_areas.h     | 27 ++++++++++++++++-----------
> >  lib/alloc_page.h               |  3 +++
> >  lib/alloc_page.c               |  4 +---
> >  9 files changed, 42 insertions(+), 58 deletions(-)
> >
> > diff --git a/lib/asm-generic/memory_areas.h
> > b/lib/asm-generic/memory_areas.h index 927baa7..3074afe 100644
> > --- a/lib/asm-generic/memory_areas.h
> > +++ b/lib/asm-generic/memory_areas.h
> > @@ -1,11 +1,10 @@
> > -#ifndef MEMORY_AREAS_H
> > -#define MEMORY_AREAS_H
> > +#ifndef __ASM_GENERIC_MEMORY_AREAS_H__
> > +#define __ASM_GENERIC_MEMORY_AREAS_H__
> >
> >  #define AREA_NORMAL_PFN 0
> >  #define AREA_NORMAL_NUMBER 0
> > -#define AREA_NORMAL 1
> > +#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
> >
> > -#define AREA_ANY -1
> > -#define AREA_ANY_NUMBER 0xff
> > +#define MAX_AREAS 1
> >
> >  #endif
> > diff --git a/lib/arm/asm/memory_areas.h b/lib/arm/asm/memory_areas.h
> > index 927baa7..c723310 100644
> > --- a/lib/arm/asm/memory_areas.h
> > +++ b/lib/arm/asm/memory_areas.h
> > @@ -1,11 +1,6 @@
> > -#ifndef MEMORY_AREAS_H
> > -#define MEMORY_AREAS_H
> > +#ifndef _ASMARM_MEMORY_AREAS_H_
> > +#define _ASMARM_MEMORY_AREAS_H_
> >
> > -#define AREA_NORMAL_PFN 0
> > -#define AREA_NORMAL_NUMBER 0
> > -#define AREA_NORMAL 1
> > -
> > -#define AREA_ANY -1
> > -#define AREA_ANY_NUMBER 0xff
> > +#include <asm-generic/memory_areas.h>
> >
> >  #endif
> > diff --git a/lib/arm64/asm/memory_areas.h
> > b/lib/arm64/asm/memory_areas.h index 927baa7..18e8ca8 100644
> > --- a/lib/arm64/asm/memory_areas.h
> > +++ b/lib/arm64/asm/memory_areas.h
> > @@ -1,11 +1,6 @@
> > -#ifndef MEMORY_AREAS_H
> > -#define MEMORY_AREAS_H
> > +#ifndef _ASMARM64_MEMORY_AREAS_H_
> > +#define _ASMARM64_MEMORY_AREAS_H_
> >
> > -#define AREA_NORMAL_PFN 0
> > -#define AREA_NORMAL_NUMBER 0
> > -#define AREA_NORMAL 1
> > -
> > -#define AREA_ANY -1
> > -#define AREA_ANY_NUMBER 0xff
> > +#include <asm-generic/memory_areas.h>
> >
> >  #endif
> > diff --git a/lib/powerpc/asm/memory_areas.h
> > b/lib/powerpc/asm/memory_areas.h index 927baa7..76d1738 100644
> > --- a/lib/powerpc/asm/memory_areas.h
> > +++ b/lib/powerpc/asm/memory_areas.h
> > @@ -1,11 +1,6 @@
> > -#ifndef MEMORY_AREAS_H
> > -#define MEMORY_AREAS_H
> > +#ifndef _ASMPOWERPC_MEMORY_AREAS_H_
> > +#define _ASMPOWERPC_MEMORY_AREAS_H_
> >
> > -#define AREA_NORMAL_PFN 0
> > -#define AREA_NORMAL_NUMBER 0
> > -#define AREA_NORMAL 1
> > -
> > -#define AREA_ANY -1
> > -#define AREA_ANY_NUMBER 0xff
> > +#include <asm-generic/memory_areas.h>
> >
> >  #endif
> > diff --git a/lib/ppc64/asm/memory_areas.h
> > b/lib/ppc64/asm/memory_areas.h index 927baa7..b9fd46b 100644
> > --- a/lib/ppc64/asm/memory_areas.h
> > +++ b/lib/ppc64/asm/memory_areas.h
> > @@ -1,11 +1,6 @@
> > -#ifndef MEMORY_AREAS_H
> > -#define MEMORY_AREAS_H
> > +#ifndef _ASMPPC64_MEMORY_AREAS_H_
> > +#define _ASMPPC64_MEMORY_AREAS_H_
> >
> > -#define AREA_NORMAL_PFN 0
> > -#define AREA_NORMAL_NUMBER 0
> > -#define AREA_NORMAL 1
> > -
> > -#define AREA_ANY -1
> > -#define AREA_ANY_NUMBER 0xff
> > +#include <asm-generic/memory_areas.h>
> >
> >  #endif
> > diff --git a/lib/s390x/asm/memory_areas.h
> > b/lib/s390x/asm/memory_areas.h index 4856a27..827bfb3 100644
> > --- a/lib/s390x/asm/memory_areas.h
> > +++ b/lib/s390x/asm/memory_areas.h
> > @@ -1,16 +1,15 @@
> > -#ifndef MEMORY_AREAS_H
> > -#define MEMORY_AREAS_H
> > +#ifndef _ASMS390X_MEMORY_AREAS_H_
> > +#define _ASMS390X_MEMORY_AREAS_H_
> >
> > -#define AREA_NORMAL_PFN BIT(31-12)
> > +#define AREA_NORMAL_PFN (1 << 19)
> >  #define AREA_NORMAL_NUMBER 0
> > -#define AREA_NORMAL 1
> > +#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
> >
> >  #define AREA_LOW_PFN 0
> >  #define AREA_LOW_NUMBER 1
> > -#define AREA_LOW 2
> > +#define AREA_LOW (1 << AREA_LOW_NUMBER)
> >
> > -#define AREA_ANY -1
> > -#define AREA_ANY_NUMBER 0xff
> > +#define MAX_AREAS 2
> >
> >  #define AREA_DMA31 AREA_LOW
> >
> > diff --git a/lib/x86/asm/memory_areas.h b/lib/x86/asm/memory_areas.h
> > index 952f5bd..e84016f 100644
> > --- a/lib/x86/asm/memory_areas.h
> > +++ b/lib/x86/asm/memory_areas.h
> > @@ -1,21 +1,26 @@
> > -#ifndef MEMORY_AREAS_H
> > -#define MEMORY_AREAS_H
> > +#ifndef _ASM_X86_MEMORY_AREAS_H_
> > +#define _ASM_X86_MEMORY_AREAS_H_
> >
> >  #define AREA_NORMAL_PFN BIT(36-12)
> >  #define AREA_NORMAL_NUMBER 0
> > -#define AREA_NORMAL 1
> > +#define AREA_NORMAL (1 << AREA_NORMAL_NUMBER)
> >
> > -#define AREA_PAE_HIGH_PFN BIT(32-12)
> > -#define AREA_PAE_HIGH_NUMBER 1
> > -#define AREA_PAE_HIGH 2
> > +#define AREA_HIGH_PFN BIT(32-12)
> > +#define AREA_HIGH_NUMBER 1
> > +#define AREA_HIGH (1 << AREA_HIGH_NUMBER)
> >
> > -#define AREA_LOW_PFN 0
> > +#define AREA_LOW_PFN BIT(24-12)
> >  #define AREA_LOW_NUMBER 2
> > -#define AREA_LOW 4
> > +#define AREA_LOW (1 << AREA_LOW_NUMBER)
> >
> > -#define AREA_PAE (AREA_PAE | AREA_LOW)
> > +#define AREA_LOWEST_PFN 0
> > +#define AREA_LOWEST_NUMBER 3
> > +#define AREA_LOWEST (1 << AREA_LOWEST_NUMBER)
> >
> > -#define AREA_ANY -1
> > -#define AREA_ANY_NUMBER 0xff
> > +#define MAX_AREAS 4
> > +
> > +#define AREA_DMA24 AREA_LOWEST
> > +#define AREA_DMA32 (AREA_LOWEST | AREA_LOW)
> > +#define AREA_PAE36 (AREA_LOWEST | AREA_LOW | AREA_HIGH)
> >
> >  #endif
> > diff --git a/lib/alloc_page.h b/lib/alloc_page.h
> > index 816ff5d..b6aace5 100644
> > --- a/lib/alloc_page.h
> > +++ b/lib/alloc_page.h
> > @@ -10,6 +10,9 @@
> >
> >  #include <asm/memory_areas.h>
> >
> > +#define AREA_ANY -1
> > +#define AREA_ANY_NUMBER 0xff
> > +
> >  /* Returns true if the page allocator has been initialized */
> >  bool page_alloc_initialized(void);
> >
> > diff --git a/lib/alloc_page.c b/lib/alloc_page.c
> > index 685ab1e..ed0ff02 100644
> > --- a/lib/alloc_page.c
> > +++ b/lib/alloc_page.c
> > @@ -19,8 +19,6 @@
> >  #define NLISTS ((BITS_PER_LONG) - (PAGE_SHIFT))
> >  #define PFN(x) ((uintptr_t)(x) >> PAGE_SHIFT)
> >
> > -#define MAX_AREAS      6
> > -
> >  #define ORDER_MASK     0x3f
> >  #define ALLOC_MASK     0x40
> >  #define SPECIAL_MASK   0x80
> > @@ -509,7 +507,7 @@ void page_alloc_init_area(u8 n, uintptr_t
> > base_pfn, uintptr_t top_pfn) return;
> >         }
> >  #ifdef AREA_HIGH_PFN
> > -       __page_alloc_init_area(AREA_HIGH_NUMBER, AREA_HIGH_PFN),
> > base_pfn, &top_pfn);
> > +       __page_alloc_init_area(AREA_HIGH_NUMBER, AREA_HIGH_PFN,
> > base_pfn, &top_pfn); #endif
> >         __page_alloc_init_area(AREA_NORMAL_NUMBER, AREA_NORMAL_PFN,
> > base_pfn, &top_pfn); #ifdef AREA_LOW_PFN
> > --
> > 2.26.2
> >  


^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2021-01-21  9:39 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-01-15 12:37 [kvm-unit-tests PATCH v2 00/11] Fix and improve the page allocator Claudio Imbrenda
2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 01/11] lib/x86: fix page.h to include the generic header Claudio Imbrenda
2021-01-19 15:12   ` Janosch Frank
2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 02/11] lib/list.h: add list_add_tail Claudio Imbrenda
2021-01-19 15:18   ` Janosch Frank
2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 03/11] lib/vmalloc: add some asserts and improvements Claudio Imbrenda
2021-01-19 15:26   ` Janosch Frank
2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 04/11] lib/asm: Fix definitions of memory areas Claudio Imbrenda
2021-01-19 15:33   ` Janosch Frank
2021-01-19 17:05     ` Claudio Imbrenda
2021-01-21  1:23   ` David Matlack
2021-01-21  5:32     ` Thomas Huth
2021-01-21  9:28     ` Claudio Imbrenda
2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 05/11] lib/alloc_page: fix and improve the page allocator Claudio Imbrenda
2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 06/11] lib/alloc.h: remove align_min from struct alloc_ops Claudio Imbrenda
2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 07/11] lib/alloc_page: Optimization to skip known empty freelists Claudio Imbrenda
2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 08/11] lib/alloc_page: rework metadata format Claudio Imbrenda
2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 09/11] lib/alloc: replace areas with more generic flags Claudio Imbrenda
2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 10/11] lib/alloc_page: Wire up FLAG_DONTZERO Claudio Imbrenda
2021-01-15 12:37 ` [kvm-unit-tests PATCH v2 11/11] lib/alloc_page: Properly handle requests for fresh blocks Claudio Imbrenda

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).