kvm.vger.kernel.org archive mirror
* [kvm-unit-tests PATCH v1 0/4] More lib/alloc cleanup and a minor improvement
@ 2020-07-03 11:51 Claudio Imbrenda
  2020-07-03 11:51 ` [kvm-unit-tests PATCH v1 1/4] lib/vmalloc: fix pages count local variable to be size_t Claudio Imbrenda
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Claudio Imbrenda @ 2020-07-03 11:51 UTC (permalink / raw)
  To: kvm, pbonzini; +Cc: frankja, thuth, david

Some more cleanup of lib/alloc in preparation for upcoming changes.

The first real feature: allow aligned virtual allocations with an
alignment greater than one page.

Also export a function for allocating aligned, non-backed virtual pages.

Claudio Imbrenda (4):
  lib/vmalloc: fix pages count local variable to be size_t
  lib/alloc_page: change some parameter types
  lib/alloc_page: move get_order and is_power_of_2 to bitops.h
  lib/vmalloc: allow vm_memalign with alignment > PAGE_SIZE

 lib/alloc_page.h |  7 +++----
 lib/bitops.h     | 10 ++++++++++
 lib/libcflat.h   |  5 -----
 lib/vmalloc.h    |  3 +++
 lib/alloc.c      |  1 +
 lib/alloc_page.c | 13 ++++---------
 lib/vmalloc.c    | 42 +++++++++++++++++++++++++++++++++---------
 7 files changed, 54 insertions(+), 27 deletions(-)

-- 
2.26.2


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [kvm-unit-tests PATCH v1 1/4] lib/vmalloc: fix pages count local variable to be size_t
  2020-07-03 11:51 [kvm-unit-tests PATCH v1 0/4] More lib/alloc cleanup and a minor improvement Claudio Imbrenda
@ 2020-07-03 11:51 ` Claudio Imbrenda
  2020-07-03 11:51 ` [kvm-unit-tests PATCH v1 2/4] lib/alloc_page: change some parameter types Claudio Imbrenda
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Claudio Imbrenda @ 2020-07-03 11:51 UTC (permalink / raw)
  To: kvm, pbonzini; +Cc: frankja, thuth, david

Since size is of type size_t, size >> PAGE_SHIFT can still exceed the
range of a plain unsigned int.
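For illustration only (not part of the patch): on an LP64 target, a
hypothetical mapping of 2^45 bytes shows how the narrower type silently
truncates the page count.

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SHIFT 12

/* Buggy variant: the page count is silently narrowed to unsigned int. */
static unsigned int page_count_narrow(size_t size)
{
	return size >> PAGE_SHIFT;
}

/* Fixed variant: the page count keeps the full width of size_t. */
static size_t page_count_wide(size_t size)
{
	return size >> PAGE_SHIFT;
}
```

With a 32 TiB size, the wide variant yields 2^33 pages while the narrow
one wraps around, which is exactly the class of bug the size_t change
avoids.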

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 lib/vmalloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/vmalloc.c b/lib/vmalloc.c
index 10f15af..9237a0f 100644
--- a/lib/vmalloc.c
+++ b/lib/vmalloc.c
@@ -40,7 +40,7 @@ void *alloc_vpage(void)
 void *vmap(phys_addr_t phys, size_t size)
 {
 	void *mem, *p;
-	unsigned pages;
+	size_t pages;
 
 	size = PAGE_ALIGN(size);
 	pages = size / PAGE_SIZE;
@@ -58,7 +58,7 @@ void *vmap(phys_addr_t phys, size_t size)
 static void *vm_memalign(size_t alignment, size_t size)
 {
 	void *mem, *p;
-	unsigned pages;
+	size_t pages;
 
 	assert(alignment <= PAGE_SIZE);
 	size = PAGE_ALIGN(size);
-- 
2.26.2



* [kvm-unit-tests PATCH v1 2/4] lib/alloc_page: change some parameter types
  2020-07-03 11:51 [kvm-unit-tests PATCH v1 0/4] More lib/alloc cleanup and a minor improvement Claudio Imbrenda
  2020-07-03 11:51 ` [kvm-unit-tests PATCH v1 1/4] lib/vmalloc: fix pages count local variable to be size_t Claudio Imbrenda
@ 2020-07-03 11:51 ` Claudio Imbrenda
  2020-07-03 11:51 ` [kvm-unit-tests PATCH v1 3/4] lib/alloc_page: move get_order and is_power_of_2 to bitops.h Claudio Imbrenda
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Claudio Imbrenda @ 2020-07-03 11:51 UTC (permalink / raw)
  To: kvm, pbonzini; +Cc: frankja, thuth, david

For size parameters, size_t is semantically more appropriate than
unsigned long, even though the two types have the same width on the
targets we support.

For order parameters, unsigned long is wider than needed; get_order
returns an unsigned int anyway.
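A minimal sketch of the type reasoning (assuming an LP64 target, as
used by these tests; the helper below is illustrative, not from the
patch):

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

/* On LP64 targets, size_t and unsigned long have the same width, so the
 * switch to size_t is about expressing intent, not representation. */
_Static_assert(sizeof(size_t) == sizeof(unsigned long),
	       "size_t and unsigned long have the same width here");

/* An order is bounded by the number of address bits, so it always fits
 * comfortably in an unsigned int. */
static unsigned int max_possible_order(void)
{
	return sizeof(void *) * CHAR_BIT - 1;	/* 63 on a 64-bit target */
}
```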

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 lib/alloc_page.h | 6 +++---
 lib/alloc_page.c | 8 ++++----
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/lib/alloc_page.h b/lib/alloc_page.h
index 6181299..d9aceb7 100644
--- a/lib/alloc_page.h
+++ b/lib/alloc_page.h
@@ -11,10 +11,10 @@
 bool page_alloc_initialized(void);
 void page_alloc_ops_enable(void);
 void *alloc_page(void);
-void *alloc_pages(unsigned long order);
+void *alloc_pages(unsigned int order);
 void free_page(void *page);
-void free_pages(void *mem, unsigned long size);
-void free_pages_by_order(void *mem, unsigned long order);
+void free_pages(void *mem, size_t size);
+void free_pages_by_order(void *mem, unsigned int order);
 unsigned int get_order(size_t size);
 
 #endif
diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 8769c3f..f16eaad 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -21,7 +21,7 @@ bool page_alloc_initialized(void)
 	return freelist != 0;
 }
 
-void free_pages(void *mem, unsigned long size)
+void free_pages(void *mem, size_t size)
 {
 	void *old_freelist;
 	void *end;
@@ -53,7 +53,7 @@ void free_pages(void *mem, unsigned long size)
 	spin_unlock(&lock);
 }
 
-void free_pages_by_order(void *mem, unsigned long order)
+void free_pages_by_order(void *mem, unsigned int order)
 {
 	free_pages(mem, 1ul << (order + PAGE_SHIFT));
 }
@@ -79,7 +79,7 @@ void *alloc_page()
  * Allocates (1 << order) physically contiguous and naturally aligned pages.
  * Returns NULL if there's no memory left.
  */
-void *alloc_pages(unsigned long order)
+void *alloc_pages(unsigned int order)
 {
 	/* Generic list traversal. */
 	void *prev;
@@ -150,7 +150,7 @@ void free_page(void *page)
 static void *page_memalign(size_t alignment, size_t size)
 {
 	unsigned long n = ALIGN(size, PAGE_SIZE) >> PAGE_SHIFT;
-	unsigned long order;
+	unsigned int order;
 
 	if (!size)
 		return NULL;
-- 
2.26.2



* [kvm-unit-tests PATCH v1 3/4] lib/alloc_page: move get_order and is_power_of_2 to bitops.h
  2020-07-03 11:51 [kvm-unit-tests PATCH v1 0/4] More lib/alloc cleanup and a minor improvement Claudio Imbrenda
  2020-07-03 11:51 ` [kvm-unit-tests PATCH v1 1/4] lib/vmalloc: fix pages count local variable to be size_t Claudio Imbrenda
  2020-07-03 11:51 ` [kvm-unit-tests PATCH v1 2/4] lib/alloc_page: change some parameter types Claudio Imbrenda
@ 2020-07-03 11:51 ` Claudio Imbrenda
  2020-07-03 11:51 ` [kvm-unit-tests PATCH v1 4/4] lib/vmalloc: allow vm_memalign with alignment > PAGE_SIZE Claudio Imbrenda
  2020-07-03 12:30 ` [kvm-unit-tests PATCH v1 0/4] More lib/alloc cleanup and a minor improvement Andrew Jones
  4 siblings, 0 replies; 9+ messages in thread
From: Claudio Imbrenda @ 2020-07-03 11:51 UTC (permalink / raw)
  To: kvm, pbonzini; +Cc: frankja, thuth, david

The functions get_order and is_power_of_2 are simple enough to belong
in a header, like the similar small helpers already in bitops.h.

Since they are bit manipulation functions, bitops.h is the logical
place for them.
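Their semantics can be pinned down with a standalone sketch (fls is
reimplemented here with a gcc/clang builtin purely for illustration;
the library has its own definition):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Index of the most significant set bit; fls(1) == 0, as in lib/bitops.h. */
static unsigned long fls(unsigned long word)
{
	return 8 * sizeof(word) - 1 - __builtin_clzl(word);
}

static bool is_power_of_2(unsigned long n)
{
	return n && !(n & (n - 1));
}

/* Smallest order such that (1ul << order) >= size; get_order(0) == 0. */
static unsigned int get_order(size_t size)
{
	return size ? fls(size) + !is_power_of_2(size) : 0;
}
```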

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 lib/alloc_page.h |  1 -
 lib/bitops.h     | 10 ++++++++++
 lib/libcflat.h   |  5 -----
 lib/alloc.c      |  1 +
 lib/alloc_page.c |  5 -----
 5 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/lib/alloc_page.h b/lib/alloc_page.h
index d9aceb7..88540d1 100644
--- a/lib/alloc_page.h
+++ b/lib/alloc_page.h
@@ -15,6 +15,5 @@ void *alloc_pages(unsigned int order);
 void free_page(void *page);
 void free_pages(void *mem, size_t size);
 void free_pages_by_order(void *mem, unsigned int order);
-unsigned int get_order(size_t size);
 
 #endif
diff --git a/lib/bitops.h b/lib/bitops.h
index b310a22..308aa86 100644
--- a/lib/bitops.h
+++ b/lib/bitops.h
@@ -74,4 +74,14 @@ static inline unsigned long fls(unsigned long word)
 }
 #endif
 
+static inline bool is_power_of_2(unsigned long n)
+{
+	return n && !(n & (n - 1));
+}
+
+static inline unsigned int get_order(size_t size)
+{
+	return size ? fls(size) + !is_power_of_2(size) : 0;
+}
+
 #endif
diff --git a/lib/libcflat.h b/lib/libcflat.h
index 7092af2..ec0f58b 100644
--- a/lib/libcflat.h
+++ b/lib/libcflat.h
@@ -147,11 +147,6 @@ do {									\
 	}								\
 } while (0)
 
-static inline bool is_power_of_2(unsigned long n)
-{
-	return n && !(n & (n - 1));
-}
-
 /*
  * One byte per bit, a ' between each group of 4 bits, and a null terminator.
  */
diff --git a/lib/alloc.c b/lib/alloc.c
index 6c89f98..9d89d24 100644
--- a/lib/alloc.c
+++ b/lib/alloc.c
@@ -1,4 +1,5 @@
 #include "alloc.h"
+#include "bitops.h"
 #include "asm/page.h"
 #include "bitops.h"
 
diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index f16eaad..fa3c527 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -175,8 +175,3 @@ void page_alloc_ops_enable(void)
 {
 	alloc_ops = &page_alloc_ops;
 }
-
-unsigned int get_order(size_t size)
-{
-	return is_power_of_2(size) ? fls(size) : fls(size) + 1;
-}
-- 
2.26.2



* [kvm-unit-tests PATCH v1 4/4] lib/vmalloc: allow vm_memalign with alignment > PAGE_SIZE
  2020-07-03 11:51 [kvm-unit-tests PATCH v1 0/4] More lib/alloc cleanup and a minor improvement Claudio Imbrenda
                   ` (2 preceding siblings ...)
  2020-07-03 11:51 ` [kvm-unit-tests PATCH v1 3/4] lib/alloc_page: move get_order and is_power_of_2 to bitops.h Claudio Imbrenda
@ 2020-07-03 11:51 ` Claudio Imbrenda
  2020-07-03 12:30   ` Andrew Jones
  2020-07-03 12:30 ` [kvm-unit-tests PATCH v1 0/4] More lib/alloc cleanup and a minor improvement Andrew Jones
  4 siblings, 1 reply; 9+ messages in thread
From: Claudio Imbrenda @ 2020-07-03 11:51 UTC (permalink / raw)
  To: kvm, pbonzini; +Cc: frankja, thuth, david

Allow allocating aligned virtual memory with an alignment larger than
one page.

Add a check that the backing pages were actually allocated.

Export the alloc_vpages_aligned function so that users can allocate
aligned, non-backed virtual addresses.
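The core trick in the patch is aligning the top-down allocation cursor
with a mask. A self-contained sketch (GENMASK_ULL reproduced from the
usual kernel definition; order_of is a hypothetical stand-in for
get_order on power-of-two inputs):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1ul << PAGE_SHIFT)

/* Bits h..l set, as in the kernel's bit helpers. */
#define GENMASK_ULL(h, l) \
	(((~0ull) << (l)) & (~0ull >> (63 - (h))))

/* log2 of a power-of-two page count; mirrors get_order for such inputs. */
static unsigned int order_of(unsigned int alignment_pages)
{
	unsigned int order = 0;

	while ((1u << order) < alignment_pages)
		order++;
	return order;
}

/* Carve nr pages off the top of the free area, then round the result
 * down so it is aligned to alignment_pages pages. */
static uintptr_t carve_aligned(uintptr_t vfree_top, unsigned long nr,
			       unsigned int alignment_pages)
{
	uintptr_t ptr = vfree_top - PAGE_SIZE * nr;

	ptr &= GENMASK_ULL(63, PAGE_SHIFT + order_of(alignment_pages));
	return ptr;
}
```

Rounding down can only grow the gap between the returned pointer and
the previous top of the free area, so the nr requested pages always fit
above the new cursor.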

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 lib/vmalloc.h |  3 +++
 lib/vmalloc.c | 40 ++++++++++++++++++++++++++++++++--------
 2 files changed, 35 insertions(+), 8 deletions(-)

diff --git a/lib/vmalloc.h b/lib/vmalloc.h
index 2b563f4..fa3fa22 100644
--- a/lib/vmalloc.h
+++ b/lib/vmalloc.h
@@ -5,6 +5,9 @@
 
 /* Allocate consecutive virtual pages (without backing) */
 extern void *alloc_vpages(ulong nr);
+/* Allocate consecutive and aligned virtual pages (without backing) */
+extern void *alloc_vpages_aligned(ulong nr, unsigned int alignment_pages);
+
 /* Allocate one virtual page (without backing) */
 extern void *alloc_vpage(void);
 /* Set the top of the virtual address space */
diff --git a/lib/vmalloc.c b/lib/vmalloc.c
index 9237a0f..2c482aa 100644
--- a/lib/vmalloc.c
+++ b/lib/vmalloc.c
@@ -12,19 +12,28 @@
 #include "alloc.h"
 #include "alloc_phys.h"
 #include "alloc_page.h"
+#include <bitops.h>
 #include "vmalloc.h"
 
 static struct spinlock lock;
 static void *vfree_top = 0;
 static void *page_root;
 
-void *alloc_vpages(ulong nr)
+/*
+ * Allocate a certain number of pages from the virtual address space (without
+ * physical backing).
+ *
+ * nr is the number of pages to allocate
+ * alignment_pages is the alignment of the allocation *in pages*
+ */
+static void *alloc_vpages_intern(ulong nr, unsigned int alignment_pages)
 {
 	uintptr_t ptr;
 
 	spin_lock(&lock);
 	ptr = (uintptr_t)vfree_top;
 	ptr -= PAGE_SIZE * nr;
+	ptr &= GENMASK_ULL(63, PAGE_SHIFT + get_order(alignment_pages));
 	vfree_top = (void *)ptr;
 	spin_unlock(&lock);
 
@@ -32,6 +41,16 @@ void *alloc_vpages(ulong nr)
 	return (void *)ptr;
 }
 
+void *alloc_vpages(ulong nr)
+{
+	return alloc_vpages_intern(nr, 1);
+}
+
+void *alloc_vpages_aligned(ulong nr, unsigned int alignment_pages)
+{
+	return alloc_vpages_intern(nr, alignment_pages);
+}
+
 void *alloc_vpage(void)
 {
 	return alloc_vpages(1);
@@ -55,17 +74,22 @@ void *vmap(phys_addr_t phys, size_t size)
 	return mem;
 }
 
+/*
+ * Allocate virtual memory, with the specified minimum alignment.
+ */
 static void *vm_memalign(size_t alignment, size_t size)
 {
+	phys_addr_t pa;
 	void *mem, *p;
-	size_t pages;
 
-	assert(alignment <= PAGE_SIZE);
-	size = PAGE_ALIGN(size);
-	pages = size / PAGE_SIZE;
-	mem = p = alloc_vpages(pages);
-	while (pages--) {
-		phys_addr_t pa = virt_to_phys(alloc_page());
+	assert(is_power_of_2(alignment));
+
+	size = PAGE_ALIGN(size) / PAGE_SIZE;
+	alignment = PAGE_ALIGN(alignment) / PAGE_SIZE;
+	mem = p = alloc_vpages_intern(size, alignment);
+	while (size--) {
+		pa = virt_to_phys(alloc_page());
+		assert(pa);
 		install_page(page_root, pa, p);
 		p += PAGE_SIZE;
 	}
-- 
2.26.2



* Re: [kvm-unit-tests PATCH v1 4/4] lib/vmalloc: allow vm_memalign with alignment > PAGE_SIZE
  2020-07-03 11:51 ` [kvm-unit-tests PATCH v1 4/4] lib/vmalloc: allow vm_memalign with alignment > PAGE_SIZE Claudio Imbrenda
@ 2020-07-03 12:30   ` Andrew Jones
  2020-07-03 13:49     ` Claudio Imbrenda
  0 siblings, 1 reply; 9+ messages in thread
From: Andrew Jones @ 2020-07-03 12:30 UTC (permalink / raw)
  To: Claudio Imbrenda; +Cc: kvm, pbonzini, frankja, thuth, david

On Fri, Jul 03, 2020 at 01:51:09PM +0200, Claudio Imbrenda wrote:
> Allow allocating aligned virtual memory with alignment larger than only
> one page.
> 
> Add a check that the backing pages were actually allocated.
> 
> Export the alloc_vpages_aligned function to allow users to allocate
> non-backed aligned virtual addresses.
> 
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> ---
>  lib/vmalloc.h |  3 +++
>  lib/vmalloc.c | 40 ++++++++++++++++++++++++++++++++--------
>  2 files changed, 35 insertions(+), 8 deletions(-)
> 
> diff --git a/lib/vmalloc.h b/lib/vmalloc.h
> index 2b563f4..fa3fa22 100644
> --- a/lib/vmalloc.h
> +++ b/lib/vmalloc.h
> @@ -5,6 +5,9 @@
>  
>  /* Allocate consecutive virtual pages (without backing) */
>  extern void *alloc_vpages(ulong nr);
> +/* Allocate consecutive and aligned virtual pages (without backing) */
> +extern void *alloc_vpages_aligned(ulong nr, unsigned int alignment_pages);
> +
>  /* Allocate one virtual page (without backing) */
>  extern void *alloc_vpage(void);
>  /* Set the top of the virtual address space */
> diff --git a/lib/vmalloc.c b/lib/vmalloc.c
> index 9237a0f..2c482aa 100644
> --- a/lib/vmalloc.c
> +++ b/lib/vmalloc.c
> @@ -12,19 +12,28 @@
>  #include "alloc.h"
>  #include "alloc_phys.h"
>  #include "alloc_page.h"
> +#include <bitops.h>
>  #include "vmalloc.h"
>  
>  static struct spinlock lock;
>  static void *vfree_top = 0;
>  static void *page_root;
>  
> -void *alloc_vpages(ulong nr)
> +/*
> + * Allocate a certain number of pages from the virtual address space (without
> + * physical backing).
> + *
> + * nr is the number of pages to allocate
> + * alignment_pages is the alignment of the allocation *in pages*
> + */
> +static void *alloc_vpages_intern(ulong nr, unsigned int alignment_pages)

This helper function isn't necessary. Just introduce
alloc_vpages_aligned() and then call alloc_vpages_aligned(nr, 1) from
alloc_vpages().

>  {
>  	uintptr_t ptr;
>  
>  	spin_lock(&lock);
>  	ptr = (uintptr_t)vfree_top;
>  	ptr -= PAGE_SIZE * nr;
> +	ptr &= GENMASK_ULL(63, PAGE_SHIFT + get_order(alignment_pages));
>  	vfree_top = (void *)ptr;
>  	spin_unlock(&lock);
>  
> @@ -32,6 +41,16 @@ void *alloc_vpages(ulong nr)
>  	return (void *)ptr;
>  }
>  
> +void *alloc_vpages(ulong nr)
> +{
> +	return alloc_vpages_intern(nr, 1);
> +}
> +
> +void *alloc_vpages_aligned(ulong nr, unsigned int alignment_pages)
> +{
> +	return alloc_vpages_intern(nr, alignment_pages);
> +}
> +
>  void *alloc_vpage(void)
>  {
>  	return alloc_vpages(1);
> @@ -55,17 +74,22 @@ void *vmap(phys_addr_t phys, size_t size)
>  	return mem;
>  }
>  
> +/*
> + * Allocate virtual memory, with the specified minimum alignment.
> + */
>  static void *vm_memalign(size_t alignment, size_t size)
>  {
> +	phys_addr_t pa;
>  	void *mem, *p;
> -	size_t pages;
>  
> -	assert(alignment <= PAGE_SIZE);
> -	size = PAGE_ALIGN(size);
> -	pages = size / PAGE_SIZE;
> -	mem = p = alloc_vpages(pages);
> -	while (pages--) {
> -		phys_addr_t pa = virt_to_phys(alloc_page());
> +	assert(is_power_of_2(alignment));
> +
> +	size = PAGE_ALIGN(size) / PAGE_SIZE;
> +	alignment = PAGE_ALIGN(alignment) / PAGE_SIZE;
> +	mem = p = alloc_vpages_intern(size, alignment);
> +	while (size--) {
> +		pa = virt_to_phys(alloc_page());
> +		assert(pa);
>  		install_page(page_root, pa, p);
>  		p += PAGE_SIZE;
>  	}
> -- 
> 2.26.2
>

Otherwise

Reviewed-by: Andrew Jones <drjones@redhat.com>

Thanks,
drew 



* Re: [kvm-unit-tests PATCH v1 0/4] More lib/alloc cleanup and a minor improvement
  2020-07-03 11:51 [kvm-unit-tests PATCH v1 0/4] More lib/alloc cleanup and a minor improvement Claudio Imbrenda
                   ` (3 preceding siblings ...)
  2020-07-03 11:51 ` [kvm-unit-tests PATCH v1 4/4] lib/vmalloc: allow vm_memalign with alignment > PAGE_SIZE Claudio Imbrenda
@ 2020-07-03 12:30 ` Andrew Jones
  4 siblings, 0 replies; 9+ messages in thread
From: Andrew Jones @ 2020-07-03 12:30 UTC (permalink / raw)
  To: Claudio Imbrenda; +Cc: kvm, pbonzini, frankja, thuth, david

On Fri, Jul 03, 2020 at 01:51:05PM +0200, Claudio Imbrenda wrote:
> Some more cleanup of lib/alloc in light of upcoming changes
> 
> The first real feature: allow aligned virtual allocations with
> alignment greater than one page.
> 
> Also export a function for allocating aligned non-backed virtual pages.
> 
> Claudio Imbrenda (4):
>   lib/vmalloc: fix pages count local variable to be size_t
>   lib/alloc_page: change some parameter types
>   lib/alloc_page: move get_order and is_power_of_2 to bitops.h
>   lib/vmalloc: allow vm_memalign with alignment > PAGE_SIZE
> 
>  lib/alloc_page.h |  7 +++----
>  lib/bitops.h     | 10 ++++++++++
>  lib/libcflat.h   |  5 -----
>  lib/vmalloc.h    |  3 +++
>  lib/alloc.c      |  1 +
>  lib/alloc_page.c | 13 ++++---------
>  lib/vmalloc.c    | 42 +++++++++++++++++++++++++++++++++---------
>  7 files changed, 54 insertions(+), 27 deletions(-)
> 
> -- 
> 2.26.2
>

For the series

Reviewed-by: Andrew Jones <drjones@redhat.com>



* Re: [kvm-unit-tests PATCH v1 4/4] lib/vmalloc: allow vm_memalign with alignment > PAGE_SIZE
  2020-07-03 12:30   ` Andrew Jones
@ 2020-07-03 13:49     ` Claudio Imbrenda
  2020-07-03 16:20       ` Paolo Bonzini
  0 siblings, 1 reply; 9+ messages in thread
From: Claudio Imbrenda @ 2020-07-03 13:49 UTC (permalink / raw)
  To: Andrew Jones; +Cc: kvm, pbonzini, frankja, thuth, david

On Fri, 3 Jul 2020 14:30:01 +0200
Andrew Jones <drjones@redhat.com> wrote:

[...]

> > -void *alloc_vpages(ulong nr)
> > +/*
> > + * Allocate a certain number of pages from the virtual address space (without
> > + * physical backing).
> > + *
> > + * nr is the number of pages to allocate
> > + * alignment_pages is the alignment of the allocation *in pages*
> > + */
> > +static void *alloc_vpages_intern(ulong nr, unsigned int alignment_pages)
> 
> This helper function isn't necessary. Just introduce
> alloc_vpages_aligned() and then call alloc_vpages_aligned(nr, 1) from
> alloc_vpages().

the helper will actually be useful in future patches.

maybe I should have written that in the patch description.

I can respin without helper if you prefer (and introduce it when
needed) or simply update the patch description.

> >  {
> >  	uintptr_t ptr;
> >  
> >  	spin_lock(&lock);
> >  	ptr = (uintptr_t)vfree_top;
> >  	ptr -= PAGE_SIZE * nr;
> > +	ptr &= GENMASK_ULL(63, PAGE_SHIFT + get_order(alignment_pages));
> > +	vfree_top = (void *)ptr;
> >  	spin_unlock(&lock);
> >  
> > @@ -32,6 +41,16 @@ void *alloc_vpages(ulong nr)
> >  	return (void *)ptr;
> >  }
> >  
> > +void *alloc_vpages(ulong nr)
> > +{
> > +	return alloc_vpages_intern(nr, 1);
> > +}
> > +
> > +void *alloc_vpages_aligned(ulong nr, unsigned int alignment_pages)
> > +{
> > +	return alloc_vpages_intern(nr, alignment_pages);
> > +}
> > +
> >  void *alloc_vpage(void)
> >  {
> >  	return alloc_vpages(1);
> > @@ -55,17 +74,22 @@ void *vmap(phys_addr_t phys, size_t size)
> >  	return mem;
> >  }
> >  
> > +/*
> > + * Allocate virtual memory, with the specified minimum alignment.
> > + */
> >  static void *vm_memalign(size_t alignment, size_t size)
> >  {
> > +	phys_addr_t pa;
> >  	void *mem, *p;
> > -	size_t pages;
> >  
> > -	assert(alignment <= PAGE_SIZE);
> > -	size = PAGE_ALIGN(size);
> > -	pages = size / PAGE_SIZE;
> > -	mem = p = alloc_vpages(pages);
> > -	while (pages--) {
> > -		phys_addr_t pa = virt_to_phys(alloc_page());
> > +	assert(is_power_of_2(alignment));
> > +
> > +	size = PAGE_ALIGN(size) / PAGE_SIZE;
> > +	alignment = PAGE_ALIGN(alignment) / PAGE_SIZE;
> > +	mem = p = alloc_vpages_intern(size, alignment);
> > +	while (size--) {
> > +		pa = virt_to_phys(alloc_page());
> > +		assert(pa);
> >  		install_page(page_root, pa, p);
> >  		p += PAGE_SIZE;
> >  	}
> > -- 
> > 2.26.2
> >  
> 
> Otherwise
> 
> Reviewed-by: Andrew Jones <drjones@redhat.com>
> 
> Thanks,
> drew 
> 



* Re: [kvm-unit-tests PATCH v1 4/4] lib/vmalloc: allow vm_memalign with alignment > PAGE_SIZE
  2020-07-03 13:49     ` Claudio Imbrenda
@ 2020-07-03 16:20       ` Paolo Bonzini
  0 siblings, 0 replies; 9+ messages in thread
From: Paolo Bonzini @ 2020-07-03 16:20 UTC (permalink / raw)
  To: Claudio Imbrenda, Andrew Jones; +Cc: kvm, frankja, thuth, david

On 03/07/20 15:49, Claudio Imbrenda wrote:
> On Fri, 3 Jul 2020 14:30:01 +0200
> Andrew Jones <drjones@redhat.com> wrote:
> 
> [...]
> 
>>> -void *alloc_vpages(ulong nr)
>>> +/*
>>> + * Allocate a certain number of pages from the virtual address space (without
>>> + * physical backing).
>>> + *
>>> + * nr is the number of pages to allocate
>>> + * alignment_pages is the alignment of the allocation *in pages*
>>> + */
>>> +static void *alloc_vpages_intern(ulong nr, unsigned int alignment_pages)
>>
>> This helper function isn't necessary. Just introduce
>> alloc_vpages_aligned() and then call alloc_vpages_aligned(nr, 1) from
>> alloc_vpages().
> 
> the helper will actually be useful in future patches.
> 
> maybe I should have written that in the patch description.
> 
> I can respin without helper if you prefer (and introduce it when
> needed) or simply update the patch description.

Would it make sense, for your future patches, to keep the helper (but
don't abbrev. "internal" :)) and make it get an order instead of the
number of pages?

Paolo


