* [Qemu-devel] [PATCH] exec: optimize phys_page_set_level
@ 2015-05-21 13:19 Paolo Bonzini
  2015-05-22  8:01 ` Stefan Hajnoczi
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Paolo Bonzini @ 2015-05-21 13:19 UTC (permalink / raw)
  To: qemu-devel; +Cc: stefanha, mst

phys_page_set_level is writing zeroes to a struct that has just been
filled in by phys_map_node_alloc.  Instead, tell phys_map_node_alloc
whether to fill in the page "as a leaf" or "as a non-leaf".

memcpy is faster than struct assignment, which copies each bitfield
individually.  Arguably a compiler bug, but memcpy is super-special
cased anyway so what could go wrong?

This cuts the cost of phys_page_set_level from 25% to 5% when
booting qboot.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 exec.c | 24 ++++++++++--------------
 1 file changed, 10 insertions(+), 14 deletions(-)

diff --git a/exec.c b/exec.c
index e19ab22..fc8d05d 100644
--- a/exec.c
+++ b/exec.c
@@ -173,17 +173,22 @@ static void phys_map_node_reserve(PhysPageMap *map, unsigned nodes)
     }
 }
 
-static uint32_t phys_map_node_alloc(PhysPageMap *map)
+static uint32_t phys_map_node_alloc(PhysPageMap *map, bool leaf)
 {
     unsigned i;
     uint32_t ret;
+    PhysPageEntry e;
+    PhysPageEntry *p;
 
     ret = map->nodes_nb++;
+    p = map->nodes[ret];
     assert(ret != PHYS_MAP_NODE_NIL);
     assert(ret != map->nodes_nb_alloc);
+
+    e.skip = leaf ? 0 : 1;
+    e.ptr = leaf ? PHYS_SECTION_UNASSIGNED : PHYS_MAP_NODE_NIL;
     for (i = 0; i < P_L2_SIZE; ++i) {
-        map->nodes[ret][i].skip = 1;
-        map->nodes[ret][i].ptr = PHYS_MAP_NODE_NIL;
+        memcpy(&p[i], &e, sizeof(e));
     }
     return ret;
 }
@@ -193,21 +198,12 @@ static void phys_page_set_level(PhysPageMap *map, PhysPageEntry *lp,
                                 int level)
 {
     PhysPageEntry *p;
-    int i;
     hwaddr step = (hwaddr)1 << (level * P_L2_BITS);
 
     if (lp->skip && lp->ptr == PHYS_MAP_NODE_NIL) {
-        lp->ptr = phys_map_node_alloc(map);
-        p = map->nodes[lp->ptr];
-        if (level == 0) {
-            for (i = 0; i < P_L2_SIZE; i++) {
-                p[i].skip = 0;
-                p[i].ptr = PHYS_SECTION_UNASSIGNED;
-            }
-        }
-    } else {
-        p = map->nodes[lp->ptr];
+        lp->ptr = phys_map_node_alloc(map, level == 0);
     }
+    p = map->nodes[lp->ptr];
     lp = &p[(*index >> (level * P_L2_BITS)) & (P_L2_SIZE - 1)];
 
     while (*nb && lp < &p[P_L2_SIZE]) {
-- 
2.4.1

^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [Qemu-devel] [PATCH] exec: optimize phys_page_set_level
  2015-05-21 13:19 [Qemu-devel] [PATCH] exec: optimize phys_page_set_level Paolo Bonzini
@ 2015-05-22  8:01 ` Stefan Hajnoczi
  2015-06-03  4:30 ` Richard Henderson
  2015-06-03 16:14 ` Michael S. Tsirkin
  2 siblings, 0 replies; 5+ messages in thread
From: Stefan Hajnoczi @ 2015-05-22  8:01 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel, mst


On Thu, May 21, 2015 at 03:19:58PM +0200, Paolo Bonzini wrote:
> phys_page_set_level is writing zeroes to a struct that has just been
> filled in by phys_map_node_alloc.  Instead, tell phys_map_node_alloc
> whether to fill in the page "as a leaf" or "as a non-leaf".
> 
> memcpy is faster than struct assignment, which copies each bitfield
> individually.  Arguably a compiler bug, but memcpy is super-special
> cased anyway so what could go wrong?
> 
> This cuts the cost of phys_page_set_level from 25% to 5% when
> booting qboot.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  exec.c | 24 ++++++++++--------------
>  1 file changed, 10 insertions(+), 14 deletions(-)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>



* Re: [Qemu-devel] [PATCH] exec: optimize phys_page_set_level
  2015-05-21 13:19 [Qemu-devel] [PATCH] exec: optimize phys_page_set_level Paolo Bonzini
  2015-05-22  8:01 ` Stefan Hajnoczi
@ 2015-06-03  4:30 ` Richard Henderson
  2015-06-03  7:03   ` Paolo Bonzini
  2015-06-03 16:14 ` Michael S. Tsirkin
  2 siblings, 1 reply; 5+ messages in thread
From: Richard Henderson @ 2015-06-03  4:30 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel; +Cc: stefanha, mst

On 05/21/2015 06:19 AM, Paolo Bonzini wrote:
> memcpy is faster than struct assignment, which copies each bitfield
> individually.  Arguably a compiler bug, but memcpy is super-special
> cased anyway so what could go wrong?
>

The compiler has the option of doing the copy either way.  Is there any
way to actually show that the small memcpy is faster?  That's one of those
things where I'm sure there was a cost calculation that said per-member
was better.



r~


* Re: [Qemu-devel] [PATCH] exec: optimize phys_page_set_level
  2015-06-03  4:30 ` Richard Henderson
@ 2015-06-03  7:03   ` Paolo Bonzini
  0 siblings, 0 replies; 5+ messages in thread
From: Paolo Bonzini @ 2015-06-03  7:03 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: stefanha, mst



On 03/06/2015 06:30, Richard Henderson wrote:
> On 05/21/2015 06:19 AM, Paolo Bonzini wrote:
>> memcpy is faster than struct assignment, which copies each bitfield
>> individually.  Arguably a compiler bug, but memcpy is super-special
>> cased anyway so what could go wrong?
> 
> The compiler has the option of doing the copy either way.  Any way to
> actually show that the small memcpy is faster?  That's one of those
> things where I'm sure there's a cost calculation that said per member
> was better.

Because the struct size is 32 bits, it's a no-brainer that a full copy is
faster.  However, SRA gets in the way and causes the struct assignment
to be compiled as two separate bitfield assignments.  Later GCC passes
don't have the means to merge them again.  I filed
https://gcc.gnu.org/PR66391 about this and CCed Martin Jambor.

Paolo


* Re: [Qemu-devel] [PATCH] exec: optimize phys_page_set_level
  2015-05-21 13:19 [Qemu-devel] [PATCH] exec: optimize phys_page_set_level Paolo Bonzini
  2015-05-22  8:01 ` Stefan Hajnoczi
  2015-06-03  4:30 ` Richard Henderson
@ 2015-06-03 16:14 ` Michael S. Tsirkin
  2 siblings, 0 replies; 5+ messages in thread
From: Michael S. Tsirkin @ 2015-06-03 16:14 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel, stefanha

On Thu, May 21, 2015 at 03:19:58PM +0200, Paolo Bonzini wrote:
> phys_page_set_level is writing zeroes to a struct that has just been
> filled in by phys_map_node_alloc.  Instead, tell phys_map_node_alloc
> whether to fill in the page "as a leaf" or "as a non-leaf".
> 
> memcpy is faster than struct assignment, which copies each bitfield
> individually.  Arguably a compiler bug, but memcpy is super-special
> cased anyway so what could go wrong?
> 
> This cuts the cost of phys_page_set_level from 25% to 5% when
> booting qboot.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


This patch might also be faster for another reason:
it skips an extra loop over L2 in the leaf case.

Reviewed-by: Michael S. Tsirkin <mst@redhat.com>


> ---
>  exec.c | 24 ++++++++++--------------
>  1 file changed, 10 insertions(+), 14 deletions(-)
> 
> diff --git a/exec.c b/exec.c
> index e19ab22..fc8d05d 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -173,17 +173,22 @@ static void phys_map_node_reserve(PhysPageMap *map, unsigned nodes)
>      }
>  }
>  
> -static uint32_t phys_map_node_alloc(PhysPageMap *map)
> +static uint32_t phys_map_node_alloc(PhysPageMap *map, bool leaf)
>  {
>      unsigned i;
>      uint32_t ret;
> +    PhysPageEntry e;
> +    PhysPageEntry *p;
>  
>      ret = map->nodes_nb++;
> +    p = map->nodes[ret];
>      assert(ret != PHYS_MAP_NODE_NIL);
>      assert(ret != map->nodes_nb_alloc);
> +
> +    e.skip = leaf ? 0 : 1;
> +    e.ptr = leaf ? PHYS_SECTION_UNASSIGNED : PHYS_MAP_NODE_NIL;
>      for (i = 0; i < P_L2_SIZE; ++i) {
> -        map->nodes[ret][i].skip = 1;
> -        map->nodes[ret][i].ptr = PHYS_MAP_NODE_NIL;
> +        memcpy(&p[i], &e, sizeof(e));
>      }
>      return ret;
>  }
> @@ -193,21 +198,12 @@ static void phys_page_set_level(PhysPageMap *map, PhysPageEntry *lp,
>                                  int level)
>  {
>      PhysPageEntry *p;
> -    int i;
>      hwaddr step = (hwaddr)1 << (level * P_L2_BITS);
>  
>      if (lp->skip && lp->ptr == PHYS_MAP_NODE_NIL) {
> -        lp->ptr = phys_map_node_alloc(map);
> -        p = map->nodes[lp->ptr];
> -        if (level == 0) {
> -            for (i = 0; i < P_L2_SIZE; i++) {
> -                p[i].skip = 0;
> -                p[i].ptr = PHYS_SECTION_UNASSIGNED;
> -            }
> -        }
> -    } else {
> -        p = map->nodes[lp->ptr];
> +        lp->ptr = phys_map_node_alloc(map, level == 0);
>      }
> +    p = map->nodes[lp->ptr];
>      lp = &p[(*index >> (level * P_L2_BITS)) & (P_L2_SIZE - 1)];
>  
>      while (*nb && lp < &p[P_L2_SIZE]) {
> -- 
> 2.4.1
> 

