* [Qemu-devel] [PATCH v2 0/5] xen mapcache fixes and improvements
@ 2011-05-19 17:34 ` Stefano Stabellini
  0 siblings, 0 replies; 33+ messages in thread
From: Stefano Stabellini @ 2011-05-19 17:34 UTC (permalink / raw)
  To: qemu-devel; +Cc: xen-devel, Alexander Graf, Stefano Stabellini

Hi all,
this patch series introduces a number of fixes and improvements to the
xen mapcache in qemu.

Changes compared to v1:
    - remove the two includes from xen-mapcache.h.


The list of patches with a diffstat follows:

Stefano Stabellini (5):
      xen: fix qemu_map_cache with size != MCACHE_BUCKET_SIZE
      xen: remove qemu_map_cache_unlock
      xen: remove xen_map_block and xen_unmap_block
      exec.c: refactor cpu_physical_memory_map
      xen: mapcache performance improvements

 cpu-common.h        |    1 +
 exec.c              |   88 ++++++++++++++++----------------
 xen-mapcache-stub.c |    8 ---
 xen-mapcache.c      |  141 +++++++++++++++++++++++---------------------------
 xen-mapcache.h      |   16 ------
 5 files changed, 109 insertions(+), 145 deletions(-)


Most of them are just straightforward cleanups and only touch Xen code.

The first patch fixes the current mapcache implementation, which, even
though it provides a size parameter, cannot actually cope with sizes
other than 0 or the bucket size.

The second and the third patch remove some not very useful mapcache
related functions and replace them with proper calls to qemu_map_cache
and qemu_invalidate_entry. In the case of xen_map_block and
xen_unmap_block in particular, this was not possible before because of
the size problem described above.

The fourth patch refactors cpu_physical_memory_map to simplify the
implementation and replace multiple calls to qemu_get_ram_ptr with a
single call to a new function that takes an address and a size as
parameters. Hopefully the changes make the function easier to understand
and more efficient.
Comments and reviews on this patch are very welcome.

The last patch introduces a few interesting performance improvements:
assuming that qemu_get_ram_ptr is only called to perform one of the
following two actions:

1) map an entire block other than the main ram block (offset == 0);

2) map a single page in the main ram block to issue a read or a write
straight after the call;

we can make the conscious decision to avoid locking the mapcache entries
in case 2).
Then, considering that qemu_put_ram_ptr is called to unmap pages in the
same two cases as before, and that we don't need to unmap unlocked
mapcache entries, we can avoid calling qemu_invalidate_entry
completely.

A git tree with the patch series applied to the latest qemu is available
here:

git://xenbits.xen.org/people/sstabellini/qemu-dm.git mapcache_fixes_2

- Stefano

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [Qemu-devel] [PATCH v2 1/5] xen: fix qemu_map_cache with size != MCACHE_BUCKET_SIZE
  2011-05-19 17:34 ` Stefano Stabellini
@ 2011-05-19 17:35   ` stefano.stabellini
  -1 siblings, 0 replies; 33+ messages in thread
From: stefano.stabellini @ 2011-05-19 17:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: xen-devel, agraf, Stefano Stabellini

From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Fix the implementation of qemu_map_cache: correctly support size
arguments different from 0 or MCACHE_BUCKET_SIZE.
The new implementation supports locked mapcache entries with a size
that is a multiple of MCACHE_BUCKET_SIZE. qemu_invalidate_entry can
correctly find and unmap these "large" mapcache entries, given that the
virtual address passed to qemu_invalidate_entry is the same one returned
by qemu_map_cache when the locked mapcache entry was created.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen-mapcache.c |   77 +++++++++++++++++++++++++++++++++++++++++++++++--------
 1 files changed, 65 insertions(+), 12 deletions(-)

diff --git a/xen-mapcache.c b/xen-mapcache.c
index 349cc62..90fbd49 100644
--- a/xen-mapcache.c
+++ b/xen-mapcache.c
@@ -43,14 +43,16 @@
 typedef struct MapCacheEntry {
     target_phys_addr_t paddr_index;
     uint8_t *vaddr_base;
-    DECLARE_BITMAP(valid_mapping, MCACHE_BUCKET_SIZE >> XC_PAGE_SHIFT);
+    unsigned long *valid_mapping;
     uint8_t lock;
+    target_phys_addr_t size;
     struct MapCacheEntry *next;
 } MapCacheEntry;
 
 typedef struct MapCacheRev {
     uint8_t *vaddr_req;
     target_phys_addr_t paddr_index;
+    target_phys_addr_t size;
     QTAILQ_ENTRY(MapCacheRev) next;
 } MapCacheRev;
 
@@ -68,6 +70,15 @@ typedef struct MapCache {
 
 static MapCache *mapcache;
 
+static inline int test_bits(int nr, int size, const unsigned long *addr)
+{
+    unsigned long res = find_next_zero_bit(addr, size + nr, nr);
+    if (res >= nr + size)
+        return 1;
+    else
+        return 0;
+}
+
 void qemu_map_cache_init(void)
 {
     unsigned long size;
@@ -115,11 +126,15 @@ static void qemu_remap_bucket(MapCacheEntry *entry,
     err = qemu_mallocz(nb_pfn * sizeof (int));
 
     if (entry->vaddr_base != NULL) {
-        if (munmap(entry->vaddr_base, size) != 0) {
+        if (munmap(entry->vaddr_base, entry->size) != 0) {
             perror("unmap fails");
             exit(-1);
         }
     }
+    if (entry->valid_mapping != NULL) {
+        qemu_free(entry->valid_mapping);
+        entry->valid_mapping = NULL;
+    }
 
     for (i = 0; i < nb_pfn; i++) {
         pfns[i] = (address_index << (MCACHE_BUCKET_SHIFT-XC_PAGE_SHIFT)) + i;
@@ -134,6 +149,9 @@ static void qemu_remap_bucket(MapCacheEntry *entry,
 
     entry->vaddr_base = vaddr_base;
     entry->paddr_index = address_index;
+    entry->size = size;
+    entry->valid_mapping = (unsigned long *) qemu_mallocz(sizeof(unsigned long) *
+            BITS_TO_LONGS(size >> XC_PAGE_SHIFT));
 
     bitmap_zero(entry->valid_mapping, nb_pfn);
     for (i = 0; i < nb_pfn; i++) {
@@ -151,32 +169,47 @@ uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, target_phys_addr_t size, u
     MapCacheEntry *entry, *pentry = NULL;
     target_phys_addr_t address_index  = phys_addr >> MCACHE_BUCKET_SHIFT;
     target_phys_addr_t address_offset = phys_addr & (MCACHE_BUCKET_SIZE - 1);
+    target_phys_addr_t __size = size;
 
     trace_qemu_map_cache(phys_addr);
 
-    if (address_index == mapcache->last_address_index && !lock) {
+    if (address_index == mapcache->last_address_index && !lock && !__size) {
         trace_qemu_map_cache_return(mapcache->last_address_vaddr + address_offset);
         return mapcache->last_address_vaddr + address_offset;
     }
 
+    /* size is always a multiple of MCACHE_BUCKET_SIZE */
+    if ((address_offset + (__size % MCACHE_BUCKET_SIZE)) > MCACHE_BUCKET_SIZE)
+        __size += MCACHE_BUCKET_SIZE;
+    if (__size % MCACHE_BUCKET_SIZE)
+        __size += MCACHE_BUCKET_SIZE - (__size % MCACHE_BUCKET_SIZE);
+    if (!__size)
+        __size = MCACHE_BUCKET_SIZE;
+
     entry = &mapcache->entry[address_index % mapcache->nr_buckets];
 
-    while (entry && entry->lock && entry->paddr_index != address_index && entry->vaddr_base) {
+    while (entry && entry->lock && entry->vaddr_base &&
+            (entry->paddr_index != address_index || entry->size != __size ||
+             !test_bits(address_offset >> XC_PAGE_SHIFT, size >> XC_PAGE_SHIFT,
+                 entry->valid_mapping))) {
         pentry = entry;
         entry = entry->next;
     }
     if (!entry) {
         entry = qemu_mallocz(sizeof (MapCacheEntry));
         pentry->next = entry;
-        qemu_remap_bucket(entry, size ? : MCACHE_BUCKET_SIZE, address_index);
+        qemu_remap_bucket(entry, __size, address_index);
     } else if (!entry->lock) {
         if (!entry->vaddr_base || entry->paddr_index != address_index ||
-            !test_bit(address_offset >> XC_PAGE_SHIFT, entry->valid_mapping)) {
-            qemu_remap_bucket(entry, size ? : MCACHE_BUCKET_SIZE, address_index);
+                entry->size != __size ||
+                !test_bits(address_offset >> XC_PAGE_SHIFT, size >> XC_PAGE_SHIFT,
+                    entry->valid_mapping)) {
+            qemu_remap_bucket(entry, __size, address_index);
         }
     }
 
-    if (!test_bit(address_offset >> XC_PAGE_SHIFT, entry->valid_mapping)) {
+    if(!test_bits(address_offset >> XC_PAGE_SHIFT, size >> XC_PAGE_SHIFT,
+                entry->valid_mapping)) {
         mapcache->last_address_index = -1;
         trace_qemu_map_cache_return(NULL);
         return NULL;
@@ -189,6 +222,7 @@ uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, target_phys_addr_t size, u
         entry->lock++;
         reventry->vaddr_req = mapcache->last_address_vaddr + address_offset;
         reventry->paddr_index = mapcache->last_address_index;
+        reventry->size = entry->size;
         QTAILQ_INSERT_HEAD(&mapcache->locked_entries, reventry, next);
     }
 
@@ -231,13 +265,16 @@ void qemu_map_cache_unlock(void *buffer)
 
 ram_addr_t qemu_ram_addr_from_mapcache(void *ptr)
 {
+    MapCacheEntry *entry = NULL, *pentry = NULL;
     MapCacheRev *reventry;
     target_phys_addr_t paddr_index;
+    target_phys_addr_t size;
     int found = 0;
 
     QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
         if (reventry->vaddr_req == ptr) {
             paddr_index = reventry->paddr_index;
+            size = reventry->size;
             found = 1;
             break;
         }
@@ -252,7 +289,17 @@ ram_addr_t qemu_ram_addr_from_mapcache(void *ptr)
         return 0;
     }
 
-    return paddr_index << MCACHE_BUCKET_SHIFT;
+    entry = &mapcache->entry[paddr_index % mapcache->nr_buckets];
+    while (entry && (entry->paddr_index != paddr_index || entry->size != size)) {
+        pentry = entry;
+        entry = entry->next;
+    }
+    if (!entry) {
+        DPRINTF("Trying to find address %p that is not in the mapcache!\n", ptr);
+        return 0;
+    }
+    return (reventry->paddr_index << MCACHE_BUCKET_SHIFT) +
+        ((unsigned long) ptr - (unsigned long) entry->vaddr_base);
 }
 
 void qemu_invalidate_entry(uint8_t *buffer)
@@ -260,6 +307,7 @@ void qemu_invalidate_entry(uint8_t *buffer)
     MapCacheEntry *entry = NULL, *pentry = NULL;
     MapCacheRev *reventry;
     target_phys_addr_t paddr_index;
+    target_phys_addr_t size;
     int found = 0;
 
     if (mapcache->last_address_vaddr == buffer) {
@@ -269,6 +317,7 @@ void qemu_invalidate_entry(uint8_t *buffer)
     QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
         if (reventry->vaddr_req == buffer) {
             paddr_index = reventry->paddr_index;
+            size = reventry->size;
             found = 1;
             break;
         }
@@ -284,7 +333,7 @@ void qemu_invalidate_entry(uint8_t *buffer)
     qemu_free(reventry);
 
     entry = &mapcache->entry[paddr_index % mapcache->nr_buckets];
-    while (entry && entry->paddr_index != paddr_index) {
+    while (entry && (entry->paddr_index != paddr_index || entry->size != size)) {
         pentry = entry;
         entry = entry->next;
     }
@@ -298,10 +347,11 @@ void qemu_invalidate_entry(uint8_t *buffer)
     }
 
     pentry->next = entry->next;
-    if (munmap(entry->vaddr_base, MCACHE_BUCKET_SIZE) != 0) {
+    if (munmap(entry->vaddr_base, entry->size) != 0) {
         perror("unmap fails");
         exit(-1);
     }
+    qemu_free(entry->valid_mapping);
     qemu_free(entry);
 }
 
@@ -328,13 +378,16 @@ void qemu_invalidate_map_cache(void)
             continue;
         }
 
-        if (munmap(entry->vaddr_base, MCACHE_BUCKET_SIZE) != 0) {
+        if (munmap(entry->vaddr_base, entry->size) != 0) {
             perror("unmap fails");
             exit(-1);
         }
 
         entry->paddr_index = 0;
         entry->vaddr_base = NULL;
+        entry->size = 0;
+        qemu_free(entry->valid_mapping);
+        entry->valid_mapping = NULL;
     }
 
     mapcache->last_address_index = -1;
-- 
1.7.2.3

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [Qemu-devel] [PATCH v2 2/5] xen: remove qemu_map_cache_unlock
  2011-05-19 17:34 ` Stefano Stabellini
@ 2011-05-19 17:35   ` stefano.stabellini
  -1 siblings, 0 replies; 33+ messages in thread
From: stefano.stabellini @ 2011-05-19 17:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: xen-devel, agraf, Stefano Stabellini

From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

There is no need for qemu_map_cache_unlock; just use
qemu_invalidate_entry instead.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 exec.c              |    2 +-
 xen-mapcache-stub.c |    4 ----
 xen-mapcache.c      |   33 ---------------------------------
 xen-mapcache.h      |    1 -
 4 files changed, 1 insertions(+), 39 deletions(-)

diff --git a/exec.c b/exec.c
index a6df2d6..01498d2 100644
--- a/exec.c
+++ b/exec.c
@@ -3126,7 +3126,7 @@ void qemu_put_ram_ptr(void *addr)
             xen_unmap_block(block->host, block->length);
             block->host = NULL;
         } else {
-            qemu_map_cache_unlock(addr);
+            qemu_invalidate_entry(addr);
         }
     }
 }
diff --git a/xen-mapcache-stub.c b/xen-mapcache-stub.c
index 7c14b3d..60f712b 100644
--- a/xen-mapcache-stub.c
+++ b/xen-mapcache-stub.c
@@ -22,10 +22,6 @@ uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, target_phys_addr_t size, u
     return qemu_get_ram_ptr(phys_addr);
 }
 
-void qemu_map_cache_unlock(void *buffer)
-{
-}
-
 ram_addr_t qemu_ram_addr_from_mapcache(void *ptr)
 {
     return -1;
diff --git a/xen-mapcache.c b/xen-mapcache.c
index 90fbd49..57fe24d 100644
--- a/xen-mapcache.c
+++ b/xen-mapcache.c
@@ -230,39 +230,6 @@ uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, target_phys_addr_t size, u
     return mapcache->last_address_vaddr + address_offset;
 }
 
-void qemu_map_cache_unlock(void *buffer)
-{
-    MapCacheEntry *entry = NULL, *pentry = NULL;
-    MapCacheRev *reventry;
-    target_phys_addr_t paddr_index;
-    int found = 0;
-
-    QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
-        if (reventry->vaddr_req == buffer) {
-            paddr_index = reventry->paddr_index;
-            found = 1;
-            break;
-        }
-    }
-    if (!found) {
-        return;
-    }
-    QTAILQ_REMOVE(&mapcache->locked_entries, reventry, next);
-    qemu_free(reventry);
-
-    entry = &mapcache->entry[paddr_index % mapcache->nr_buckets];
-    while (entry && entry->paddr_index != paddr_index) {
-        pentry = entry;
-        entry = entry->next;
-    }
-    if (!entry) {
-        return;
-    }
-    if (entry->lock > 0) {
-        entry->lock--;
-    }
-}
-
 ram_addr_t qemu_ram_addr_from_mapcache(void *ptr)
 {
     MapCacheEntry *entry = NULL, *pentry = NULL;
diff --git a/xen-mapcache.h b/xen-mapcache.h
index 339444c..b89b8f9 100644
--- a/xen-mapcache.h
+++ b/xen-mapcache.h
@@ -14,7 +14,6 @@
 
 void     qemu_map_cache_init(void);
 uint8_t  *qemu_map_cache(target_phys_addr_t phys_addr, target_phys_addr_t size, uint8_t lock);
-void     qemu_map_cache_unlock(void *phys_addr);
 ram_addr_t qemu_ram_addr_from_mapcache(void *ptr);
 void     qemu_invalidate_entry(uint8_t *buffer);
 void     qemu_invalidate_map_cache(void);
-- 
1.7.2.3

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [Qemu-devel] [PATCH v2 3/5] xen: remove xen_map_block and xen_unmap_block
  2011-05-19 17:34 ` Stefano Stabellini
@ 2011-05-19 17:35   ` stefano.stabellini
  -1 siblings, 0 replies; 33+ messages in thread
From: stefano.stabellini @ 2011-05-19 17:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: xen-devel, agraf, Stefano Stabellini

From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Replace xen_map_block with qemu_map_cache with the appropriate locking
and size parameters.
Replace xen_unmap_block with qemu_invalidate_entry.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 exec.c              |   19 ++++---------------
 xen-mapcache-stub.c |    4 ----
 xen-mapcache.c      |   31 -------------------------------
 xen-mapcache.h      |   15 ---------------
 4 files changed, 4 insertions(+), 65 deletions(-)

diff --git a/exec.c b/exec.c
index 01498d2..21f21f0 100644
--- a/exec.c
+++ b/exec.c
@@ -54,6 +54,7 @@
 #endif
 #else /* !CONFIG_USER_ONLY */
 #include "xen-mapcache.h"
+#include "trace.h"
 #endif
 
 //#define DEBUG_TB_INVALIDATE
@@ -3068,7 +3069,7 @@ void *qemu_get_ram_ptr(ram_addr_t addr)
                 if (block->offset == 0) {
                     return qemu_map_cache(addr, 0, 1);
                 } else if (block->host == NULL) {
-                    block->host = xen_map_block(block->offset, block->length);
+                    block->host = qemu_map_cache(block->offset, block->length, 1);
                 }
             }
             return block->host + (addr - block->offset);
@@ -3097,7 +3098,7 @@ void *qemu_safe_ram_ptr(ram_addr_t addr)
                 if (block->offset == 0) {
                     return qemu_map_cache(addr, 0, 1);
                 } else if (block->host == NULL) {
-                    block->host = xen_map_block(block->offset, block->length);
+                    block->host = qemu_map_cache(block->offset, block->length, 1);
                 }
             }
             return block->host + (addr - block->offset);
@@ -3115,19 +3116,7 @@ void qemu_put_ram_ptr(void *addr)
     trace_qemu_put_ram_ptr(addr);
 
     if (xen_mapcache_enabled()) {
-        RAMBlock *block;
-
-        QLIST_FOREACH(block, &ram_list.blocks, next) {
-            if (addr == block->host) {
-                break;
-            }
-        }
-        if (block && block->host) {
-            xen_unmap_block(block->host, block->length);
-            block->host = NULL;
-        } else {
-            qemu_invalidate_entry(addr);
-        }
+            qemu_invalidate_entry(block->host);
     }
 }
 
diff --git a/xen-mapcache-stub.c b/xen-mapcache-stub.c
index 60f712b..8a2380a 100644
--- a/xen-mapcache-stub.c
+++ b/xen-mapcache-stub.c
@@ -34,7 +34,3 @@ void qemu_invalidate_map_cache(void)
 void qemu_invalidate_entry(uint8_t *buffer)
 {
 }
-uint8_t *xen_map_block(target_phys_addr_t phys_addr, target_phys_addr_t size)
-{
-    return NULL;
-}
diff --git a/xen-mapcache.c b/xen-mapcache.c
index 57fe24d..fac47cd 100644
--- a/xen-mapcache.c
+++ b/xen-mapcache.c
@@ -362,34 +362,3 @@ void qemu_invalidate_map_cache(void)
 
     mapcache_unlock();
 }
-
-uint8_t *xen_map_block(target_phys_addr_t phys_addr, target_phys_addr_t size)
-{
-    uint8_t *vaddr_base;
-    xen_pfn_t *pfns;
-    int *err;
-    unsigned int i;
-    target_phys_addr_t nb_pfn = size >> XC_PAGE_SHIFT;
-
-    trace_xen_map_block(phys_addr, size);
-    phys_addr >>= XC_PAGE_SHIFT;
-
-    pfns = qemu_mallocz(nb_pfn * sizeof (xen_pfn_t));
-    err = qemu_mallocz(nb_pfn * sizeof (int));
-
-    for (i = 0; i < nb_pfn; i++) {
-        pfns[i] = phys_addr + i;
-    }
-
-    vaddr_base = xc_map_foreign_bulk(xen_xc, xen_domid, PROT_READ|PROT_WRITE,
-                                     pfns, err, nb_pfn);
-    if (vaddr_base == NULL) {
-        perror("xc_map_foreign_bulk");
-        exit(-1);
-    }
-
-    qemu_free(pfns);
-    qemu_free(err);
-
-    return vaddr_base;
-}
diff --git a/xen-mapcache.h b/xen-mapcache.h
index b89b8f9..6216cc3 100644
--- a/xen-mapcache.h
+++ b/xen-mapcache.h
@@ -9,27 +9,12 @@
 #ifndef XEN_MAPCACHE_H
 #define XEN_MAPCACHE_H
 
-#include <sys/mman.h>
-#include "trace.h"
-
 void     qemu_map_cache_init(void);
 uint8_t  *qemu_map_cache(target_phys_addr_t phys_addr, target_phys_addr_t size, uint8_t lock);
 ram_addr_t qemu_ram_addr_from_mapcache(void *ptr);
 void     qemu_invalidate_entry(uint8_t *buffer);
 void     qemu_invalidate_map_cache(void);
 
-uint8_t *xen_map_block(target_phys_addr_t phys_addr, target_phys_addr_t size);
-
-static inline void xen_unmap_block(void *addr, ram_addr_t size)
-{
-    trace_xen_unmap_block(addr, size);
-
-    if (munmap(addr, size) != 0) {
-        hw_error("xen_unmap_block: %s", strerror(errno));
-    }
-}
-
-
 #define mapcache_lock()   ((void)0)
 #define mapcache_unlock() ((void)0)
 
-- 
1.7.2.3

* [Qemu-devel] [PATCH v2 4/5] exec.c: refactor cpu_physical_memory_map
  2011-05-19 17:34 ` Stefano Stabellini
@ 2011-05-19 17:35   ` stefano.stabellini
  -1 siblings, 0 replies; 33+ messages in thread
From: stefano.stabellini @ 2011-05-19 17:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: xen-devel, agraf, Stefano Stabellini

From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Introduce qemu_ram_ptr_length that takes an address and a size as
parameters rather than just an address.

Refactor cpu_physical_memory_map so that we call qemu_ram_ptr_length
only once, rather than calling qemu_get_ram_ptr once per page.
This is not only more efficient but also simplifies the logic of the
function.
Currently we rely on the fact that all the pages are mapped contiguously
in qemu's address space: we check that the virtual address returned by
each call to qemu_get_ram_ptr after the first is consecutive with the
previous one. Now we make this assumption explicit by replacing all the
calls to qemu_get_ram_ptr with a single call to qemu_ram_ptr_length that
takes a size argument.
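The clamping this introduces can be sketched with toy types (ToyRAMBlock
and toy_ram_ptr_length are invented names; the real RAMBlock and list
walk live in exec.c, so this only illustrates the non-Xen path):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for QEMU's RAMBlock: a range of ram_addr space plus the
 * host pointer backing it. */
typedef struct {
    uint64_t offset;   /* start of the block in ram_addr space */
    uint64_t length;   /* block length in bytes */
    uint8_t *host;     /* host virtual address of the mapping */
} ToyRAMBlock;

/* Sketch of qemu_ram_ptr_length's non-Xen path: find the block that
 * contains addr and clamp *size so the mapping never crosses the end
 * of that block. */
static uint8_t *toy_ram_ptr_length(ToyRAMBlock *blocks, size_t n,
                                   uint64_t addr, uint64_t *size)
{
    for (size_t i = 0; i < n; i++) {
        ToyRAMBlock *b = &blocks[i];
        /* unsigned wraparound rejects addr < b->offset in one compare */
        if (addr - b->offset < b->length) {
            uint64_t in_block = addr - b->offset;
            if (in_block + *size > b->length) {
                *size = b->length - in_block;   /* clamp to block end */
            }
            return b->host + in_block;
        }
    }
    *size = 0;          /* no block contains addr; the real code aborts */
    return NULL;
}
```

A request that runs past the end of a block comes back with *size
shrunk, which is how the single call in cpu_physical_memory_map reports
how many bytes were actually mapped.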

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: agraf@suse.de
CC: anthony@codemonkey.ws
---
 cpu-common.h |    1 +
 exec.c       |   51 ++++++++++++++++++++++++++++++++++-----------------
 2 files changed, 35 insertions(+), 17 deletions(-)

diff --git a/cpu-common.h b/cpu-common.h
index 151c32c..085aacb 100644
--- a/cpu-common.h
+++ b/cpu-common.h
@@ -64,6 +64,7 @@ void qemu_ram_free(ram_addr_t addr);
 void qemu_ram_remap(ram_addr_t addr, ram_addr_t length);
 /* This should only be used for ram local to a device.  */
 void *qemu_get_ram_ptr(ram_addr_t addr);
+void *qemu_ram_ptr_length(target_phys_addr_t addr, target_phys_addr_t *size);
 /* Same but slower, to use for migration, where the order of
  * RAMBlocks must not change. */
 void *qemu_safe_ram_ptr(ram_addr_t addr);
diff --git a/exec.c b/exec.c
index 21f21f0..ff9c174 100644
--- a/exec.c
+++ b/exec.c
@@ -3111,6 +3111,31 @@ void *qemu_safe_ram_ptr(ram_addr_t addr)
     return NULL;
 }
 
+/* Return a host pointer to guest's ram. Similar to qemu_get_ram_ptr
+ * but takes a size argument */
+void *qemu_ram_ptr_length(target_phys_addr_t addr, target_phys_addr_t *size)
+{
+    if (xen_mapcache_enabled())
+        return qemu_map_cache(addr, *size, 1);
+    else {
+        RAMBlock *block;
+
+        QLIST_FOREACH(block, &ram_list.blocks, next) {
+            if (addr - block->offset < block->length) {
+                if (addr - block->offset + *size > block->length)
+                    *size = block->length - addr + block->offset;
+                return block->host + (addr - block->offset);
+            }
+        }
+
+        fprintf(stderr, "Bad ram offset %" PRIx64 "\n", (uint64_t)addr);
+        abort();
+
+        *size = 0;
+        return NULL;
+    }
+}
+
 void qemu_put_ram_ptr(void *addr)
 {
     trace_qemu_put_ram_ptr(addr);
@@ -3972,14 +3997,12 @@ void *cpu_physical_memory_map(target_phys_addr_t addr,
                               int is_write)
 {
     target_phys_addr_t len = *plen;
-    target_phys_addr_t done = 0;
+    target_phys_addr_t todo = 0;
     int l;
-    uint8_t *ret = NULL;
-    uint8_t *ptr;
     target_phys_addr_t page;
     unsigned long pd;
     PhysPageDesc *p;
-    unsigned long addr1;
+    target_phys_addr_t addr1 = addr;
 
     while (len > 0) {
         page = addr & TARGET_PAGE_MASK;
@@ -3994,7 +4017,7 @@ void *cpu_physical_memory_map(target_phys_addr_t addr,
         }
 
         if ((pd & ~TARGET_PAGE_MASK) != IO_MEM_RAM) {
-            if (done || bounce.buffer) {
+            if (todo || bounce.buffer) {
                 break;
             }
             bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, TARGET_PAGE_SIZE);
@@ -4003,23 +4026,17 @@ void *cpu_physical_memory_map(target_phys_addr_t addr,
             if (!is_write) {
                 cpu_physical_memory_read(addr, bounce.buffer, l);
             }
-            ptr = bounce.buffer;
-        } else {
-            addr1 = (pd & TARGET_PAGE_MASK) + (addr & ~TARGET_PAGE_MASK);
-            ptr = qemu_get_ram_ptr(addr1);
-        }
-        if (!done) {
-            ret = ptr;
-        } else if (ret + done != ptr) {
-            break;
+
+            *plen = l;
+            return bounce.buffer;
         }
 
         len -= l;
         addr += l;
-        done += l;
+        todo += l;
     }
-    *plen = done;
-    return ret;
+    *plen = todo;
+    return qemu_ram_ptr_length(addr1, plen);
 }
 
 /* Unmaps a memory region previously mapped by cpu_physical_memory_map().
-- 
1.7.2.3

* [Qemu-devel] [PATCH v2 5/5] xen: mapcache performance improvements
  2011-05-19 17:34 ` Stefano Stabellini
@ 2011-05-19 17:35   ` stefano.stabellini
  -1 siblings, 0 replies; 33+ messages in thread
From: stefano.stabellini @ 2011-05-19 17:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: xen-devel, agraf, Stefano Stabellini

From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Use qemu_invalidate_entry in cpu_physical_memory_unmap.

Do not lock mapcache entries in qemu_get_ram_ptr if the address falls in
the ramblock with offset == 0. We don't need to do that because the
callers of qemu_get_ram_ptr either map an entire block other than the
main ramblock, or map up to the end of a page to implement a single read
or write in the main ramblock.
If we don't lock mapcache entries in qemu_get_ram_ptr, we no longer need
to call qemu_invalidate_entry in qemu_put_ram_ptr, because we can live
with the few long-lived block mappings requested by devices.

Also move the call to qemu_ram_addr_from_mapcache at the beginning of
qemu_ram_addr_from_host.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 exec.c |   28 ++++++++++------------------
 1 files changed, 10 insertions(+), 18 deletions(-)

diff --git a/exec.c b/exec.c
index ff9c174..aebb23b 100644
--- a/exec.c
+++ b/exec.c
@@ -3065,9 +3065,10 @@ void *qemu_get_ram_ptr(ram_addr_t addr)
             if (xen_mapcache_enabled()) {
                 /* We need to check if the requested address is in the RAM
                  * because we don't want to map the entire memory in QEMU.
+                 * In that case just map until the end of the page.
                  */
                 if (block->offset == 0) {
-                    return qemu_map_cache(addr, 0, 1);
+                    return qemu_map_cache(addr, 0, 0);
                 } else if (block->host == NULL) {
                     block->host = qemu_map_cache(block->offset, block->length, 1);
                 }
@@ -3094,9 +3095,10 @@ void *qemu_safe_ram_ptr(ram_addr_t addr)
             if (xen_mapcache_enabled()) {
                 /* We need to check if the requested address is in the RAM
                  * because we don't want to map the entire memory in QEMU.
+                 * In that case just map until the end of the page.
                  */
                 if (block->offset == 0) {
-                    return qemu_map_cache(addr, 0, 1);
+                    return qemu_map_cache(addr, 0, 0);
                 } else if (block->host == NULL) {
                     block->host = qemu_map_cache(block->offset, block->length, 1);
                 }
@@ -3139,10 +3141,6 @@ void *qemu_ram_ptr_length(target_phys_addr_t addr, target_phys_addr_t *size)
 void qemu_put_ram_ptr(void *addr)
 {
     trace_qemu_put_ram_ptr(addr);
-
-    if (xen_mapcache_enabled()) {
-            qemu_invalidate_entry(block->host);
-    }
 }
 
 int qemu_ram_addr_from_host(void *ptr, ram_addr_t *ram_addr)
@@ -3150,6 +3148,11 @@ int qemu_ram_addr_from_host(void *ptr, ram_addr_t *ram_addr)
     RAMBlock *block;
     uint8_t *host = ptr;
 
+    if (xen_mapcache_enabled()) {
+        *ram_addr = qemu_ram_addr_from_mapcache(ptr);
+        return 0;
+    }
+
     QLIST_FOREACH(block, &ram_list.blocks, next) {
         /* This case append when the block is not mapped. */
         if (block->host == NULL) {
@@ -3161,11 +3164,6 @@ int qemu_ram_addr_from_host(void *ptr, ram_addr_t *ram_addr)
         }
     }
 
-    if (xen_mapcache_enabled()) {
-        *ram_addr = qemu_ram_addr_from_mapcache(ptr);
-        return 0;
-    }
-
     return -1;
 }
 
@@ -4066,13 +4064,7 @@ void cpu_physical_memory_unmap(void *buffer, target_phys_addr_t len,
             }
         }
         if (xen_mapcache_enabled()) {
-            uint8_t *buffer1 = buffer;
-            uint8_t *end_buffer = buffer + len;
-
-            while (buffer1 < end_buffer) {
-                qemu_put_ram_ptr(buffer1);
-                buffer1 += TARGET_PAGE_SIZE;
-            }
+            qemu_invalidate_entry(buffer);
         }
         return;
     }
-- 
1.7.2.3

* Re: [Qemu-devel] [PATCH v2 0/5] xen mapcache fixes and improvements
  2011-05-19 17:34 ` Stefano Stabellini
@ 2011-05-27 23:30   ` Alexander Graf
  -1 siblings, 0 replies; 33+ messages in thread
From: Alexander Graf @ 2011-05-27 23:30 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, qemu-devel


On 19.05.2011, at 19:34, Stefano Stabellini wrote:

> Hi all,
> this patch series introduces a series of fixes and improvements to the
> xen mapcache in qemu.
> 
> Changes compared to v1:
>    - remove the two includes from xen-mapcache.h.

Thanks, applied to xen-next.


Alex

* Re: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor cpu_physical_memory_map
  2011-05-19 17:35   ` stefano.stabellini
@ 2011-07-11 22:17     ` Jan Kiszka
  -1 siblings, 0 replies; 33+ messages in thread
From: Jan Kiszka @ 2011-07-11 22:17 UTC (permalink / raw)
  To: stefano.stabellini; +Cc: Stefan BOSAK, xen-devel, qemu-devel, agraf

On 2011-05-19 19:35, stefano.stabellini@eu.citrix.com wrote:
> From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> Introduce qemu_ram_ptr_length that takes an address and a size as
> parameters rather than just an address.
> 
> Refactor cpu_physical_memory_map so that we call qemu_ram_ptr_length only
> once rather than calling qemu_get_ram_ptr one time per page.
> This is not only more efficient but also tries to simplify the logic of
> the function.
> Currently we are relying on the fact that all the pages are mapped
> contiguously in qemu's address space: we have a check to make sure that
> the virtual address returned by qemu_get_ram_ptr from the second call on
> is consecutive. Now we are making this more explicit replacing all the
> calls to qemu_get_ram_ptr with a single call to qemu_ram_ptr_length
> passing a size argument.

This breaks cpu_physical_memory_map for >4G addresses on PC.
Effectively, it doesn't account for the PCI gap, i.e. the fact that the
RAM block is actually mapped into guest physical memory in two chunks.
One outcome is that QEMU aborts when we try to process an address that
is now "outside" RAM. This is simple to reproduce with a virtio NIC and
5G of guest memory, even without KVM.
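The layout in question can be modelled like this (the 3.5 GiB hole
boundary is made up for illustration; the real split depends on the
machine):

```c
#include <assert.h>
#include <stdint.h>

#define GiB (1ULL << 30)

/* Hypothetical PC layout for 5 GiB of guest RAM: the low chunk of RAM
 * sits below the PCI hole, the remainder is remapped above 4 GiB.  The
 * hole boundary here is invented for illustration. */
#define HOLE_START (3ULL * GiB + 512 * (1ULL << 20))  /* 3.5 GiB */
#define HOLE_END   (4ULL * GiB)
#define HOLE_SIZE  (HOLE_END - HOLE_START)

/* Gap-aware translation from a guest-physical address to a linear
 * ram_addr; returns -1 for MMIO addresses inside the hole.  Feeding the
 * raw guest-physical address into a lookup over one linear RAM block,
 * as the refactored code effectively does, skips this step. */
static int64_t gpa_to_ram_addr(uint64_t gpa)
{
    if (gpa < HOLE_START) {
        return (int64_t)gpa;            /* low chunk, identity-mapped */
    }
    if (gpa < HOLE_END) {
        return -1;                      /* PCI hole: not RAM */
    }
    return (int64_t)(gpa - HOLE_SIZE);  /* high chunk, shifted down */
}
```

Any address at or above 4 GiB lands HOLE_SIZE bytes past the end of the
linear RAM range when the shift is omitted, and the abort follows.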

Please fix or revert.

Thanks,
Jan



* Re: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor cpu_physical_memory_map
  2011-07-11 22:17     ` Jan Kiszka
@ 2011-07-12  6:14       ` Alexander Graf
  -1 siblings, 0 replies; 33+ messages in thread
From: Alexander Graf @ 2011-07-12  6:14 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: Stefan BOSAK, xen-devel, qemu-devel, stefano.stabellini


On 12.07.2011, at 00:17, Jan Kiszka wrote:

> On 2011-05-19 19:35, stefano.stabellini@eu.citrix.com wrote:
>> From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> 
>> Introduce qemu_ram_ptr_length that takes an address and a size as
>> parameters rather than just an address.
>> 
>> Refactor cpu_physical_memory_map so that we call qemu_ram_ptr_length only
>> once rather than calling qemu_get_ram_ptr one time per page.
>> This is not only more efficient but also tries to simplify the logic of
>> the function.
>> Currently we are relying on the fact that all the pages are mapped
>> contiguously in qemu's address space: we have a check to make sure that
>> the virtual address returned by qemu_get_ram_ptr from the second call on
>> is consecutive. Now we are making this more explicit replacing all the
>> calls to qemu_get_ram_ptr with a single call to qemu_ram_ptr_length
>> passing a size argument.
> 
> This breaks cpu_physical_memory_map for >4G addresses on PC.
> Effectively, it doesn't account for the PCI gap, ie. that the RAM block
> is actually mapped in two chunks into the guest physical memory. One
> outcome is that QEMU aborts when we try to process an address that is
> now "outside" RAM. Simple to reproduce with a virtio NIC and 5G guest
> memory, even without KVM.

Do you have some reliable test case? I can't seem to reproduce the issue. It works just fine for me with -m 10G, virtio nic and disk, polluting all the guest page cache.


Alex

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor cpu_physical_memory_map
  2011-07-11 22:17     ` Jan Kiszka
@ 2011-07-12  6:21       ` Alexander Graf
  -1 siblings, 0 replies; 33+ messages in thread
From: Alexander Graf @ 2011-07-12  6:21 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: Stefan BOSAK, xen-devel, qemu-devel, stefano.stabellini


On 12.07.2011, at 00:17, Jan Kiszka wrote:

> On 2011-05-19 19:35, stefano.stabellini@eu.citrix.com wrote:
>> From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> 
>> Introduce qemu_ram_ptr_length that takes an address and a size as
>> parameters rather than just an address.
>> 
>> Refactor cpu_physical_memory_map so that we call qemu_ram_ptr_length only
>> once rather than calling qemu_get_ram_ptr one time per page.
>> This is not only more efficient but also tries to simplify the logic of
>> the function.
>> Currently we are relying on the fact that all the pages are mapped
>> contiguously in qemu's address space: we have a check to make sure that
>> the virtual address returned by qemu_get_ram_ptr from the second call on
>> is consecutive. Now we are making this more explicit replacing all the
>> calls to qemu_get_ram_ptr with a single call to qemu_ram_ptr_length
>> passing a size argument.
> 
> This breaks cpu_physical_memory_map for >4G addresses on PC.
> Effectively, it doesn't account for the PCI gap, ie. that the RAM block
> is actually mapped in two chunks into the guest physical memory. One
> outcome is that QEMU aborts when we try to process an address that is
> now "outside" RAM. Simple to reproduce with a virtio NIC and 5G guest
> memory, even without KVM.

Ah, I see what you mean now. It breaks on current HEAD, but not on my last xen-next branch which already included that patch, so I'd assume it's something different that came in later.


Alex

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor cpu_physical_memory_map
  2011-07-11 22:17     ` Jan Kiszka
@ 2011-07-12  6:28       ` Alexander Graf
  -1 siblings, 0 replies; 33+ messages in thread
From: Alexander Graf @ 2011-07-12  6:28 UTC (permalink / raw)
  To: Jan Kiszka
  Cc: Stefan BOSAK, xen-devel, qemu-devel@nongnu.org Developers,
	Stefano Stabellini


On 12.07.2011, at 00:17, Jan Kiszka wrote:

> On 2011-05-19 19:35, stefano.stabellini@eu.citrix.com wrote:
>> From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> 
>> Introduce qemu_ram_ptr_length that takes an address and a size as
>> parameters rather than just an address.
>> 
>> Refactor cpu_physical_memory_map so that we call qemu_ram_ptr_length only
>> once rather than calling qemu_get_ram_ptr one time per page.
>> This is not only more efficient but also tries to simplify the logic of
>> the function.
>> Currently we are relying on the fact that all the pages are mapped
>> contiguously in qemu's address space: we have a check to make sure that
>> the virtual address returned by qemu_get_ram_ptr from the second call on
>> is consecutive. Now we are making this more explicit replacing all the
>> calls to qemu_get_ram_ptr with a single call to qemu_ram_ptr_length
>> passing a size argument.
> 
> This breaks cpu_physical_memory_map for >4G addresses on PC.
> Effectively, it doesn't account for the PCI gap, ie. that the RAM block
> is actually mapped in two chunks into the guest physical memory. One
> outcome is that QEMU aborts when we try to process an address that is
> now "outside" RAM. Simple to reproduce with a virtio NIC and 5G guest
> memory, even without KVM.

Yeah, that's what happens when you read mails too early in the morning :). The xen branch didn't get pulled yet, so upstream is missing the following patch:

commit f221e5ac378feea71d9857ddaa40f829c511742f
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Mon Jun 27 18:26:06 2011 +0100

    qemu_ram_ptr_length: take ram_addr_t as arguments
    
    qemu_ram_ptr_length should take ram_addr_t as argument rather than
    target_phys_addr_t because it is doing comparisons with RAMBlock addresses.
    
    cpu_physical_memory_map should create a ram_addr_t address to pass to
    qemu_ram_ptr_length from PhysPageDesc phys_offset.
    
    Remove code after abort() in qemu_ram_ptr_length.
    
    Changes in v2:
    
    - handle 0 size in qemu_ram_ptr_length;
    
    - rename addr1 to raddr;
    
    - initialize raddr to ULONG_MAX.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Alexander Graf <agraf@suse.de>


Anthony?

Alex

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor cpu_physical_memory_map
  2011-07-12  6:28       ` Alexander Graf
@ 2011-07-12  7:15         ` Jan Kiszka
  -1 siblings, 0 replies; 33+ messages in thread
From: Jan Kiszka @ 2011-07-12  7:15 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Stefan BOSAK, xen-devel, qemu-devel@nongnu.org Developers,
	Stefano Stabellini

On 2011-07-12 08:28, Alexander Graf wrote:
> 
> On 12.07.2011, at 00:17, Jan Kiszka wrote:
> 
>> On 2011-05-19 19:35, stefano.stabellini@eu.citrix.com wrote:
>>> From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>>>
>>> Introduce qemu_ram_ptr_length that takes an address and a size as
>>> parameters rather than just an address.
>>>
>>> Refactor cpu_physical_memory_map so that we call qemu_ram_ptr_length only
>>> once rather than calling qemu_get_ram_ptr one time per page.
>>> This is not only more efficient but also tries to simplify the logic of
>>> the function.
>>> Currently we are relying on the fact that all the pages are mapped
>>> contiguously in qemu's address space: we have a check to make sure that
>>> the virtual address returned by qemu_get_ram_ptr from the second call on
>>> is consecutive. Now we are making this more explicit replacing all the
>>> calls to qemu_get_ram_ptr with a single call to qemu_ram_ptr_length
>>> passing a size argument.
>>
>> This breaks cpu_physical_memory_map for >4G addresses on PC.
>> Effectively, it doesn't account for the PCI gap, ie. that the RAM block
>> is actually mapped in two chunks into the guest physical memory. One
>> outcome is that QEMU aborts when we try to process an address that is
>> now "outside" RAM. Simple to reproduce with a virtio NIC and 5G guest
>> memory, even without KVM.
> 
> Yeah, that's what happens when you read mails too early in the morning :). The xen branch didn't get pulled yet, so upstream is missing the following patch:
> 
> commit f221e5ac378feea71d9857ddaa40f829c511742f
> Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Date:   Mon Jun 27 18:26:06 2011 +0100
> 
>     qemu_ram_ptr_length: take ram_addr_t as arguments
>     
>     qemu_ram_ptr_length should take ram_addr_t as argument rather than
>     target_phys_addr_t because it is doing comparisons with RAMBlock addresses.
>     
>     cpu_physical_memory_map should create a ram_addr_t address to pass to
>     qemu_ram_ptr_length from PhysPageDesc phys_offset.
>     
>     Remove code after abort() in qemu_ram_ptr_length.
>     
>     Changes in v2:
>     
>     - handle 0 size in qemu_ram_ptr_length;
>     
>     - rename addr1 to raddr;
>     
>     - initialize raddr to ULONG_MAX.
>     
>     Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>     Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
>     Signed-off-by: Alexander Graf <agraf@suse.de>

Maybe the subject or changelog can reflect what this all fixes?

> 
> Anthony?

Am I the only one under the impression that too many patches are in
limbo ATM?

Jan



^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor cpu_physical_memory_map
  2011-07-12  7:15         ` Jan Kiszka
@ 2011-07-12  7:18           ` Paolo Bonzini
  -1 siblings, 0 replies; 33+ messages in thread
From: Paolo Bonzini @ 2011-07-12  7:18 UTC (permalink / raw)
  To: Jan Kiszka
  Cc: xen-devel, Stefano Stabellini, Alexander Graf,
	qemu-devel@nongnu.org Developers, Stefan BOSAK

On 07/12/2011 09:15 AM, Jan Kiszka wrote:
> Am I the only one under the impression that too many patches are in
> limbo ATM?

No. :)

Paolo

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor cpu_physical_memory_map
  2011-07-12  7:18           ` Paolo Bonzini
  (?)
@ 2011-07-12  7:48           ` Markus Armbruster
  -1 siblings, 0 replies; 33+ messages in thread
From: Markus Armbruster @ 2011-07-12  7:48 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: xen-devel, Stefano Stabellini, qemu-devel@nongnu.org Developers,
	Alexander Graf, Stefan BOSAK, Jan Kiszka

Paolo Bonzini <pbonzini@redhat.com> writes:

> On 07/12/2011 09:15 AM, Jan Kiszka wrote:
>> Am I the only one under the impression that too many patches are in
>> limbo ATM?
>
> No. :)

There's another place for patches to go?

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor cpu_physical_memory_map
  2011-05-19 17:35   ` stefano.stabellini
@ 2011-07-22  5:42     ` Liu Yu-B13201
  -1 siblings, 0 replies; 33+ messages in thread
From: Liu Yu-B13201 @ 2011-07-22  5:42 UTC (permalink / raw)
  To: stefano.stabellini, qemu-devel; +Cc: Yoder Stuart-B08248, xen-devel, agraf

 

> -----Original Message-----
> From: qemu-devel-bounces+yu.liu=freescale.com@nongnu.org 
> [mailto:qemu-devel-bounces+yu.liu=freescale.com@nongnu.org] 
> On Behalf Of stefano.stabellini@eu.citrix.com
> Sent: Friday, May 20, 2011 1:36 AM
> To: qemu-devel@nongnu.org
> Cc: xen-devel@lists.xensource.com; agraf@suse.de; Stefano Stabellini
> Subject: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor 
> cpu_physical_memory_map
> 
> From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> Introduce qemu_ram_ptr_length that takes an address and a size as
> parameters rather than just an address.
> 
> Refactor cpu_physical_memory_map so that we call 
> qemu_ram_ptr_length only
> once rather than calling qemu_get_ram_ptr one time per page.
> This is not only more efficient but also tries to simplify 
> the logic of
> the function.
> Currently we are relying on the fact that all the pages are mapped
> contiguously in qemu's address space: we have a check to make 
> sure that
> the virtual address returned by qemu_get_ram_ptr from the 
> second call on
> is consecutive. Now we are making this more explicit replacing all the
> calls to qemu_get_ram_ptr with a single call to qemu_ram_ptr_length
> passing a size argument.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> CC: agraf@suse.de
> CC: anthony@codemonkey.ws
> ---
>  cpu-common.h |    1 +
>  exec.c       |   51 
> ++++++++++++++++++++++++++++++++++-----------------
>  2 files changed, 35 insertions(+), 17 deletions(-)
> 
> diff --git a/cpu-common.h b/cpu-common.h
> index 151c32c..085aacb 100644
> --- a/cpu-common.h
> +++ b/cpu-common.h
> @@ -64,6 +64,7 @@ void qemu_ram_free(ram_addr_t addr);
>  void qemu_ram_remap(ram_addr_t addr, ram_addr_t length);
>  /* This should only be used for ram local to a device.  */
>  void *qemu_get_ram_ptr(ram_addr_t addr);
> +void *qemu_ram_ptr_length(target_phys_addr_t addr, 
> target_phys_addr_t *size);
>  /* Same but slower, to use for migration, where the order of
>   * RAMBlocks must not change. */
>  void *qemu_safe_ram_ptr(ram_addr_t addr);
> diff --git a/exec.c b/exec.c
> index 21f21f0..ff9c174 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -3111,6 +3111,31 @@ void *qemu_safe_ram_ptr(ram_addr_t addr)
>      return NULL;
>  }
>  
> +/* Return a host pointer to guest's ram. Similar to qemu_get_ram_ptr
> + * but takes a size argument */
> +void *qemu_ram_ptr_length(target_phys_addr_t addr, 
> target_phys_addr_t *size)
> +{
> +    if (xen_mapcache_enabled())
> +        return qemu_map_cache(addr, *size, 1);
> +    else {
> +        RAMBlock *block;
> +
> +        QLIST_FOREACH(block, &ram_list.blocks, next) {
> +            if (addr - block->offset < block->length) {
> +                if (addr - block->offset + *size > block->length)
> +                    *size = block->length - addr + block->offset;
> +                return block->host + (addr - block->offset);
> +            }
> +        }
> +
> +        fprintf(stderr, "Bad ram offset %" PRIx64 "\n", 
> (uint64_t)addr);
> +        abort();
> +
> +        *size = 0;
> +        return NULL;
> +    }
> +}
> +
>  void qemu_put_ram_ptr(void *addr)
>  {
>      trace_qemu_put_ram_ptr(addr);
> @@ -3972,14 +3997,12 @@ void 
> *cpu_physical_memory_map(target_phys_addr_t addr,
>                                int is_write)
>  {
>      target_phys_addr_t len = *plen;
> -    target_phys_addr_t done = 0;
> +    target_phys_addr_t todo = 0;
>      int l;
> -    uint8_t *ret = NULL;
> -    uint8_t *ptr;
>      target_phys_addr_t page;
>      unsigned long pd;
>      PhysPageDesc *p;
> -    unsigned long addr1;
> +    target_phys_addr_t addr1 = addr;
>  
>      while (len > 0) {
>          page = addr & TARGET_PAGE_MASK;
> @@ -3994,7 +4017,7 @@ void 
> *cpu_physical_memory_map(target_phys_addr_t addr,
>          }
>  
>          if ((pd & ~TARGET_PAGE_MASK) != IO_MEM_RAM) {
> -            if (done || bounce.buffer) {
> +            if (todo || bounce.buffer) {
>                  break;
>              }
>              bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, 
> TARGET_PAGE_SIZE);
> @@ -4003,23 +4026,17 @@ void 
> *cpu_physical_memory_map(target_phys_addr_t addr,
>              if (!is_write) {
>                  cpu_physical_memory_read(addr, bounce.buffer, l);
>              }
> -            ptr = bounce.buffer;
> -        } else {
> -            addr1 = (pd & TARGET_PAGE_MASK) + (addr & 
> ~TARGET_PAGE_MASK);
> -            ptr = qemu_get_ram_ptr(addr1);
> -        }
> -        if (!done) {
> -            ret = ptr;
> -        } else if (ret + done != ptr) {
> -            break;
> +
> +            *plen = l;
> +            return bounce.buffer;
>          }
>  
>          len -= l;
>          addr += l;
> -        done += l;
> +        todo += l;
>      }
> -    *plen = done;
> -    return ret;
> +    *plen = todo;
> +    return qemu_ram_ptr_length(addr1, plen);
>  }
>  
>  /* Unmaps a memory region previously mapped by 
> cpu_physical_memory_map().
> -- 
> 1.7.2.3

Hello Stefano,

This commit breaks the case where guest memory doesn't start at address 0.

The previous code,
	addr1 = (pd & TARGET_PAGE_MASK) + (addr & ~TARGET_PAGE_MASK);
translated the guest physical address into a QEMU ram_addr, so it passed the RAM range check.

But the current code,
	addr1 = addr;
passes the raw guest physical address, which fails the RAM range check.


Thanks,
Yu

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor cpu_physical_memory_map
  2011-07-22  5:42     ` Liu Yu-B13201
@ 2011-07-22  5:59       ` Alexander Graf
  -1 siblings, 0 replies; 33+ messages in thread
From: Alexander Graf @ 2011-07-22  5:59 UTC (permalink / raw)
  To: Liu Yu-B13201
  Cc: Yoder Stuart-B08248, xen-devel, qemu-devel, stefano.stabellini


On 22.07.2011, at 07:42, Liu Yu-B13201 wrote:

> 
> 
>> -----Original Message-----
>> From: qemu-devel-bounces+yu.liu=freescale.com@nongnu.org 
>> [mailto:qemu-devel-bounces+yu.liu=freescale.com@nongnu.org] 
>> On Behalf Of stefano.stabellini@eu.citrix.com
>> Sent: Friday, May 20, 2011 1:36 AM
>> To: qemu-devel@nongnu.org
>> Cc: xen-devel@lists.xensource.com; agraf@suse.de; Stefano Stabellini
>> Subject: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor 
>> cpu_physical_memory_map
>> 
>> From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> 
>> Introduce qemu_ram_ptr_length that takes an address and a size as
>> parameters rather than just an address.
>> 
>> Refactor cpu_physical_memory_map so that we call 
>> qemu_ram_ptr_length only
>> once rather than calling qemu_get_ram_ptr one time per page.
>> This is not only more efficient but also tries to simplify 
>> the logic of
>> the function.
>> Currently we are relying on the fact that all the pages are mapped
>> contiguously in qemu's address space: we have a check to make 
>> sure that
>> the virtual address returned by qemu_get_ram_ptr from the 
>> second call on
>> is consecutive. Now we are making this more explicit replacing all the
>> calls to qemu_get_ram_ptr with a single call to qemu_ram_ptr_length
>> passing a size argument.
>> 
>> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> CC: agraf@suse.de
>> CC: anthony@codemonkey.ws
>> ---
>> cpu-common.h |    1 +
>> exec.c       |   51 
>> ++++++++++++++++++++++++++++++++++-----------------
>> 2 files changed, 35 insertions(+), 17 deletions(-)
>> 
>> diff --git a/cpu-common.h b/cpu-common.h
>> index 151c32c..085aacb 100644
>> --- a/cpu-common.h
>> +++ b/cpu-common.h
>> @@ -64,6 +64,7 @@ void qemu_ram_free(ram_addr_t addr);
>> void qemu_ram_remap(ram_addr_t addr, ram_addr_t length);
>> /* This should only be used for ram local to a device.  */
>> void *qemu_get_ram_ptr(ram_addr_t addr);
>> +void *qemu_ram_ptr_length(target_phys_addr_t addr, 
>> target_phys_addr_t *size);
>> /* Same but slower, to use for migration, where the order of
>>  * RAMBlocks must not change. */
>> void *qemu_safe_ram_ptr(ram_addr_t addr);
>> diff --git a/exec.c b/exec.c
>> index 21f21f0..ff9c174 100644
>> --- a/exec.c
>> +++ b/exec.c
>> @@ -3111,6 +3111,31 @@ void *qemu_safe_ram_ptr(ram_addr_t addr)
>>     return NULL;
>> }
>> 
>> +/* Return a host pointer to guest's ram. Similar to qemu_get_ram_ptr
>> + * but takes a size argument */
>> +void *qemu_ram_ptr_length(target_phys_addr_t addr, 
>> target_phys_addr_t *size)
>> +{
>> +    if (xen_mapcache_enabled())
>> +        return qemu_map_cache(addr, *size, 1);
>> +    else {
>> +        RAMBlock *block;
>> +
>> +        QLIST_FOREACH(block, &ram_list.blocks, next) {
>> +            if (addr - block->offset < block->length) {
>> +                if (addr - block->offset + *size > block->length)
>> +                    *size = block->length - addr + block->offset;
>> +                return block->host + (addr - block->offset);
>> +            }
>> +        }
>> +
>> +        fprintf(stderr, "Bad ram offset %" PRIx64 "\n", 
>> (uint64_t)addr);
>> +        abort();
>> +
>> +        *size = 0;
>> +        return NULL;
>> +    }
>> +}
>> +
>> void qemu_put_ram_ptr(void *addr)
>> {
>>     trace_qemu_put_ram_ptr(addr);
>> @@ -3972,14 +3997,12 @@ void 
>> *cpu_physical_memory_map(target_phys_addr_t addr,
>>                               int is_write)
>> {
>>     target_phys_addr_t len = *plen;
>> -    target_phys_addr_t done = 0;
>> +    target_phys_addr_t todo = 0;
>>     int l;
>> -    uint8_t *ret = NULL;
>> -    uint8_t *ptr;
>>     target_phys_addr_t page;
>>     unsigned long pd;
>>     PhysPageDesc *p;
>> -    unsigned long addr1;
>> +    target_phys_addr_t addr1 = addr;
>> 
>>     while (len > 0) {
>>         page = addr & TARGET_PAGE_MASK;
>> @@ -3994,7 +4017,7 @@ void 
>> *cpu_physical_memory_map(target_phys_addr_t addr,
>>         }
>> 
>>         if ((pd & ~TARGET_PAGE_MASK) != IO_MEM_RAM) {
>> -            if (done || bounce.buffer) {
>> +            if (todo || bounce.buffer) {
>>                 break;
>>             }
>>             bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, 
>> TARGET_PAGE_SIZE);
>> @@ -4003,23 +4026,17 @@ void 
>> *cpu_physical_memory_map(target_phys_addr_t addr,
>>             if (!is_write) {
>>                 cpu_physical_memory_read(addr, bounce.buffer, l);
>>             }
>> -            ptr = bounce.buffer;
>> -        } else {
>> -            addr1 = (pd & TARGET_PAGE_MASK) + (addr & 
>> ~TARGET_PAGE_MASK);
>> -            ptr = qemu_get_ram_ptr(addr1);
>> -        }
>> -        if (!done) {
>> -            ret = ptr;
>> -        } else if (ret + done != ptr) {
>> -            break;
>> +
>> +            *plen = l;
>> +            return bounce.buffer;
>>         }
>> 
>>         len -= l;
>>         addr += l;
>> -        done += l;
>> +        todo += l;
>>     }
>> -    *plen = done;
>> -    return ret;
>> +    *plen = todo;
>> +    return qemu_ram_ptr_length(addr1, plen);
>> }
>> 
>> /* Unmaps a memory region previously mapped by 
>> cpu_physical_memory_map().
>> -- 
>> 1.7.2.3
> 
> Hello Stefano,
> 
> This commit breaks the case that guest memory doesn't start from 0.
> 
> In previous code
> 	addr1 = (pd & TARGET_PAGE_MASK) + (addr & ~TARGET_PAGE_MASK);
> This transfer guest physical addr to qemu ram_addr, and so that it can pass the ram range checking.
> 
> But current code
> 	addr1 = addr
> this make it fail to pass the ram range checking.

Are you sure it's still broken with commit 8ab934f93b5ad3d0af4ad419d2531235a75d672c? If so, would you mind pinpointing where exactly?


Alex

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor cpu_physical_memory_map
  2011-07-22  5:59       ` Alexander Graf
@ 2011-07-22  6:14         ` Liu Yu-B13201
  -1 siblings, 0 replies; 33+ messages in thread
From: Liu Yu-B13201 @ 2011-07-22  6:14 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Yoder Stuart-B08248, xen-devel, qemu-devel, stefano.stabellini

 

> -----Original Message-----
> From: Alexander Graf [mailto:agraf@suse.de] 
> Sent: Friday, July 22, 2011 2:00 PM
> To: Liu Yu-B13201
> Cc: stefano.stabellini@eu.citrix.com; qemu-devel@nongnu.org; 
> xen-devel@lists.xensource.com; Yoder Stuart-B08248
> Subject: Re: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor 
> cpu_physical_memory_map
> 
> 
> On 22.07.2011, at 07:42, Liu Yu-B13201 wrote:
> 
> > 
> > 
> >> -----Original Message-----
> >> From: qemu-devel-bounces+yu.liu=freescale.com@nongnu.org 
> >> [mailto:qemu-devel-bounces+yu.liu=freescale.com@nongnu.org] 
> >> On Behalf Of stefano.stabellini@eu.citrix.com
> >> Sent: Friday, May 20, 2011 1:36 AM
> >> To: qemu-devel@nongnu.org
> >> Cc: xen-devel@lists.xensource.com; agraf@suse.de; Stefano 
> Stabellini
> >> Subject: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor 
> >> cpu_physical_memory_map
> >> 
> >> From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >> 
> >> Introduce qemu_ram_ptr_length that takes an address and a size as
> >> parameters rather than just an address.
> >> 
> >> Refactor cpu_physical_memory_map so that we call 
> >> qemu_ram_ptr_length only
> >> once rather than calling qemu_get_ram_ptr one time per page.
> >> This is not only more efficient but also tries to simplify 
> >> the logic of
> >> the function.
> >> Currently we are relying on the fact that all the pages are mapped
> >> contiguously in qemu's address space: we have a check to make 
> >> sure that
> >> the virtual address returned by qemu_get_ram_ptr from the 
> >> second call on
> >> is consecutive. Now we are making this more explicit 
> replacing all the
> >> calls to qemu_get_ram_ptr with a single call to qemu_ram_ptr_length
> >> passing a size argument.
> >> 
> >> Signed-off-by: Stefano Stabellini 
> <stefano.stabellini@eu.citrix.com>
> >> CC: agraf@suse.de
> >> CC: anthony@codemonkey.ws
> >> ---
> >> cpu-common.h |    1 +
> >> exec.c       |   51 
> >> ++++++++++++++++++++++++++++++++++-----------------
> >> 2 files changed, 35 insertions(+), 17 deletions(-)
> >> 
> >> diff --git a/cpu-common.h b/cpu-common.h
> >> index 151c32c..085aacb 100644
> >> --- a/cpu-common.h
> >> +++ b/cpu-common.h
> >> @@ -64,6 +64,7 @@ void qemu_ram_free(ram_addr_t addr);
> >> void qemu_ram_remap(ram_addr_t addr, ram_addr_t length);
> >> /* This should only be used for ram local to a device.  */
> >> void *qemu_get_ram_ptr(ram_addr_t addr);
> >> +void *qemu_ram_ptr_length(target_phys_addr_t addr, 
> >> target_phys_addr_t *size);
> >> /* Same but slower, to use for migration, where the order of
> >>  * RAMBlocks must not change. */
> >> void *qemu_safe_ram_ptr(ram_addr_t addr);
> >> diff --git a/exec.c b/exec.c
> >> index 21f21f0..ff9c174 100644
> >> --- a/exec.c
> >> +++ b/exec.c
> >> @@ -3111,6 +3111,31 @@ void *qemu_safe_ram_ptr(ram_addr_t addr)
> >>     return NULL;
> >> }
> >> 
> >> +/* Return a host pointer to guest's ram. Similar to 
> qemu_get_ram_ptr
> >> + * but takes a size argument */
> >> +void *qemu_ram_ptr_length(target_phys_addr_t addr, 
> >> target_phys_addr_t *size)
> >> +{
> >> +    if (xen_mapcache_enabled())
> >> +        return qemu_map_cache(addr, *size, 1);
> >> +    else {
> >> +        RAMBlock *block;
> >> +
> >> +        QLIST_FOREACH(block, &ram_list.blocks, next) {
> >> +            if (addr - block->offset < block->length) {
> >> +                if (addr - block->offset + *size > block->length)
> >> +                    *size = block->length - addr + block->offset;
> >> +                return block->host + (addr - block->offset);
> >> +            }
> >> +        }
> >> +
> >> +        fprintf(stderr, "Bad ram offset %" PRIx64 "\n", 
> >> (uint64_t)addr);
> >> +        abort();
> >> +
> >> +        *size = 0;
> >> +        return NULL;
> >> +    }
> >> +}
> >> +
> >> void qemu_put_ram_ptr(void *addr)
> >> {
> >>     trace_qemu_put_ram_ptr(addr);
> >> @@ -3972,14 +3997,12 @@ void 
> >> *cpu_physical_memory_map(target_phys_addr_t addr,
> >>                               int is_write)
> >> {
> >>     target_phys_addr_t len = *plen;
> >> -    target_phys_addr_t done = 0;
> >> +    target_phys_addr_t todo = 0;
> >>     int l;
> >> -    uint8_t *ret = NULL;
> >> -    uint8_t *ptr;
> >>     target_phys_addr_t page;
> >>     unsigned long pd;
> >>     PhysPageDesc *p;
> >> -    unsigned long addr1;
> >> +    target_phys_addr_t addr1 = addr;
> >> 
> >>     while (len > 0) {
> >>         page = addr & TARGET_PAGE_MASK;
> >> @@ -3994,7 +4017,7 @@ void 
> >> *cpu_physical_memory_map(target_phys_addr_t addr,
> >>         }
> >> 
> >>         if ((pd & ~TARGET_PAGE_MASK) != IO_MEM_RAM) {
> >> -            if (done || bounce.buffer) {
> >> +            if (todo || bounce.buffer) {
> >>                 break;
> >>             }
> >>             bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, 
> >> TARGET_PAGE_SIZE);
> >> @@ -4003,23 +4026,17 @@ void 
> >> *cpu_physical_memory_map(target_phys_addr_t addr,
> >>             if (!is_write) {
> >>                 cpu_physical_memory_read(addr, bounce.buffer, l);
> >>             }
> >> -            ptr = bounce.buffer;
> >> -        } else {
> >> -            addr1 = (pd & TARGET_PAGE_MASK) + (addr & 
> >> ~TARGET_PAGE_MASK);
> >> -            ptr = qemu_get_ram_ptr(addr1);
> >> -        }
> >> -        if (!done) {
> >> -            ret = ptr;
> >> -        } else if (ret + done != ptr) {
> >> -            break;
> >> +
> >> +            *plen = l;
> >> +            return bounce.buffer;
> >>         }
> >> 
> >>         len -= l;
> >>         addr += l;
> >> -        done += l;
> >> +        todo += l;
> >>     }
> >> -    *plen = done;
> >> -    return ret;
> >> +    *plen = todo;
> >> +    return qemu_ram_ptr_length(addr1, plen);
> >> }
> >> 
> >> /* Unmaps a memory region previously mapped by 
> >> cpu_physical_memory_map().
> >> -- 
> >> 1.7.2.3
> > 
> > Hello Stefano,
> > 
> > This commit breaks the case that guest memory doesn't start from 0.
> > 
> > In previous code
> > 	addr1 = (pd & TARGET_PAGE_MASK) + (addr & ~TARGET_PAGE_MASK);
> > This transfer guest physical addr to qemu ram_addr, and so 
> that it can pass the ram range checking.
> > 
> > But current code
> > 	addr1 = addr
> > this make it fail to pass the ram range checking.
> 
> Are you sure it's still broken with commit 
> 8ab934f93b5ad3d0af4ad419d2531235a75d672c? If so, mind to 
> pinpoint where exactly?
> 

Ah, I missed that one.
Yes, it works with commit 8ab934f93b5ad3d0af4ad419d2531235a75d672c.
Sorry, please ignore my report.


Thanks,
Yu

^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2011-07-22  6:44 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-05-19 17:34 [Qemu-devel] [PATCH v2 0/5] xen mapcache fixes and improvements Stefano Stabellini
2011-05-19 17:34 ` Stefano Stabellini
2011-05-19 17:35 ` [Qemu-devel] [PATCH v2 1/5] xen: fix qemu_map_cache with size != MCACHE_BUCKET_SIZE stefano.stabellini
2011-05-19 17:35   ` stefano.stabellini
2011-05-19 17:35 ` [Qemu-devel] [PATCH v2 2/5] xen: remove qemu_map_cache_unlock stefano.stabellini
2011-05-19 17:35   ` stefano.stabellini
2011-05-19 17:35 ` [Qemu-devel] [PATCH v2 3/5] xen: remove xen_map_block and xen_unmap_block stefano.stabellini
2011-05-19 17:35   ` stefano.stabellini
2011-05-19 17:35 ` [Qemu-devel] [PATCH v2 4/5] exec.c: refactor cpu_physical_memory_map stefano.stabellini
2011-05-19 17:35   ` stefano.stabellini
2011-07-11 22:17   ` [Qemu-devel] " Jan Kiszka
2011-07-11 22:17     ` Jan Kiszka
2011-07-12  6:14     ` [Qemu-devel] " Alexander Graf
2011-07-12  6:14       ` Alexander Graf
2011-07-12  6:21     ` [Qemu-devel] " Alexander Graf
2011-07-12  6:21       ` Alexander Graf
2011-07-12  6:28     ` [Qemu-devel] " Alexander Graf
2011-07-12  6:28       ` Alexander Graf
2011-07-12  7:15       ` [Qemu-devel] " Jan Kiszka
2011-07-12  7:15         ` Jan Kiszka
2011-07-12  7:18         ` [Qemu-devel] " Paolo Bonzini
2011-07-12  7:18           ` Paolo Bonzini
2011-07-12  7:48           ` [Qemu-devel] " Markus Armbruster
2011-07-22  5:42   ` Liu Yu-B13201
2011-07-22  5:42     ` Liu Yu-B13201
2011-07-22  5:59     ` [Qemu-devel] " Alexander Graf
2011-07-22  5:59       ` Alexander Graf
2011-07-22  6:14       ` [Qemu-devel] " Liu Yu-B13201
2011-07-22  6:14         ` Liu Yu-B13201
2011-05-19 17:35 ` [Qemu-devel] [PATCH v2 5/5] xen: mapcache performance improvements stefano.stabellini
2011-05-19 17:35   ` stefano.stabellini
2011-05-27 23:30 ` [Qemu-devel] [PATCH v2 0/5] xen mapcache fixes and improvements Alexander Graf
2011-05-27 23:30   ` Alexander Graf
