All of lore.kernel.org
* [PATCH 0/2] Fix wide ioport access cracking
@ 2011-08-11  7:40 ` Avi Kivity
  0 siblings, 0 replies; 23+ messages in thread
From: Avi Kivity @ 2011-08-11  7:40 UTC (permalink / raw)
  To: Anthony Liguori, Gerhard Wiesinger; +Cc: qemu-devel, kvm

The memory API automatically cracks wide memory accesses into narrower
(usually byte) accesses when needed.  Unfortunately this wasn't implemented
for ioports, leading to an lsi53c895a failure.

This series implements cracking for ioports as well.

Note that the dual implementation is due to the fact that the memory API is
layered on top of the original qemu API; eventually the same code will be used
for both ioports and mmio.

Avi Kivity (2):
  memory: abstract cracking of write access ops into a function
  memory: crack wide ioport accesses into smaller ones when needed

 memory.c |  120 +++++++++++++++++++++++++++++++++++++++----------------------
 1 files changed, 77 insertions(+), 43 deletions(-)

-- 
1.7.5.3


^ permalink raw reply	[flat|nested] 23+ messages in thread


* [PATCH 1/2] memory: abstract cracking of write access ops into a function
  2011-08-11  7:40 ` [Qemu-devel] " Avi Kivity
@ 2011-08-11  7:40   ` Avi Kivity
  -1 siblings, 0 replies; 23+ messages in thread
From: Avi Kivity @ 2011-08-11  7:40 UTC (permalink / raw)
  To: Anthony Liguori, Gerhard Wiesinger; +Cc: qemu-devel, kvm

The memory API automatically cracks large reads and writes into smaller
ones when needed.  Factor out this mechanism, which is now duplicated between
memory reads and memory writes, into a function.

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 memory.c |  109 ++++++++++++++++++++++++++++++++++++++-----------------------
 1 files changed, 68 insertions(+), 41 deletions(-)

diff --git a/memory.c b/memory.c
index beff98c..3c18e70 100644
--- a/memory.c
+++ b/memory.c
@@ -226,6 +226,65 @@ static void flatview_simplify(FlatView *view)
     }
 }
 
+static void memory_region_read_accessor(void *opaque,
+                                        target_phys_addr_t addr,
+                                        uint64_t *value,
+                                        unsigned size,
+                                        unsigned shift,
+                                        uint64_t mask)
+{
+    MemoryRegion *mr = opaque;
+    uint64_t tmp;
+
+    tmp = mr->ops->read(mr->opaque, addr, size);
+    *value |= (tmp & mask) << shift;
+}
+
+static void memory_region_write_accessor(void *opaque,
+                                         target_phys_addr_t addr,
+                                         uint64_t *value,
+                                         unsigned size,
+                                         unsigned shift,
+                                         uint64_t mask)
+{
+    MemoryRegion *mr = opaque;
+    uint64_t tmp;
+
+    tmp = (*value >> shift) & mask;
+    mr->ops->write(mr->opaque, addr, tmp, size);
+}
+
+static void access_with_adjusted_size(target_phys_addr_t addr,
+                                      uint64_t *value,
+                                      unsigned size,
+                                      unsigned access_size_min,
+                                      unsigned access_size_max,
+                                      void (*access)(void *opaque,
+                                                     target_phys_addr_t addr,
+                                                     uint64_t *value,
+                                                     unsigned size,
+                                                     unsigned shift,
+                                                     uint64_t mask),
+                                      void *opaque)
+{
+    uint64_t access_mask;
+    unsigned access_size;
+    unsigned i;
+
+    if (!access_size_min) {
+        access_size_min = 1;
+    }
+    if (!access_size_max) {
+        access_size_max = 4;
+    }
+    access_size = MAX(MIN(size, access_size_max), access_size_min);
+    access_mask = -1ULL >> (64 - access_size * 8);
+    for (i = 0; i < size; i += access_size) {
+        /* FIXME: big-endian support */
+        access(opaque, addr + i, value, access_size, i * 8, access_mask);
+    }
+}
+
 static void memory_region_prepare_ram_addr(MemoryRegion *mr);
 
 static void as_memory_range_add(AddressSpace *as, FlatRange *fr)
@@ -744,10 +803,7 @@ static uint32_t memory_region_read_thunk_n(void *_mr,
                                            unsigned size)
 {
     MemoryRegion *mr = _mr;
-    unsigned access_size, access_size_min, access_size_max;
-    uint64_t access_mask;
-    uint32_t data = 0, tmp;
-    unsigned i;
+    uint64_t data = 0;
 
     if (!memory_region_access_valid(mr, addr, size)) {
         return -1U; /* FIXME: better signalling */
@@ -758,23 +814,10 @@ static uint32_t memory_region_read_thunk_n(void *_mr,
     }
 
     /* FIXME: support unaligned access */
-
-    access_size_min = mr->ops->impl.min_access_size;
-    if (!access_size_min) {
-        access_size_min = 1;
-    }
-    access_size_max = mr->ops->impl.max_access_size;
-    if (!access_size_max) {
-        access_size_max = 4;
-    }
-    access_size = MAX(MIN(size, access_size_max), access_size_min);
-    access_mask = -1ULL >> (64 - access_size * 8);
-    addr += mr->offset;
-    for (i = 0; i < size; i += access_size) {
-        /* FIXME: big-endian support */
-        tmp = mr->ops->read(mr->opaque, addr + i, access_size);
-        data |= (tmp & access_mask) << (i * 8);
-    }
+    access_with_adjusted_size(addr + mr->offset, &data, size,
+                              mr->ops->impl.min_access_size,
+                              mr->ops->impl.max_access_size,
+                              memory_region_read_accessor, mr);
 
     return data;
 }
@@ -785,9 +828,6 @@ static void memory_region_write_thunk_n(void *_mr,
                                         uint64_t data)
 {
     MemoryRegion *mr = _mr;
-    unsigned access_size, access_size_min, access_size_max;
-    uint64_t access_mask;
-    unsigned i;
 
     if (!memory_region_access_valid(mr, addr, size)) {
         return; /* FIXME: better signalling */
@@ -799,23 +839,10 @@ static void memory_region_write_thunk_n(void *_mr,
     }
 
     /* FIXME: support unaligned access */
-
-    access_size_min = mr->ops->impl.min_access_size;
-    if (!access_size_min) {
-        access_size_min = 1;
-    }
-    access_size_max = mr->ops->impl.max_access_size;
-    if (!access_size_max) {
-        access_size_max = 4;
-    }
-    access_size = MAX(MIN(size, access_size_max), access_size_min);
-    access_mask = -1ULL >> (64 - access_size * 8);
-    addr += mr->offset;
-    for (i = 0; i < size; i += access_size) {
-        /* FIXME: big-endian support */
-        mr->ops->write(mr->opaque, addr + i, (data >> (i * 8)) & access_mask,
-                       access_size);
-    }
+    access_with_adjusted_size(addr + mr->offset, &data, size,
+                              mr->ops->impl.min_access_size,
+                              mr->ops->impl.max_access_size,
+                              memory_region_write_accessor, mr);
 }
 
 static uint32_t memory_region_read_thunk_b(void *mr, target_phys_addr_t addr)
-- 
1.7.5.3



* [PATCH 2/2] memory: crack wide ioport accesses into smaller ones when needed
  2011-08-11  7:40 ` [Qemu-devel] " Avi Kivity
@ 2011-08-11  7:40   ` Avi Kivity
  -1 siblings, 0 replies; 23+ messages in thread
From: Avi Kivity @ 2011-08-11  7:40 UTC (permalink / raw)
  To: Anthony Liguori, Gerhard Wiesinger; +Cc: qemu-devel, kvm

The memory API supports cracking wide accesses into narrower ones
when needed, but this was not implemented for the pio address space,
causing lsi53c895a's IO BAR to malfunction.

Fix by correctly cracking wide accesses when needed.

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 memory.c |   11 +++++++++--
 1 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/memory.c b/memory.c
index 3c18e70..81032b6 100644
--- a/memory.c
+++ b/memory.c
@@ -400,7 +400,11 @@ static void memory_region_iorange_read(IORange *iorange,
         }
         return;
     }
-    *data = mr->ops->read(mr->opaque, offset, width);
+    *data = 0;
+    access_with_adjusted_size(offset, data, width,
+                              mr->ops->impl.min_access_size,
+                              mr->ops->impl.max_access_size,
+                              memory_region_read_accessor, mr);
 }
 
 static void memory_region_iorange_write(IORange *iorange,
@@ -418,7 +422,10 @@ static void memory_region_iorange_write(IORange *iorange,
         }
         return;
     }
-    mr->ops->write(mr->opaque, offset, data, width);
+    access_with_adjusted_size(offset, &data, width,
+                              mr->ops->impl.min_access_size,
+                              mr->ops->impl.max_access_size,
+                              memory_region_write_accessor, mr);
 }
 
 static const IORangeOps memory_region_iorange_ops = {
-- 
1.7.5.3



* Re: [PATCH 0/2] Fix wide ioport access cracking
  2011-08-11  7:40 ` [Qemu-devel] " Avi Kivity
@ 2011-08-11  8:25   ` Gerhard Wiesinger
  -1 siblings, 0 replies; 23+ messages in thread
From: Gerhard Wiesinger @ 2011-08-11  8:25 UTC (permalink / raw)
  To: Avi Kivity; +Cc: qemu-devel, kvm

Hello Avi,

Thanks for the fast fix. Unfortunately it still doesn't work (though the
LSI BIOS is initialized correctly).

I'm getting at boot time:
qemu-system-x86_64: /qemu-kvm-test/memory.c:1168: 
memory_region_del_subregion: Assertion `subregion->parent == mr' failed.

Thnx.

Ciao,
Gerhard

--
http://www.wiesinger.com/


On Thu, 11 Aug 2011, Avi Kivity wrote:

> The memory API automatically cracks wide memory accesses into narrower
> (usually byte) accesses when needed.  Unfortunately this wasn't implemented
> for ioports, leading to an lsi53c895a failure.
>
> This series implements cracking for ioports as well.
>
> Note that the dual implementation is due to the fact the memory API is layered
> on top of the original qemu API; eventually the same code will be used for
> both ioports and mmio.
>
> Avi Kivity (2):
>  memory: abstract cracking of write access ops into a function
>  memory: crack wide ioport accesses into smaller ones when needed
>
> memory.c |  120 +++++++++++++++++++++++++++++++++++++++----------------------
> 1 files changed, 77 insertions(+), 43 deletions(-)
>
> -- 
> 1.7.5.3
>
>
>


* Re: [PATCH 0/2] Fix wide ioport access cracking
  2011-08-11  8:25   ` [Qemu-devel] " Gerhard Wiesinger
@ 2011-08-11  8:27     ` Avi Kivity
  -1 siblings, 0 replies; 23+ messages in thread
From: Avi Kivity @ 2011-08-11  8:27 UTC (permalink / raw)
  To: Gerhard Wiesinger; +Cc: qemu-devel, kvm

On 08/11/2011 11:25 AM, Gerhard Wiesinger wrote:
> Hello Avi,
>
> Thank for the fast fix. Unfortunatly it still doesn't work (but LSI 
> BIOS is initialized correctly).
>
> I'm getting at boot time:
> qemu-system-x86_64: /qemu-kvm-test/memory.c:1168: 
> memory_region_del_subregion: Assertion `subregion->parent == mr' failed.

What OS are you booting?  What is your qemu command line?  How early is 
the failure?

-- 
error compiling committee.c: too many arguments to function


* Re: [Qemu-devel] [PATCH 0/2] Fix wide ioport access cracking
  2011-08-11  8:27     ` [Qemu-devel] " Avi Kivity
@ 2011-08-11  8:29       ` Avi Kivity
  -1 siblings, 0 replies; 23+ messages in thread
From: Avi Kivity @ 2011-08-11  8:29 UTC (permalink / raw)
  To: Gerhard Wiesinger; +Cc: Anthony Liguori, qemu-devel, kvm

On 08/11/2011 11:27 AM, Avi Kivity wrote:
> On 08/11/2011 11:25 AM, Gerhard Wiesinger wrote:
>> Hello Avi,
>>
>> Thank for the fast fix. Unfortunatly it still doesn't work (but LSI 
>> BIOS is initialized correctly).
>>
>> I'm getting at boot time:
>> qemu-system-x86_64: /qemu-kvm-test/memory.c:1168: 
>> memory_region_del_subregion: Assertion `subregion->parent == mr' failed.
>
> What OS are you booting?  What is your qemu command line?  How early 
> is the failure?
>

Alternatively, build with debug information (./configure --enable-debug) 
enable core dumps ('ulimit -c unlimited'), and post a backtrace:

  gdb qemu-system-x86_64 /path/to/core
  (gdb) bt

It should be immediately apparent where the failure is.

-- 
error compiling committee.c: too many arguments to function



* Re: [PATCH 0/2] Fix wide ioport access cracking
  2011-08-11  8:29       ` Avi Kivity
@ 2011-08-11  9:01         ` Gerhard Wiesinger
  -1 siblings, 0 replies; 23+ messages in thread
From: Gerhard Wiesinger @ 2011-08-11  9:01 UTC (permalink / raw)
  To: Avi Kivity; +Cc: qemu-devel, kvm

Hello Avi,

#0  0x0000003a060328f5 in raise () from /lib64/libc.so.6
#1  0x0000003a060340d5 in abort () from /lib64/libc.so.6
#2  0x0000003a0602b8b5 in __assert_fail () from /lib64/libc.so.6
#3  0x0000000000435339 in memory_region_del_subregion (mr=<value optimized out>, subregion=<value optimized out>)    at /root/download/qemu/git/qemu-kvm-test/memory.c:1168
#4  0x000000000041eb9b in pci_update_mappings (d=0x1a90bc0) at /root/download/qemu/git/qemu-kvm-test/hw/pci.c:1134
#5  0x0000000000420a9c in pci_default_write_config (d=0x1a90bc0, addr=4, val=<value optimized out>, l=<value optimized out>)     at /root/download/qemu/git/qemu-kvm-test/hw/pci.c:1213
#6  0x00000000004329a6 in kvm_handle_io (env=0x1931af0) at /root/download/qemu/git/qemu-kvm-test/kvm-all.c:858
#7  kvm_cpu_exec (env=0x1931af0) at /root/download/qemu/git/qemu-kvm-test/kvm-all.c:997
#8  0x000000000040bd4a in qemu_kvm_cpu_thread_fn (arg=0x1931af0) at /root/download/qemu/git/qemu-kvm-test/cpus.c:806
#9  0x0000003a06807761 in start_thread () from /lib64/libpthread.so.0
#10 0x0000003a060e098d in clone () from /lib64/libc.so.6

Command line:
qemu-system-x86_64 -drive file=gerhard.img,media=disk,if=scsi,bus=0,unit=0 -drive file=gerhard2.img,media=disk,if=scsi,bus=0,unit=1 -drive file=gerhard3.img,media=disk,if=scsi,bus=0,unit=2 -boot order=c -m 256 -k de -vga vmware -vnc :0 -bios bios.bin -option-rom 8xx_64.rom -net nic,model=pcnet,macaddr=1a:46:0b:cc:aa:bb -net tap,ifname=tap0,script=no,downscript=no

BTW: Is the new memory API faster? Any ideas how to optimize (if not)?

I don't know if you remember, but I was looking for fast access for the
following legacy DOS use cases under KVM:
1.) Page switching in the area of 0xA0000-0xAFFFF (linear frame buffer 
mapping) through INT 0x10 function
2.) Access the memory page

As far as I saw there are 2 different virtualization approaches 
(different in VMWare VGA and cirrus VGA):
1.) Just remember the page on the INT 0x10 function setter and virtualize 
each access to the page.
Advantages: Fast page switching
Disadvantages: Each access is virtualized, which is slow (you pointed out
that each switch from non-virtualized to virtualized is very slow and
requires thousands of CPU cycles, see archive)

2.) Map the page in the INT 0x10 handler through the memory mapping functions
and access the mapped memory area directly, without virtualization.
Advantages: Fast direct access
Disadvantages with the old API: switching was very slow (about 1000 switches
per second or even lower, as far as I remember).
As far as I could tell, the cost came from here (maybe a linear list issue?):
static int cpu_notify_sync_dirty_bitmap(target_phys_addr_t start,
                                         target_phys_addr_t end)
{
     CPUPhysMemoryClient *client;
     QLIST_FOREACH(client, &memory_client_list, list) {
         int r = client->sync_dirty_bitmap(client, start, end);
         if (r < 0)
             return r;
     }
     return 0;
}

kvm_physical_sync_dirty_bitmap

I think variant 2 is the preferred one, but with optimized switching of the
mapping.

Thnx.

Ciao,
Gerhard

--
http://www.wiesinger.com/


On Thu, 11 Aug 2011, Avi Kivity wrote:
> Alternatively, build with debug information (./configure --enable-debug) 
> enable core dumps ('ulimit -c unlimited'), and post a backtrace:
>
> gdb qemu-system-x86_64 /path/to/core
> (gdb) bt
>
> It should be immediately apparent where the failure is.


* Re: [Qemu-devel] [PATCH 0/2] Fix wide ioport access cracking
  2011-08-11  9:01         ` [Qemu-devel] " Gerhard Wiesinger
  (?)
@ 2011-08-11  9:44         ` Avi Kivity
  2011-08-11 16:08           ` Gerhard Wiesinger
  2011-08-11 16:11           ` Gerhard Wiesinger
  -1 siblings, 2 replies; 23+ messages in thread
From: Avi Kivity @ 2011-08-11  9:44 UTC (permalink / raw)
  To: Gerhard Wiesinger; +Cc: qemu-devel, kvm

On 08/11/2011 12:01 PM, Gerhard Wiesinger wrote:
> Hello Avi,
>
> #0  0x0000003a060328f5 in raise () from /lib64/libc.so.6
> #1  0x0000003a060340d5 in abort () from /lib64/libc.so.6
> #2  0x0000003a0602b8b5 in __assert_fail () from /lib64/libc.so.6
> #3  0x0000000000435339 in memory_region_del_subregion (mr=<value 
> optimized out>, subregion=<value optimized out>)    at 
> /root/download/qemu/git/qemu-kvm-test/memory.c:1168
> #4  0x000000000041eb9b in pci_update_mappings (d=0x1a90bc0) at 
> /root/download/qemu/git/qemu-kvm-test/hw/pci.c:1134
> #5  0x0000000000420a9c in pci_default_write_config (d=0x1a90bc0, 
> addr=4, val=<value optimized out>, l=<value optimized out>)     at 
> /root/download/qemu/git/qemu-kvm-test/hw/pci.c:1213
> #6  0x00000000004329a6 in kvm_handle_io (env=0x1931af0) at 
> /root/download/qemu/git/qemu-kvm-test/kvm-all.c:858
> #7  kvm_cpu_exec (env=0x1931af0) at 
> /root/download/qemu/git/qemu-kvm-test/kvm-all.c:997
> #8  0x000000000040bd4a in qemu_kvm_cpu_thread_fn (arg=0x1931af0) at 
> /root/download/qemu/git/qemu-kvm-test/cpus.c:806
> #9  0x0000003a06807761 in start_thread () from /lib64/libpthread.so.0
> #10 0x0000003a060e098d in clone () from /lib64/libc.so.6
>

In frame 4, can you print out i, *r, and d->io_regions[0 through 6]?  
Some of them may be optimized out unfortunately.


>
> BTW: Is the new memory API faster? Any ideas how to optimize (if not)?
>

Currently it has no effect on run time performance.

> I don't know if you remember but I was looking for fast access for the 
> following use cases for DOS legacy for KVM:
> 1.) Page switching in the area of 0xA0000-0xAFFFF (linear frame buffer 
> mapping) through INT 0x10 function
> 2.) Access the memory page
>
> As far as I saw there are 2 different virtualization approaches 
> (different in VMWare VGA and cirrus VGA):
> 1.) Just remember the page on the INT 0x10 function setter and 
> virtualize each access to the page.
> Advantages: Fast page switching
> Disadvantages: Each access is virtualized which is slow (you pointed 
> out that each switch from non virtualized to virtualized is very slow 
> and requires thousands of CPU cycles, see archive)
>
> 2.) mapping in the INT 0x10 function through memory mapping functions 
> and direct access to the mapped memory area without virtualization.
> Advantages: Fast direct access
> Disadvantages with old API: was very slow (was about 1000 switches per 
> second or even lower as far as I remember)
> As far as I found it out it came from (maybe a linear list issue?):
> static int cpu_notify_sync_dirty_bitmap(target_phys_addr_t start,
>                                         target_phys_addr_t end)
> {
>     CPUPhysMemoryClient *client;
>     QLIST_FOREACH(client, &memory_client_list, list) {
>         int r = client->sync_dirty_bitmap(client, start, end);
>         if (r < 0)
>             return r;
>     }
>     return 0;
> }
>
> kvm_physical_sync_dirty_bitmap
>
> I think variant 2 is the preferred one but with optimized switching of 
> mapping.

This should be faster today with really new kernels (the problem is not 
in qemu) but I'm not sure if it's fast enough.

-- 
error compiling committee.c: too many arguments to function



* Re: [Qemu-devel] [PATCH 0/2] Fix wide ioport access cracking
  2011-08-11  9:44         ` Avi Kivity
@ 2011-08-11 16:08           ` Gerhard Wiesinger
  2011-08-11 16:20             ` Avi Kivity
  2011-08-11 16:11           ` Gerhard Wiesinger
  1 sibling, 1 reply; 23+ messages in thread
From: Gerhard Wiesinger @ 2011-08-11 16:08 UTC (permalink / raw)
  To: Avi Kivity; +Cc: qemu-devel, kvm

On Thu, 11 Aug 2011, Avi Kivity wrote:

> On 08/11/2011 12:01 PM, Gerhard Wiesinger wrote:
>> Hello Avi,
>> 
>> #0  0x0000003a060328f5 in raise () from /lib64/libc.so.6
>> #1  0x0000003a060340d5 in abort () from /lib64/libc.so.6
>> #2  0x0000003a0602b8b5 in __assert_fail () from /lib64/libc.so.6
>> #3  0x0000000000435339 in memory_region_del_subregion (mr=<value optimized 
>> out>, subregion=<value optimized out>)    at 
>> /root/download/qemu/git/qemu-kvm-test/memory.c:1168
>> #4  0x000000000041eb9b in pci_update_mappings (d=0x1a90bc0) at 
>> /root/download/qemu/git/qemu-kvm-test/hw/pci.c:1134
>> #5  0x0000000000420a9c in pci_default_write_config (d=0x1a90bc0, addr=4, 
>> val=<value optimized out>, l=<value optimized out>)     at 
>> /root/download/qemu/git/qemu-kvm-test/hw/pci.c:1213
>> #6  0x00000000004329a6 in kvm_handle_io (env=0x1931af0) at 
>> /root/download/qemu/git/qemu-kvm-test/kvm-all.c:858
>> #7  kvm_cpu_exec (env=0x1931af0) at 
>> /root/download/qemu/git/qemu-kvm-test/kvm-all.c:997
>> #8  0x000000000040bd4a in qemu_kvm_cpu_thread_fn (arg=0x1931af0) at 
>> /root/download/qemu/git/qemu-kvm-test/cpus.c:806
>> #9  0x0000003a06807761 in start_thread () from /lib64/libpthread.so.0
>> #10 0x0000003a060e098d in clone () from /lib64/libc.so.6
>> 
>
> In frame 4, can you print out i, *r, and d->io_regions[0 through 6]?  Some of 
> them may be optimized out unfortunately.

See below.

Ciao,
Gerhard

(gdb) frame 4
#4  0x000000000041eb9b in pci_update_mappings (d=0x1a90bc0)
     at /root/download/qemu/git/qemu-kvm-test/hw/pci.c:1134
1134                memory_region_del_subregion(r->address_space, 
r->memory);
(gdb) print i
$1 = <value optimized out>
(gdb) print *r
$2 = {addr = 22058952032257, size = 32, filtered_size = 171717340864446496,
   type = 1 '\001', memory = 0x1a90000, address_space = 0x200019282f0}
(gdb) print d->io_regions[0]
$3 = {addr = 22058952032257, size = 32, filtered_size = 171717340864446496,
   type = 1 '\001', memory = 0x1a90000, address_space = 0x200019282f0}
(gdb) print d->io_regions[1]
$4 = {addr = 17113088, size = 32, filtered_size = 32, type = 0 '\000',
   memory = 0x1a911c8, address_space = 0x1920000}
(gdb) print d->io_regions[2]
$5 = {addr = 0, size = 0, filtered_size = 0, type = 0 '\000', memory = 0x0,
   address_space = 0x0}
(gdb) print d->io_regions[3]
$6 = {addr = 0, size = 0, filtered_size = 0, type = 239 '\357', memory = 0x0,
   address_space = 0x0}
(gdb) print d->io_regions[4]
$7 = {addr = 0, size = 0, filtered_size = 0, type = 0 '\000', memory = 0x0,
   address_space = 0x0}
(gdb) print d->io_regions[5]
$8 = {addr = 0, size = 0, filtered_size = 0, type = 0 '\000', memory = 0x0,
   address_space = 0x0}
(gdb) print d->io_regions[6]
$9 = {addr = 0, size = 0, filtered_size = 0, type = 0 '\000', memory = 0x0,
   address_space = 0x0}

--
http://www.wiesinger.com/


* Re: [Qemu-devel] [PATCH 0/2] Fix wide ioport access cracking
  2011-08-11  9:44         ` Avi Kivity
  2011-08-11 16:08           ` Gerhard Wiesinger
@ 2011-08-11 16:11           ` Gerhard Wiesinger
  2011-08-11 16:15             ` Avi Kivity
  1 sibling, 1 reply; 23+ messages in thread
From: Gerhard Wiesinger @ 2011-08-11 16:11 UTC (permalink / raw)
  To: Avi Kivity; +Cc: qemu-devel, kvm

On Thu, 11 Aug 2011, Avi Kivity wrote:
> This should be faster today with really new kernels (the problem is not in 
> qemu) but I'm not sure if it's fast enough.

What's a "really new" kernel? In which version were the performance 
optimizations done? (Currently I'm using 2.6.34.7; I haven't had time 
yet to update from FC13 to FC15 ...)

Ciao,
Gerhard


* Re: [Qemu-devel] [PATCH 0/2] Fix wide ioport access cracking
  2011-08-11 16:11           ` Gerhard Wiesinger
@ 2011-08-11 16:15             ` Avi Kivity
  0 siblings, 0 replies; 23+ messages in thread
From: Avi Kivity @ 2011-08-11 16:15 UTC (permalink / raw)
  To: Gerhard Wiesinger; +Cc: qemu-devel, kvm

On 08/11/2011 07:11 PM, Gerhard Wiesinger wrote:
> On Thu, 11 Aug 2011, Avi Kivity wrote:
>> This should be faster today with really new kernels (the problem is 
>> not in qemu) but I'm not sure if it's fast enough.
>
> What's a "really new" kernel? In which version were the performance 
> optimizations done? (Currently I'm using 2.6.34.7; I haven't had time 
> yet to update from FC13 to FC15 ...)

Not sure really... some of the improvements were in kvm itself 
(rcu_note_context_switch), some in the rcu core.

Try out F15, it will give you 3.0 (and gnome-shell).

-- 
error compiling committee.c: too many arguments to function



* Re: [Qemu-devel] [PATCH 0/2] Fix wide ioport access cracking
  2011-08-11 16:08           ` Gerhard Wiesinger
@ 2011-08-11 16:20             ` Avi Kivity
  2011-08-11 16:22               ` Avi Kivity
  0 siblings, 1 reply; 23+ messages in thread
From: Avi Kivity @ 2011-08-11 16:20 UTC (permalink / raw)
  To: Gerhard Wiesinger; +Cc: qemu-devel, kvm

On 08/11/2011 07:08 PM, Gerhard Wiesinger wrote:
>
> (gdb) frame 4
> #4  0x000000000041eb9b in pci_update_mappings (d=0x1a90bc0)
>     at /root/download/qemu/git/qemu-kvm-test/hw/pci.c:1134
> 1134                memory_region_del_subregion(r->address_space, 
> r->memory);
> (gdb) print i
> $1 = <value optimized out>
> (gdb) print *r
> $2 = {addr = 22058952032257, size = 32, filtered_size = 171717340864446496,
>   type = 1 '\001', memory = 0x1a90000, address_space = 0x200019282f0}
> (gdb) print d->io_regions[0]
> $3 = {addr = 22058952032257, size = 32, filtered_size = 171717340864446496,
>   type = 1 '\001', memory = 0x1a90000, address_space = 0x200019282f0}

Yikes, this looks like corruption; the leading 0x2000 in 
address_space is out of place.

Can you step through lsi pci bar registration and place a data 
breakpoint on address_space, and see where it gets this value?

'addr' looks bad too.
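One way to set such a data breakpoint (an illustrative gdb session; the
exact stopping point is an assumption — any spot after the BAR's
PCIIORegion has been filled in will do):

```
(gdb) break pci_register_bar
(gdb) run
...
(gdb) finish
(gdb) watch -l d->io_regions[0].address_space
(gdb) continue
```

The watchpoint then fires on every write to that field, showing exactly
which code stores the bogus value.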

-- 
error compiling committee.c: too many arguments to function



* Re: [Qemu-devel] [PATCH 0/2] Fix wide ioport access cracking
  2011-08-11 16:20             ` Avi Kivity
@ 2011-08-11 16:22               ` Avi Kivity
  2011-08-11 19:01                 ` Gerhard Wiesinger
  0 siblings, 1 reply; 23+ messages in thread
From: Avi Kivity @ 2011-08-11 16:22 UTC (permalink / raw)
  To: Gerhard Wiesinger; +Cc: qemu-devel, kvm

On 08/11/2011 07:20 PM, Avi Kivity wrote:
> On 08/11/2011 07:08 PM, Gerhard Wiesinger wrote:
>>
>> (gdb) frame 4
>> #4  0x000000000041eb9b in pci_update_mappings (d=0x1a90bc0)
>>     at /root/download/qemu/git/qemu-kvm-test/hw/pci.c:1134
>> 1134                memory_region_del_subregion(r->address_space, 
>> r->memory);
>> (gdb) print i
>> $1 = <value optimized out>
>> (gdb) print *r
>> $2 = {addr = 22058952032257, size = 32, filtered_size = 171717340864446496,
>>   type = 1 '\001', memory = 0x1a90000, address_space = 0x200019282f0}
>> (gdb) print d->io_regions[0]
>> $3 = {addr = 22058952032257, size = 32, filtered_size = 171717340864446496,
>>   type = 1 '\001', memory = 0x1a90000, address_space = 0x200019282f0}
>
> Yikes, this looks like corruption; the leading 0x2000 in 
> address_space is out of place.
>
> Can you step through lsi pci bar registration and place a data 
> breakpoint on address_space, and see where it gets this value?
>
> 'addr' looks bad too.
>

Or maybe it's just -O2 screwing up debug information.  Please change 
./configure to set -O1 and redo.

Please print *r.memory as well.


-- 
error compiling committee.c: too many arguments to function



* Re: [Qemu-devel] [PATCH 0/2] Fix wide ioport access cracking
  2011-08-11 16:22               ` Avi Kivity
@ 2011-08-11 19:01                 ` Gerhard Wiesinger
  2011-08-22 10:46                   ` Avi Kivity
  0 siblings, 1 reply; 23+ messages in thread
From: Gerhard Wiesinger @ 2011-08-11 19:01 UTC (permalink / raw)
  To: Avi Kivity; +Cc: qemu-devel, kvm

On Thu, 11 Aug 2011, Avi Kivity wrote:
> Or maybe it's just -O2 screwing up debug information.  Please change 
> ./configure to set -O1 and redo.
>
> Please print *r.memory as well.

./configure --target-list=x86_64-softmmu,i386-softmmu --enable-debug
Rest below.

Ciao,
Gerhard

--
http://www.wiesinger.com/

(gdb) bt
#0  0x0000003a060328f5 in raise () from /lib64/libc.so.6
#1  0x0000003a060340d5 in abort () from /lib64/libc.so.6
#2  0x0000003a0602b8b5 in __assert_fail () from /lib64/libc.so.6
#3  0x0000000000447ace in memory_region_del_subregion (mr=0x20002c802f0, subregion=0x2de0000)
     at /root/download/qemu/git/qemu-kvm-test/memory.c:1168
#4  0x0000000000427671 in pci_update_mappings (d=0x2de8b80) at /root/download/qemu/git/qemu-kvm-test/hw/pci.c:1134
#5  0x0000000000427a7a in pci_default_write_config (d=0x2de8b80, addr=4, val=0, l=2) at /root/download/qemu/git/qemu-kvm-test/hw/pci.c:1213
#6  0x00000000005c33bf in pci_host_config_write_common (pci_dev=0x2de8b80, addr=4, limit=256, val=7, len=2)
     at /root/download/qemu/git/qemu-kvm-test/hw/pci_host.c:54
#7  0x00000000005c34d1 in pci_data_write (s=0x2cafb10, addr=2147489796, val=7, len=2)
     at /root/download/qemu/git/qemu-kvm-test/hw/pci_host.c:75
#8  0x00000000005c36b1 in pci_host_data_write (handler=0x2cafae0, addr=3324, val=7, len=2)
     at /root/download/qemu/git/qemu-kvm-test/hw/pci_host.c:125
#9  0x000000000043937c in ioport_simple_writew (opaque=0x2cafae0, addr=3324, value=7) at /root/download/qemu/git/qemu-kvm-test/rwhandler.c:50
#10 0x00000000004a82f5 in ioport_write (index=1, address=3324, data=7) at ioport.c:81
#11 0x00000000004a8d51 in cpu_outw (addr=3324, val=7) at ioport.c:280
#12 0x0000000000441066 in kvm_handle_io (port=3324, data=0x7f58b0304000, direction=1, size=2, count=1)
     at /root/download/qemu/git/qemu-kvm-test/kvm-all.c:858
#13 0x00000000004415d1 in kvm_cpu_exec (env=0x2c89b00) at /root/download/qemu/git/qemu-kvm-test/kvm-all.c:997
#14 0x000000000040bddf in qemu_kvm_cpu_thread_fn (arg=0x2c89b00) at /root/download/qemu/git/qemu-kvm-test/cpus.c:806
#15 0x0000003a06807761 in start_thread () from /lib64/libpthread.so.0
#16 0x0000003a060e098d in clone () from /lib64/libc.so.6
(gdb) frame 4
#4  0x0000000000427671 in pci_update_mappings (d=0x2de8b80) at /root/download/qemu/git/qemu-kvm-test/hw/pci.c:1134
1134                memory_region_del_subregion(r->address_space, r->memory);
(gdb) print i
$1 = 0
(gdb) print *r
$2 = {addr = 22058952032257, size = 32, filtered_size = 171717340864446496, type = 1 '\001', memory = 0x2de0000,
     address_space = 0x20002c802f0}
(gdb) print d->io_regions[0]
$3 = {addr = 22058952032257, size = 32, filtered_size = 171717340864446496, type = 1 '\001', memory = 0x2de0000,
     address_space = 0x20002c802f0}
(gdb) print d->io_regions[1]
$4 = {addr = 17113088, size = 32, filtered_size = 32, type = 0 '\000', memory = 0x2de9188, address_space = 0x2c80000}
(gdb) print d->io_regions[2]
$5 = {addr = 0, size = 0, filtered_size = 0, type = 0 '\000', memory = 0x0, address_space = 0x0}
(gdb) print d->io_regions[3]
$6 = {addr = 0, size = 0, filtered_size = 0, type = 207 '\317', memory = 0x0, address_space = 0x0}
(gdb) print d->io_regions[4]
$7 = {addr = 0, size = 0, filtered_size = 0, type = 0 '\000', memory = 0x0, address_space = 0x0}
(gdb) print d->io_regions[5]
$8 = {addr = 0, size = 0, filtered_size = 0, type = 0 '\000', memory = 0x0, address_space = 0x0}
(gdb) print d->io_regions[6]
$9 = {addr = 0, size = 0, filtered_size = 0, type = 0 '\000', memory = 0x0, address_space = 0x0}
(gdb) print *r.memory
$10 = {ops = 0x615f646e6573000a, opaque = 0x646d635f69706174, parent = 0x2064616572203a20, size = 8297917989298270821, addr =
     3469246654059929972, offset = 2683426788631148594, backend_registered = 48, ram_addr = 7597679723851768942, iorange = {ops =
     0x44203a20646d635f, base = 8295758535554257234, len = 8386112019083850853}, terminates = 117, alias = 0x6d635f627375000a, alias_offset =
     7575161725715242852, priority = 1881488740, may_overlap = 32, subregions = {tqh_first = 0x6f632064253d6574, tqh_last =
     0x622064253d746e75}, subregions_link = {tqe_next = 0x6675622064253d73, tqe_prev = 0x425355000a70253d}, coalesced = {tqh_first =
     0x696d736e61727420, tqh_last = 0x6166206e6f697373}, name = 0x7473000a64656c69 <Address 0x7473000a64656c69 out of bounds>,
   dirty_log_mask = 117 'u', ioeventfd_nb = 1680161395, ioeventfds = 0x5f6b736964000a3a}



* Re: [Qemu-devel] [PATCH 0/2] Fix wide ioport access cracking
  2011-08-11 19:01                 ` Gerhard Wiesinger
@ 2011-08-22 10:46                   ` Avi Kivity
  0 siblings, 0 replies; 23+ messages in thread
From: Avi Kivity @ 2011-08-22 10:46 UTC (permalink / raw)
  To: Gerhard Wiesinger; +Cc: qemu-devel, kvm

[-- Attachment #1: Type: text/plain, Size: 538 bytes --]

On 08/11/2011 10:01 PM, Gerhard Wiesinger wrote:
> On Thu, 11 Aug 2011, Avi Kivity wrote:
>> Or maybe it's just -O2 screwing up debug information.  Please change 
>> ./configure to set -O1 and redo.
>>
>> Please print *r.memory as well.
>
> ./configure --target-list=x86_64-softmmu,i386-softmmu --enable-debug
> Rest below.
>

Please run again using

    gdb -x memory.gdb --args qemu-system-x86_64 ...

and the attached memory.gdb.  Please post the entire log generated.

-- 
error compiling committee.c: too many arguments to function


[-- Attachment #2: memory.gdb --]
[-- Type: text/plain, Size: 629 bytes --]

handle SIGUSR2 pass noprint
handle SIG38 pass noprint
def dump_mr
  set $mr = $arg0
  printf "%p/%p addr %x parent %p name %s\n", $mr, $mr.ops, $mr.addr, $mr.parent, $mr.name
end
break memory_region_add_subregion
commands 1
  dump_mr subregion
  cont
end
break memory_region_del_subregion
commands 2
  dump_mr subregion
  printf "parent %p\n", mr
  cont
end
break memory_region_destroy
commands 3
  dump_mr mr
  cont
end
break memory_region_init
commands 4
  cont
end
break memory_region_init_io
commands 5
  cont
end
break memory_region_init_ram_ptr
commands 6
  cont
end
break memory_region_init_alias
commands 7
  cont
end
run


* Re: [Qemu-devel] [PATCH 0/2] Fix wide ioport access cracking
  2011-08-11  7:40 ` [Qemu-devel] " Avi Kivity
                   ` (3 preceding siblings ...)
  (?)
@ 2011-08-22 14:42 ` Anthony Liguori
  -1 siblings, 0 replies; 23+ messages in thread
From: Anthony Liguori @ 2011-08-22 14:42 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Gerhard Wiesinger, qemu-devel, kvm

On 08/11/2011 02:40 AM, Avi Kivity wrote:
> The memory API automatically cracks wide memory accesses into narrower
> (usually byte) accesses when needed.  Unfortunately this wasn't implemented
> for ioports, leading to an lsi53c895a failure.
>
> This series implements cracking for ioports as well.
>
> Note that the dual implementation is due to the fact the memory API is layered
> on top of the original qemu API; eventually the same code will be used for
> both ioports and mmio.
>
> Avi Kivity (2):
>    memory: abstract cracking of write access ops into a function
>    memory: crack wide ioport accesses into smaller ones when needed
>
>   memory.c |  120 +++++++++++++++++++++++++++++++++++++++----------------------
>   1 files changed, 77 insertions(+), 43 deletions(-)

Applied.  Thanks.

Regards,

Anthony Liguori




end of thread, other threads:[~2011-08-22 14:42 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-08-11  7:40 [PATCH 0/2] Fix wide ioport access cracking Avi Kivity
2011-08-11  7:40 ` [Qemu-devel] " Avi Kivity
2011-08-11  7:40 ` [PATCH 1/2] memory: abstract cracking of write access ops into a function Avi Kivity
2011-08-11  7:40   ` [Qemu-devel] " Avi Kivity
2011-08-11  7:40 ` [PATCH 2/2] memory: crack wide ioport accesses into smaller ones when needed Avi Kivity
2011-08-11  7:40   ` [Qemu-devel] " Avi Kivity
2011-08-11  8:25 ` [PATCH 0/2] Fix wide ioport access cracking Gerhard Wiesinger
2011-08-11  8:25   ` [Qemu-devel] " Gerhard Wiesinger
2011-08-11  8:27   ` Avi Kivity
2011-08-11  8:27     ` [Qemu-devel] " Avi Kivity
2011-08-11  8:29     ` Avi Kivity
2011-08-11  8:29       ` Avi Kivity
2011-08-11  9:01       ` Gerhard Wiesinger
2011-08-11  9:01         ` [Qemu-devel] " Gerhard Wiesinger
2011-08-11  9:44         ` Avi Kivity
2011-08-11 16:08           ` Gerhard Wiesinger
2011-08-11 16:20             ` Avi Kivity
2011-08-11 16:22               ` Avi Kivity
2011-08-11 19:01                 ` Gerhard Wiesinger
2011-08-22 10:46                   ` Avi Kivity
2011-08-11 16:11           ` Gerhard Wiesinger
2011-08-11 16:15             ` Avi Kivity
2011-08-22 14:42 ` Anthony Liguori
