* [PATCH v2][uqmaster] kvm: Flush coalesced MMIO buffer periodically
@ 2010-01-26  9:41 ` Sheng Yang
From: Sheng Yang @ 2010-01-26  9:41 UTC (permalink / raw)
  To: Marcelo Tosatti, Avi Kivity; +Cc: Anthony Liguori, kvm, qemu-devel, Sheng Yang

The default behavior of coalesced MMIO is to cache writes in a buffer until:
1. The buffer is full.
2. Or the vcpu exits to QEMU for some other reason.

But this can result in very late writes under some conditions:
1. Each write to the MMIO region is small.
2. The interval between writes is long.
3. The guest rarely needs input or access to other devices.

This issue was observed on an experimental embedded system. The test image
simply prints "test" every second. The output under QEMU meets expectations,
but under KVM the output is delayed by several seconds.

Per Avi's suggestion, I hooked flushing of the coalesced MMIO buffer into the
VGA update handler. This way, we don't need an explicit vcpu exit to QEMU to
handle this issue.
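
For reference, the ring drained by this flush lives at a page offset inside each
vcpu's kvm_run mapping; KVM_CHECK_EXTENSION(KVM_CAP_COALESCED_MMIO) returns that
offset in pages, hence the "env->kvm_run + s->coalesced_mmio * PAGE_SIZE"
computation below. A rough sketch of the kernel-side layout from <linux/kvm.h>:

struct kvm_coalesced_mmio {
    __u64 phys_addr;   /* guest-physical address of the deferred write */
    __u32 len;         /* number of valid bytes in data[] */
    __u32 pad;
    __u8  data[8];     /* the bytes the guest wrote */
};

struct kvm_coalesced_mmio_ring {
    __u32 first;       /* consumer index, advanced by userspace */
    __u32 last;        /* producer index, advanced by the kernel */
    struct kvm_coalesced_mmio coalesced_mmio[0];   /* entries fill the rest of the page */
};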

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
---
 cpu-all.h |    2 ++
 exec.c    |    6 ++++++
 kvm-all.c |   21 +++++++++++++--------
 kvm.h     |    1 +
 vl.c      |    2 ++
 5 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/cpu-all.h b/cpu-all.h
index 57b69f8..1ccc9a8 100644
--- a/cpu-all.h
+++ b/cpu-all.h
@@ -915,6 +915,8 @@ void qemu_register_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size);
 
 void qemu_unregister_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size);
 
+void qemu_flush_coalesced_mmio_buffer(void);
+
 /*******************************************/
 /* host CPU ticks (if available) */
 
diff --git a/exec.c b/exec.c
index 1190591..6875370 100644
--- a/exec.c
+++ b/exec.c
@@ -2406,6 +2406,12 @@ void qemu_unregister_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size)
         kvm_uncoalesce_mmio_region(addr, size);
 }
 
+void qemu_flush_coalesced_mmio_buffer(void)
+{
+    if (kvm_enabled())
+        kvm_flush_coalesced_mmio_buffer();
+}
+
 ram_addr_t qemu_ram_alloc(ram_addr_t size)
 {
     RAMBlock *new_block;
diff --git a/kvm-all.c b/kvm-all.c
index 15ec38e..889fc42 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -59,6 +59,7 @@ struct KVMState
     int vmfd;
     int regs_modified;
     int coalesced_mmio;
+    struct kvm_coalesced_mmio_ring *coalesced_mmio_ring;
     int broken_set_mem_region;
     int migration_log;
     int vcpu_events;
@@ -200,6 +201,12 @@ int kvm_init_vcpu(CPUState *env)
         goto err;
     }
 
+#ifdef KVM_CAP_COALESCED_MMIO
+    if (s->coalesced_mmio && !s->coalesced_mmio_ring)
+        s->coalesced_mmio_ring = (void *) env->kvm_run +
+		s->coalesced_mmio * PAGE_SIZE;
+#endif
+
     ret = kvm_arch_init_vcpu(env);
     if (ret == 0) {
         qemu_register_reset(kvm_reset_vcpu, env);
@@ -466,10 +473,10 @@ int kvm_init(int smp_cpus)
         goto err;
     }
 
+    s->coalesced_mmio = 0;
+    s->coalesced_mmio_ring = NULL;
 #ifdef KVM_CAP_COALESCED_MMIO
     s->coalesced_mmio = kvm_check_extension(s, KVM_CAP_COALESCED_MMIO);
-#else
-    s->coalesced_mmio = 0;
 #endif
 
     s->broken_set_mem_region = 1;
@@ -544,14 +551,12 @@ static int kvm_handle_io(uint16_t port, void *data, int direction, int size,
     return 1;
 }
 
-static void kvm_run_coalesced_mmio(CPUState *env, struct kvm_run *run)
+void kvm_flush_coalesced_mmio_buffer(void)
 {
 #ifdef KVM_CAP_COALESCED_MMIO
     KVMState *s = kvm_state;
-    if (s->coalesced_mmio) {
-        struct kvm_coalesced_mmio_ring *ring;
-
-        ring = (void *)run + (s->coalesced_mmio * TARGET_PAGE_SIZE);
+    if (s->coalesced_mmio_ring) {
+        struct kvm_coalesced_mmio_ring *ring = s->coalesced_mmio_ring;
         while (ring->first != ring->last) {
             struct kvm_coalesced_mmio *ent;
 
@@ -609,7 +614,7 @@ int kvm_cpu_exec(CPUState *env)
             abort();
         }
 
-        kvm_run_coalesced_mmio(env, run);
+        kvm_flush_coalesced_mmio_buffer();
 
         ret = 0; /* exit loop */
         switch (run->exit_reason) {
diff --git a/kvm.h b/kvm.h
index 1c93ac5..59cba18 100644
--- a/kvm.h
+++ b/kvm.h
@@ -53,6 +53,7 @@ void kvm_setup_guest_memory(void *start, size_t size);
 
 int kvm_coalesce_mmio_region(target_phys_addr_t start, ram_addr_t size);
 int kvm_uncoalesce_mmio_region(target_phys_addr_t start, ram_addr_t size);
+void kvm_flush_coalesced_mmio_buffer(void);
 
 int kvm_insert_breakpoint(CPUState *current_env, target_ulong addr,
                           target_ulong len, int type);
diff --git a/vl.c b/vl.c
index 2b0b653..1f0c536 100644
--- a/vl.c
+++ b/vl.c
@@ -3193,6 +3193,7 @@ static void gui_update(void *opaque)
     DisplayState *ds = opaque;
     DisplayChangeListener *dcl = ds->listeners;
 
+    qemu_flush_coalesced_mmio_buffer();
     dpy_refresh(ds);
 
     while (dcl != NULL) {
@@ -3208,6 +3209,7 @@ static void nographic_update(void *opaque)
 {
     uint64_t interval = GUI_REFRESH_INTERVAL;
 
+    qemu_flush_coalesced_mmio_buffer();
     qemu_mod_timer(nographic_timer, interval + qemu_get_clock(rt_clock));
 }
 
-- 
1.5.4.5


* Re: [PATCH v2][uqmaster] kvm: Flush coalesced MMIO buffer periodically
  2010-01-26  9:41 ` [Qemu-devel] " Sheng Yang
@ 2010-01-26  9:59   ` Alexander Graf
From: Alexander Graf @ 2010-01-26  9:59 UTC (permalink / raw)
  To: Sheng Yang; +Cc: Marcelo Tosatti, Avi Kivity, Anthony Liguori, kvm, qemu-devel


On 26.01.2010, at 10:41, Sheng Yang wrote:

> The default behavior of coalesced MMIO is to cache writes in a buffer until:
> 1. The buffer is full.
> 2. Or the vcpu exits to QEMU for some other reason.
>
> But this can result in very late writes under some conditions:
> 1. Each write to the MMIO region is small.
> 2. The interval between writes is long.
> 3. The guest rarely needs input or access to other devices.
>
> This issue was observed on an experimental embedded system. The test image
> simply prints "test" every second. The output under QEMU meets expectations,
> but under KVM the output is delayed by several seconds.
>
> Per Avi's suggestion, I hooked flushing of the coalesced MMIO buffer into the
> VGA update handler. This way, we don't need an explicit vcpu exit to QEMU to
> handle this issue.
> 
> Signed-off-by: Sheng Yang <sheng@linux.intel.com>
> ---
> cpu-all.h |    2 ++
> exec.c    |    6 ++++++
> kvm-all.c |   21 +++++++++++++--------
> kvm.h     |    1 +
> vl.c      |    2 ++
> 5 files changed, 24 insertions(+), 8 deletions(-)
> 
> diff --git a/cpu-all.h b/cpu-all.h
> index 57b69f8..1ccc9a8 100644
> --- a/cpu-all.h
> +++ b/cpu-all.h
> @@ -915,6 +915,8 @@ void qemu_register_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size);
> 
> void qemu_unregister_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size);
> 
> +void qemu_flush_coalesced_mmio_buffer(void);
> +
> /*******************************************/
> /* host CPU ticks (if available) */
> 
> diff --git a/exec.c b/exec.c
> index 1190591..6875370 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2406,6 +2406,12 @@ void qemu_unregister_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size)
>         kvm_uncoalesce_mmio_region(addr, size);
> }
> 
> +void qemu_flush_coalesced_mmio_buffer(void)
> +{
> +    if (kvm_enabled())
> +        kvm_flush_coalesced_mmio_buffer();
> +}
> +
> ram_addr_t qemu_ram_alloc(ram_addr_t size)
> {
>     RAMBlock *new_block;
> diff --git a/kvm-all.c b/kvm-all.c
> index 15ec38e..889fc42 100644
> --- a/kvm-all.c
> +++ b/kvm-all.c
> @@ -59,6 +59,7 @@ struct KVMState
>     int vmfd;
>     int regs_modified;
>     int coalesced_mmio;
> +    struct kvm_coalesced_mmio_ring *coalesced_mmio_ring;

I guess this needs to be guarded by an #ifdef?


Alex

* Re: [PATCH v2][uqmaster] kvm: Flush coalesced MMIO buffer periodically
  2010-01-26  9:59   ` [Qemu-devel] " Alexander Graf
@ 2010-01-26 11:17     ` Sheng Yang
From: Sheng Yang @ 2010-01-26 11:17 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Marcelo Tosatti, Avi Kivity, Anthony Liguori, kvm, qemu-devel

On Tue, Jan 26, 2010 at 10:59:17AM +0100, Alexander Graf wrote:
> 
> On 26.01.2010, at 10:41, Sheng Yang wrote:
> 
> > --- a/kvm-all.c
> > +++ b/kvm-all.c
> > @@ -59,6 +59,7 @@ struct KVMState
> >     int vmfd;
> >     int regs_modified;
> >     int coalesced_mmio;
> > +    struct kvm_coalesced_mmio_ring *coalesced_mmio_ring;
> 
> I guess this needs to be guarded by an #ifdef?

Oh, yes. Thanks for the reminder. :)

-- 
regards
Yang, Sheng

* [PATCH v3][uqmaster] kvm: Flush coalesced MMIO buffer periodically
  2010-01-26  9:59   ` [Qemu-devel] " Alexander Graf
@ 2010-01-26 11:21     ` Sheng Yang
From: Sheng Yang @ 2010-01-26 11:21 UTC (permalink / raw)
  To: Avi Kivity, Marcelo Tosatti
  Cc: Anthony Liguori, Alexander Graf, kvm, qemu-devel, Sheng Yang

The default behavior of coalesced MMIO is to cache writes in a buffer until:
1. The buffer is full.
2. Or the vcpu exits to QEMU for some other reason.

But this can result in very late writes under some conditions:
1. Each write to the MMIO region is small.
2. The interval between writes is long.
3. The guest rarely needs input or access to other devices.

This issue was observed on an experimental embedded system. The test image
simply prints "test" every second. The output under QEMU meets expectations,
but under KVM the output is delayed by several seconds.

Per Avi's suggestion, I hooked flushing of the coalesced MMIO buffer into the
VGA update handler. This way, we don't need an explicit vcpu exit to QEMU to
handle this issue.
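
The kvm-all.c hunk below truncates the drain loop, since only the entry point and
the ring lookup change; the existing loop body (not part of this diff) roughly does:

void kvm_flush_coalesced_mmio_buffer(void)
{
#ifdef KVM_CAP_COALESCED_MMIO
    KVMState *s = kvm_state;

    if (s->coalesced_mmio_ring) {
        struct kvm_coalesced_mmio_ring *ring = s->coalesced_mmio_ring;

        while (ring->first != ring->last) {
            struct kvm_coalesced_mmio *ent;

            ent = &ring->coalesced_mmio[ring->first];

            /* Replay the deferred guest write into the emulated device. */
            cpu_physical_memory_write(ent->phys_addr, ent->data, ent->len);
            /* Make the replay visible before handing the slot back to the kernel. */
            smp_wmb();
            ring->first = (ring->first + 1) % KVM_COALESCED_MMIO_MAX;
        }
    }
#endif
}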

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
---
 cpu-all.h |    2 ++
 exec.c    |    6 ++++++
 kvm-all.c |   23 +++++++++++++++--------
 kvm.h     |    1 +
 vl.c      |    2 ++
 5 files changed, 26 insertions(+), 8 deletions(-)

diff --git a/cpu-all.h b/cpu-all.h
index 57b69f8..1ccc9a8 100644
--- a/cpu-all.h
+++ b/cpu-all.h
@@ -915,6 +915,8 @@ void qemu_register_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size);
 
 void qemu_unregister_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size);
 
+void qemu_flush_coalesced_mmio_buffer(void);
+
 /*******************************************/
 /* host CPU ticks (if available) */
 
diff --git a/exec.c b/exec.c
index 1190591..6875370 100644
--- a/exec.c
+++ b/exec.c
@@ -2406,6 +2406,12 @@ void qemu_unregister_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size)
         kvm_uncoalesce_mmio_region(addr, size);
 }
 
+void qemu_flush_coalesced_mmio_buffer(void)
+{
+    if (kvm_enabled())
+        kvm_flush_coalesced_mmio_buffer();
+}
+
 ram_addr_t qemu_ram_alloc(ram_addr_t size)
 {
     RAMBlock *new_block;
diff --git a/kvm-all.c b/kvm-all.c
index 15ec38e..f8350c9 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -59,6 +59,9 @@ struct KVMState
     int vmfd;
     int regs_modified;
     int coalesced_mmio;
+#ifdef KVM_CAP_COALESCED_MMIO
+    struct kvm_coalesced_mmio_ring *coalesced_mmio_ring;
+#endif
     int broken_set_mem_region;
     int migration_log;
     int vcpu_events;
@@ -200,6 +203,12 @@ int kvm_init_vcpu(CPUState *env)
         goto err;
     }
 
+#ifdef KVM_CAP_COALESCED_MMIO
+    if (s->coalesced_mmio && !s->coalesced_mmio_ring)
+        s->coalesced_mmio_ring = (void *) env->kvm_run +
+		s->coalesced_mmio * PAGE_SIZE;
+#endif
+
     ret = kvm_arch_init_vcpu(env);
     if (ret == 0) {
         qemu_register_reset(kvm_reset_vcpu, env);
@@ -466,10 +475,10 @@ int kvm_init(int smp_cpus)
         goto err;
     }
 
+    s->coalesced_mmio = 0;
 #ifdef KVM_CAP_COALESCED_MMIO
     s->coalesced_mmio = kvm_check_extension(s, KVM_CAP_COALESCED_MMIO);
-#else
-    s->coalesced_mmio = 0;
+    s->coalesced_mmio_ring = NULL;
 #endif
 
     s->broken_set_mem_region = 1;
@@ -544,14 +553,12 @@ static int kvm_handle_io(uint16_t port, void *data, int direction, int size,
     return 1;
 }
 
-static void kvm_run_coalesced_mmio(CPUState *env, struct kvm_run *run)
+void kvm_flush_coalesced_mmio_buffer(void)
 {
 #ifdef KVM_CAP_COALESCED_MMIO
     KVMState *s = kvm_state;
-    if (s->coalesced_mmio) {
-        struct kvm_coalesced_mmio_ring *ring;
-
-        ring = (void *)run + (s->coalesced_mmio * TARGET_PAGE_SIZE);
+    if (s->coalesced_mmio_ring) {
+        struct kvm_coalesced_mmio_ring *ring = s->coalesced_mmio_ring;
         while (ring->first != ring->last) {
             struct kvm_coalesced_mmio *ent;
 
@@ -609,7 +616,7 @@ int kvm_cpu_exec(CPUState *env)
             abort();
         }
 
-        kvm_run_coalesced_mmio(env, run);
+        kvm_flush_coalesced_mmio_buffer();
 
         ret = 0; /* exit loop */
         switch (run->exit_reason) {
diff --git a/kvm.h b/kvm.h
index 1c93ac5..59cba18 100644
--- a/kvm.h
+++ b/kvm.h
@@ -53,6 +53,7 @@ void kvm_setup_guest_memory(void *start, size_t size);
 
 int kvm_coalesce_mmio_region(target_phys_addr_t start, ram_addr_t size);
 int kvm_uncoalesce_mmio_region(target_phys_addr_t start, ram_addr_t size);
+void kvm_flush_coalesced_mmio_buffer(void);
 
 int kvm_insert_breakpoint(CPUState *current_env, target_ulong addr,
                           target_ulong len, int type);
diff --git a/vl.c b/vl.c
index 2b0b653..1f0c536 100644
--- a/vl.c
+++ b/vl.c
@@ -3193,6 +3193,7 @@ static void gui_update(void *opaque)
     DisplayState *ds = opaque;
     DisplayChangeListener *dcl = ds->listeners;
 
+    qemu_flush_coalesced_mmio_buffer();
     dpy_refresh(ds);
 
     while (dcl != NULL) {
@@ -3208,6 +3209,7 @@ static void nographic_update(void *opaque)
 {
     uint64_t interval = GUI_REFRESH_INTERVAL;
 
+    qemu_flush_coalesced_mmio_buffer();
     qemu_mod_timer(nographic_timer, interval + qemu_get_clock(rt_clock));
 }
 
-- 
1.5.4.5


* Re: [PATCH v3][uqmaster] kvm: Flush coalesced MMIO buffer periodically
  2010-01-26 11:21     ` [Qemu-devel] " Sheng Yang
@ 2010-01-26 12:43       ` Marcelo Tosatti
From: Marcelo Tosatti @ 2010-01-26 12:43 UTC (permalink / raw)
  To: Sheng Yang; +Cc: Avi Kivity, Anthony Liguori, Alexander Graf, kvm, qemu-devel

On Tue, Jan 26, 2010 at 07:21:16PM +0800, Sheng Yang wrote:
> The default behavior of coalesced MMIO is to cache writes in a buffer until:
> 1. The buffer is full.
> 2. Or the vcpu exits to QEMU for some other reason.
>
> But this can result in very late writes under some conditions:
> 1. Each write to the MMIO region is small.
> 2. The interval between writes is long.
> 3. The guest rarely needs input or access to other devices.
>
> This issue was observed on an experimental embedded system. The test image
> simply prints "test" every second. The output under QEMU meets expectations,
> but under KVM the output is delayed by several seconds.
>
> Per Avi's suggestion, I hooked flushing of the coalesced MMIO buffer into the
> VGA update handler. This way, we don't need an explicit vcpu exit to QEMU to
> handle this issue.
> 
> Signed-off-by: Sheng Yang <sheng@linux.intel.com>

Applied, thanks.

