* [PATCH] target-i386: kvm: cache KVM_GET_SUPPORTED_CPUID data
@ 2016-06-13  2:21 ` Chao Peng
  0 siblings, 0 replies; 10+ messages in thread
From: Chao Peng @ 2016-06-13  2:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: kvm, Paolo Bonzini, Eduardo Habkost, Marcelo Tosatti, Richard Henderson

The KVM_GET_SUPPORTED_CPUID ioctl is called frequently when initializing
CPUs. Depending on CPU features and CPU count, the number of calls can be
extremely high, which slows down QEMU booting significantly. In our
testing, we saw 5922 calls with these switches:

    -cpu SandyBridge -smp 6,sockets=6,cores=1,threads=1

Together, these ioctl calls take more than 100ms, which is almost half of
the total QEMU startup time.

In most cases, however, the data returned by two different invocations
does not change, which means we can cache the data to avoid trapping into
the kernel a second time. To make the cache safe, one assumption is
desirable: that the ioctl is stateless. This is not entirely true,
however, at least for some CPUID leaves.

The good news is that even though the ioctl is not fully stateless, we
can still cache the return value as long as the data is unchanged for the
leaves we are interested in. This should be true for most invocations,
and it appears to hold for all call sites in the current code.

A non-cached version can be introduced in the future if a refresh is
required.
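
For illustration only, a hypothetical uncached variant (the name
get_supported_cpuid_uncached is made up for this sketch; it simply reuses
the existing try_get_cpuid() helper visible in the diff below and is not
something this patch adds) could always trap into the kernel:

    /* Hypothetical sketch, not part of this patch: an uncached variant
     * that always queries the kernel and never touches cpuid_cache. */
    static struct kvm_cpuid2 *get_supported_cpuid_uncached(KVMState *s)
    {
        struct kvm_cpuid2 *cpuid;
        int max = 1;

        /* Grow the buffer until the ioctl accepts it, as the cached path does. */
        while ((cpuid = try_get_cpuid(s, max)) == NULL) {
            max *= 2;
        }
        return cpuid;    /* the caller owns this copy and must g_free() it */
    }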

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
---
 target-i386/kvm.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index abf50e6..1a4d751 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -107,6 +107,8 @@ static int has_xsave;
 static int has_xcrs;
 static int has_pit_state2;
 
+static struct kvm_cpuid2 *cpuid_cache;
+
 int kvm_has_pit_state2(void)
 {
     return has_pit_state2;
@@ -200,9 +202,14 @@ static struct kvm_cpuid2 *get_supported_cpuid(KVMState *s)
 {
     struct kvm_cpuid2 *cpuid;
     int max = 1;
+
+    if (cpuid_cache != NULL) {
+        return cpuid_cache;
+    }
     while ((cpuid = try_get_cpuid(s, max)) == NULL) {
         max *= 2;
     }
+    cpuid_cache = cpuid;
     return cpuid;
 }
 
@@ -320,8 +327,6 @@ uint32_t kvm_arch_get_supported_cpuid(KVMState *s, uint32_t function,
         ret |= cpuid_1_edx & CPUID_EXT2_AMD_ALIASES;
     }
 
-    g_free(cpuid);
-
     /* fallback for older kernels */
     if ((function == KVM_CPUID_FEATURES) && !found) {
         ret = get_para_features(s);
@@ -1090,6 +1095,12 @@ static void register_smram_listener(Notifier *n, void *unused)
                                  &smram_address_space, 1);
 }
 
+static Notifier kvm_exit_notifier;
+static void kvm_arch_destroy(Notifier *n, void *unused)
+{
+    g_free(cpuid_cache);
+}
+
 int kvm_arch_init(MachineState *ms, KVMState *s)
 {
     uint64_t identity_base = 0xfffbc000;
@@ -1165,6 +1176,9 @@ int kvm_arch_init(MachineState *ms, KVMState *s)
         smram_machine_done.notify = register_smram_listener;
         qemu_add_machine_init_done_notifier(&smram_machine_done);
     }
+
+    kvm_exit_notifier.notify = kvm_arch_destroy;
+    qemu_add_exit_notifier(&kvm_exit_notifier);
     return 0;
 }
 
-- 
1.8.3.1


* Re: [PATCH] target-i386: kvm: cache KVM_GET_SUPPORTED_CPUID data
  2016-06-13  2:21 ` [Qemu-devel] " Chao Peng
@ 2016-06-13 10:02   ` Paolo Bonzini
  -1 siblings, 0 replies; 10+ messages in thread
From: Paolo Bonzini @ 2016-06-13 10:02 UTC (permalink / raw)
  To: Chao Peng, qemu-devel
  Cc: kvm, Eduardo Habkost, Marcelo Tosatti, Richard Henderson



On 13/06/2016 04:21, Chao Peng wrote:
> The KVM_GET_SUPPORTED_CPUID ioctl is called frequently when initializing
> CPUs. Depending on CPU features and CPU count, the number of calls can be
> extremely high, which slows down QEMU booting significantly. In our
> testing, we saw 5922 calls with these switches:
> 
>     -cpu SandyBridge -smp 6,sockets=6,cores=1,threads=1
> 
> Together, these ioctl calls take more than 100ms, which is almost half of
> the total QEMU startup time.
> 
> In most cases, however, the data returned by two different invocations
> does not change, which means we can cache the data to avoid trapping into
> the kernel a second time. To make the cache safe, one assumption is
> desirable: that the ioctl is stateless. This is not entirely true,
> however, at least for some CPUID leaves.

Which are the CPUID leaves for which KVM_GET_SUPPORTED_CPUID is not
stateless?  I cannot find any.

> The good news is that even though the ioctl is not fully stateless, we
> can still cache the return value as long as the data is unchanged for the
> leaves we are interested in. This should be true for most invocations,
> and it appears to hold for all call sites in the current code.
> 
> A non-cached version can be introduced in the future if a refresh is
> required.

[...]

> 
> +static Notifier kvm_exit_notifier;
> +static void kvm_arch_destroy(Notifier *n, void *unused)
> +{
> +    g_free(cpuid_cache);
> +}
> +
>  int kvm_arch_init(MachineState *ms, KVMState *s)
>  {
>      uint64_t identity_base = 0xfffbc000;
> @@ -1165,6 +1176,9 @@ int kvm_arch_init(MachineState *ms, KVMState *s)
>          smram_machine_done.notify = register_smram_listener;
>          qemu_add_machine_init_done_notifier(&smram_machine_done);
>      }
> +
> +    kvm_exit_notifier.notify = kvm_arch_destroy;
> +    qemu_add_exit_notifier(&kvm_exit_notifier);
>      return 0;


This part is unnecessary; the OS takes care of freeing the heap on exit.

Thanks,

Paolo


* Re: [PATCH] target-i386: kvm: cache KVM_GET_SUPPORTED_CPUID data
  2016-06-13 10:02   ` [Qemu-devel] " Paolo Bonzini
@ 2016-06-14  5:01     ` Chao Peng
  -1 siblings, 0 replies; 10+ messages in thread
From: Chao Peng @ 2016-06-14  5:01 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: qemu-devel, kvm, Eduardo Habkost, Marcelo Tosatti, Richard Henderson

On Mon, Jun 13, 2016 at 12:02:41PM +0200, Paolo Bonzini wrote:
> 
> 
> On 13/06/2016 04:21, Chao Peng wrote:
> > The KVM_GET_SUPPORTED_CPUID ioctl is called frequently when initializing
> > CPUs. Depending on CPU features and CPU count, the number of calls can be
> > extremely high, which slows down QEMU booting significantly. In our
> > testing, we saw 5922 calls with these switches:
> > 
> >     -cpu SandyBridge -smp 6,sockets=6,cores=1,threads=1
> > 
> > Together, these ioctl calls take more than 100ms, which is almost half of
> > the total QEMU startup time.
> > 
> > In most cases, however, the data returned by two different invocations
> > does not change, which means we can cache the data to avoid trapping into
> > the kernel a second time. To make the cache safe, one assumption is
> > desirable: that the ioctl is stateless. This is not entirely true,
> > however, at least for some CPUID leaves.
> 
> Which are the CPUID leaves for which KVM_GET_SUPPORTED_CPUID is not
> stateless?  I cannot find any.

I had thought leaf 0xd, sub-leaf 1 was not stateless, as the size of the
xsave buffer (EBX) depends on XCR0 | IA32_XSS. But after looking at the
KVM code more carefully, it seems I was wrong. The code calculates EBX
from the host XCR0, not the guest XCR0 or the guest IA32_XSS (not sure
whether this is the correct behavior), so it always returns constant data
on a given machine.

If this is not an issue, then it looks like we can cache it safely.
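
(For reference, a minimal host-side sketch of reading the leaf in question
directly, assuming a GCC/Clang toolchain that provides <cpuid.h> and a CPU
that enumerates XSAVES: on bare hardware, EBX of leaf 0xd, sub-leaf 1
reports the xsave area size for the currently enabled XCR0 | IA32_XSS,
which is what made the leaf look stateful at first glance. This is only an
illustration, not part of the patch or of KVM's code.)

    /* Illustration only: query CPUID leaf 0xd, sub-leaf 1 on the host.
     * EBX reports the xsave area size needed for the states currently
     * enabled in XCR0 | IA32_XSS, so its value depends on that state. */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid_count(0xd, 1, &eax, &ebx, &ecx, &edx)) {
            fprintf(stderr, "CPUID leaf 0xd is not supported here\n");
            return 1;
        }
        printf("xsave area size for enabled XCR0 | IA32_XSS: %u bytes\n", ebx);
        return 0;
    }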

> 
> > The good news is that even though the ioctl is not fully stateless, we
> > can still cache the return value as long as the data is unchanged for the
> > leaves we are interested in. This should be true for most invocations,
> > and it appears to hold for all call sites in the current code.
> > 
> > A non-cached version can be introduced in the future if a refresh is
> > required.
> 
> [...]
> 
> > 
> > +static Notifier kvm_exit_notifier;
> > +static void kvm_arch_destroy(Notifier *n, void *unused)
> > +{
> > +    g_free(cpuid_cache);
> > +}
> > +
> >  int kvm_arch_init(MachineState *ms, KVMState *s)
> >  {
> >      uint64_t identity_base = 0xfffbc000;
> > @@ -1165,6 +1176,9 @@ int kvm_arch_init(MachineState *ms, KVMState *s)
> >          smram_machine_done.notify = register_smram_listener;
> >          qemu_add_machine_init_done_notifier(&smram_machine_done);
> >      }
> > +
> > +    kvm_exit_notifier.notify = kvm_arch_destroy;
> > +    qemu_add_exit_notifier(&kvm_exit_notifier);
> >      return 0;
> 
> 
> This part is unnecessary; the OS takes care of freeing the heap on exit.

Makes sense. Thanks.

Chao


* Re: [PATCH] target-i386: kvm: cache KVM_GET_SUPPORTED_CPUID data
  2016-06-14  5:01     ` [Qemu-devel] " Chao Peng
@ 2016-06-14  8:21       ` Paolo Bonzini
  -1 siblings, 0 replies; 10+ messages in thread
From: Paolo Bonzini @ 2016-06-14  8:21 UTC (permalink / raw)
  To: Chao Peng
  Cc: qemu-devel, kvm, Eduardo Habkost, Marcelo Tosatti, Richard Henderson



On 14/06/2016 07:01, Chao Peng wrote:
>> > 
>> > Which are the CPUID leaves for which KVM_GET_SUPPORTED_CPUID is not
>> > stateless?  I cannot find any.
> I had thought leaf 0xd, sub-leaf 1 was not stateless, as the size of the
> xsave buffer (EBX) depends on XCR0 | IA32_XSS. But after looking at the
> KVM code more carefully, it seems I was wrong. The code calculates EBX
> from the host XCR0, not the guest XCR0 or the guest IA32_XSS (not sure
> whether this is the correct behavior), so it always returns constant data
> on a given machine.

Indeed, KVM computes the correct value at runtime, but
KVM_GET_SUPPORTED_CPUID runs before there is a value for guest XCR0 or
guest IA32_XSS.

Thanks, I've queued the patch for QEMU 2.7.

Paolo


* Re: [PATCH] target-i386: kvm: cache KVM_GET_SUPPORTED_CPUID data
  2016-06-14  8:21       ` [Qemu-devel] " Paolo Bonzini
@ 2016-06-14  8:31         ` Chao Peng
  -1 siblings, 0 replies; 10+ messages in thread
From: Chao Peng @ 2016-06-14  8:31 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: qemu-devel, kvm, Eduardo Habkost, Marcelo Tosatti, Richard Henderson

On Tue, Jun 14, 2016 at 10:21:41AM +0200, Paolo Bonzini wrote:
> 
> 
> On 14/06/2016 07:01, Chao Peng wrote:
> >> > 
> >> > Which are the CPUID leaves for which KVM_GET_SUPPORTED_CPUID is not
> >> > stateless?  I cannot find any.
> > I had thought leaf 0xd, sub-leaf 1 was not stateless, as the size of the
> > xsave buffer (EBX) depends on XCR0 | IA32_XSS. But after looking at the
> > KVM code more carefully, it seems I was wrong. The code calculates EBX
> > from the host XCR0, not the guest XCR0 or the guest IA32_XSS (not sure
> > whether this is the correct behavior), so it always returns constant data
> > on a given machine.
> 
> Indeed, KVM computes the correct value at runtime, but
> KVM_GET_SUPPORTED_CPUID runs before there is a value for guest XCR0 or
> guest IA32_XSS.

Yes, this is the point.

> 
> Thanks, I've queued the patch for QEMU 2.7.

Thanks :)

Chao

