* [PATCH v2 1/3] KVM: Add new -cpu best
@ 2012-06-26 16:39 Alexander Graf
  2012-06-26 16:39 ` [PATCH v2 2/3] KVM: Use -cpu best as default on x86 Alexander Graf
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Alexander Graf @ 2012-06-26 16:39 UTC (permalink / raw)
  To: qemu-devel qemu-devel; +Cc: Avi Kivity, KVM list, Ryan Harper, Anthony Liguori

During discussions on whether to make -cpu host the default in SLE, I found
myself disagreeing to the thought, because it potentially opens a big can
of worms for potential bugs. But if I already am so opposed to it for SLE, how
can it possibly be reasonable to default to -cpu host in upstream QEMU? And
what would a sane default look like?

So I had this idea of looping through all available CPU definitions. We can
pretty well tell if our host is able to execute any of them by checking the
respective flags and seeing if our host has all features the CPU definition
requires. With that, we can create a -cpu type that would fall back to the
"best known CPU definition" that our host can fulfill. On my Phenom II
system for example, that would be -cpu phenom.

With this approach we can test and verify that CPU types actually work at
any random user setup, because we can always verify that all the -cpu types
we ship actually work. And we only default to some clever mechanism that
chooses from one of these.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 target-i386/cpu.c |   81 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 81 insertions(+), 0 deletions(-)

diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index fdd95be..98cc1ec 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -558,6 +558,85 @@ static int cpu_x86_fill_host(x86_def_t *x86_cpu_def)
     return 0;
 }
 
+/* Are all guest feature bits present on the host? */
+static bool cpu_x86_feature_subset(uint32_t host, uint32_t guest)
+{
+    int i;
+
+    for (i = 0; i < 32; i++) {
+        uint32_t mask = 1 << i;
+        if ((guest & mask) && !(host & mask)) {
+            return false;
+        }
+    }
+
+    return true;
+}
+
+/* Does the host support all the features of the CPU definition? */
+static bool cpu_x86_fits_host(x86_def_t *x86_cpu_def)
+{
+    uint32_t eax = 0, ebx = 0, ecx = 0, edx = 0;
+
+    host_cpuid(0x0, 0, &eax, &ebx, &ecx, &edx);
+    if (x86_cpu_def->level > eax) {
+        return false;
+    }
+    if ((x86_cpu_def->vendor1 != ebx) ||
+        (x86_cpu_def->vendor2 != edx) ||
+        (x86_cpu_def->vendor3 != ecx)) {
+        return false;
+    }
+
+    host_cpuid(0x1, 0, &eax, &ebx, &ecx, &edx);
+    if (!cpu_x86_feature_subset(ecx, x86_cpu_def->ext_features) ||
+        !cpu_x86_feature_subset(edx, x86_cpu_def->features)) {
+        return false;
+    }
+
+    host_cpuid(0x80000000, 0, &eax, &ebx, &ecx, &edx);
+    if (x86_cpu_def->xlevel > eax) {
+        return false;
+    }
+
+    host_cpuid(0x80000001, 0, &eax, &ebx, &ecx, &edx);
+    if (!cpu_x86_feature_subset(edx, x86_cpu_def->ext2_features) ||
+        !cpu_x86_feature_subset(ecx, x86_cpu_def->ext3_features)) {
+        return false;
+    }
+
+    return true;
+}
+
+/* Returns true when new_def is higher versioned than old_def */
+static bool cpu_x86_fits_higher(x86_def_t *new_def, x86_def_t *old_def)
+{
+    int old_fammod = (old_def->family << 24) | (old_def->model << 8)
+                   | (old_def->stepping);
+    int new_fammod = (new_def->family << 24) | (new_def->model << 8)
+                   | (new_def->stepping);
+
+    return new_fammod > old_fammod;
+}
+
+static void cpu_x86_fill_best(x86_def_t *x86_cpu_def)
+{
+    x86_def_t *def;
+
+    x86_cpu_def->family = 0;
+    x86_cpu_def->model = 0;
+    for (def = x86_defs; def; def = def->next) {
+        if (cpu_x86_fits_host(def) && cpu_x86_fits_higher(def, x86_cpu_def)) {
+            memcpy(x86_cpu_def, def, sizeof(*def));
+        }
+    }
+
+    if (!x86_cpu_def->family && !x86_cpu_def->model) {
+        fprintf(stderr, "No fitting CPU model found!\n");
+        exit(1);
+    }
+}
+
 static int unavailable_host_feature(struct model_features_t *f, uint32_t mask)
 {
     int i;
@@ -878,6 +957,8 @@ static int cpu_x86_find_by_name(x86_def_t *x86_cpu_def, const char *cpu_model)
             break;
     if (kvm_enabled() && name && strcmp(name, "host") == 0) {
         cpu_x86_fill_host(x86_cpu_def);
+    } else if (kvm_enabled() && name && strcmp(name, "best") == 0) {
+        cpu_x86_fill_best(x86_cpu_def);
     } else if (!def) {
         goto error;
     } else {
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v2 2/3] KVM: Use -cpu best as default on x86
  2012-06-26 16:39 [PATCH v2 1/3] KVM: Add new -cpu best Alexander Graf
@ 2012-06-26 16:39 ` Alexander Graf
  2012-07-02 14:27     ` [Qemu-devel] " Avi Kivity
  2012-06-26 16:39 ` [PATCH v2 3/3] i386: KVM: List -cpu host and best in -cpu ? Alexander Graf
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Alexander Graf @ 2012-06-26 16:39 UTC (permalink / raw)
  To: qemu-devel qemu-devel; +Cc: Avi Kivity, KVM list, Ryan Harper, Anthony Liguori

When running QEMU without -cpu parameter, the user usually wants a sane
default. So far, we're using the qemu64/qemu32 CPU type, which basically
means "the maximum TCG can emulate".

That's a really good default when using TCG, but when running with KVM
we much rather want a default saying "the maximum performance I can get".

Fortunately we just added an option that gives us the best performance
while still staying safe on the testability side of things: -cpu best.
So all we need to do is make -cpu best the default when the user doesn't
explicitly specify a CPU type.

This fixes a lot of subtile breakage in the GNU toolchain (libgmp) which
hicks up on QEMU's non-existent CPU models.

This patch also adds a new pc-1.2 machine type to stay backwards compatible
with older versions of QEMU.

Signed-off-by: Alexander Graf <agraf@suse.de>

---

v1 -> v2:

  - rebase
---
 hw/pc_piix.c |   45 ++++++++++++++++++++++++++++++++++++---------
 1 files changed, 36 insertions(+), 9 deletions(-)

diff --git a/hw/pc_piix.c b/hw/pc_piix.c
index eae258c..eafd383 100644
--- a/hw/pc_piix.c
+++ b/hw/pc_piix.c
@@ -127,7 +127,8 @@ static void pc_init1(MemoryRegion *system_memory,
                      const char *initrd_filename,
                      const char *cpu_model,
                      int pci_enabled,
-                     int kvmclock_enabled)
+                     int kvmclock_enabled,
+                     int may_cpu_best)
 {
     int i;
     ram_addr_t below_4g_mem_size, above_4g_mem_size;
@@ -149,6 +150,9 @@ static void pc_init1(MemoryRegion *system_memory,
     MemoryRegion *rom_memory;
     void *fw_cfg = NULL;
 
+    if (!cpu_model && kvm_enabled() && may_cpu_best) {
+        cpu_model = "best";
+    }
     pc_cpus_init(cpu_model);
 
     if (kvmclock_enabled) {
@@ -298,7 +302,21 @@ static void pc_init_pci(ram_addr_t ram_size,
              get_system_io(),
              ram_size, boot_device,
              kernel_filename, kernel_cmdline,
-             initrd_filename, cpu_model, 1, 1);
+             initrd_filename, cpu_model, 1, 1, 1);
+}
+
+static void pc_init_pci_oldcpu(ram_addr_t ram_size,
+                               const char *boot_device,
+                               const char *kernel_filename,
+                               const char *kernel_cmdline,
+                               const char *initrd_filename,
+                               const char *cpu_model)
+{
+    pc_init1(get_system_memory(),
+             get_system_io(),
+             ram_size, boot_device,
+             kernel_filename, kernel_cmdline,
+             initrd_filename, cpu_model, 1, 1, 0);
 }
 
 static void pc_init_pci_no_kvmclock(ram_addr_t ram_size,
@@ -312,7 +330,7 @@ static void pc_init_pci_no_kvmclock(ram_addr_t ram_size,
              get_system_io(),
              ram_size, boot_device,
              kernel_filename, kernel_cmdline,
-             initrd_filename, cpu_model, 1, 0);
+             initrd_filename, cpu_model, 1, 0, 0);
 }
 
 static void pc_init_isa(ram_addr_t ram_size,
@@ -328,7 +346,7 @@ static void pc_init_isa(ram_addr_t ram_size,
              get_system_io(),
              ram_size, boot_device,
              kernel_filename, kernel_cmdline,
-             initrd_filename, cpu_model, 0, 1);
+             initrd_filename, cpu_model, 0, 1, 0);
 }
 
 #ifdef CONFIG_XEN
@@ -349,8 +367,8 @@ static void pc_xen_hvm_init(ram_addr_t ram_size,
 }
 #endif
 
-static QEMUMachine pc_machine_v1_1 = {
-    .name = "pc-1.1",
+static QEMUMachine pc_machine_v1_2 = {
+    .name = "pc-1.2",
     .alias = "pc",
     .desc = "Standard PC",
     .init = pc_init_pci,
@@ -358,6 +376,13 @@ static QEMUMachine pc_machine_v1_1 = {
     .is_default = 1,
 };
 
+static QEMUMachine pc_machine_v1_1 = {
+    .name = "pc-1.1",
+    .desc = "Standard PC",
+    .init = pc_init_pci_oldcpu,
+    .max_cpus = 255,
+};
+
 #define PC_COMPAT_1_0 \
         {\
             .driver   = "pc-sysfw",\
@@ -384,7 +410,7 @@ static QEMUMachine pc_machine_v1_1 = {
 static QEMUMachine pc_machine_v1_0 = {
     .name = "pc-1.0",
     .desc = "Standard PC",
-    .init = pc_init_pci,
+    .init = pc_init_pci_oldcpu,
     .max_cpus = 255,
     .compat_props = (GlobalProperty[]) {
         PC_COMPAT_1_0,
@@ -399,7 +425,7 @@ static QEMUMachine pc_machine_v1_0 = {
 static QEMUMachine pc_machine_v0_15 = {
     .name = "pc-0.15",
     .desc = "Standard PC",
-    .init = pc_init_pci,
+    .init = pc_init_pci_oldcpu,
     .max_cpus = 255,
     .compat_props = (GlobalProperty[]) {
         PC_COMPAT_0_15,
@@ -431,7 +457,7 @@ static QEMUMachine pc_machine_v0_15 = {
 static QEMUMachine pc_machine_v0_14 = {
     .name = "pc-0.14",
     .desc = "Standard PC",
-    .init = pc_init_pci,
+    .init = pc_init_pci_oldcpu,
     .max_cpus = 255,
     .compat_props = (GlobalProperty[]) {
         PC_COMPAT_0_14, 
@@ -612,6 +638,7 @@ static QEMUMachine xenfv_machine = {
 
 static void pc_machine_init(void)
 {
+    qemu_register_machine(&pc_machine_v1_2);
     qemu_register_machine(&pc_machine_v1_1);
     qemu_register_machine(&pc_machine_v1_0);
     qemu_register_machine(&pc_machine_v0_15);
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v2 3/3] i386: KVM: List -cpu host and best in -cpu ?
  2012-06-26 16:39 [PATCH v2 1/3] KVM: Add new -cpu best Alexander Graf
  2012-06-26 16:39 ` [PATCH v2 2/3] KVM: Use -cpu best as default on x86 Alexander Graf
@ 2012-06-26 16:39 ` Alexander Graf
  2012-07-02 14:02   ` Alexander Graf
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: Alexander Graf @ 2012-06-26 16:39 UTC (permalink / raw)
  To: qemu-devel qemu-devel; +Cc: Anthony Liguori, Ryan Harper, Avi Kivity, KVM list

The kvm_enabled() helper doesn't work yet at the point where -cpu ? is
processed. It also makes little sense for the -cpu ? output to depend on
the -enable-kvm parameter. So let's always mention -cpu host in the
CPU list when the build supports KVM.

In addition, this patch lists -cpu best in the -cpu ? output, so that
people know the option exists.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 target-i386/cpu.c |    7 ++++---
 1 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index 98cc1ec..6c20798 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -1199,9 +1199,10 @@ void x86_cpu_list(FILE *f, fprintf_function cpu_fprintf, const char *optarg)
             (*cpu_fprintf)(f, "\n");
         }
     }
-    if (kvm_enabled()) {
-        (*cpu_fprintf)(f, "x86 %16s\n", "[host]");
-    }
+#ifdef CONFIG_KVM
+    (*cpu_fprintf)(f, "x86 %16s\n", "KVM only: [host]");
+    (*cpu_fprintf)(f, "x86 %16s\n", "KVM only: [best]");
+#endif
 }
 
 int cpu_x86_register(X86CPU *cpu, const char *cpu_model)
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [Qemu-devel] [PATCH v2 1/3] KVM: Add new -cpu best
  2012-06-26 16:39 [PATCH v2 1/3] KVM: Add new -cpu best Alexander Graf
@ 2012-07-02 14:02   ` Alexander Graf
  2012-06-26 16:39 ` [PATCH v2 3/3] i386: KVM: List -cpu host and best in -cpu ? Alexander Graf
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: Alexander Graf @ 2012-07-02 14:02 UTC (permalink / raw)
  To: qemu-devel qemu-devel; +Cc: Anthony Liguori, Ryan Harper, Avi Kivity, KVM list


On 26.06.2012, at 18:39, Alexander Graf wrote:

> During discussions on whether to make -cpu host the default in SLE, I found
> myself disagreeing to the thought, because it potentially opens a big can
> of worms for potential bugs. But if I already am so opposed to it for SLE, how
> can it possibly be reasonable to default to -cpu host in upstream QEMU? And
> what would a sane default look like?
> 
> So I had this idea of looping through all available CPU definitions. We can
> pretty well tell if our host is able to execute any of them by checking the
> respective flags and seeing if our host has all features the CPU definition
> requires. With that, we can create a -cpu type that would fall back to the
> "best known CPU definition" that our host can fulfill. On my Phenom II
> system for example, that would be -cpu phenom.
> 
> With this approach we can test and verify that CPU types actually work at
> any random user setup, because we can always verify that all the -cpu types
> we ship actually work. And we only default to some clever mechanism that
> chooses from one of these.
> 
> Signed-off-by: Alexander Graf <agraf@suse.de>

Ping :)


Alex


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Qemu-devel] [PATCH v2 1/3] KVM: Add new -cpu best
  2012-06-26 16:39 [PATCH v2 1/3] KVM: Add new -cpu best Alexander Graf
@ 2012-07-02 14:24   ` Andreas Färber
  2012-06-26 16:39 ` [PATCH v2 3/3] i386: KVM: List -cpu host and best in -cpu ? Alexander Graf
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: Andreas Färber @ 2012-07-02 14:24 UTC (permalink / raw)
  To: Alexander Graf
  Cc: qemu-devel qemu-devel, Anthony Liguori, Ryan Harper, Avi Kivity,
	KVM list

Am 26.06.2012 18:39, schrieb Alexander Graf:
> During discussions on whether to make -cpu host the default in SLE, I found

s/make -cpu host the default/support/?

> myself disagreeing to the thought, because it potentially opens a big can
> of worms for potential bugs. But if I already am so opposed to it for SLE, how
> can it possibly be reasonable to default to -cpu host in upstream QEMU? And
> what would a sane default look like?
> 
> So I had this idea of looping through all available CPU definitions. We can
> pretty well tell if our host is able to execute any of them by checking the
> respective flags and seeing if our host has all features the CPU definition
> requires. With that, we can create a -cpu type that would fall back to the
> "best known CPU definition" that our host can fulfill. On my Phenom II
> system for example, that would be -cpu phenom.
> 
> With this approach we can test and verify that CPU types actually work at
> any random user setup, because we can always verify that all the -cpu types
> we ship actually work. And we only default to some clever mechanism that
> chooses from one of these.
> 
> Signed-off-by: Alexander Graf <agraf@suse.de>

Despite the long commit message a cover letter would've been nice. ;)

Anything that operates on x86_def_t will obviously need to be refactored
when we agree on the course for x86 CPU subclasses.
But no objection to getting it done some way that works today.

Andreas

-- 
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 1/3] KVM: Add new -cpu best
  2012-06-26 16:39 [PATCH v2 1/3] KVM: Add new -cpu best Alexander Graf
@ 2012-07-02 14:25   ` Avi Kivity
  2012-06-26 16:39 ` [PATCH v2 3/3] i386: KVM: List -cpu host and best in -cpu ? Alexander Graf
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: Avi Kivity @ 2012-07-02 14:25 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Anthony Liguori, Ryan Harper, qemu-devel qemu-devel, KVM list

On 06/26/2012 07:39 PM, Alexander Graf wrote:
> During discussions on whether to make -cpu host the default in SLE, I found
> myself disagreeing to the thought, because it potentially opens a big can
> of worms for potential bugs. But if I already am so opposed to it for SLE, how
> can it possibly be reasonable to default to -cpu host in upstream QEMU? And
> what would a sane default look like?
> 
> So I had this idea of looping through all available CPU definitions. We can
> pretty well tell if our host is able to execute any of them by checking the
> respective flags and seeing if our host has all features the CPU definition
> requires. With that, we can create a -cpu type that would fall back to the
> "best known CPU definition" that our host can fulfill. On my Phenom II
> system for example, that would be -cpu phenom.
> 
> With this approach we can test and verify that CPU types actually work at
> any random user setup, because we can always verify that all the -cpu types
> we ship actually work. And we only default to some clever mechanism that
> chooses from one of these.
> 
>  
> +/* Are all guest feature bits present on the host? */
> +static bool cpu_x86_feature_subset(uint32_t host, uint32_t guest)
> +{
> +    int i;
> +
> +    for (i = 0; i < 32; i++) {
> +        uint32_t mask = 1 << i;
> +        if ((guest & mask) && !(host & mask)) {
> +            return false;
> +        }
> +    }
> +
> +    return true;

    return !(guest & ~host);


> +}



> +
> +
> +
> +static void cpu_x86_fill_best(x86_def_t *x86_cpu_def)
> +{
> +    x86_def_t *def;
> +
> +    x86_cpu_def->family = 0;
> +    x86_cpu_def->model = 0;
> +    for (def = x86_defs; def; def = def->next) {
> +        if (cpu_x86_fits_host(def) && cpu_x86_fits_higher(def, x86_cpu_def)) {
> +            memcpy(x86_cpu_def, def, sizeof(*def));
> +        }
      *x86_cpu_def = *def;
> +    }
> +
> +    if (!x86_cpu_def->family && !x86_cpu_def->model) {
> +        fprintf(stderr, "No fitting CPU model found!\n");
> +        exit(1);
> +    }
> +}
> +
>  static int unavailable_host_feature(struct model_features_t *f, uint32_t mask)
>  {
>      int i;
> @@ -878,6 +957,8 @@ static int cpu_x86_find_by_name(x86_def_t *x86_cpu_def, const char *cpu_model)
>              break;
>      if (kvm_enabled() && name && strcmp(name, "host") == 0) {
>          cpu_x86_fill_host(x86_cpu_def);
> +    } else if (kvm_enabled() && name && strcmp(name, "best") == 0) {
> +        cpu_x86_fill_best(x86_cpu_def);
>      } else if (!def) {
>          goto error;
>      } else {
> 

Should we copy the cache size etc. from the host?


-- 
error compiling committee.c: too many arguments to function

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 2/3] KVM: Use -cpu best as default on x86
  2012-06-26 16:39 ` [PATCH v2 2/3] KVM: Use -cpu best as default on x86 Alexander Graf
@ 2012-07-02 14:27     ` Avi Kivity
  0 siblings, 0 replies; 13+ messages in thread
From: Avi Kivity @ 2012-07-02 14:27 UTC (permalink / raw)
  To: Alexander Graf
  Cc: qemu-devel qemu-devel, KVM list, Ryan Harper, Anthony Liguori

On 06/26/2012 07:39 PM, Alexander Graf wrote:
> When running QEMU without -cpu parameter, the user usually wants a sane
> default. So far, we're using the qemu64/qemu32 CPU type, which basically
> means "the maximum TCG can emulate".
> 
> That's a really good default when using TCG, but when running with KVM
> we much rather want a default saying "the maximum performance I can get".
> 
> Fortunately we just added an option that gives us the best performance
> while still staying safe on the testability side of things: -cpu best.
> So all we need to do is make -cpu best the default when the user doesn't
> explicitly specify a CPU type.
> 
> This fixes a lot of subtile breakage in the GNU toolchain (libgmp) which

subtle

> hicks up on QEMU's non-existent CPU models.
> 
> This patch also adds a new pc-1.2 machine type to stay backwards compatible
> with older versions of QEMU.

-- 
error compiling committee.c: too many arguments to function



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 1/3] KVM: Add new -cpu best
  2012-07-02 14:25   ` [Qemu-devel] " Avi Kivity
@ 2012-07-09 11:57     ` Alexander Graf
  -1 siblings, 0 replies; 13+ messages in thread
From: Alexander Graf @ 2012-07-09 11:57 UTC (permalink / raw)
  To: Avi Kivity; +Cc: qemu-devel qemu-devel, KVM list, Ryan Harper, Anthony Liguori


On 02.07.2012, at 16:25, Avi Kivity wrote:

> On 06/26/2012 07:39 PM, Alexander Graf wrote:
>> During discussions on whether to make -cpu host the default in SLE, I found
>> myself disagreeing to the thought, because it potentially opens a big can
>> of worms for potential bugs. But if I already am so opposed to it for SLE, how
>> can it possibly be reasonable to default to -cpu host in upstream QEMU? And
>> what would a sane default look like?
>> 
>> So I had this idea of looping through all available CPU definitions. We can
>> pretty well tell if our host is able to execute any of them by checking the
>> respective flags and seeing if our host has all features the CPU definition
>> requires. With that, we can create a -cpu type that would fall back to the
>> "best known CPU definition" that our host can fulfill. On my Phenom II
>> system for example, that would be -cpu phenom.
>> 
>> With this approach we can test and verify that CPU types actually work at
>> any random user setup, because we can always verify that all the -cpu types
>> we ship actually work. And we only default to some clever mechanism that
>> chooses from one of these.
>> 
>> 
>> +/* Are all guest feature bits present on the host? */
>> +static bool cpu_x86_feature_subset(uint32_t host, uint32_t guest)
>> +{
>> +    int i;
>> +
>> +    for (i = 0; i < 32; i++) {
>> +        uint32_t mask = 1 << i;
>> +        if ((guest & mask) && !(host & mask)) {
>> +            return false;
>> +        }
>> +    }
>> +
>> +    return true;
> 
>    return !(guest & ~host);

I guess it helps to think :).

> 
> 
>> +}
> 
> 
> 
>> +
>> +
>> +
>> +static void cpu_x86_fill_best(x86_def_t *x86_cpu_def)
>> +{
>> +    x86_def_t *def;
>> +
>> +    x86_cpu_def->family = 0;
>> +    x86_cpu_def->model = 0;
>> +    for (def = x86_defs; def; def = def->next) {
>> +        if (cpu_x86_fits_host(def) && cpu_x86_fits_higher(def, x86_cpu_def)) {
>> +            memcpy(x86_cpu_def, def, sizeof(*def));
>> +        }
>      *x86_cpu_def = *def;
>> +    }
>> +
>> +    if (!x86_cpu_def->family && !x86_cpu_def->model) {
>> +        fprintf(stderr, "No fitting CPU model found!\n");
>> +        exit(1);
>> +    }
>> +}
>> +
>> static int unavailable_host_feature(struct model_features_t *f, uint32_t mask)
>> {
>>     int i;
>> @@ -878,6 +957,8 @@ static int cpu_x86_find_by_name(x86_def_t *x86_cpu_def, const char *cpu_model)
>>             break;
>>     if (kvm_enabled() && name && strcmp(name, "host") == 0) {
>>         cpu_x86_fill_host(x86_cpu_def);
>> +    } else if (kvm_enabled() && name && strcmp(name, "best") == 0) {
>> +        cpu_x86_fill_best(x86_cpu_def);
>>     } else if (!def) {
>>         goto error;
>>     } else {
>> 
> 
> Should we copy the cache size etc. from the host?

I don't think so. We should rather make sure we always have cpu descriptions available close to what people out there actually use.


Alex


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Qemu-devel] [PATCH v2 1/3] KVM: Add new -cpu best
@ 2012-07-09 11:57     ` Alexander Graf
  0 siblings, 0 replies; 13+ messages in thread
From: Alexander Graf @ 2012-07-09 11:57 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Anthony Liguori, Ryan Harper, qemu-devel qemu-devel, KVM list


On 02.07.2012, at 16:25, Avi Kivity wrote:

> On 06/26/2012 07:39 PM, Alexander Graf wrote:
>> During discussions on whether to make -cpu host the default in SLE, I found
>> myself disagreeing with the thought, because it potentially opens a big can
>> of worms for potential bugs. But if I already am so opposed to it for SLE, how
>> can it possibly be reasonable to default to -cpu host in upstream QEMU? And
>> what would a sane default look like?
>> 
>> So I had this idea of looping through all available CPU definitions. We can
>> pretty well tell if our host is able to execute any of them by checking the
>> respective flags and seeing if our host has all features the CPU definition
>> requires. With that, we can create a -cpu type that would fall back to the
>> "best known CPU definition" that our host can fulfill. On my Phenom II
>> system for example, that would be -cpu phenom.
>> 
>> With this approach we can test and verify that CPU types actually work on
>> any random user setup, because we can always verify that all the -cpu types
>> we ship actually work. And we only default to some clever mechanism that
>> chooses from one of these.
>> 
>> 
>> +/* Are all guest feature bits present on the host? */
>> +static bool cpu_x86_feature_subset(uint32_t host, uint32_t guest)
>> +{
>> +    int i;
>> +
>> +    for (i = 0; i < 32; i++) {
>> +        uint32_t mask = 1 << i;
>> +        if ((guest & mask) && !(host & mask)) {
>> +            return false;
>> +        }
>> +    }
>> +
>> +    return true;
> 
>    return !(guest & ~host);

I guess it helps to think :).
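For the record, the two forms really are equivalent: the guest's features are a subset of the host's exactly when the guest sets no bit that the host lacks. A quick standalone sanity check, comparing the loop as posted against the suggested one-liner (not part of the patch, just an illustration):

```c
#include <stdbool.h>
#include <stdint.h>

/* The loop as posted in the patch. */
static bool subset_loop(uint32_t host, uint32_t guest)
{
    int i;

    for (i = 0; i < 32; i++) {
        uint32_t mask = 1u << i;
        if ((guest & mask) && !(host & mask)) {
            return false;   /* guest wants a feature the host lacks */
        }
    }
    return true;
}

/* The suggested one-liner: mask out the host's bits;
 * any bit left over is a feature missing on the host. */
static bool subset_mask(uint32_t host, uint32_t guest)
{
    return !(guest & ~host);
}
```

Both agree on all inputs; for example subset_mask(0x00ff, 0x0f0f) is false because the 0x0f00 bits are missing on the host.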

> 
> 
>> +}
> 
> 
> 
>> +
>> +
>> +
>> +static void cpu_x86_fill_best(x86_def_t *x86_cpu_def)
>> +{
>> +    x86_def_t *def;
>> +
>> +    x86_cpu_def->family = 0;
>> +    x86_cpu_def->model = 0;
>> +    for (def = x86_defs; def; def = def->next) {
>> +        if (cpu_x86_fits_host(def) && cpu_x86_fits_higher(def, x86_cpu_def)) {
>> +            memcpy(x86_cpu_def, def, sizeof(*def));
>> +        }
>      *x86_cpu_def = *def;
>> +    }
>> +
>> +    if (!x86_cpu_def->family && !x86_cpu_def->model) {
>> +        fprintf(stderr, "No fitting CPU model found!\n");
>> +        exit(1);
>> +    }
>> +}
>> +
>> static int unavailable_host_feature(struct model_features_t *f, uint32_t mask)
>> {
>>     int i;
>> @@ -878,6 +957,8 @@ static int cpu_x86_find_by_name(x86_def_t *x86_cpu_def, const char *cpu_model)
>>             break;
>>     if (kvm_enabled() && name && strcmp(name, "host") == 0) {
>>         cpu_x86_fill_host(x86_cpu_def);
>> +    } else if (kvm_enabled() && name && strcmp(name, "best") == 0) {
>> +        cpu_x86_fill_best(x86_cpu_def);
>>     } else if (!def) {
>>         goto error;
>>     } else {
>> 
> 
> Should we copy the cache size etc. from the host?

I don't think so. We should rather make sure we always have CPU descriptions available that are close to what people out there actually use.
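To make the selection walk concrete, here is a minimal standalone sketch of the "-cpu best" logic under discussion. Note the bodies of cpu_x86_fits_host() and cpu_x86_fits_higher() are not shown in the hunk above, so fits_host() and fits_higher() below are assumptions inferred from the discussion (feature-subset check; prefer higher family, then model), and cpu_def is a cut-down stand-in for x86_def_t:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Cut-down stand-in for x86_def_t: one feature word plus family/model.
 * The real structure carries several feature words and more identity
 * fields; this only illustrates the selection walk. */
typedef struct cpu_def {
    const char *name;
    int family;
    int model;
    uint32_t features;
    struct cpu_def *next;
} cpu_def;

/* Assumed body: a definition fits if the host has every feature it needs. */
static bool fits_host(const cpu_def *def, uint32_t host_features)
{
    return !(def->features & ~host_features);
}

/* Assumed body: prefer newer silicon -- higher family wins, then model. */
static bool fits_higher(const cpu_def *def, const cpu_def *best)
{
    if (def->family != best->family) {
        return def->family > best->family;
    }
    return def->model > best->model;
}

/* Walk the list and keep the newest definition the host can satisfy;
 * returns NULL when nothing fits (the patch prints an error and exits). */
static const cpu_def *fill_best(cpu_def *defs, uint32_t host_features)
{
    cpu_def best = { .family = 0, .model = 0 };
    const cpu_def *found = NULL;
    cpu_def *def;

    for (def = defs; def; def = def->next) {
        if (fits_host(def, host_features) && fits_higher(def, &best)) {
            best = *def;    /* plain struct assignment, per the review */
            found = def;
        }
    }
    return found;
}
```

With a two-entry list and a host that only has the older model's feature bits, fill_best() picks that older entry, matching the "best known CPU definition" behaviour described in the cover text.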


Alex

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2012-07-09 11:57 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-06-26 16:39 [PATCH v2 1/3] KVM: Add new -cpu best Alexander Graf
2012-06-26 16:39 ` [PATCH v2 2/3] KVM: Use -cpu best as default on x86 Alexander Graf
2012-07-02 14:27   ` Avi Kivity
2012-07-02 14:27     ` [Qemu-devel] " Avi Kivity
2012-06-26 16:39 ` [PATCH v2 3/3] i386: KVM: List -cpu host and best in -cpu ? Alexander Graf
2012-07-02 14:02 ` [Qemu-devel] [PATCH v2 1/3] KVM: Add new -cpu best Alexander Graf
2012-07-02 14:02   ` Alexander Graf
2012-07-02 14:24 ` Andreas Färber
2012-07-02 14:24   ` Andreas Färber
2012-07-02 14:25 ` Avi Kivity
2012-07-02 14:25   ` [Qemu-devel] " Avi Kivity
2012-07-09 11:57   ` Alexander Graf
2012-07-09 11:57     ` [Qemu-devel] " Alexander Graf
