* -cpu host (was Re: [Qemu-devel] KVM call minutes for 2013-08-06)
@ 2013-08-08 12:51 ` Peter Maydell
  0 siblings, 0 replies; 29+ messages in thread
From: Peter Maydell @ 2013-08-08 12:51 UTC (permalink / raw)
  To: quintela; +Cc: KVM devel mailing list, qemu-devel, kvmarm

[I missed this KVM call but the stuff about -cpu host ties into
an issue we've been grappling with for ARM KVM, so it seems
a reasonable jumping-off-point.]

On 6 August 2013 16:15, Juan Quintela <quintela@redhat.com> wrote:
> 2013-08-06
> ----------
>
> What does libvirt need/miss today?
> - how to handle machine types? creating them inside qemu?
> - qemu --cpu help
>   only shows cpus, not what features qemu will use
> - qemu -cpu host
>   what exactly does this mean?  kvm removes some flags.
> - Important to know if migration would work.
> - Machine types sometimes disable some feature, so cpu alone is not
>   enough.

> - kernel removes some features because it knows they can't be virtualised
> - qemu adds some others because it knows it doesn't need host support
> - and then lots of features in the middle

So, coming at this from an ARM perspective:
Should any target arch that supports KVM also support "-cpu host"?
If so, what should it do? Is there a description somewhere of
what the x86 and PPC semantics of -cpu host are?

For ARM you can't get at feature info of the host from userspace
(unless you want to get into parsing /proc/cpuinfo), so my current
idea is to have KVM_ARM_VCPU_INIT support a target-cpu-type
which means "whatever host CPU is". Then when we've created the
vcpu we can populate QEMU's idea of what the CPU features are
by using the existing ioctls for reading the cp15 registers of
the vcpu.
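As an illustration of the kind of identification data those cp15 reads would surface: the Main ID Register (MIDR, in cp15 c0) packs the implementer and part number into architecturally fixed bit fields. A minimal decoder sketch (the helper name is made up; 0x410FC0F0 is the standard Cortex-A15 MIDR encoding):

```python
def decode_midr(midr):
    """Split an ARM MIDR value into its architecturally defined fields."""
    return {
        "implementer":  (midr >> 24) & 0xFF,   # 0x41 == 'A' == ARM Ltd.
        "variant":      (midr >> 20) & 0xF,
        "architecture": (midr >> 16) & 0xF,
        "part":         (midr >> 4)  & 0xFFF,  # 0xC0F == Cortex-A15
        "revision":     midr & 0xF,
    }

fields = decode_midr(0x410FC0F0)
print(hex(fields["implementer"]), hex(fields["part"]))  # 0x41 0xc0f
```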

The other unresolved thing is what "-cpu host" ought to mean
for the CPU's on-chip peripherals (of which the major one is
the interrupt controller) -- if the host is an A57 should
this imply that you always get the A57's GICv3, or is it OK
to provide an A57 with a GICv2? At the moment QEMU models the
per-cpu peripherals in a somewhat more semi-detached fashion
than is the case in silicon, treating them as more a part
of the board model than of the cpu itself. Having '-cpu host'
not affect them might be the pragmatic choice, since it fits
with what QEMU currently does and with kernel-side situations
where the host CPU may only be able to show the guest VM a
GICv2 view of the world (or only a GICv3, as the case may be).
For this to work it does require that guests figure out what
their per-cpu peripherals are by looking at the device tree
rather than saying "oh, this is an A57, I know all A57s
have this", of course...
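For reference, the device-tree node a guest would consult for its interrupt controller might look roughly like this (a sketch: "arm,cortex-a15-gic" is the usual GICv2 compatible string, but the unit addresses here are placeholders):

```dts
intc: interrupt-controller@2c001000 {
        compatible = "arm,cortex-a15-gic";
        #interrupt-cells = <3>;
        interrupt-controller;
        /* distributor, then CPU interface; placeholder addresses */
        reg = <0x2c001000 0x1000>,
              <0x2c002000 0x1000>;
};
```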

-- PMM

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] -cpu host (was Re:  KVM call minutes for 2013-08-06)
  2013-08-08 12:51 ` [Qemu-devel] -cpu host (was " Peter Maydell
@ 2013-08-08 15:55   ` Andreas Färber
  0 siblings, 0 replies; 29+ messages in thread
From: Andreas Färber @ 2013-08-08 15:55 UTC (permalink / raw)
  To: Peter Maydell; +Cc: quintela, qemu-devel, KVM devel mailing list, kvmarm

Hi Peter,

Am 08.08.2013 14:51, schrieb Peter Maydell:
> [I missed this KVM call but the stuff about -cpu host ties into
> an issue we've been grappling with for ARM KVM, so it seems
> a reasonable jumping-off-point.]
> 
> On 6 August 2013 16:15, Juan Quintela <quintela@redhat.com> wrote:
>> 2013-08-06
>> ----------
>>
>> What does libvirt need/miss today?
>> - how to handle machine types? creating them inside qemu?
>> - qemu --cpu help
>>   only shows cpus, not what features qemu will use
>> - qemu -cpu host
>>   what exactly does this mean?  kvm removes some flags.
>> - Important to know if migration would work.
>> - Machine types sometimes disable some feature, so cpu alone is not
>>   enough.
> 
>> - kernel removes some features because it knows they can't be virtualised
>> - qemu adds some others because it knows it doesn't need host support
>> - and then lots of features in the middle
> 
> So, coming at this from an ARM perspective:
> Should any target arch that supports KVM also support "-cpu host"?
> If so, what should it do?

I think that depends on the target and whether/what is useful.

> Is there a description somewhere of
> what the x86 and PPC semantics of -cpu host are?

I'm afraid our usual documentation will be reading the source code. ;)

x86 was first to implement -cpu host and passed through pretty much all
host features, even ones that would not work without additional support
code. I've seen a bunch of bugs where that led to GMP and other software
breaking badly. Lately, in the case of the PMU, we've started to limit
that. Alex proposed -cpu best, which has never been merged to date. It
was similar to how ppc's -cpu host works:

ppc matches the Processor Version Register (PVR) in kvm.c against its
known models from cpu-models.c (strictly today, mask being discussed).
The PVR can be read from userspace via mfpvr, an alias of mfspr (Move
From Special Purpose Register; possibly emulated for userspace by the
kernel?).
CPU features are all QEMU-driven AFAIU, through the "CPU families" in
translate_init.c. Beware, everything is highly macro'fied in ppc code.
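That PVR/mask matching can be sketched as a table lookup; the entries below are illustrative rather than copied from cpu-models.c:

```python
# Illustrative sketch of PVR -> model matching with a per-entry mask,
# in the spirit of QEMU's ppc cpu-models.c (table contents are made up).
KNOWN_MODELS = [
    # (pvr_value, pvr_mask, name)
    (0x003F0000, 0xFFFF0000, "POWER7"),     # any revision of this version
    (0x004A0000, 0xFFFF0000, "POWER7+"),
    (0x12345678, 0xFFFFFFFF, "ExactOnly"),  # strict match, no mask slack
]

def match_pvr(pvr):
    """Return the first known model whose masked PVR matches, else None."""
    for value, mask, name in KNOWN_MODELS:
        if (pvr & mask) == (value & mask):
            return name
    return None

print(match_pvr(0x003F0201))  # a hypothetical POWER7 revision -> POWER7
```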

> For ARM you can't get at feature info of the host from userspace
> (unless you want to get into parsing /proc/cpuinfo), so my current
> idea is to have KVM_ARM_VCPU_INIT support a target-cpu-type
> which means "whatever host CPU is". Then when we've created the
> vcpu we can populate QEMU's idea of what the CPU features are
> by using the existing ioctls for reading the cp15 registers of
> the vcpu.

Sounds sane to me iff those cp15 registers all work with KVM and don't
need any additional KVM/QEMU/device code.

> The other unresolved thing is what "-cpu host" ought to mean
> for the CPU's on-chip peripherals (of which the major one is
> the interrupt controller) -- if the host is an A57 should
> this imply that you always get the A57's GICv3, or is it OK
> to provide an A57 with a GICv2? At the moment QEMU models the
> per-cpu peripherals in a somewhat more semi-detached fashion
> than is the case in silicon, treating them as more a part
> of the board model than of the cpu itself.

Feel free to submit patches changing that. A prerequisite should then be
to have those devices be pure TYPE_DEVICE rather than
TYPE_SYS_BUS_DEVICE; otherwise you'll run into the same hot-plug trap
as we did with the x86 APIC (we had to invent a hotpluggable ICC bus as
an interim solution).

> Having '-cpu host'
> not affect them might be the pragmatic choice, since it fits
> with what QEMU currently does and with kernel-side situations
> where the host CPU may only be able to show the guest VM a
> GICv2 view of the world (or only a GICv3, as the case may be).
> For this to work it does require that guests figure out what
> their per-cpu peripherals are by looking at the device tree
> rather than saying "oh, this is an A57, I know all A57s
> have this", of course...

Without directly answering the question and continuing from above, my
personal view has been that we need to move away from the current CPU
model towards a) how hardware is structured and b) how we want things
to behave in virtualized environments.

Take x86 as an example: CPUState corresponds to a hyperthread today, but
we want hotplug to work like it does on a physical machine: hot-adding
on socket-level only. Beyond just building the topology with Container
objects, that means having a Xeon-X5-4242 object that has-a CPU core
has-a CPU thread and any devices the particular layers bring along.

For SoCs I have been proposing - for sh7750 and lately tegra2 - to model
"the black chip on the board" as a TYPE_DEVICE for encapsulation across
boards. That means the GIC would no longer be instantiated on the board
but as part of that object, and -smp and -cpu would consequently lose
influence.

We could interpret -cpu host as "instantiate the host's SoC object". But
the mainstream SoC for KVM virtualization is exynos5, and no one has sat
down to model exynos5 in QEMU so far, so that would be moot. Versatile
Express is rather unlikely to match the host environment KVM is used in,
and when using Soft Macros (or what ARM calls their FPGA-based
emulation) then things get fuzzy anyway.

Similar problem for CPU hotplug: there is no real match in physical ARM
hardware that we can copy for KVM/QEMU. It's all mixed in one chip where
we can only enable/disable things via MMIO in physical reality.

You recently proposed to have the CPUs in the a*mpcore_priv object,
which also happens to own the GIC. Having the CPU model be a property of
a*mpcore would complicate a lot of things QOM-wise but for the question
at hand would allow to exchange the GIC based on CPU model, so I'm
undecided. a9mpcore_priv with a cortex-a15 doesn't make much sense
though, given there's a15mpcore_priv with a different number of IRQs and
fewer/different child devices.

Given that ARM SoCs are much less standardized than x86 PCs, I would
conclude that passing random CPUs into a board/SoC does not make sense
and should at least be limited to known-good combinations such as
Cortex-A7 vs. Cortex-A15. If someone wants to experiment with
modifications of SoCs/boards then we should rather provide easier ways
to derive a custom board config than pretend that the user can just set
a magical command-line argument to cherry-pick parts of the system.

Regards,
Andreas

-- 
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg


* Re: [Qemu-devel] -cpu host (was Re: KVM call minutes for 2013-08-06)
  2013-08-08 15:55   ` Andreas Färber
@ 2013-08-08 18:20     ` Peter Maydell
  0 siblings, 0 replies; 29+ messages in thread
From: Peter Maydell @ 2013-08-08 18:20 UTC (permalink / raw)
  To: Andreas Färber; +Cc: quintela, qemu-devel, KVM devel mailing list, kvmarm

On 8 August 2013 16:55, Andreas Färber <afaerber@suse.de> wrote:
> Am 08.08.2013 14:51, schrieb Peter Maydell:
>> So, coming at this from an ARM perspective:
>> Should any target arch that supports KVM also support "-cpu host"?
>> If so, what should it do?
>
> I think that depends on the target and whether/what is useful.

The most immediate problem we have is that we don't want to give QEMU a
lot of info about v8 CPUs which it doesn't really need just in order to
start a VM; I think -cpu host would fix that particular problem.

>> Is there a description somewhere of
>> what the x86 and PPC semantics of -cpu host are?
>
> I'm afraid our usual documentation will be reading the source code. ;)
>
> x86 was first to implement -cpu host and passed through pretty much all
> host features even if they would not work without additional support
> code. I've seen a bunch of bugs where that leads to GMP and others
> breaking badly. Lately in the case of PMU we've started to limit that.
> Alex proposed -cpu best, which was never merged to date. It was similar
> to how ppc's -cpu host works:
>
> ppc matches the Processor Version Register (PVR) in kvm.c against its
> known models from cpu-models.c (strictly today, mask being discussed).
> The PVR can be read from userspace via mfpvr alias to mfspr (Move From
> Special Purpose Register; possibly emulated for userspace by kernel?).
> CPU features are all QEMU-driven AFAIU, through the "CPU families" in
> translate_init.c. Beware, everything is highly macro'fied in ppc code.

In theory we could do a similar thing for ARM (pull the CPU
implementer/part numbers out of cpuinfo and match them against
QEMU's list of known CPUs). However that means you can't run
KVM on a CPU which QEMU doesn't know about, which was one
of the reasons for the approach I suggested below.
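A sketch of what that cpuinfo matching would involve (the field names are the ones ARM kernels print in /proc/cpuinfo; the sample text is hard-coded so the example stays self-contained):

```python
# Hard-coded sample of what an ARM /proc/cpuinfo might contain.
SAMPLE_CPUINFO = """\
processor       : 0
model name      : ARMv7 Processor rev 4 (v7l)
CPU implementer : 0x41
CPU architecture: 7
CPU part        : 0xc0f
CPU revision    : 4
"""

def parse_cpuinfo(text):
    """Pull the implementer/part fields out of /proc/cpuinfo-style text."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return int(info["CPU implementer"], 16), int(info["CPU part"], 16)

implementer, part = parse_cpuinfo(SAMPLE_CPUINFO)
print(hex(implementer), hex(part))  # 0x41 0xc0f
```

The matching step would then be a lookup of (implementer, part) against QEMU's known-CPU table, with exactly the limitation noted above: an unknown part number means no match.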

>> For ARM you can't get at feature info of the host from userspace
>> (unless you want to get into parsing /proc/cpuinfo), so my current
>> idea is to have KVM_ARM_VCPU_INIT support a target-cpu-type
>> which means "whatever host CPU is". Then when we've created the
>> vcpu we can populate QEMU's idea of what the CPU features are
>> by using the existing ioctls for reading the cp15 registers of
>> the vcpu.
>
> Sounds sane to me iff those cp15 registers all work with KVM and don't
> need any additional KVM/QEMU/device code.

Yes; KVM won't tell us about CP15 registers unless they
are exposed to the guest VM (that is, we're querying the
VCPU, not the host CPU). More generally, the cp15 "tuple
list" code I landed a couple of months back makes the kernel
the authoritative source for which cp15 registers exist and
what their values are -- in -enable-kvm mode QEMU no longer
cares about them (its own list of which registers exist for
which CPU is used only for TCG).

>> The other unresolved thing is what "-cpu host" ought to mean
>> for the CPU's on-chip peripherals (of which the major one is
>> the interrupt controller) -- if the host is an A57 should
>> this imply that you always get the A57's GICv3, or is it OK
>> to provide an A57 with a GICv2? At the moment QEMU models the
>> per-cpu peripherals in a somewhat more semi-detached fashion
>> than is the case in silicon, treating them as more a part
>> of the board model than of the cpu itself.
>
> Feel free to submit patches changing that. Prerequisite should
> then be to have those devices be pure TYPE_DEVICE rather than
> TYPE_SYS_BUS_DEVICE, or otherwise you'll run into the same
> hot-plug trap as we did with the x86 APIC (we had to invent a
> hotpluggable ICC bus as interim solution).

Mmm. I'm not sure what cpu hotplug should be in the ARM world
since obviously you can't hotplug a SoC (one possibility is
that we don't actually hotplug CPUs, we just create N of them
but leave most of them "powered off" via a power-control API
like PSCI).
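A toy model of that create-N-but-leave-powered-off scheme (purely illustrative; real PSCI is a firmware interface reached via SMC/HVC calls, not a Python object):

```python
class VCpu:
    def __init__(self, cpu_id):
        self.cpu_id = cpu_id
        self.powered = False
        self.entry_point = None

class Machine:
    """Create every vCPU up front; bring them online via a PSCI-like CPU_ON."""
    def __init__(self, n):
        self.cpus = [VCpu(i) for i in range(n)]
        self.cpus[0].powered = True          # boot CPU starts powered on

    def cpu_on(self, cpu_id, entry_point):
        cpu = self.cpus[cpu_id]
        if cpu.powered:
            return "ALREADY_ON"              # mirrors PSCI's error style
        cpu.powered = True
        cpu.entry_point = entry_point
        return "SUCCESS"

m = Machine(4)
print(m.cpu_on(2, 0x80000000))               # SUCCESS
print(sum(c.powered for c in m.cpus))        # 2
```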

>> Having '-cpu host'
>> not affect them might be the pragmatic choice, since it fits
>> with what QEMU currently does and with kernel-side situations
>> where the host CPU may only be able to show the guest VM a
>> GICv2 view of the world (or only a GICv3, as the case may be).
>> For this to work it does require that guests figure out what
>> their per-cpu peripherals are by looking at the device tree
>> rather than saying "oh, this is an A57, I know all A57s
>> have this", of course...
>
> Without directly answering the question and continuing from above, my
> personal view has been that we need to get away from the current CPU
> model to a) how hardware is structured and b) how we want to have things
> behave in virtualized environments.
>
> Take x86 as an example: CPUState corresponds to a hyperthread today, but
> we want hotplug to work like it does on a physical machine: hot-adding
> on socket-level only. Beyond just building the topology with Container
> objects, that means having a Xeon-X5-4242 object that has-a CPU core
> has-a CPU thread and any devices the particular layers bring along.
>
> For SoCs I have been proposing - for sh7750 and lately tegra2 - to model
> "the black chip on the board" as a TYPE_DEVICE for encapsulation across
> boards. Meaning the GIC would no longer be instantiated on the board but
> as part of an object, and -smp and -cpu would as a consequence loose in
> influence.

Yes, I agree with this as a general approach.

> We could interpret -cpu host as instantiate the host's SoC object. But
> the mainstream SoC for KVM virtualization is exynos5, and no one sat
> down to model exynos5 in QEMU so far, so that would be moot. Versatile
> Express is rather unlikely to match the host environment KVM is used in,
> and when using Soft Macros (or what ARM calls their FPGA-based
> emulation) then things get fuzzy anyway.

Agreed that '-cpu host' shouldn't instantiate a whole SoC. I think
the most useful behaviour would be that (for example) an A15 SoC
model should permit only either "-cpu cortex-a15" (pointless but
preserves backwards compatibility for command lines) or "-cpu host"
(only allowed when KVM enabled, possibly including a check that the
host CPU is 'close enough' to the SoC CPU, if we can define what
we mean by 'close enough'...).

> Similar problem for CPU hotplug: there is no real match in physical ARM
> hardware that we can copy for KVM/QEMU. It's all mixed in one chip where
> we can only enable/disable things via MMIO in physical reality.

This is why I like the idea of addressing the "give this VM more/fewer
CPUs" requirement by implementing power control and PSCI: it actually
matches what the hardware does to give more or fewer cores to an OS.
However, I don't know what ARM server hardware is likely to do in the
way of hotplug...

> You recently proposed to have the CPUs in the a*mpcore_priv object,
> which also happens to own the GIC. Having the CPU model be a property of
> a*mpcore would complicate a lot of things QOM-wise but for the question
> at hand would allow to exchange the GIC based on CPU model, so I'm
> undecided. a9mpcore_priv with a cortex-a15 doesn't make much sense
> though, given there's a15mpcore_priv with different amount of IRQs and
> less/different child devices.

The patches I sent out today that get rid of arm_pic ought to
make it a little easier to move CPUs into the a*mpcore containers
if we want to go down that path.

> Given that ARM SoCs are much less standardized then x86 PCs, I would
> conclude that passing random CPUs into a board/SoC does not make sense
> and should at least be limited to known-good combinations such as
> Cortex-A7 vs. Cortex-A15.

I totally agree with this. The only reason we don't error out more
than we do already is a combination of inertia and not having a
nice infrastructure for boards to put limits on what the user can
pass on the command line [compare also "how does a board say it
needs a kernel or a flash image" and "how does a board say that
it can only handle up to 512MB of RAM"...]

-- PMM

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] -cpu host (was Re: KVM call minutes for 2013-08-06)
@ 2013-08-08 18:20     ` Peter Maydell
  0 siblings, 0 replies; 29+ messages in thread
From: Peter Maydell @ 2013-08-08 18:20 UTC (permalink / raw)
  To: Andreas Färber; +Cc: kvmarm, qemu-devel, KVM devel mailing list, quintela

On 8 August 2013 16:55, Andreas Färber <afaerber@suse.de> wrote:
> Am 08.08.2013 14:51, schrieb Peter Maydell:
>> So, coming at this from an ARM perspective:
>> Should any target arch that supports KVM also support "-cpu host"?
>> If so, what should it do?
>
> I think that depends on the target and whether/what is useful.

The most immediate problem we have is we don't want to have
to give QEMU a lot of info about v8 CPUs which it doesn't
really need to have just in order to start a VM; I think
-cpu host would fix that particular problem.

>> Is there a description somewhere of
>> what the x86 and PPC semantics of -cpu host are?
>
> I'm afraid our usual documentation will be reading the source code. ;)
>
> x86 was first to implement -cpu host and passed through pretty much all
> host features even if they would not work without additional support
> code. I've seen a bunch of bugs where that leads to GMP and others
> breaking badly. Lately in the case of PMU we've started to limit that.
> Alex proposed -cpu best, which was never merged to date. It was similar
> to how ppc's -cpu host works:
>
> ppc matches the Processor Version Register (PVR) in kvm.c against its
> known models from cpu-models.c (strictly today, mask being discussed).
> The PVR can be read from userspace via mfpvr alias to mfspr (Move From
> Special Purpose Register; possibly emulated for userspace by kernel?).
> CPU features are all QEMU-driven AFAIU, through the "CPU families" in
> translate_init.c. Beware, everything is highly macro'fied in ppc code.

In theory we could do a similar thing for ARM (pull the CPU
implementer/part numbers out of cpuinfo and match them against
QEMU's list of known CPUs). However that means you can't run
KVM on a CPU which QEMU doesn't know about, which was one
of the reasons for the approach I suggested below.

>> For ARM you can't get at feature info of the host from userspace
>> (unless you want to get into parsing /proc/cpuinfo), so my current
>> idea is to have KVM_ARM_VCPU_INIT support a target-cpu-type
>> which means "whatever host CPU is". Then when we've created the
>> vcpu we can populate QEMU's idea of what the CPU features are
>> by using the existing ioctls for reading the cp15 registers of
>> the vcpu.
>
> Sounds sane to me iff those cp15 registers all work with KVM and don't
> need any additional KVM/QEMU/device code.

Yes; KVM won't tell us about CP15 registers unless they
are exposed to the guest VM (that is, we're querying the
VCPU, not the host CPU). More generally, the cp15 "tuple
list" code I landed a couple of months back makes the kernel
the authoritative source for which cp15 registers exist and
what their values are -- in -enable-kvm mode QEMU no longer
cares about them (its own list of which registers exist for
which CPU is used only for TCG).

>> The other unresolved thing is what "-cpu host" ought to mean
>> for the CPU's on-chip peripherals (of which the major one is
>> the interrupt controller) -- if the host is an A57 should
>> this imply that you always get the A57's GICv3, or is it OK
>> to provide an A57 with a GICv2? At the moment QEMU models the
>> per-cpu peripherals in a somewhat more semi-detached fashion
>> than is the case in silicon, treating them as more a part
>> of the board model than of the cpu itself.
>
> Feel free to submit patches changing that. Prerequisite should
> then be to have those devices be pure TYPE_DEVICE rather than
> TYPE_SYS_BUS_DEVICE, or otherwise you'll run into the same
> hot-plug trap as we did with the x86 APIC (we had to invent a
> hotpluggable ICC bus as interim solution).

Mmm. I'm not sure what cpu hotplug should be in the ARM world
since obviously you can't hotplug a SoC (one possibility is
that we don't actually hotplug CPUs, we just create N of them
but leave most of them "powered off" via a power-control API
like PSCI).

>> Having '-cpu host'
>> not affect them might be the pragmatic choice, since it fits
>> with what QEMU currently does and with kernel-side situations
>> where the host CPU may only be able to show the guest VM a
>> GICv2 view of the world (or only a GICv3, as the case may be).
>> For this to work it does require that guests figure out what
>> their per-cpu peripherals are by looking at the device tree
>> rather than saying "oh, this is an A57, I know all A57s
>> have this", of course...
>
> Without directly answering the question and continuing from above, my
> personal view has been that we need to get away from the current CPU
> model to a) how hardware is structured and b) how we want to have things
> behave in virtualized environments.
>
> Take x86 as an example: CPUState corresponds to a hyperthread today, but
> we want hotplug to work like it does on a physical machine: hot-adding
> on socket-level only. Beyond just building the topology with Container
> objects, that means having a Xeon-X5-4242 object that has-a CPU core
> has-a CPU thread and any devices the particular layers bring along.
>
> For SoCs I have been proposing - for sh7750 and lately tegra2 - to model
> "the black chip on the board" as a TYPE_DEVICE for encapsulation across
> boards. Meaning the GIC would no longer be instantiated on the board but
> as part of an object, and -smp and -cpu would as a consequence lose
> influence.

Yes, I agree with this as a general approach.

> We could interpret -cpu host as instantiate the host's SoC object. But
> the mainstream SoC for KVM virtualization is exynos5, and no one sat
> down to model exynos5 in QEMU so far, so that would be moot. Versatile
> Express is rather unlikely to match the host environment KVM is used in,
> and when using Soft Macros (or what ARM calls their FPGA-based
> emulation) then things get fuzzy anyway.

Agreed that '-cpu host' shouldn't instantiate a whole SoC. I think
the most useful behaviour would be that (for example) an A15 SoC
model should permit only either "-cpu cortex-a15" (pointless but
preserves backwards compatibility for command lines) or "-cpu host"
(only allowed when KVM enabled, possibly including a check that the
host CPU is 'close enough' to the SoC CPU, if we can define what
we mean by 'close enough'...).
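
Any "close enough" check would presumably key off the MIDR; a minimal sketch, assuming we compare only the implementer and primary part number fields and deliberately ignore variant/revision:

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

/* MIDR layout: [31:24] implementer, [23:20] variant,
 * [19:16] architecture, [15:4] primary part number, [3:0] revision. */
static unsigned midr_implementer(uint32_t midr) { return midr >> 24; }
static unsigned midr_partnum(uint32_t midr) { return (midr >> 4) & 0xfffu; }

/* One possible notion of "close enough": same implementer and part
 * number, so r2p1 vs r3p2 of the same core still match. */
static bool midr_close_enough(uint32_t a, uint32_t b)
{
    return midr_implementer(a) == midr_implementer(b) &&
           midr_partnum(a) == midr_partnum(b);
}
```

With this definition two Cortex-A15 revisions (implementer 0x41, part 0xc0f) match each other but not a Cortex-A57 (part 0xd07).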

> Similar problem for CPU hotplug: there is no real match in physical ARM
> hardware that we can copy for KVM/QEMU. It's all mixed in one chip where
> we can only enable/disable things via MMIO in physical reality.

This is why I like the idea of addressing the "give this VM more/fewer
CPUs" requirement via implementing power control and PSCI: it
actually does match what the hardware does to give more or fewer
cores to an OS. However I don't know what ARM server hardware
is likely to do in the way of hotplug...

> You recently proposed to have the CPUs in the a*mpcore_priv object,
> which also happens to own the GIC. Having the CPU model be a property of
> a*mpcore would complicate a lot of things QOM-wise but for the question
> at hand would allow to exchange the GIC based on CPU model, so I'm
> undecided. a9mpcore_priv with a cortex-a15 doesn't make much sense
> though, given there's a15mpcore_priv with different amount of IRQs and
> less/different child devices.

The patches I sent out today that get rid of arm_pic ought to
make it a little easier to move CPUs into the a*mpcore containers
if we want to go down that path.

> Given that ARM SoCs are much less standardized than x86 PCs, I would
> conclude that passing random CPUs into a board/SoC does not make sense
> and should at least be limited to known-good combinations such as
> Cortex-A7 vs. Cortex-A15.

I totally agree with this. The only reason we don't error out more
than we do already is a combination of inertia and not having a
nice infrastructure for boards to put limits on what the user can
pass on the command line [compare also "how does a board say it
needs a kernel or a flash image" and "how does a board say that
it can only handle up to 512MB of RAM"...]

-- PMM

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] -cpu host (was Re: KVM call minutes for 2013-08-06)
  2013-08-08 18:20     ` Peter Maydell
@ 2013-08-08 18:39       ` Christoffer Dall
  -1 siblings, 0 replies; 29+ messages in thread
From: Christoffer Dall @ 2013-08-08 18:39 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Andreas Färber, quintela, qemu-devel,
	KVM devel mailing list, kvmarm

On Thu, Aug 08, 2013 at 07:20:41PM +0100, Peter Maydell wrote:
> On 8 August 2013 16:55, Andreas Färber <afaerber@suse.de> wrote:
> > Am 08.08.2013 14:51, schrieb Peter Maydell:
> >> So, coming at this from an ARM perspective:
> >> Should any target arch that supports KVM also support "-cpu host"?
> >> If so, what should it do?
> >
> > I think that depends on the target and whether/what is useful.
> 
> The most immediate problem we have is we don't want to have
> to give QEMU a lot of info about v8 CPUs which it doesn't
> really need to have just in order to start a VM; I think
> -cpu host would fix that particular problem.
> 
> >> Is there a description somewhere of
> >> what the x86 and PPC semantics of -cpu host are?
> >
> > I'm afraid our usual documentation will be reading the source code. ;)
> >
> > x86 was first to implement -cpu host and passed through pretty much all
> > host features even if they would not work without additional support
> > code. I've seen a bunch of bugs where that leads to GMP and others
> > breaking badly. Lately in the case of PMU we've started to limit that.
> > Alex proposed -cpu best, which was never merged to date. It was similar
> > to how ppc's -cpu host works:
> >
> > ppc matches the Processor Version Register (PVR) in kvm.c against its
> > known models from cpu-models.c (strictly today, mask being discussed).
> > The PVR can be read from userspace via mfpvr alias to mfspr (Move From
> > Special Purpose Register; possibly emulated for userspace by kernel?).
> > CPU features are all QEMU-driven AFAIU, through the "CPU families" in
> > translate_init.c. Beware, everything is highly macro'fied in ppc code.
> 
> In theory we could do a similar thing for ARM (pull the CPU
> implementer/part numbers out of cpuinfo and match them against
> QEMU's list of known CPUs). However that means you can't run
> KVM on a CPU which QEMU doesn't know about, which was one
> of the reasons for the approach I suggested below.
> 
> >> For ARM you can't get at feature info of the host from userspace
> >> (unless you want to get into parsing /proc/cpuinfo), so my current
> >> idea is to have KVM_ARM_VCPU_INIT support a target-cpu-type
> >> which means "whatever host CPU is". Then when we've created the
> >> vcpu we can populate QEMU's idea of what the CPU features are
> >> by using the existing ioctls for reading the cp15 registers of
> >> the vcpu.
> >
> > Sounds sane to me iff those cp15 registers all work with KVM and don't
> > need any additional KVM/QEMU/device code.
> 
> Yes; KVM won't tell us about CP15 registers unless they
> are exposed to the guest VM (that is, we're querying the
> VCPU, not the host CPU). More generally, the cp15 "tuple
> list" code I landed a couple of months back makes the kernel
> the authoritative source for which cp15 registers exist and
> what their values are -- in -enable-kvm mode QEMU no longer
> cares about them (its own list of which registers exist for
> which CPU is used only for TCG).
> 
FWIW, from the kernel point of view I'd much prefer to return "this is
the type of VCPU that I prefer to emulate" to user space on this current
host than having QEMU come up with its own suggestion for CPU and asking
the kernel for it.  The reason is that it gives us slightly more freedom
in how we choose to support a given host SoC in that we can say that we
at least support core A on core B, so if user space can deal with a
virtual core A, we should be good.

-Christoffer

^ permalink raw reply	[flat|nested] 29+ messages in thread
* Re: [Qemu-devel] -cpu host (was Re: KVM call minutes for 2013-08-06)
  2013-08-08 18:39       ` Christoffer Dall
@ 2013-08-08 19:05         ` Peter Maydell
  -1 siblings, 0 replies; 29+ messages in thread
From: Peter Maydell @ 2013-08-08 19:05 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Andreas Färber, quintela, qemu-devel,
	KVM devel mailing list, kvmarm

On 8 August 2013 19:39, Christoffer Dall <christoffer.dall@linaro.org> wrote:
> FWIW, from the kernel point of view I'd much prefer to return "this is
> the type of VCPU that I prefer to emulate" to user space on this current
> host than having QEMU come up with its own suggestion for CPU and asking
> the kernel for it.  The reason is that it gives us slightly more freedom
> in how we choose to support a given host SoC in that we can say that we
> at least support core A on core B, so if user space can deal with a
> virtual core A, we should be good.

Hmm, I'm not sure how useful a "query support" kind of API would
be to QEMU. QEMU is basically going to have two use cases:
(1) "I want an A15" [ie -cpu cortex-a15]
(2) "give me whatever you have and I'll cope" [ie -cpu host]

so my thought was that we could just have the kernel support
    init.target = KVM_ARM_TARGET_HOST;
    memset(init.features, 0, sizeof(init.features));
    ret = kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_INIT, &init);

(in the same way we currently ask for KVM_ARM_TARGET_CORTEX_A15).

I guess we could have a "return preferred target value"
VM ioctl, but it seems a bit pointless given that the
only thing userspace is going to do with the return
value is immediately feed it back to the kernel...

-- PMM

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] -cpu host (was Re: KVM call minutes for 2013-08-06)
  2013-08-08 19:05         ` Peter Maydell
@ 2013-08-08 19:29           ` Christoffer Dall
  -1 siblings, 0 replies; 29+ messages in thread
From: Christoffer Dall @ 2013-08-08 19:29 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Andreas Färber, quintela, qemu-devel,
	KVM devel mailing list, kvmarm

On Thu, Aug 08, 2013 at 08:05:11PM +0100, Peter Maydell wrote:
> On 8 August 2013 19:39, Christoffer Dall <christoffer.dall@linaro.org> wrote:
> > FWIW, from the kernel point of view I'd much prefer to return "this is
> > the type of VCPU that I prefer to emulate" to user space on this current
> > host than having QEMU come up with its own suggestion for CPU and asking
> > the kernel for it.  The reason is that it gives us slightly more freedom
> > in how we choose to support a given host SoC in that we can say that we
> > at least support core A on core B, so if user space can deal with a
> > virtual core A, we should be good.
> 
> Hmm, I'm not sure how useful a "query support" kind of API would
> be to QEMU. QEMU is basically going to have two use cases:
> (1) "I want an A15" [ie -cpu cortex-a15]
> (2) "give me whatever you have and I'll cope" [ie -cpu host]
> 
> so my thought was that we could just have the kernel support
>     init.target = KVM_ARM_TARGET_HOST;
>     memset(init.features, 0, sizeof(init.features));
>     ret = kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_INIT, &init);
> 
> (in the same way we currently ask for KVM_ARM_TARGET_CORTEX_A15).
> 
> I guess we could have a "return preferred target value"
> VM ioctl, but it seems a bit pointless given that the
> only thing userspace is going to do with the return
> value is immediately feed it back to the kernel...
> 
My thinking was that the result of cpu = KVM_ARM_TARGET_HOST would be
the same as x = kvm_get_target_host(); cpu = x;, but at the same time
it would let QEMU know what it's dealing with.  Perhaps QEMU doesn't need
this for emulation, but isn't it useful for
save/restore/migration/debugging scenarios?

So, if you just use the KVM_ARM_TARGET_HOST value, do you expect the
kernel to just set the base address of the GIC interface, or?

-Christoffer

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] -cpu host (was Re: KVM call minutes for 2013-08-06)
  2013-08-08 19:29           ` Christoffer Dall
@ 2013-08-08 20:48             ` Peter Maydell
  -1 siblings, 0 replies; 29+ messages in thread
From: Peter Maydell @ 2013-08-08 20:48 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Andreas Färber, quintela, qemu-devel,
	KVM devel mailing list, kvmarm

On 8 August 2013 20:29, Christoffer Dall <christoffer.dall@linaro.org> wrote:
> On Thu, Aug 08, 2013 at 08:05:11PM +0100, Peter Maydell wrote:
>> On 8 August 2013 19:39, Christoffer Dall <christoffer.dall@linaro.org> wrote:
>> > FWIW, from the kernel point of view I'd much prefer to return "this is
>> > the type of VCPU that I prefer to emulate" to user space on this current
>> > host than having QEMU come up with its own suggestion for CPU and asking
>> > the kernel for it.  The reason is that it gives us slightly more freedom
>> > in how we choose to support a given host SoC in that we can say that we
>> > at least support core A on core B, so if user space can deal with a
>> > virtual core A, we should be good.
>>
>> Hmm, I'm not sure how useful a "query support" kind of API would
>> be to QEMU. QEMU is basically going to have two use cases:
>> (1) "I want an A15" [ie -cpu cortex-a15]
>> (2) "give me whatever you have and I'll cope" [ie -cpu host]
>>
>> so my thought was that we could just have the kernel support
>>     init.target = KVM_ARM_TARGET_HOST;
>>     memset(init.features, 0, sizeof(init.features));
>>     ret = kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_INIT, &init);
>>
>> (in the same way we currently ask for KVM_ARM_TARGET_CORTEX_A15).
>>
>> I guess we could have a "return preferred target value"
>> VM ioctl, but it seems a bit pointless given that the
>> only thing userspace is going to do with the return
>> value is immediately feed it back to the kernel...
>>
> My thinking was that the result of cpu = KVM_ARM_TARGET_HOST would be
> the same as x = kvm_get_target_host(), cpu = x, but at the same time
> letting QEMU know what it's dealing with.  Perhaps QEMU doesn't need
> this for emulation, but isn't it useful for
> save/restore/migration/debugging scenarios?

For migration we don't care because we just send everything
over the wire and let the receiving kernel decide (where
it will presumably reject if the MIDR value doesn't match).

There are some cases where we want to know what kind of CPU
we actually got, though the only one I could think of was if
we're constructing a device tree for mach-virt, what do we put
in the cpu node's "compatible" property? (what does the kernel
do with that anyway?) I had planned to key that off the MIDR
value, though. (As an aside, if there was a way to get the
actual 'compatible' string from the host kernel rather than
having to maintain a table of KVM_ARM_TARGET_* and/or MIDR
to compatible-string values that would be neat.)
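
Such a table needn't be large; a hypothetical sketch keyed on the MIDR primary part number (the compatible strings are the usual device-tree ones, but the table itself is illustrative, not something QEMU carries today):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Hypothetical lookup table from MIDR primary part number to the
 * device-tree "compatible" string for the cpu node. */
static const struct {
    uint16_t partnum;
    const char *compatible;
} midr_compat[] = {
    { 0xc07, "arm,cortex-a7"  },
    { 0xc0f, "arm,cortex-a15" },
    { 0xd07, "arm,cortex-a57" },
};

static const char *compat_from_midr(uint32_t midr)
{
    unsigned part = (midr >> 4) & 0xfffu;  /* MIDR[15:4] */
    for (size_t i = 0; i < sizeof(midr_compat) / sizeof(midr_compat[0]); i++) {
        if (midr_compat[i].partnum == part) {
            return midr_compat[i].compatible;
        }
    }
    return NULL;  /* unknown core: caller must fall back or error out */
}
```

The NULL case is exactly the maintenance burden being complained about: every new core needs a table entry, which is why getting the string from the kernel would be neater.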

> So, if you just use the KVM_ARM_TARGET_HOST value, do you expect the
> kernel to just set the base address of the GIC interface, or?

So in this view of the world, we keep the GIC separate from
the CPU itself (which allows things like "give me an A57
vcpu but actually it's got a GICv2 because that's all the
host hardware/kernel can do"). The GIC base address is then
a property of the board model we're running rather than of
the CPU (and for mach-virt we just set it to something
convenient). This would be done via the new-style irqchip
creation/config API rather than what we have in tree today.
Presumably we could have a similar thing for the irqchip
of "tell me what kind of irqchip you can instantiate and/or
give me what you've got". I guess here we do need a way to
find out what the kernel can do since (a) the kernel might
be able to accelerate both GICv2 and v3 with no inherent
preference and (b) there's no handy GIC-version-register
like the MIDR we can use to track what we got. That might
argue for a similar approach for the CPU proper, by analogy.

-- PMM

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] -cpu host (was Re: KVM call minutes for 2013-08-06)
  2013-08-08 20:48             ` Peter Maydell
@ 2013-08-08 20:57               ` Christoffer Dall
  -1 siblings, 0 replies; 29+ messages in thread
From: Christoffer Dall @ 2013-08-08 20:57 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Andreas Färber, quintela, qemu-devel,
	KVM devel mailing list, kvmarm

On Thu, Aug 08, 2013 at 09:48:23PM +0100, Peter Maydell wrote:
> On 8 August 2013 20:29, Christoffer Dall <christoffer.dall@linaro.org> wrote:
> > On Thu, Aug 08, 2013 at 08:05:11PM +0100, Peter Maydell wrote:
> >> On 8 August 2013 19:39, Christoffer Dall <christoffer.dall@linaro.org> wrote:
> >> > FWIW, from the kernel point of view I'd much prefer to return "this is
> >> > the type of VCPU that I prefer to emulate" to user space on this current
> >> > host than having QEMU come up with its own suggestion for CPU and asking
> >> > the kernel for it.  The reason is that it gives us slightly more freedom
> >> > in how we choose to support a given host SoC in that we can say that we
> >> > at least support core A on core B, so if user space can deal with a
> >> > virtual core A, we should be good.
> >>
> >> Hmm, I'm not sure how useful a "query support" kind of API would
> >> be to QEMU. QEMU is basically going to have two use cases:
> >> (1) "I want an A15" [ie -cpu cortex-a15]
> >> (2) "give me whatever you have and I'll cope" [ie -cpu host]
> >>
> >> so my thought was that we could just have the kernel support
> >>     init.target = KVM_ARM_TARGET_HOST;
> >>     memset(init.features, 0, sizeof(init.features));
> >>     ret = kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_INIT, &init);
> >>
> >> (in the same way we currently ask for KVM_ARM_TARGET_CORTEX_A15).
> >>
> >> I guess we could have a "return preferred target value"
> >> VM ioctl, but it seems a bit pointless given that the
> >> only thing userspace is going to do with the return
> >> value is immediately feed it back to the kernel...
> >>
> > My thinking was that the result of cpu = KVM_ARM_TARGET_HOST would be
> > the same as x = kvm_get_target_host(), cpu = x, but at the same time
> > letting QEMU know what it's dealing with.  Perhaps QEMU doesn't need
> > this for emulation, but isn't it useful for
> > save/restore/migration/debugging scenarios?
> 
> For migration we don't care because we just send everything
> over the wire and let the receiving kernel decide (where
> it will presumably reject if the MIDR value doesn't match).
> 
> There are some cases where we want to know what kind of CPU
> we actually got, though the only one I could think of was if
> we're constructing a device tree for mach-virt, what do we put
> in the cpu node's "compatible" property? (what does the kernel
> do with that anyway?) I had planned to key that off the MIDR
> value, though. (As an aside, if there was a way to get the
> actual 'compatible' string from the host kernel rather than
> having to maintain a table of KVM_ARM_TARGET_* and/or MIDR
> to compatible-string values that would be neat.)
> 

ok, fair enough, as long as you can get what you need from the MIDR then
you're good I guess.

> > So, if you just use the KVM_ARM_TARGET_HOST value, do you expect the
> > kernel to just set the base address of the GIC interface, or?
> 
> So in this view of the world, we keep the GIC separate from
> the CPU itself (which allows things like "give me an A57
> vcpu but actually it's got a GICv2 because that's all the
> host hardware/kernel can do"). The GIC base address is then
> a property of the board model we're running rather than of
> the CPU (and for mach-virt we just set it to something
> convenient). This would be done via the new-style irqchip
> creation/config API rather than what we have in tree today.
> Presumably we could have a similar thing for the irqchip
> of "tell me what kind of irqchip you can instantiate and/or
> give me what you've got". I guess here we do need a way to
> find out what the kernel can do since (a) the kernel might
> be able to accelerate both GICv2 and v3 with no inherent
> preference and (b) there's no handy GIC-version-register
> like the MIDR we can use to track what we got. That might
> argue for a similar approach for the CPU proper, by analogy.

I'm fine with having a discovery mechanism for the GIC and not for the
CPU, if there's an implicit discovery mechanism for the CPU.  But are
we sure there will not be cases where we want to know the list of
available CPUs that the kernel can actually emulate on this host, as
opposed to "just use the best one"?  Maybe that's only relevant for the
"give me an A15" case, and then you can just do trial-and-error, perhaps.

-Christoffer

^ permalink raw reply	[flat|nested] 29+ messages in thread


* Re: [Qemu-devel] -cpu host (was Re: KVM call minutes for 2013-08-06)
  2013-08-08 20:57               ` Christoffer Dall
@ 2013-08-08 21:16                 ` Peter Maydell
  -1 siblings, 0 replies; 29+ messages in thread
From: Peter Maydell @ 2013-08-08 21:16 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Andreas Färber, quintela, qemu-devel,
	KVM devel mailing list, kvmarm

On 8 August 2013 21:57, Christoffer Dall <christoffer.dall@linaro.org> wrote:
> I'm fine with having a discovery mechanism for the GIC and not for the
> CPU, if there's an implicit discovery mechanism for the CPU.  But are
> we sure there will not be cases where we want to know the list of
> available CPUs that the kernel can actually emulate on this host, as
> opposed to "just use the best one"?

I guess "telling the user in advance whether enabling kvm is
going to work" (either via -help output or via monitor to
tell libvirt about it) might want that, and for that matter
maybe kvmtool might want to present a list of available CPUs.

(I probably wouldn't implement the 'say if it's going to
work in -help' bit immediately though because QEMU's internal
structure makes it a little awkward. But if we might want
it, it's probably better for the kernel API to be there
from the start.)

-- PMM

^ permalink raw reply	[flat|nested] 29+ messages in thread


* Re: [Qemu-devel] -cpu host (was Re: KVM call minutes for 2013-08-06)
  2013-08-08 12:51 ` [Qemu-devel] -cpu host (was " Peter Maydell
  (?)
  (?)
@ 2013-08-09 13:12 ` Peter Maydell
  2013-08-09 20:07     ` Andreas Färber
  -1 siblings, 1 reply; 29+ messages in thread
From: Peter Maydell @ 2013-08-09 13:12 UTC (permalink / raw)
  To: kvmarm; +Cc: qemu-devel qemu-devel, KVM devel mailing list, Juan Quintela

On 8 August 2013 13:51, Peter Maydell <peter.maydell@linaro.org> wrote:
> For ARM you can't get at feature info of the host from userspace
> (unless you want to get into parsing /proc/cpuinfo), so my current
> idea is to have KVM_ARM_VCPU_INIT support a target-cpu-type
> which means "whatever host CPU is".

To expand on this for the 64 bit situation:
 * although in theory we could support a 32-bit-compiled QEMU
   binary on a 64-bit host kernel, I think there's not much
   need for it
 * if you run a 64-bit QEMU on a 64-bit host and ask VCPU_INIT
   for a 'host' CPU, you get a 64 bit CPU
 * you can add the feature flag '32 bit VM please' when
   making the VCPU_INIT call, which gets you the same
   host CPU but forced into 32 bit mode (this flag & behaviour
   exist in the kernel today) -- in QEMU I guess we have a
   '-cpu host32' which drives this, or possibly add support
   for "-cpu host,+32bitvm" style syntax.

NB that the API for reading and writing registers isn't the
same for "64 bit CPU in 32 bit mode" as for a native 32 bit
CPU -- the view of the guest that QEMU sees in the former
case is the same view that a 64 bit hypervisor sees of a
32 bit guest. I think that to avoid huge ifdefs it will
be cleaner to have
 target-arm/kvm.c [common functions]
 target-arm/kvm32.c [init_vcpu, get_registers, etc for 32 bit]
 target-arm/kvm64.c [ditto, 64 bit]

and configure only sets CONFIG_KVM for aarch64-on-aarch64
and arm-on-arm.

-- PMM

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] -cpu host (was Re: KVM call minutes for 2013-08-06)
  2013-08-08 19:29           ` Christoffer Dall
@ 2013-08-09 17:53             ` Eduardo Habkost
  -1 siblings, 0 replies; 29+ messages in thread
From: Eduardo Habkost @ 2013-08-09 17:53 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Peter Maydell, qemu-devel, kvmarm, Andreas Färber,
	KVM devel mailing list, quintela

On Thu, Aug 08, 2013 at 12:29:07PM -0700, Christoffer Dall wrote:
> On Thu, Aug 08, 2013 at 08:05:11PM +0100, Peter Maydell wrote:
> > On 8 August 2013 19:39, Christoffer Dall <christoffer.dall@linaro.org> wrote:
> > > FWIW, from the kernel point of view I'd much prefer to return "this is
> > > the type of VCPU that I prefer to emulate" to user space on this current
> > > host than having QEMU come up with its own suggestion for CPU and asking
> > > the kernel for it.  The reason is that it gives us slightly more freedom
> > > in how we choose to support a given host SoC in that we can say that we
> > > at least support core A on core B, so if user space can deal with a
> > > virtual core A, we should be good.
> > 
> > Hmm, I'm not sure how useful a "query support" kind of API would
> > be to QEMU. QEMU is basically going to have two use cases:
> > (1) "I want an A15" [ie -cpu cortex-a15]
> > (2) "give me whatever you have and I'll cope" [ie -cpu host]
> > 
> > so my thought was that we could just have the kernel support
> >     init.target = KVM_ARM_TARGET_HOST;
> >     memset(init.features, 0, sizeof(init.features));
> >     ret = kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_INIT, &init);
> > 
> > (in the same way we currently ask for KVM_ARM_TARGET_CORTEX_A15).
> > 
> > I guess we could have a "return preferred target value"
> > VM ioctl, but it seems a bit pointless given that the
> > only thing userspace is going to do with the return
> > value is immediately feed it back to the kernel...
> > 
> My thinking was that the result of cpu = KVM_ARM_TARGET_HOST would be
> the same as x = kvm_get_target_host(), cpu = x, but at the same time
> letting QEMU know what it's dealing with.  Perhaps QEMU doesn't need
> this for emulation, but isn't it useful for
> save/restore/migration/debugging scenarios?

FWIW, this is how it works on x86: QEMU calls GET_SUPPORTED_CPUID and
then uses the result on a SET_CPUID call.

(Well, at least that's the general idea, but the details are a bit more
complicated than that. e.g.: some features can be enabled only if some
QEMU options are set as well, like tsc-deadline and x2apic, that require
the in-kernel irqchip to be enabled.)

Even if you don't really need this 2-step method today, doing this would
allow QEMU to fiddle with some bits if necessary in the future.

-- 
Eduardo

^ permalink raw reply	[flat|nested] 29+ messages in thread


* Re: [Qemu-devel] -cpu host (was Re: KVM call minutes for 2013-08-06)
  2013-08-09 13:12 ` Peter Maydell
@ 2013-08-09 20:07     ` Andreas Färber
  0 siblings, 0 replies; 29+ messages in thread
From: Andreas Färber @ 2013-08-09 20:07 UTC (permalink / raw)
  To: Peter Maydell; +Cc: kvmarm, qemu-devel, KVM devel mailing list, Juan Quintela

Am 09.08.2013 15:12, schrieb Peter Maydell:
> possibly add support
>    for "-cpu host,+32bitvm" style syntax.

Please use only property-name=value style syntax.

Andreas

-- 
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg

^ permalink raw reply	[flat|nested] 29+ messages in thread


* Re: [Qemu-devel] -cpu host (was Re:  KVM call minutes for 2013-08-06)
  2013-08-08 15:55   ` Andreas Färber
@ 2013-08-25 13:14     ` Gleb Natapov
  -1 siblings, 0 replies; 29+ messages in thread
From: Gleb Natapov @ 2013-08-25 13:14 UTC (permalink / raw)
  To: Andreas Färber
  Cc: Peter Maydell, quintela, qemu-devel, KVM devel mailing list, kvmarm

On Thu, Aug 08, 2013 at 05:55:09PM +0200, Andreas Färber wrote:
> Hi Peter,
> 
> Am 08.08.2013 14:51, schrieb Peter Maydell:
> > [I missed this KVM call but the stuff about -cpu host ties into
> > an issue we've been grappling with for ARM KVM, so it seems
> > a reasonable jumping-off-point.]
> > 
> > On 6 August 2013 16:15, Juan Quintela <quintela@redhat.com> wrote:
> >> 2013-08-06
> >> ----------
> >>
> >> What libvirt needs/miss Today?
> >> - how to handle machine types? creating them inside qemu?
> >> - qemu --cpu help
> >>   only shows cpus,  not what features qemu will use
> >> - qemu -cpu host
> >>   what does this exactly means?  kvm removes same flags.
> >> - Important to know if migration would work.
> >> - Machine types sometimes disable some feature, so cpu alone is not
> >>   enough.
> > 
> >> - kernel removes some features because it knows it can't be virtualised
> >> - qemu adds some others because it knows it don't need host support
> >> - and then lots of features in the middle
> > 
> > So, coming at this from an ARM perspective:
> > Should any target arch that supports KVM also support "-cpu host"?
> > If so, what should it do?
> 
> I think that depends on the target and whether/what is useful.
> 
> > Is there a description somewhere of
> > what the x86 and PPC semantics of -cpu host are?
> 
> I'm afraid our usual documentation will be reading the source code. ;)
> 
> x86 was first to implement -cpu host and passed through pretty much all
> host features even if they would not work without additional support
> code.
This is definitely not true. Only features that will work are passed through.
Actually the definition of "-cpu host" for x86 can be: advertise any
feature that can be supported on this host/qemu combo.

>       I've seen a bunch of bugs where that leads to GMP and others
> breaking badly. Lately in the case of PMU we've started to limit that.
The problem with PMU was that PMU capabilities were passed through even
for non "-cpu host". There was no problem with "-cpu host".

> Alex proposed -cpu best, which was never merged to date. It was similar
> to how ppc's -cpu host works:
According to http://wiki.qemu.org/Features/CPUModels#-cpu_host_vs_-cpu_best
it should select the predefined cpu model closest to the host one. Useful, but
not the same as "-cpu host".

> 
> ppc matches the Processor Version Register (PVR) in kvm.c against its
> known models from cpu-models.c (strictly today, mask being discussed).
> The PVR can be read from userspace via mfpvr alias to mfspr (Move From
> Special Purpose Register; possibly emulated for userspace by kernel?).
> CPU features are all QEMU-driven AFAIU, through the "CPU families" in
> translate_init.c. Beware, everything is highly macro'fied in ppc code.
> 

--
			Gleb.

^ permalink raw reply	[flat|nested] 29+ messages in thread


* Re: [Qemu-devel] -cpu host (was Re: KVM call minutes for 2013-08-06)
  2013-08-09 17:53             ` Eduardo Habkost
@ 2013-08-25 13:49               ` Gleb Natapov
  -1 siblings, 0 replies; 29+ messages in thread
From: Gleb Natapov @ 2013-08-25 13:49 UTC (permalink / raw)
  To: Eduardo Habkost
  Cc: Christoffer Dall, Peter Maydell, qemu-devel, kvmarm,
	Andreas Färber, KVM devel mailing list, quintela

On Fri, Aug 09, 2013 at 02:53:30PM -0300, Eduardo Habkost wrote:
> On Thu, Aug 08, 2013 at 12:29:07PM -0700, Christoffer Dall wrote:
> > On Thu, Aug 08, 2013 at 08:05:11PM +0100, Peter Maydell wrote:
> > > On 8 August 2013 19:39, Christoffer Dall <christoffer.dall@linaro.org> wrote:
> > > > FWIW, from the kernel point of view I'd much prefer to return "this is
> > > > the type of VCPU that I prefer to emulate" to user space on this current
> > > > host than having QEMU come up with its own suggestion for CPU and asking
> > > > the kernel for it.  The reason is that it gives us slightly more freedom
> > > > in how we choose to support a given host SoC in that we can say that we
> > > > at least support core A on core B, so if user space can deal with a
> > > > virtual core A, we should be good.
> > > 
> > > Hmm, I'm not sure how useful a "query support" kind of API would
> > > be to QEMU. QEMU is basically going to have two use cases:
> > > (1) "I want an A15" [ie -cpu cortex-a15]
> > > (2) "give me whatever you have and I'll cope" [ie -cpu host]
> > > 
> > > so my thought was that we could just have the kernel support
> > >     init.target = KVM_ARM_TARGET_HOST;
> > >     memset(init.features, 0, sizeof(init.features));
> > >     ret = kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_INIT, &init);
> > > 
> > > (in the same way we currently ask for KVM_ARM_TARGET_CORTEX_A15).
> > > 
> > > I guess we could have a "return preferred target value"
> > > VM ioctl, but it seems a bit pointless given that the
> > > only thing userspace is going to do with the return
> > > value is immediately feed it back to the kernel...
> > > 
> > My thinking was that the result of cpu = KVM_ARM_TARGET_HOST would be
> > the same as x = kvm_get_target_host(), cpu = x, but at the same time
> > letting QEMU know what it's dealing with.  Perhaps QEMU doesn't need
> > this for emulation, but isn't it useful for
> > save/restore/migration/debugging scenarios?
> 
> FWIW, this is how it works on x86: QEMU calls GET_SUPPORTED_CPUID and
> then uses the result on a SET_CPUID call.
> 
> (Well, at least that's the general idea, but the details are a bit more
> complicated than that. e.g.: some features can be enabled only if some
> QEMU options are set as well, like tsc-deadline and x2apic, that require
> the in-kernel irqchip to be enabled.)
> 
> Even if you don't really need this 2-step method today, doing this would
> allow QEMU to fiddle with some bits if necessary in the future.
> 
Without the 2-step method, how does QEMU ARM know that it can create the
cpu type the user asked for? On x86, QEMU queries the kernel about what
features can be supported and then "and"s that with the requested features
(a feature can be requested implicitly, by specifying a cpu model name
like "Nehalem", or explicitly, by adding +feature_name to the -cpu
parameter). -cpu host is just a by-product of this algorithm: it simply
enables everything that is supported ("supported & ~0" instead of
"supported & requested").

--
			Gleb.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] -cpu host (was Re: KVM call minutes for 2013-08-06)
@ 2013-08-25 13:49               ` Gleb Natapov
  0 siblings, 0 replies; 29+ messages in thread
From: Gleb Natapov @ 2013-08-25 13:49 UTC (permalink / raw)
  To: Eduardo Habkost
  Cc: Peter Maydell, KVM devel mailing list, quintela, qemu-devel,
	kvmarm, Andreas Färber, Christoffer Dall

On Fri, Aug 09, 2013 at 02:53:30PM -0300, Eduardo Habkost wrote:
> On Thu, Aug 08, 2013 at 12:29:07PM -0700, Christoffer Dall wrote:
> > On Thu, Aug 08, 2013 at 08:05:11PM +0100, Peter Maydell wrote:
> > > On 8 August 2013 19:39, Christoffer Dall <christoffer.dall@linaro.org> wrote:
> > > > FWIW, from the kernel point of view I'd much prefer to return "this is
> > > > the type of VCPU that I prefer to emulate" to user space on this current
> > > > host than having QEMU come up with its own suggestion for CPU and asking
> > > > the kernel for it.  The reason is that it gives us slightly more freedom
> > > > in how we choose to support a given host SoC in that we can say that we
> > > > at least support core A on core B, so if user space can deal with a
> > > > virtual core A, we should be good.
> > > 
> > > Hmm, I'm not sure how useful a "query support" kind of API would
> > > be to QEMU. QEMU is basically going to have two use cases:
> > > (1) "I want an A15" [ie -cpu cortex-a15]
> > > (2) "give me whatever you have and I'll cope" [ie -cpu host]
> > > 
> > > so my thought was that we could just have the kernel support
> > >     init.target = KVM_ARM_TARGET_HOST;
> > >     memset(init.features, 0, sizeof(init.features));
> > >     ret = kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_INIT, &init);
> > > 
> > > (in the same way we currently ask for KVM_ARM_TARGET_CORTEX_A15).
> > > 
> > > I guess we could have a "return preferred target value"
> > > VM ioctl, but it seems a bit pointless given that the
> > > only thing userspace is going to do with the return
> > > value is immediately feed it back to the kernel...
> > > 
> > My thinking was that the result of cpu = KVM_ARM_TARGET_HOST would be
> > the same as x = kvm_get_target_host(), cpu = x, but at the same time
> > letting QEMU know what it's dealing with.  Perhaps QEMU doesn't need
> > this for emulation, but isn't it useful for
> > save/restore/migration/debugging scenarios?
> 
> FWIW, this is how it works on x86: QEMU calls GET_SUPPORTED_CPUID and
> then uses the result on a SET_CPUID call.
> 
> (Well, at least that's the general idea, but the details are a bit more
> complicated than that. e.g.: some features can be enabled only if some
> QEMU options are set as well, like tsc-deadline and x2apic, that require
> the in-kernel irqchip to be enabled.)
> 
> Even if you don't really need this 2-step method today, doing this would
> allow QEMU to fiddle with some bits if necessary in the future.
> 
Without 2-step method how QEMU ARM knows that it can create cpu type
user asked for? On x86 QEMU queries kernel about what feature can be
supported and then "and" it with requested features (feature can be
requested implicitly, by specifying cpu model name like "Neahalem", or
explicitly, by adding +feature_name on -cpu parameter). -cpu host is
just a by product of this algorithm, it says enables everything that is
supported ("supported & 1" instead of "supported & requested").

--
			Gleb.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: -cpu host (was Re: KVM call minutes for 2013-08-06)
  2013-08-08 18:20     ` Peter Maydell
@ 2013-08-25 13:55       ` Gleb Natapov
  -1 siblings, 0 replies; 29+ messages in thread
From: Gleb Natapov @ 2013-08-25 13:55 UTC (permalink / raw)
  To: Peter Maydell
  Cc: qemu-devel, kvmarm, Andreas Färber, KVM devel mailing list,
	quintela

On Thu, Aug 08, 2013 at 07:20:41PM +0100, Peter Maydell wrote:
> >> For ARM you can't get at feature info of the host from userspace
> >> (unless you want to get into parsing /proc/cpuinfo), so my current
> >> idea is to have KVM_ARM_VCPU_INIT support a target-cpu-type
> >> which means "whatever host CPU is". Then when we've created the
> >> vcpu we can populate QEMU's idea of what the CPU features are
> >> by using the existing ioctls for reading the cp15 registers of
> >> the vcpu.
> >
> > Sounds sane to me iff those cp15 registers all work with KVM and don't
> > need any additional KVM/QEMU/device code.
> 
> Yes; KVM won't tell us about CP15 registers unless they
> are exposed to the guest VM (that is, we're querying the
So why not implement something similar to the x86 cpuid thing? Have
separate ioctls to query which CP15 registers KVM can support and which
are exposed to a guest?

> VCPU, not the host CPU). More generally, the cp15 "tuple
> list" code I landed a couple of months back makes the kernel
> the authoritative source for which cp15 registers exist and
> what their values are -- in -enable-kvm mode QEMU no longer
> cares about them (its own list of which registers exist for
> which CPU is used only for TCG).
> 
--
			Gleb.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] -cpu host (was Re: KVM call minutes for 2013-08-06)
@ 2013-08-25 13:55       ` Gleb Natapov
  0 siblings, 0 replies; 29+ messages in thread
From: Gleb Natapov @ 2013-08-25 13:55 UTC (permalink / raw)
  To: Peter Maydell
  Cc: qemu-devel, kvmarm, Andreas Färber, KVM devel mailing list,
	quintela

On Thu, Aug 08, 2013 at 07:20:41PM +0100, Peter Maydell wrote:
> >> For ARM you can't get at feature info of the host from userspace
> >> (unless you want to get into parsing /proc/cpuinfo), so my current
> >> idea is to have KVM_ARM_VCPU_INIT support a target-cpu-type
> >> which means "whatever host CPU is". Then when we've created the
> >> vcpu we can populate QEMU's idea of what the CPU features are
> >> by using the existing ioctls for reading the cp15 registers of
> >> the vcpu.
> >
> > Sounds sane to me iff those cp15 registers all work with KVM and don't
> > need any additional KVM/QEMU/device code.
> 
> Yes; KVM won't tell us about CP15 registers unless they
> are exposed to the guest VM (that is, we're querying the
So why not implement something similar to the x86 CPUID mechanism? Have
separate ioctls to query which CP15 registers KVM can support and which
are exposed to a guest?

> VCPU, not the host CPU). More generally, the cp15 "tuple
> list" code I landed a couple of months back makes the kernel
> the authoritative source for which cp15 registers exist and
> what their values are -- in -enable-kvm mode QEMU no longer
> cares about them (its own list of which registers exist for
> which CPU is used only for TCG).
> 
--
			Gleb.


end of thread, other threads:[~2013-08-25 13:55 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-08-08 12:51 -cpu host (was Re: [Qemu-devel] KVM call minutes for 2013-08-06) Peter Maydell
2013-08-08 12:51 ` [Qemu-devel] -cpu host (was " Peter Maydell
2013-08-08 15:55 ` Andreas Färber
2013-08-08 15:55   ` Andreas Färber
2013-08-08 18:20   ` Peter Maydell
2013-08-08 18:20     ` Peter Maydell
2013-08-08 18:39     ` Christoffer Dall
2013-08-08 18:39       ` Christoffer Dall
2013-08-08 19:05       ` Peter Maydell
2013-08-08 19:05         ` Peter Maydell
2013-08-08 19:29         ` Christoffer Dall
2013-08-08 19:29           ` Christoffer Dall
2013-08-08 20:48           ` Peter Maydell
2013-08-08 20:48             ` Peter Maydell
2013-08-08 20:57             ` Christoffer Dall
2013-08-08 20:57               ` Christoffer Dall
2013-08-08 21:16               ` Peter Maydell
2013-08-08 21:16                 ` Peter Maydell
2013-08-09 17:53           ` Eduardo Habkost
2013-08-09 17:53             ` Eduardo Habkost
2013-08-25 13:49             ` Gleb Natapov
2013-08-25 13:49               ` Gleb Natapov
2013-08-25 13:55     ` Gleb Natapov
2013-08-25 13:55       ` [Qemu-devel] " Gleb Natapov
2013-08-25 13:14   ` Gleb Natapov
2013-08-25 13:14     ` Gleb Natapov
2013-08-09 13:12 ` Peter Maydell
2013-08-09 20:07   ` Andreas Färber
2013-08-09 20:07     ` Andreas Färber
