From: Laszlo Ersek <lersek@redhat.com>
To: Eduardo Habkost <ehabkost@redhat.com>,
	Igor Mammedov <imammedo@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	qemu-devel@nongnu.org, "Gerd Hoffmann" <kraxel@redhat.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Philippe Mathieu-Daudé" <philmd@redhat.com>,
	"Richard Henderson" <rth@twiddle.net>
Subject: Re: [RFC 0/3] acpi: cphp: add CPHP_GET_CPU_ID_CMD command to cpu hotplug MMIO interface
Date: Fri, 11 Oct 2019 10:01:42 +0200
Message-ID: <e17adca7-f5f4-3a28-a4a2-6b921c1c2e2f@redhat.com>
In-Reply-To: <20191010192039.GE4084@habkost.net>

On 10/10/19 21:20, Eduardo Habkost wrote:
> On Thu, Oct 10, 2019 at 05:57:54PM +0200, Igor Mammedov wrote:
>> On Thu, 10 Oct 2019 09:59:42 -0400
>> "Michael S. Tsirkin" <mst@redhat.com> wrote:
>>
>>> On Thu, Oct 10, 2019 at 03:39:12PM +0200, Igor Mammedov wrote:
>>>> On Thu, 10 Oct 2019 05:56:55 -0400
>>>> "Michael S. Tsirkin" <mst@redhat.com> wrote:
>>>>
>>>>> On Wed, Oct 09, 2019 at 09:22:49AM -0400, Igor Mammedov wrote:
>>>>>> As an alternative to passing topology info to firmware via new fwcfg files,
>>>>>> so that it could recreate APIC IDs from that info and from the order in
>>>>>> which CPUs are enumerated,
>>>>>>
>>>>>> extend the CPU hotplug interface to return the APIC ID in response to the
>>>>>> new command CPHP_GET_CPU_ID_CMD.
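
For readers who don't have the series open: the firmware side of this
presumably boils down to a selector write, a command write, and a data read on
the existing 12-byte hotplug register block. A rough C sketch -- the offsets
follow QEMU's docs/specs/acpi_cpu_hotplug.txt, the 0x0cd8 base is the ICH9/Q35
one (it is board-specific), and the command value and the io_*() port
accessors are placeholders, not something to rely on:

  #include <stdint.h>

  /* Placeholder port I/O accessors; in edk2 these would be IoLib's
   * IoWrite32()/IoWrite8()/IoRead32(). */
  static inline void io_write8(uint16_t port, uint8_t val)
  { __asm__ __volatile__ ("outb %0, %1" : : "a" (val), "Nd" (port)); }
  static inline void io_write32(uint16_t port, uint32_t val)
  { __asm__ __volatile__ ("outl %0, %1" : : "a" (val), "Nd" (port)); }
  static inline uint32_t io_read32(uint16_t port)
  { uint32_t val;
    __asm__ __volatile__ ("inl %1, %0" : "=a" (val) : "Nd" (port));
    return val; }

  /* CPU hotplug register block (modern interface). */
  #define CPUHP_BASE          0x0cd8
  #define CPUHP_W_SELECTOR    0x0   /* write: 32-bit CPU selector    */
  #define CPUHP_W_COMMAND     0x7   /* write: 8-bit command field    */
  #define CPUHP_R_CMD_DATA    0x8   /* read:  32-bit command data    */

  /* Command value as proposed by this series -- illustrative only.  */
  #define CPHP_GET_CPU_ID_CMD 3

  /* Return the architecture-specific CPU ID (APIC ID on x86) of the
   * CPU identified by 'selector' (the QEMU-side CPU index).          */
  static uint32_t cpuhp_get_arch_id(uint32_t selector)
  {
      io_write32(CPUHP_BASE + CPUHP_W_SELECTOR, selector);
      io_write8(CPUHP_BASE + CPUHP_W_COMMAND, CPHP_GET_CPU_ID_CMD);
      return io_read32(CPUHP_BASE + CPUHP_R_CMD_DATA);
  }

Nothing beyond the selector, one command byte, and one data read -- no new IO
ranges and no fw_cfg traffic.
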
>>>>>
>>>>> One big piece missing here is motivation:
>>>> I thought the only willing reader was Laszlo (who is aware of the context),
>>>> so I skipped the details and confused others :/
>>>>
>>>>> Who's going to use this interface?
>>>> In its current state it's for firmware, since ACPI tables can cheat
>>>> by having APIC IDs statically built in.
>>>>
>>>> If we were creating CPU objects in ACPI dynamically
>>>> we would be using this command as well.
>>>
>>> I'm not sure how it's even possible to create devices dynamically. Well
>>> I guess it's possible with LoadTable. Is this what you had in
>>> mind?
>>
>> Yep. I even played with this shiny toy and I can say it's a very tempting one.
>> On the other side, even leaving aside the problem of legacy OSes not working
>> with it, it's hard to debug and reproduce compared to static tables.
>> So from a maintenance POV I dislike it enough to be against it.
>>
>>
>>>> It would save
>>>> us quite a bit of space in the ACPI blob, but it would be a pain
>>>> to debug and diagnose problems in ACPI tables, so I'd rather
>>>> stay with static CPU descriptions in ACPI tables for the sake
>>>> of maintenance.
>>>>> So far CPU hotplug was only used by ACPI, so we didn't
>>>>> really commit to a fixed interface too strongly.
>>>>>
>>>>> Is this a replacement for Laszlo's fw_cfg interface?
>>>>> If yes, is the idea that OVMF is going to depend on CPU hotplug directly then?
>>>>> It does not depend on it now, does it?
>>>> It doesn't, but then it doesn't support CPU hotplug either.
>>>> OVMF (SMM) needs to cooperate with QEMU *and* the ACPI tables to perform
>>>> the task, and using the same interface/code path between all involved
>>>> parties makes the task easier and more robust, with the least amount of
>>>> duplicated interfaces.
>>>>
>>>> Re-implementing an alternative interface for firmware (fwcfg or whatnot)
>>>> would work as well, but it's only a question of time before ACPI and
>>>> this new interface disagree on how the world works and the whole thing
>>>> falls apart.
>>>
>>> Then we should consider switching ACPI to use fw_cfg.
>>> Or build another interface that can scale.
>>
>> Could be an option; it would be a pain to write a driver in AML for fwcfg access though
>> (I looked at the possibility of accessing fwcfg from AML about a year ago and gave up.
>> I'm definitely not volunteering for a second attempt and can't even give an estimate
>> of whether it's a viable approach).
>>
>> But what scaling issue are you talking about, exactly?
>> With the current CPU hotplug interface we can handle up to UINT32_MAX CPUs, and extend
>> the interface without any need to increase the IO window we are using now.
>>
>> Granted, IO access is not the fastest compared to fwcfg in DMA mode, but we are already
>> doing a stop-machine when switching to SMM, which is orders of magnitude slower.
>> The consensus was to compromise on CPU hotplug speed rather than take on a more complex
>> and more problematic unicast SMM mode in OVMF (I can't find the particular email, but we
>> have discussed it with Laszlo already, when I considered ways to optimize hotplug speed).
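
Right -- the 12-byte window is not what limits the CPU count; all of the
scaling lives in the 32-bit selector. Purely for illustration (same caveats as
the sketch further up: offsets per docs/specs/acpi_cpu_hotplug.txt, io_*()
accessors and the status bit treated as per-spec assumptions, and real AML
uses the "get next CPU with event" command instead of a linear scan):

  #include <stdint.h>

  static inline void io_write32(uint16_t port, uint32_t val)
  { __asm__ __volatile__ ("outl %0, %1" : : "a" (val), "Nd" (port)); }
  static inline uint8_t io_read8(uint16_t port)
  { uint8_t val;
    __asm__ __volatile__ ("inb %1, %0" : "=a" (val) : "Nd" (port));
    return val; }

  #define CPUHP_BASE        0x0cd8     /* ICH9/Q35; board-specific     */
  #define CPUHP_W_SELECTOR  0x0        /* write: 32-bit CPU selector   */
  #define CPUHP_R_STATUS    0x4        /* read:  CPU status flags      */
  #define CPUHP_STAT_INSERT (1u << 1)  /* insert event pending         */

  /* Naive scan: the IO window stays 12 bytes no matter how many
   * possible CPUs the machine has; only the selector value grows.     */
  static void cpuhp_scan(uint32_t max_cpus)
  {
      for (uint32_t sel = 0; sel < max_cpus; sel++) {
          io_write32(CPUHP_BASE + CPUHP_W_SELECTOR, sel);
          if (io_read8(CPUHP_BASE + CPUHP_R_STATUS) & CPUHP_STAT_INSERT) {
              /* hot-add pending for CPU 'sel' -- handle it here */
          }
      }
  }
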
> 
> If we were designing the interface from the ground up, I would
> agree with Michael.  But I don't see why we would reimplement
> everything from scratch now, if just providing the
> cpu_selector => cpu_hardware_id mapping to firmware is enough to
> make the existing interface work.
> 
> If somebody is really unhappy with the current interface and
> wants to implement a new purely fw_cfg-based one (and write the
> corresponding ACPI code), they would be welcome.

Let me re-iterate the difficulties quickly:

- DMA-based fw_cfg is troublesome in SEV guests (do you want to mess
with page table entries in AML methods? or pre-allocate an always
decrypted opregion? how large?)

- IO port based fw_cfg does not support writes (and I reckon that, when
the *OS* handles a hotplug event, it does have to talk back to QEMU); see
the sketch after this list

- the CPU hotplug AML would have to arbitrate with Linux's own fw_cfg
driver (which exposes fw_cfg files to userspace, yay! /s)
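
To make the second point concrete: from the guest, the classic (non-DMA)
fw_cfg interface is just a 16-bit selector write followed by byte-wide
streaming reads, with no usable path in the other direction. A sketch, with
the x86 ports from docs/specs/fw_cfg.txt and placeholder io_*() accessors:

  #include <stdint.h>

  static inline void io_write16(uint16_t port, uint16_t val)
  { __asm__ __volatile__ ("outw %0, %1" : : "a" (val), "Nd" (port)); }
  static inline uint8_t io_read8(uint16_t port)
  { uint8_t val;
    __asm__ __volatile__ ("inb %1, %0" : "=a" (val) : "Nd" (port));
    return val; }

  #define FW_CFG_PORT_SEL   0x510  /* 16-bit item selector             */
  #define FW_CFG_PORT_DATA  0x511  /* 8-bit, sequential data port      */

  /* Read 'len' bytes of fw_cfg item 'key': select, then stream bytes.
   * Guest writes through the data port are not supported, so anything
   * the OS had to report back to QEMU would need the DMA interface --
   * which lands us right back at the SEV problem above.               */
  static void fw_cfg_read_item(uint16_t key, void *buf, uint32_t len)
  {
      uint8_t *out = buf;

      io_write16(FW_CFG_PORT_SEL, key);
      while (len--) {
          *out++ = io_read8(FW_CFG_PORT_DATA);
      }
  }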

In the physical world, CPU hotplug takes dedicated RAS hardware. Shoehorning
CPU hotplug into *firmware* config feels messy, when in two use cases [*]
the firmware shouldn't even know about CPU hotplug.

[*] being (a) SeaBIOS, and (b) OVMF built without SMM

> I just don't see why we should spend our time doing that now.

I have to agree, we're already spread thin.

... I must admit: I didn't expect this, but now I've grown to *prefer*
the CPU hotplug register block!

Laszlo


