From: Marc Zyngier <marc.zyngier@arm.com>
To: Jia He <jiakernel2@gmail.com>,
	Maran Wilson <maran.wilson@oracle.com>,
	James Morse <james.morse@arm.com>,
	Xiongfeng Wang <wangxiongfeng2@huawei.com>
Cc: jonathan.cameron@huawei.com, john.garry@huawei.com,
	rjw@rjwysocki.net, linux-kernel@vger.kernel.org,
	linux-acpi@vger.kernel.org, huawei.libin@huawei.com,
	guohanjun@huawei.com, kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [RFC PATCH v2 0/3] Support CPU hotplug for ARM64
Date: Tue, 16 Jul 2019 09:32:44 +0100	[thread overview]
Message-ID: <682936cd-139e-64d5-9ab2-4ebd09e89e3d@arm.com> (raw)
In-Reply-To: <1a7b2f39-ca77-5b5f-cbb5-6356e51b0d7a@gmail.com>

Hi Jia,

On 16/07/2019 08:59, Jia He wrote:
> Hi Marc
> 
> On 2019/7/10 17:15, Marc Zyngier wrote:
>> On 09/07/2019 20:06, Maran Wilson wrote:
>>> On 7/5/2019 3:12 AM, James Morse wrote:
>>>> Hi guys,
>>>>
>>>> (CC: +kvmarm list)
>>>>
>>>> On 29/06/2019 03:42, Xiongfeng Wang wrote:
>>>>> This patchset marks all the GICC nodes in the MADT as possible CPUs, even
>>>>> those that are disabled, but only the enabled GICC nodes are marked as
>>>>> present CPUs. This way the kernel initializes the CPU-related data
>>>>> structures in advance, before a CPU is actually hot-added into the system.
>>>>> The patchset also implements 'acpi_(un)map_cpu()' and 'arch_(un)register_cpu()'
>>>>> for ARM64. These functions are needed to enable CPU hotplug.
>>>>>
>>>>> To support CPU hotplug, we need to add all the possible GICC nodes to the
>>>>> MADT, including those CPUs that are not present but may be hot-added later.
>>>>> Those CPUs are marked as disabled in their GICC nodes.
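[Editorial note: a rough sketch of the approach described above, not the actual
patch code. When the MADT is walked, a disabled GICC entry would still get a
logical CPU id and a slot in cpu_possible_mask, while only enabled entries
become present. The allocate_logical_cpu_id() helper below is hypothetical;
the rest follows the existing arch/arm64/kernel/smp.c MADT parsing.]

    #include <linux/acpi.h>
    #include <linux/cpumask.h>
    #include <linux/init.h>

    extern struct acpi_madt_generic_interrupt cpu_madt_gicc[NR_CPUS];
    /* hypothetical helper, standing in for the real logical-id bookkeeping */
    extern int allocate_logical_cpu_id(u32 acpi_uid);

    static void __init map_gicc_entry(struct acpi_madt_generic_interrupt *gicc)
    {
            int cpu = allocate_logical_cpu_id(gicc->uid);

            cpu_madt_gicc[cpu] = *gicc;
            set_cpu_possible(cpu, true);            /* disabled or not */

            if (gicc->flags & ACPI_MADT_ENABLED)
                    set_cpu_present(cpu, true);     /* brought up at boot */
            /* disabled entries are left for acpi_map_cpu() /
             * arch_register_cpu() at hot-add time */
    }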
>>>> ... what do you need this for?
>>>>
>>>> (The term CPU hotplug in the arm world almost never means hot-adding a new package/die to
>>>> the platform; we usually mean taking CPUs online/offline for power management, e.g.
>>>> cpuhp_offline_cpu_device().)
>>>>
>>>> It looks like you're adding support for hot-adding a new package/die to the platform ...
>>>> but only for virtualisation.
>>>>
>>>> I don't see why this is needed for virtualisation. The in-kernel irqchip needs to know
>>>> these vcpus exist before you can enter the guest for the first time. You can't create them
>>>> late. At best you're saving the host scheduling a vcpu that is offline. Is this really a
>>>> problem?
>>>>
>>>> If we moved PSCI support to user-space, you could avoid creating host vcpu threads until
>>>> the guest brings the vcpu online, which would solve that problem, and save the host
>>>> resources for the thread too (and it's ACPI/DT agnostic).
>>>>
>>>> I don't see the difference here between booting the guest with 'maxcpus=1', and bringing
>>>> the vcpu online later. The only real difference seems to be moving the can-be-online
>>>> policy into the hypervisor/VMM...
>>> Isn't that an important distinction from a cloud service provider's
>>> perspective?
>>>
>>> As far as I understand it, you also need CPU hotplug capabilities to
>>> support things like the Kata runtime under Kubernetes, i.e. when
>>> implementing your containers as lightweight VMs for the additional
>>> security ... and the orchestration layer cannot determine ahead of
>>> time how much CPU/memory resources are going to be needed to run
>>> the pod(s).
>> Why would it be any different? You can pre-allocate your vcpus, leave
>> them parked until some external agent decides to signal the container
>> that it can use another bunch of CPUs. At that point, the container
>> must actively boot these vcpus (they aren't going to come up by magic).
>>
>> Given that you must have sized your virtual platform to deal with the
>> maximum set of resources you anticipate (think of the GIC
>> redistributors, for example), I really wonder what you gain here.
> I agree with your point on the GIC aspect. It would mess things up to make
> the GIC resources hotpluggable in qemu.

It is far worse than just a mess. You'd need to come up with a way to
place your redistributors in memory, and tell the running guest where
these redistributors are. Currently, there is no method to describe such
changes to the address space, and I certainly don't want QEMU to invent
one. This needs to be modeled after what would happen on real HW.
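[Editorial note: a minimal sketch of why the redistributor window is fixed up
front, not taken from this thread and not QEMU code. With KVM's GICv3 device
API the distributor and redistributor bases are set once at VM creation, and
the region at redist_base must already cover two 64K frames per possible vcpu;
the addresses passed in are placeholders chosen by the VMM.]

    #include <linux/kvm.h>
    #include <sys/ioctl.h>
    #include <stdint.h>

    #define GICR_FRAME_SZ  0x20000ULL   /* two 64K frames per vcpu (GICv3) */

    static int create_vgic_v3(int vm_fd, uint64_t dist_base, uint64_t redist_base)
    {
            struct kvm_create_device cd = { .type = KVM_DEV_TYPE_ARM_VGIC_V3 };
            struct kvm_device_attr attr = {
                    .group = KVM_DEV_ARM_VGIC_GRP_ADDR,
                    .attr  = KVM_VGIC_V3_ADDR_TYPE_DIST,
                    .addr  = (uint64_t)&dist_base,
            };

            if (ioctl(vm_fd, KVM_CREATE_DEVICE, &cd) < 0)
                    return -1;

            if (ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr) < 0)
                    return -1;

            /* The region starting at redist_base must already be sized for the
             * maximum vcpu count (max_vcpus * GICR_FRAME_SZ); it cannot be
             * grown or moved once the guest is running. */
            attr.attr = KVM_VGIC_V3_ADDR_TYPE_REDIST;
            attr.addr = (uint64_t)&redist_base;
            return ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr);
    }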

> But it would also be better if the VMM only started up a limited number of
> vcpu thread resources.
> 
> How about:
> 
> 1. qemu starts only N vcpu threads (-smp N,maxcpus=M)
> 2. qemu reserves the GIC resources for the maximum of M vcpus

Note that this implies actually initializing M vcpus in the VM. You may
not have created the corresponding (M - N) threads, but the vcpus will
exist. Can you please quantify how much you'd save by doing that?
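[Editorial note: a sketch of what "the vcpus will exist" means at the KVM
level; illustrative only, not QEMU internals. All M vcpu fds are created up
front, so the in-kernel GIC accounts for all M redistributors, and only the
first N get a runner thread; run_vcpu_thread() is a hypothetical helper.]

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    extern void run_vcpu_thread(int vcpu_fd);   /* hypothetical helper */

    static void create_and_park_vcpus(int vm_fd, int *vcpu_fd,
                                      unsigned long n_boot, unsigned long n_max)
    {
            for (unsigned long i = 0; i < n_max; i++) {
                    /* every possible vcpu exists from day one; the in-kernel
                     * GIC now accounts for redistributor #i */
                    vcpu_fd[i] = ioctl(vm_fd, KVM_CREATE_VCPU, i);
                    /* KVM_ARM_VCPU_INIT etc. would follow here */
            }

            for (unsigned long i = 0; i < n_boot; i++)
                    run_vcpu_thread(vcpu_fd[i]);    /* the remaining (M - N)
                                                     * threads are what you'd
                                                     * save by parking */
    }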

> 3. when the QMP cpu hotplug-add command is triggered, send a GED event to
>    the guest kernel
> 4. the guest kernel receives it and triggers the ACPI hotplug process
> 
> Currently ACPI_CPU_HOTPLUG is enabled in Kconfig but completely non-functional.

Well, so far there is *zero* CPU hotplug support in the arm64 kernel other
than getting CPUs in and out of PSCI.
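
[Editorial note: for reference, and not something from this thread, the only
hotplug-like operation arm64 guests have today is the online/offline path,
driven from userspace through sysfs and backed by PSCI CPU_ON/CPU_OFF in the
guest kernel. A minimal sketch:]

    #include <stdio.h>

    /* Bring a CPU online (1) or take it offline (0) via the standard sysfs
     * interface; on arm64 the power transition ends up as PSCI CPU_ON/CPU_OFF. */
    static int set_cpu_online(int cpu, int online)
    {
            char path[64];
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu%d/online", cpu);
            f = fopen(path, "w");
            if (!f)
                    return -1;
            fprintf(f, "%d\n", online ? 1 : 0);
            return fclose(f);
    }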

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...


Thread overview: 14+ messages
2019-06-29  2:42 [RFC PATCH v2 0/3] Support CPU hotplug for ARM64 Xiongfeng Wang
2019-06-29  2:42 ` [RFC PATCH v2 1/3] ACPI / scan: evaluate _STA for processors declared via ASL Device statement Xiongfeng Wang
2019-06-29  2:42 ` [RFC PATCH v2 2/3] arm64: mark all the GICC nodes in MADT as possible cpu Xiongfeng Wang
2019-07-04  6:46   ` Jia He
2019-07-04  8:18     ` Xiongfeng Wang
2019-06-29  2:42 ` [RFC PATCH v2 3/3] arm64: Add CPU hotplug support Xiongfeng Wang
2019-07-05 10:12 ` [RFC PATCH v2 0/3] Support CPU hotplug for ARM64 James Morse
2019-07-09 19:06   ` Maran Wilson
2019-07-10  9:15     ` Marc Zyngier
2019-07-10 16:05       ` Maran Wilson
2019-07-15 13:43         ` James Morse
2019-07-16  7:59       ` Jia He
2019-07-16  8:32         ` Marc Zyngier [this message]
2019-07-16  7:52   ` Xiongfeng Wang
