From: Andrew Jones
Subject: Re: [Qemu-devel] VCPU hotplug on KVM/ARM
Date: Wed, 25 Jul 2018 14:28:06 +0200
Message-ID: <20180725122806.g2gpvdbrbdkriprg@kamzik.brq.redhat.com>
In-Reply-To: <74427c65-b860-d576-04f9-766253285210@arm.com>
To: Marc Zyngier
Cc: Maran Wilson, kvmarm@lists.cs.columbia.edu, bthakur@codeaurora.org,
 Christoffer Dall, peter.maydell@linaro.org, david@redhat.com,
 qemu-devel@nongnu.org, qemu-arm@nongnu.org, imammedo@redhat.com

On Wed, Jul 25, 2018 at 11:40:54AM +0100, Marc Zyngier wrote:
> On 24/07/18 19:35, Maran Wilson wrote:
> > It's been a few months since this email thread died off. Has anyone
> > started working on a potential solution that would allow VCPU
> > hotplug on KVM/ARM? Or is this a project that is still waiting for
> > an owner who has the time and inclination to get started?
>
> This is typically a project for someone who would have this
> particular itch to scratch, and who has a demonstrable need for this
> functionality.
>
> Work-wise, it would have to include adding physical CPU hotplug
> support to the arm64 kernel as a precondition, before worrying about
> doing it in KVM.
>
> For KVM itself, the particular areas of interest would be:
> - Making GICv3 redistributors magically appear in the IPA space
> - Live resizing of GICv3 structures
> - Dynamic allocation of MPIDRs, and their mapping to vcpu_ids

I have CPU topology description patches on the QEMU list now [*]. A
next step for me is to do this MPIDR work; see the sketch of the
current static mapping below. I probably won't get to it until the
end of August though.

[*] http://lists.nongnu.org/archive/html/qemu-devel/2018-07/msg01168.html

> This should keep someone busy for a good couple of weeks (give or
> take a few months). :-)
>
> That being said, I'd rather see support in QEMU first, creating all
> the vcpus/redistributors upfront, and signalling the hotplug event
> via the virtual firmware. And then, to justify real hotplug support,
> post some numbers to show that creating all the vcpus upfront is not
> acceptable.

I think upfront allocation, i.e. allocating all possible vcpus but
only activating the present ones, was the planned approach. What were
the concerns about that approach? Just the vcpu memory overhead of
overly ambitious VM configs?
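(For comparison, x86 QEMU already expresses the possible/present
split today: something like '-smp 2,maxcpus=8' marks eight cpus as
possible with two present at boot, and 'device_add' brings the rest
online later. Presumably ARM would want to keep that same user
interface even if all the vcpus and redistributors were created
upfront.)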
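For reference, the static mapping the MPIDR work would be replacing
looks roughly like the sketch below. It's loosely modeled on QEMU's
arm_cpu_mp_affinity(); the cluster size of 16 and the Aff1 shift of 8
are illustrative assumptions here, not a proposal for what a dynamic
scheme should use:

    #include <stdint.h>

    #define CPUS_PER_CLUSTER 16  /* assumed GICv3-style cluster size */
    #define AFF1_SHIFT       8   /* Aff1 occupies MPIDR bits [15:8] */

    /*
     * Derive an MPIDR affinity value from a linear vcpu index:
     * Aff0 numbers the cpu within its cluster, Aff1 numbers the
     * cluster. Dynamic MPIDR allocation would break exactly this
     * fixed vcpu_id relationship.
     */
    static uint64_t vcpu_id_to_mpidr(int vcpu_id)
    {
        uint64_t aff0 = (uint64_t)(vcpu_id % CPUS_PER_CLUSTER);
        uint64_t aff1 = (uint64_t)(vcpu_id / CPUS_PER_CLUSTER);

        return (aff1 << AFF1_SHIFT) | aff0;
    }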
Thanks,
drew