From mboxrd@z Thu Jan  1 00:00:00 1970
From: Gleb Natapov
Subject: Re: About releasing vcpu when closing vcpu fd
Date: Thu, 29 May 2014 11:12:04 +0300
Message-ID: <20140529081203.GA32254@minantech.com>
References: <537AEC13.1000804@cn.fujitsu.com> <20140523094345.GC5306@minantech.com> <5386C838.3070102@cn.fujitsu.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <5386C838.3070102@cn.fujitsu.com>
To: Gu Zheng
Cc: ChenFan, Gleb Natapov, Paolo Bonzini, kvm@vger.kernel.org

On Thu, May 29, 2014 at 01:40:08PM +0800, Gu Zheng wrote:
> >> There was a patch (from Chen Fan, last August) about releasing the vcpu when
> >> closing its vcpu fd, but your comment said "Attempts were made to make it
> >> possible to destroy individual vcpus separately from destroying the VM
> >> before, but they were unsuccessful thus far."
> >> So what is the pain point here? If we want to achieve this goal, what should we do?
> >> Looking forward to your further comments. :)
> >>
> > The CPU array is accessed locklessly in a lot of places, so it would have to be
> > RCUified. There was an attempt to do so two years or so ago, but it didn't go
> > anywhere. Adding locks is too big a price to pay for the ability to free a
> > little bit of memory by destroying a vcpu.
>
> Yes, it's a pain point. But if we want to implement "vcpu hot-remove", this must
> be fixed sooner or later.

Why? "vcpu hot-remove" already works (or at least worked in the past, for some
value of "work"). There is no need to destroy the vcpu completely: just park it
and tell the guest not to use it via an ACPI hot-unplug event.
> And is anyone working on kvm "vcpu hot-remove" now?
>
> > An alternative may be to make sure that a stopped vcpu takes as little
> > memory as possible.
>
> Yeah. But if we add a new vcpu with the old id that we stopped before, it
> will fail.

There is no need to create the vcpu again: just unpark it and notify the guest
via an ACPI hot-plug event that the vcpu can be used now.

--
			Gleb.