All of lore.kernel.org
* Re: About releasing vcpu when closing vcpu fd
       [not found] ` <20140523094345.GC5306@minantech.com>
@ 2014-05-29  5:40   ` Gu Zheng
  2014-05-29  8:12     ` Gleb Natapov
  0 siblings, 1 reply; 10+ messages in thread
From: Gu Zheng @ 2014-05-29  5:40 UTC (permalink / raw)
  To: Gleb Natapov; +Cc: ChenFan, Gleb Natapov, Paolo Bonzini, kvm

Hi Gleb,

On 05/23/2014 05:43 PM, Gleb Natapov wrote:

> CCing Paolo.
> 
> On Tue, May 20, 2014 at 01:45:55PM +0800, Gu Zheng wrote:
>> Hi Gleb,
>> Excuse me for offline noisy.
> You will get much quicker response if you'll post to the list :)

Got it.:)

> 
>> There was a patch(from Chen Fan, last august) about releasing vcpu when
>> closing vcpu fd <http://www.spinics.net/lists/kvm/msg95701.html>, but
>> your comment said "Attempt where made to make it possible to destroy 
>> individual vcpus separately from destroying VM before, but they were
>> unsuccessful thus far."
>> So what is the pain here? If we want to achieve the goal, what should we do?
>> Looking forward to your further comments.:)
>>
> CPU array is accessed locklessly in a lot of places, so it will have to be RCUified.
> There was an attempt to do so two years or so ago, but it didn't go anywhere. Adding locks
> is too big a price to pay for the ability to free a little bit of memory by destroying a vcpu.

Yes, that's the pain point here. But if we want to implement "vcpu hot-remove", this must be
fixed sooner or later.
Is anyone working on kvm "vcpu hot-remove" now?

> An
> alternative may be to make sure that stopped vcpu takes as little memory as possible.

Yeah. But if we then add a new vcpu with the same id as one we stopped before, the creation will fail.

Best regards,
Gu

> 
> --
> 			Gleb.
> 



^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: About releasing vcpu when closing vcpu fd
  2014-05-29  5:40   ` About releasing vcpu when closing vcpu fd Gu Zheng
@ 2014-05-29  8:12     ` Gleb Natapov
  2014-06-06 13:02       ` Anshul Makkar
  0 siblings, 1 reply; 10+ messages in thread
From: Gleb Natapov @ 2014-05-29  8:12 UTC (permalink / raw)
  To: Gu Zheng; +Cc: ChenFan, Gleb Natapov, Paolo Bonzini, kvm

On Thu, May 29, 2014 at 01:40:08PM +0800, Gu Zheng wrote:
> >> There was a patch(from Chen Fan, last august) about releasing vcpu when
> >> closing vcpu fd <http://www.spinics.net/lists/kvm/msg95701.html>, but
> >> your comment said "Attempt where made to make it possible to destroy 
> >> individual vcpus separately from destroying VM before, but they were
> >> unsuccessful thus far."
> >> So what is the pain here? If we want to achieve the goal, what should we do?
> >> Looking forward to your further comments.:)
> >>
> > CPU array is accessed locklessly in a lot of places, so it will have to be RCUified.
> > There was attempt to do so 2 year or so ago, but it didn't go anyware. Adding locks is
> > to big a price to pay for ability to free a little bit of memory by destroying vcpu. 
> 
> Yes, it's a pain here. But if we want to implement "vcpu hot-remove", this must be
> fixed sooner or later.
Why?  "vcpu hot-remove" already works (or at least worked in the past
for some value of "work").  No need to destroy vcpu completely, just
park it and tell a guest not to use it via ACPI hot unplug event.

> And any guys working on kvm "vcpu hot-remove" now?
> 
> > An
> > alternative may be to make sure that stopped vcpu takes as little memory as possible.
> 
> Yeah. But if we add a new vcpu with the old id that we stopped before, it will fail.
> 
No need to create vcpu again, just unpark it and notify a guest via ACPI hot plug event that
vcpu can be used now.

--
			Gleb.


* Re: About releasing vcpu when closing vcpu fd
  2014-05-29  8:12     ` Gleb Natapov
@ 2014-06-06 13:02       ` Anshul Makkar
  2014-06-06 13:36         ` Gleb Natapov
  0 siblings, 1 reply; 10+ messages in thread
From: Anshul Makkar @ 2014-06-06 13:02 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Gu Zheng, ChenFan, Gleb Natapov, Paolo Bonzini, kvm, Igor Mammedov

IIRC, Igor was of the opinion that the patch for vcpu deletion will be
incomplete until it is handled properly in kvm, i.e. vcpus are destroyed
completely. http://comments.gmane.org/gmane.comp.emulators.kvm.devel/114347

So is the above proposal, where vcpus are just disabled and reused in
qemu, an acceptable solution?

Thanks
Anshul Makkar

On Thu, May 29, 2014 at 10:12 AM, Gleb Natapov <gleb@kernel.org> wrote:
> On Thu, May 29, 2014 at 01:40:08PM +0800, Gu Zheng wrote:
>> >> There was a patch(from Chen Fan, last august) about releasing vcpu when
>> >> closing vcpu fd <http://www.spinics.net/lists/kvm/msg95701.html>, but
>> >> your comment said "Attempt where made to make it possible to destroy
>> >> individual vcpus separately from destroying VM before, but they were
>> >> unsuccessful thus far."
>> >> So what is the pain here? If we want to achieve the goal, what should we do?
>> >> Looking forward to your further comments.:)
>> >>
>> > CPU array is accessed locklessly in a lot of places, so it will have to be RCUified.
>> > There was attempt to do so 2 year or so ago, but it didn't go anyware. Adding locks is
>> > to big a price to pay for ability to free a little bit of memory by destroying vcpu.
>>
>> Yes, it's a pain here. But if we want to implement "vcpu hot-remove", this must be
>> fixed sooner or later.
> Why?  "vcpu hot-remove" already works (or at least worked in the past
> for some value of "work").  No need to destroy vcpu completely, just
> park it and tell a guest not to use it via ACPI hot unplug event.
>
>> And any guys working on kvm "vcpu hot-remove" now?
>>
>> > An
>> > alternative may be to make sure that stopped vcpu takes as little memory as possible.
>>
>> Yeah. But if we add a new vcpu with the old id that we stopped before, it will fail.
>>
> No need to create vcpu again, just unpark it and notify a guest via ACPI hot plug event that
> vcpu can be used now.
>
> --
>                         Gleb.
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: About releasing vcpu when closing vcpu fd
  2014-06-06 13:02       ` Anshul Makkar
@ 2014-06-06 13:36         ` Gleb Natapov
  2014-06-06 13:41           ` Anshul Makkar
  0 siblings, 1 reply; 10+ messages in thread
From: Gleb Natapov @ 2014-06-06 13:36 UTC (permalink / raw)
  To: Anshul Makkar
  Cc: Gu Zheng, ChenFan, Gleb Natapov, Paolo Bonzini, kvm, Igor Mammedov

On Fri, Jun 06, 2014 at 03:02:59PM +0200, Anshul Makkar wrote:
> IIRC, Igor was of the opinion that  patch for vcpu deletion will be
> incomplete till its handled properly in kvm i.e vcpus are destroyed
> completely. http://comments.gmane.org/gmane.comp.emulators.kvm.devel/114347
> .
> 
> So can the above proposal  where just vcpus can be  disabled and
> reused in qemu is an acceptable solution ?
> 
If by "above proposal" you mean the proposal in the email you linked,
then no, since it tries to destroy the vcpu but does it incorrectly. If you
mean the proposal to "park" an unplugged vcpu, so that the guest will not be
able to use it, then yes, that is a pragmatic path forward.


> Thanks
> Anshul Makkar
> 
> On Thu, May 29, 2014 at 10:12 AM, Gleb Natapov <gleb@kernel.org> wrote:
> > On Thu, May 29, 2014 at 01:40:08PM +0800, Gu Zheng wrote:
> >> >> There was a patch(from Chen Fan, last august) about releasing vcpu when
> >> >> closing vcpu fd <http://www.spinics.net/lists/kvm/msg95701.html>, but
> >> >> your comment said "Attempt where made to make it possible to destroy
> >> >> individual vcpus separately from destroying VM before, but they were
> >> >> unsuccessful thus far."
> >> >> So what is the pain here? If we want to achieve the goal, what should we do?
> >> >> Looking forward to your further comments.:)
> >> >>
> >> > CPU array is accessed locklessly in a lot of places, so it will have to be RCUified.
> >> > There was attempt to do so 2 year or so ago, but it didn't go anyware. Adding locks is
> >> > to big a price to pay for ability to free a little bit of memory by destroying vcpu.
> >>
> >> Yes, it's a pain here. But if we want to implement "vcpu hot-remove", this must be
> >> fixed sooner or later.
> > Why?  "vcpu hot-remove" already works (or at least worked in the past
> > for some value of "work").  No need to destroy vcpu completely, just
> > park it and tell a guest not to use it via ACPI hot unplug event.
> >
> >> And any guys working on kvm "vcpu hot-remove" now?
> >>
> >> > An
> >> > alternative may be to make sure that stopped vcpu takes as little memory as possible.
> >>
> >> Yeah. But if we add a new vcpu with the old id that we stopped before, it will fail.
> >>
> > No need to create vcpu again, just unpark it and notify a guest via ACPI hot plug event that
> > vcpu can be used now.
> >
> > --
> >                         Gleb.
> > --
> > To unsubscribe from this list: send the line "unsubscribe kvm" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html

--
			Gleb.


* Re: About releasing vcpu when closing vcpu fd
  2014-06-06 13:36         ` Gleb Natapov
@ 2014-06-06 13:41           ` Anshul Makkar
  2014-06-30 14:41             ` Anshul Makkar
  0 siblings, 1 reply; 10+ messages in thread
From: Anshul Makkar @ 2014-06-06 13:41 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Gu Zheng, ChenFan, Gleb Natapov, Paolo Bonzini, kvm, Igor Mammedov

Oh yes, sorry for the ambiguity. I meant the proposal to "park" unplugged vcpus.

Thanks for suggesting the practical approach.

Anshul Makkar

On Fri, Jun 6, 2014 at 3:36 PM, Gleb Natapov <gleb@minantech.com> wrote:
> On Fri, Jun 06, 2014 at 03:02:59PM +0200, Anshul Makkar wrote:
>> IIRC, Igor was of the opinion that  patch for vcpu deletion will be
>> incomplete till its handled properly in kvm i.e vcpus are destroyed
>> completely. http://comments.gmane.org/gmane.comp.emulators.kvm.devel/114347
>> .
>>
>> So can the above proposal  where just vcpus can be  disabled and
>> reused in qemu is an acceptable solution ?
>>
> If by "above proposal" you mean the proposal in the email you linked,
> then no since it tries to destroy vcpu, but does it incorrectly. If you
> mean proposal to "park" unplugged vcpu, so that guest will not be able
> to use it, then yes, it is pragmatic path forward.
>
>
>> Thanks
>> Anshul Makkar
>>
>> On Thu, May 29, 2014 at 10:12 AM, Gleb Natapov <gleb@kernel.org> wrote:
>> > On Thu, May 29, 2014 at 01:40:08PM +0800, Gu Zheng wrote:
>> >> >> There was a patch(from Chen Fan, last august) about releasing vcpu when
>> >> >> closing vcpu fd <http://www.spinics.net/lists/kvm/msg95701.html>, but
>> >> >> your comment said "Attempt where made to make it possible to destroy
>> >> >> individual vcpus separately from destroying VM before, but they were
>> >> >> unsuccessful thus far."
>> >> >> So what is the pain here? If we want to achieve the goal, what should we do?
>> >> >> Looking forward to your further comments.:)
>> >> >>
>> >> > CPU array is accessed locklessly in a lot of places, so it will have to be RCUified.
>> >> > There was attempt to do so 2 year or so ago, but it didn't go anyware. Adding locks is
>> >> > to big a price to pay for ability to free a little bit of memory by destroying vcpu.
>> >>
>> >> Yes, it's a pain here. But if we want to implement "vcpu hot-remove", this must be
>> >> fixed sooner or later.
>> > Why?  "vcpu hot-remove" already works (or at least worked in the past
>> > for some value of "work").  No need to destroy vcpu completely, just
>> > park it and tell a guest not to use it via ACPI hot unplug event.
>> >
>> >> And any guys working on kvm "vcpu hot-remove" now?
>> >>
>> >> > An
>> >> > alternative may be to make sure that stopped vcpu takes as little memory as possible.
>> >>
>> >> Yeah. But if we add a new vcpu with the old id that we stopped before, it will fail.
>> >>
>> > No need to create vcpu again, just unpark it and notify a guest via ACPI hot plug event that
>> > vcpu can be used now.
>> >
>> > --
>> >                         Gleb.
>> > --
>> > To unsubscribe from this list: send the line "unsubscribe kvm" in
>> > the body of a message to majordomo@vger.kernel.org
>> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
> --
>                         Gleb.


* Re: About releasing vcpu when closing vcpu fd
  2014-06-06 13:41           ` Anshul Makkar
@ 2014-06-30 14:41             ` Anshul Makkar
  2014-07-01  2:07               ` Gu Zheng
  2014-07-02  9:43               ` Igor Mammedov
  0 siblings, 2 replies; 10+ messages in thread
From: Anshul Makkar @ 2014-06-30 14:41 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Gu Zheng, ChenFan, Gleb Natapov, Paolo Bonzini, kvm, Igor Mammedov

Hi,

Currently, as per the specs for cpu_hot(un)plug, the ACPI GPE block uses IO
ports 0xafe0-0xafe3, where each bit corresponds to a CPU.

Currently, the _EJ0 eject callback (CPEJ) in acpi-dsdt-cpu-hotplug.dsl doesn't do anything:

Method(CPEJ, 2, NotSerialized) {
        // _EJ0 method - eject callback
        Sleep(200)
    }

I want to implement a notification mechanism for CPU hot-unplug just
like the one in memory hot-unplug, where we write to a particular IO
port and the read/write is caught in memory-hotplug.c.

So I just want a suggestion as to whether I should expand the IO port
range from 0xafe0-0xafe3 to 0xafe0-0xafe4 (an addition of one byte), where
the last byte is for notification of the _EJ0 event.

Or if you have any other suggestion, please share.

Thanks
Anshul Makkar

On Fri, Jun 6, 2014 at 3:41 PM, Anshul Makkar
<anshul.makkar@profitbricks.com> wrote:
> Oh yes, sorry for the ambiguity.  I meant proposal to "park" unplugged vcpus.
>
> Thanks for the suggesting the practical approach.
>
> Anshul Makkar
>
> On Fri, Jun 6, 2014 at 3:36 PM, Gleb Natapov <gleb@minantech.com> wrote:
>> On Fri, Jun 06, 2014 at 03:02:59PM +0200, Anshul Makkar wrote:
>>> IIRC, Igor was of the opinion that  patch for vcpu deletion will be
>>> incomplete till its handled properly in kvm i.e vcpus are destroyed
>>> completely. http://comments.gmane.org/gmane.comp.emulators.kvm.devel/114347
>>> .
>>>
>>> So can the above proposal  where just vcpus can be  disabled and
>>> reused in qemu is an acceptable solution ?
>>>
>> If by "above proposal" you mean the proposal in the email you linked,
>> then no since it tries to destroy vcpu, but does it incorrectly. If you
>> mean proposal to "park" unplugged vcpu, so that guest will not be able
>> to use it, then yes, it is pragmatic path forward.
>>
>>
>>> Thanks
>>> Anshul Makkar
>>>
>>> On Thu, May 29, 2014 at 10:12 AM, Gleb Natapov <gleb@kernel.org> wrote:
>>> > On Thu, May 29, 2014 at 01:40:08PM +0800, Gu Zheng wrote:
>>> >> >> There was a patch(from Chen Fan, last august) about releasing vcpu when
>>> >> >> closing vcpu fd <http://www.spinics.net/lists/kvm/msg95701.html>, but
>>> >> >> your comment said "Attempt where made to make it possible to destroy
>>> >> >> individual vcpus separately from destroying VM before, but they were
>>> >> >> unsuccessful thus far."
>>> >> >> So what is the pain here? If we want to achieve the goal, what should we do?
>>> >> >> Looking forward to your further comments.:)
>>> >> >>
>>> >> > CPU array is accessed locklessly in a lot of places, so it will have to be RCUified.
>>> >> > There was attempt to do so 2 year or so ago, but it didn't go anyware. Adding locks is
>>> >> > to big a price to pay for ability to free a little bit of memory by destroying vcpu.
>>> >>
>>> >> Yes, it's a pain here. But if we want to implement "vcpu hot-remove", this must be
>>> >> fixed sooner or later.
>>> > Why?  "vcpu hot-remove" already works (or at least worked in the past
>>> > for some value of "work").  No need to destroy vcpu completely, just
>>> > park it and tell a guest not to use it via ACPI hot unplug event.
>>> >
>>> >> And any guys working on kvm "vcpu hot-remove" now?
>>> >>
>>> >> > An
>>> >> > alternative may be to make sure that stopped vcpu takes as little memory as possible.
>>> >>
>>> >> Yeah. But if we add a new vcpu with the old id that we stopped before, it will fail.
>>> >>
>>> > No need to create vcpu again, just unpark it and notify a guest via ACPI hot plug event that
>>> > vcpu can be used now.
>>> >
>>> > --
>>> >                         Gleb.
>>> > --
>>> > To unsubscribe from this list: send the line "unsubscribe kvm" in
>>> > the body of a message to majordomo@vger.kernel.org
>>> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
>> --
>>                         Gleb.


* Re: About releasing vcpu when closing vcpu fd
  2014-06-30 14:41             ` Anshul Makkar
@ 2014-07-01  2:07               ` Gu Zheng
  2014-07-11  9:59                 ` Anshul Makkar
  2014-07-02  9:43               ` Igor Mammedov
  1 sibling, 1 reply; 10+ messages in thread
From: Gu Zheng @ 2014-07-01  2:07 UTC (permalink / raw)
  To: Anshul Makkar
  Cc: Gleb Natapov, ChenFan, Gleb Natapov, Paolo Bonzini, kvm, Igor Mammedov

Hi Anshul,
On 06/30/2014 10:41 PM, Anshul Makkar wrote:

> Hi,
> 
> Currently as per the specs for cpu_hot(un)plug, ACPI GPE Block:  IO
> ports 0xafe0-0xafe3  where each bit corresponds to each CPU.
> 
> Currently, EJ0 method in acpi-dsdt-cpu-hotplu.dsl doesn't do anything.
> 
> Method(CPEJ, 2, NotSerialized) {
>         // _EJ0 method - eject callback
>         Sleep(200)
>     }
> 
> I want to implement a notification mechanism for CPU hotunplug just
> like we have in memory hotunplug where in we write to particular IO
> port and this read/write is caught in the memory-hotplug.c .
> 
> So, just want a suggestion as to whether I should expand the IO port
> range from 0xafe0 to 0xafe4 (addition of 1 byte), where last byte is
> for notification of EJ0 event.
> 
> Or if you have any other suggestion, please share.

In fact, Chen Fan implemented this feature in his previous vcpu hot-remove
patchset:
http://lists.gnu.org/archive/html/qemu-devel/2013-12/msg04266.html
As you know, it is based on cleaning up kvm vcpus, as you mentioned in the
previous thread, and it has not been applied for some reasons.
So I am trying to respin a new one based on Chen Fan's previous patchset,
and if nothing else comes up, I will send it to the community in the coming
week. So if you like, please hold on for a moment.;)

Thanks,
Gu

> 
> Thanks
> Anshul Makkar
> 
> On Fri, Jun 6, 2014 at 3:41 PM, Anshul Makkar
> <anshul.makkar@profitbricks.com> wrote:
>> Oh yes, sorry for the ambiguity.  I meant proposal to "park" unplugged vcpus.
>>
>> Thanks for the suggesting the practical approach.
>>
>> Anshul Makkar
>>
>> On Fri, Jun 6, 2014 at 3:36 PM, Gleb Natapov <gleb@minantech.com> wrote:
>>> On Fri, Jun 06, 2014 at 03:02:59PM +0200, Anshul Makkar wrote:
>>>> IIRC, Igor was of the opinion that  patch for vcpu deletion will be
>>>> incomplete till its handled properly in kvm i.e vcpus are destroyed
>>>> completely. http://comments.gmane.org/gmane.comp.emulators.kvm.devel/114347
>>>> .
>>>>
>>>> So can the above proposal  where just vcpus can be  disabled and
>>>> reused in qemu is an acceptable solution ?
>>>>
>>> If by "above proposal" you mean the proposal in the email you linked,
>>> then no since it tries to destroy vcpu, but does it incorrectly. If you
>>> mean proposal to "park" unplugged vcpu, so that guest will not be able
>>> to use it, then yes, it is pragmatic path forward.
>>>
>>>
>>>> Thanks
>>>> Anshul Makkar
>>>>
>>>> On Thu, May 29, 2014 at 10:12 AM, Gleb Natapov <gleb@kernel.org> wrote:
>>>>> On Thu, May 29, 2014 at 01:40:08PM +0800, Gu Zheng wrote:
>>>>>>>> There was a patch(from Chen Fan, last august) about releasing vcpu when
>>>>>>>> closing vcpu fd <http://www.spinics.net/lists/kvm/msg95701.html>, but
>>>>>>>> your comment said "Attempt where made to make it possible to destroy
>>>>>>>> individual vcpus separately from destroying VM before, but they were
>>>>>>>> unsuccessful thus far."
>>>>>>>> So what is the pain here? If we want to achieve the goal, what should we do?
>>>>>>>> Looking forward to your further comments.:)
>>>>>>>>
>>>>>>> CPU array is accessed locklessly in a lot of places, so it will have to be RCUified.
>>>>>>> There was attempt to do so 2 year or so ago, but it didn't go anyware. Adding locks is
>>>>>>> to big a price to pay for ability to free a little bit of memory by destroying vcpu.
>>>>>>
>>>>>> Yes, it's a pain here. But if we want to implement "vcpu hot-remove", this must be
>>>>>> fixed sooner or later.
>>>>> Why?  "vcpu hot-remove" already works (or at least worked in the past
>>>>> for some value of "work").  No need to destroy vcpu completely, just
>>>>> park it and tell a guest not to use it via ACPI hot unplug event.
>>>>>
>>>>>> And any guys working on kvm "vcpu hot-remove" now?
>>>>>>
>>>>>>> An
>>>>>>> alternative may be to make sure that stopped vcpu takes as little memory as possible.
>>>>>>
>>>>>> Yeah. But if we add a new vcpu with the old id that we stopped before, it will fail.
>>>>>>
>>>>> No need to create vcpu again, just unpark it and notify a guest via ACPI hot plug event that
>>>>> vcpu can be used now.
>>>>>
>>>>> --
>>>>>                         Gleb.
>>>>> --
>>>>> To unsubscribe from this list: send the line "unsubscribe kvm" in
>>>>> the body of a message to majordomo@vger.kernel.org
>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>
>>> --
>>>                         Gleb.
> .
> 




* Re: About releasing vcpu when closing vcpu fd
  2014-06-30 14:41             ` Anshul Makkar
  2014-07-01  2:07               ` Gu Zheng
@ 2014-07-02  9:43               ` Igor Mammedov
  2014-07-04  8:30                 ` Anshul Makkar
  1 sibling, 1 reply; 10+ messages in thread
From: Igor Mammedov @ 2014-07-02  9:43 UTC (permalink / raw)
  To: Anshul Makkar
  Cc: Gleb Natapov, Gu Zheng, ChenFan, Gleb Natapov, Paolo Bonzini, kvm

On Mon, 30 Jun 2014 16:41:07 +0200
Anshul Makkar <anshul.makkar@profitbricks.com> wrote:

> Hi,
> 
> Currently as per the specs for cpu_hot(un)plug, ACPI GPE Block:  IO
> ports 0xafe0-0xafe3  where each bit corresponds to each CPU.
> 
> Currently, EJ0 method in acpi-dsdt-cpu-hotplu.dsl doesn't do anything.
> 
> Method(CPEJ, 2, NotSerialized) {
>         // _EJ0 method - eject callback
>         Sleep(200)
>     }
> 
> I want to implement a notification mechanism for CPU hotunplug just
> like we have in memory hotunplug where in we write to particular IO
> port and this read/write is caught in the memory-hotplug.c.
> 
> So, just want a suggestion as to whether I should expand the IO port
> range from 0xafe0 to 0xafe4 (addition of 1 byte), where last byte is
> for notification of EJ0 event.
I have it on my TODO list to rewrite the CPU hotplug IO interface to be
similar to the memory hotplug one. So you can try it; it will
allow us to drop the CPU bitmask and make the interface scalable to more
than 256 CPUs.

> 
> Or if you have any other suggestion, please share.
> 
> Thanks
> Anshul Makkar
> 
> On Fri, Jun 6, 2014 at 3:41 PM, Anshul Makkar
> <anshul.makkar@profitbricks.com> wrote:
> > Oh yes, sorry for the ambiguity.  I meant proposal to "park" unplugged vcpus.
> >
> > Thanks for the suggesting the practical approach.
> >
> > Anshul Makkar
> >
> > On Fri, Jun 6, 2014 at 3:36 PM, Gleb Natapov <gleb@minantech.com> wrote:
> >> On Fri, Jun 06, 2014 at 03:02:59PM +0200, Anshul Makkar wrote:
> >>> IIRC, Igor was of the opinion that  patch for vcpu deletion will be
> >>> incomplete till its handled properly in kvm i.e vcpus are destroyed
> >>> completely. http://comments.gmane.org/gmane.comp.emulators.kvm.devel/114347
> >>> .
> >>>
> >>> So can the above proposal  where just vcpus can be  disabled and
> >>> reused in qemu is an acceptable solution ?
> >>>
> >> If by "above proposal" you mean the proposal in the email you linked,
> >> then no since it tries to destroy vcpu, but does it incorrectly. If you
> >> mean proposal to "park" unplugged vcpu, so that guest will not be able
> >> to use it, then yes, it is pragmatic path forward.
> >>
> >>
> >>> Thanks
> >>> Anshul Makkar
> >>>
> >>> On Thu, May 29, 2014 at 10:12 AM, Gleb Natapov <gleb@kernel.org> wrote:
> >>> > On Thu, May 29, 2014 at 01:40:08PM +0800, Gu Zheng wrote:
> >>> >> >> There was a patch(from Chen Fan, last august) about releasing vcpu when
> >>> >> >> closing vcpu fd <http://www.spinics.net/lists/kvm/msg95701.html>, but
> >>> >> >> your comment said "Attempt where made to make it possible to destroy
> >>> >> >> individual vcpus separately from destroying VM before, but they were
> >>> >> >> unsuccessful thus far."
> >>> >> >> So what is the pain here? If we want to achieve the goal, what should we do?
> >>> >> >> Looking forward to your further comments.:)
> >>> >> >>
> >>> >> > CPU array is accessed locklessly in a lot of places, so it will have to be RCUified.
> >>> >> > There was attempt to do so 2 year or so ago, but it didn't go anyware. Adding locks is
> >>> >> > to big a price to pay for ability to free a little bit of memory by destroying vcpu.
> >>> >>
> >>> >> Yes, it's a pain here. But if we want to implement "vcpu hot-remove", this must be
> >>> >> fixed sooner or later.
> >>> > Why?  "vcpu hot-remove" already works (or at least worked in the past
> >>> > for some value of "work").  No need to destroy vcpu completely, just
> >>> > park it and tell a guest not to use it via ACPI hot unplug event.
> >>> >
> >>> >> And any guys working on kvm "vcpu hot-remove" now?
> >>> >>
> >>> >> > An
> >>> >> > alternative may be to make sure that stopped vcpu takes as little memory as possible.
> >>> >>
> >>> >> Yeah. But if we add a new vcpu with the old id that we stopped before, it will fail.
> >>> >>
> >>> > No need to create vcpu again, just unpark it and notify a guest via ACPI hot plug event that
> >>> > vcpu can be used now.
> >>> >
> >>> > --
> >>> >                         Gleb.
> >>> > --
> >>> > To unsubscribe from this list: send the line "unsubscribe kvm" in
> >>> > the body of a message to majordomo@vger.kernel.org
> >>> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> >>
> >> --
> >>                         Gleb.


-- 
Regards,
  Igor


* Re: About releasing vcpu when closing vcpu fd
  2014-07-02  9:43               ` Igor Mammedov
@ 2014-07-04  8:30                 ` Anshul Makkar
  0 siblings, 0 replies; 10+ messages in thread
From: Anshul Makkar @ 2014-07-04  8:30 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Gleb Natapov, Gu Zheng, ChenFan, Gleb Natapov, Paolo Bonzini, kvm

Hi Gu Zheng,

I'd prefer to wait for the moment. There is no point in doing duplicate
work in parallel.

Thanks
Anshul Makkar


On Wed, Jul 2, 2014 at 11:43 AM, Igor Mammedov <imammedo@redhat.com> wrote:
> On Mon, 30 Jun 2014 16:41:07 +0200
> Anshul Makkar <anshul.makkar@profitbricks.com> wrote:
>
>> Hi,
>>
>> Currently as per the specs for cpu_hot(un)plug, ACPI GPE Block:  IO
>> ports 0xafe0-0xafe3  where each bit corresponds to each CPU.
>>
>> Currently, EJ0 method in acpi-dsdt-cpu-hotplu.dsl doesn't do anything.
>>
>> Method(CPEJ, 2, NotSerialized) {
>>         // _EJ0 method - eject callback
>>         Sleep(200)
>>     }
>>
>> I want to implement a notification mechanism for CPU hotunplug just
>> like we have in memory hotunplug where in we write to particular IO
>> port and this read/write is caught in the memory-hotplug.c.
>>
>> So, just want a suggestion as to whether I should expand the IO port
>> range from 0xafe0 to 0xafe4 (addition of 1 byte), where last byte is
>> for notification of EJ0 event.
> I have it in my TODO list to rewrite CPU hotplug IO interface to be
> similar with memory hotplug one. So you can try to it, it will
> allow to drop CPUs bitmask and make interface scalable to more then
> 256 cpus.
>
>>
>> Or if you have any other suggestion, please share.
>>
>> Thanks
>> Anshul Makkar
>>
>> On Fri, Jun 6, 2014 at 3:41 PM, Anshul Makkar
>> <anshul.makkar@profitbricks.com> wrote:
>> > Oh yes, sorry for the ambiguity.  I meant proposal to "park" unplugged vcpus.
>> >
>> > Thanks for the suggesting the practical approach.
>> >
>> > Anshul Makkar
>> >
>> > On Fri, Jun 6, 2014 at 3:36 PM, Gleb Natapov <gleb@minantech.com> wrote:
>> >> On Fri, Jun 06, 2014 at 03:02:59PM +0200, Anshul Makkar wrote:
>> >>> IIRC, Igor was of the opinion that  patch for vcpu deletion will be
>> >>> incomplete till its handled properly in kvm i.e vcpus are destroyed
>> >>> completely. http://comments.gmane.org/gmane.comp.emulators.kvm.devel/114347
>> >>> .
>> >>>
>> >>> So can the above proposal  where just vcpus can be  disabled and
>> >>> reused in qemu is an acceptable solution ?
>> >>>
>> >> If by "above proposal" you mean the proposal in the email you linked,
>> >> then no since it tries to destroy vcpu, but does it incorrectly. If you
>> >> mean proposal to "park" unplugged vcpu, so that guest will not be able
>> >> to use it, then yes, it is pragmatic path forward.
>> >>
>> >>
>> >>> Thanks
>> >>> Anshul Makkar
>> >>>
>> >>> On Thu, May 29, 2014 at 10:12 AM, Gleb Natapov <gleb@kernel.org> wrote:
>> >>> > On Thu, May 29, 2014 at 01:40:08PM +0800, Gu Zheng wrote:
>> >>> >> >> There was a patch(from Chen Fan, last august) about releasing vcpu when
>> >>> >> >> closing vcpu fd <http://www.spinics.net/lists/kvm/msg95701.html>, but
>> >>> >> >> your comment said "Attempt where made to make it possible to destroy
>> >>> >> >> individual vcpus separately from destroying VM before, but they were
>> >>> >> >> unsuccessful thus far."
>> >>> >> >> So what is the pain here? If we want to achieve the goal, what should we do?
>> >>> >> >> Looking forward to your further comments.:)
>> >>> >> >>
>> >>> >> > CPU array is accessed locklessly in a lot of places, so it will have to be RCUified.
>> >>> >> > There was attempt to do so 2 year or so ago, but it didn't go anyware. Adding locks is
>> >>> >> > to big a price to pay for ability to free a little bit of memory by destroying vcpu.
>> >>> >>
>> >>> >> Yes, it's a pain here. But if we want to implement "vcpu hot-remove", this must be
>> >>> >> fixed sooner or later.
>> >>> > Why?  "vcpu hot-remove" already works (or at least worked in the past
>> >>> > for some value of "work").  No need to destroy vcpu completely, just
>> >>> > park it and tell a guest not to use it via ACPI hot unplug event.
>> >>> >
>> >>> >> And any guys working on kvm "vcpu hot-remove" now?
>> >>> >>
>> >>> >> > An
>> >>> >> > alternative may be to make sure that stopped vcpu takes as little memory as possible.
>> >>> >>
>> >>> >> Yeah. But if we add a new vcpu with the old id that we stopped before, it will fail.
>> >>> >>
>> >>> > No need to create vcpu again, just unpark it and notify a guest via ACPI hot plug event that
>> >>> > vcpu can be used now.
>> >>> >
>> >>> > --
>> >>> >                         Gleb.
>> >>> > --
>> >>> > To unsubscribe from this list: send the line "unsubscribe kvm" in
>> >>> > the body of a message to majordomo@vger.kernel.org
>> >>> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
>> >>
>> >> --
>> >>                         Gleb.
>
>
> --
> Regards,
>   Igor


* Re: About releasing vcpu when closing vcpu fd
  2014-07-01  2:07               ` Gu Zheng
@ 2014-07-11  9:59                 ` Anshul Makkar
  0 siblings, 0 replies; 10+ messages in thread
From: Anshul Makkar @ 2014-07-11  9:59 UTC (permalink / raw)
  To: Gu Zheng
  Cc: Gleb Natapov, ChenFan, Gleb Natapov, Paolo Bonzini, kvm, Igor Mammedov

Hi Gu,

Sorry, just wanted to check whether you are going to release the patchset
or whether it will take some more time.

Thanks
Anshul Makkar

On Tue, Jul 1, 2014 at 4:07 AM, Gu Zheng <guz.fnst@cn.fujitsu.com> wrote:
> Hi Anshul,
> On 06/30/2014 10:41 PM, Anshul Makkar wrote:
>
>> Hi,
>>
>> Currently, as per the spec for CPU hot(un)plug, the ACPI GPE block uses IO
>> ports 0xafe0-0xafe3, where each bit corresponds to one CPU.
>>
>> Currently, the EJ0 method in acpi-dsdt-cpu-hotplug.dsl doesn't do anything:
>>
>> Method(CPEJ, 2, NotSerialized) {
>>         // _EJ0 method - eject callback
>>         Sleep(200)
>>     }
>>
>> I want to implement a notification mechanism for CPU hot-unplug just
>> like we have for memory hot-unplug, wherein we write to a particular IO
>> port and this read/write is caught in memory-hotplug.c.
>>
>> So, I just want a suggestion as to whether I should expand the IO port
>> range from 0xafe0-0xafe3 to 0xafe0-0xafe4 (addition of 1 byte), where the
>> last byte is for notification of the EJ0 event.
>>
>> Or if you have any other suggestion, please share.
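[For what it's worth, the extra-byte idea could be modeled roughly as follows. Everything here is hypothetical: the eject byte at 0xafe4, the handler name, and the map layout are assumptions made for the sketch, not an existing QEMU interface; in real QEMU this would be a MemoryRegion IO write handler.]

```c
#include <assert.h>
#include <stdint.h>

/* Assumed layout (not from any spec): 0xafe0-0xafe3 is the existing
 * CPU-presence bitmap, and a hypothetical extra byte at 0xafe4 lets
 * the guest's _EJ0 method report which CPU it has ejected. */
#define CPU_HOTPLUG_BASE 0xafe0
#define CPU_STATUS_LEN   4
#define CPU_EJ_OFFSET    4

static uint8_t cpu_present_map[CPU_STATUS_LEN]; /* one bit per CPU */
static int last_ejected_cpu = -1;

static void cpu_hotplug_write(uint32_t addr, uint8_t val)
{
    uint32_t off = addr - CPU_HOTPLUG_BASE;

    if (off == CPU_EJ_OFFSET) {
        /* _EJ0 wrote the id of the ejected CPU: clear its presence
         * bit and remember it, so the emulator can park (not destroy)
         * the corresponding vcpu. */
        cpu_present_map[val / 8] &= (uint8_t)~(1u << (val % 8));
        last_ejected_cpu = val;
    }
}
```

[The guest-side _EJ0 method would then just Store() its CPU id to the new port instead of sleeping, and the host side catches the write, much like memory-hotplug.c does for memory eject.]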
>
> In fact, Chen Fan has implemented this feature in his previous vcpu hot-remove
> patchset:
> http://lists.gnu.org/archive/html/qemu-devel/2013-12/msg04266.html
> As you know, it is based on cleaning up kvm vcpus as you mentioned in the
> previous thread, and it has not been applied for some reason.
> So I am currently trying to respin a new one based on Chen Fan's previous patchset,
> and if nothing else comes up, I will send it to the community in the coming week. So if you
> like, please hold on for a moment.;)
>
> Thanks,
> Gu
>
>>
>> Thanks
>> Anshul Makkar
>>
>> On Fri, Jun 6, 2014 at 3:41 PM, Anshul Makkar
>> <anshul.makkar@profitbricks.com> wrote:
>>> Oh yes, sorry for the ambiguity.  I meant the proposal to "park" unplugged vcpus.
>>>
>>> Thanks for suggesting the practical approach.
>>>
>>> Anshul Makkar
>>>
>>> On Fri, Jun 6, 2014 at 3:36 PM, Gleb Natapov <gleb@minantech.com> wrote:
>>>> On Fri, Jun 06, 2014 at 03:02:59PM +0200, Anshul Makkar wrote:
>>>>> IIRC, Igor was of the opinion that the patch for vcpu deletion will be
>>>>> incomplete until it's handled properly in kvm, i.e. vcpus are destroyed
>>>>> completely. http://comments.gmane.org/gmane.comp.emulators.kvm.devel/114347
>>>>>
>>>>> So is the above proposal, where vcpus can just be disabled and
>>>>> reused in qemu, an acceptable solution?
>>>>>
>>>> If by "above proposal" you mean the proposal in the email you linked,
>>>> then no, since it tries to destroy the vcpu, but does it incorrectly. If you
>>>> mean the proposal to "park" an unplugged vcpu, so that the guest will not be able
>>>> to use it, then yes, it is a pragmatic path forward.
>>>>
>>>>
>>>>> Thanks
>>>>> Anshul Makkar
>>>>>
>>>>> On Thu, May 29, 2014 at 10:12 AM, Gleb Natapov <gleb@kernel.org> wrote:
>>>>>> On Thu, May 29, 2014 at 01:40:08PM +0800, Gu Zheng wrote:
>>>>>>>>> There was a patch (from Chen Fan, last August) about releasing the vcpu when
>>>>>>>>> closing the vcpu fd <http://www.spinics.net/lists/kvm/msg95701.html>, but
>>>>>>>>> your comment said "Attempts were made to make it possible to destroy
>>>>>>>>> individual vcpus separately from destroying the VM before, but they were
>>>>>>>>> unsuccessful thus far."
>>>>>>>>> So what is the pain here? If we want to achieve the goal, what should we do?
>>>>>>>>> Looking forward to your further comments.:)
>>>>>>>>>
>>>>>>>> The CPU array is accessed locklessly in a lot of places, so it will have to be RCUified.
>>>>>>>> There was an attempt to do so 2 years or so ago, but it didn't go anywhere. Adding locks is
>>>>>>>> too big a price to pay for the ability to free a little bit of memory by destroying a vcpu.
>>>>>>>
>>>>>>> Yes, it's a pain here. But if we want to implement "vcpu hot-remove", this must be
>>>>>>> fixed sooner or later.
>>>>>> Why?  "vcpu hot-remove" already works (or at least worked in the past
>>>>>> for some value of "work").  No need to destroy vcpu completely, just
>>>>>> park it and tell a guest not to use it via ACPI hot unplug event.
>>>>>>
>>>>>>> And is anyone working on kvm "vcpu hot-remove" now?
>>>>>>>
>>>>>>>> An
>>>>>>>> alternative may be to make sure that stopped vcpu takes as little memory as possible.
>>>>>>>
>>>>>>> Yeah. But if we add a new vcpu with the old id of one that was stopped before, it will fail.
>>>>>>>
>>>>>> No need to create vcpu again, just unpark it and notify a guest via ACPI hot plug event that
>>>>>> vcpu can be used now.
>>>>>>
>>>>>> --
>>>>>>                         Gleb.
>>>>>> --
>>>>>> To unsubscribe from this list: send the line "unsubscribe kvm" in
>>>>>> the body of a message to majordomo@vger.kernel.org
>>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>>
>>>> --
>>>>                         Gleb.
>>
>
>


end of thread, other threads:[~2014-07-11  9:59 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <537AEC13.1000804@cn.fujitsu.com>
     [not found] ` <20140523094345.GC5306@minantech.com>
2014-05-29  5:40   ` About releasing vcpu when closing vcpu fd Gu Zheng
2014-05-29  8:12     ` Gleb Natapov
2014-06-06 13:02       ` Anshul Makkar
2014-06-06 13:36         ` Gleb Natapov
2014-06-06 13:41           ` Anshul Makkar
2014-06-30 14:41             ` Anshul Makkar
2014-07-01  2:07               ` Gu Zheng
2014-07-11  9:59                 ` Anshul Makkar
2014-07-02  9:43               ` Igor Mammedov
2014-07-04  8:30                 ` Anshul Makkar
