linux-pm.vger.kernel.org archive mirror
* Re: [patch 0/3] cpuidle-haltpoll driver (v2)
       [not found] <20190603225242.289109849@amt.cnet>
@ 2019-06-07  9:49 ` Rafael J. Wysocki
  2019-06-07 17:16   ` Marcelo Tosatti
  2019-06-10 14:59   ` Marcelo Tosatti
       [not found] ` <20190603225254.212931277@amt.cnet>
  1 sibling, 2 replies; 10+ messages in thread
From: Rafael J. Wysocki @ 2019-06-07  9:49 UTC
  To: Marcelo Tosatti
  Cc: kvm-devel, Paolo Bonzini,
	Radim Krčmář,
	Andrea Arcangeli, Peter Zijlstra, Wanpeng Li,
	Konrad Rzeszutek Wilk, Raslan KarimAllah, Boris Ostrovsky,
	Ankur Arora, Christian Borntraeger, Linux PM, Peter Zijlstra

On 6/4/2019 12:52 AM, Marcelo Tosatti wrote:
> The cpuidle-haltpoll driver allows the guest vcpus to poll for a specified
> amount of time before halting. This provides the following benefits
> to host side polling:
>
>          1) The POLL flag is set while polling is performed, which allows
>             a remote vCPU to avoid sending an IPI (and the associated
>             cost of handling the IPI) when performing a wakeup.
>
>          2) The HLT VM-exit cost can be avoided.
>
> The downside of guest side polling is that polling is performed
> even with other runnable tasks in the host.
>
> Results comparing halt_poll_ns and server/client application
> where a small packet is ping-ponged:
>
> host                                        --> 31.33
> halt_poll_ns=300000 / no guest busy spin    --> 33.40   (93.8%)
> halt_poll_ns=0 / guest_halt_poll_ns=300000  --> 32.73   (95.7%)
>
> For the SAP HANA benchmarks (where idle_spin is a parameter
> of the previous version of the patch, results should be the
> same):
>
> hpns == halt_poll_ns
>
>                            idle_spin=0/   idle_spin=800/    idle_spin=0/
>                            hpns=200000    hpns=0            hpns=800000
> DeleteC06T03 (100 thread) 1.76           1.71 (-3%)        1.78   (+1%)
> InsertC16T02 (100 thread) 2.14           2.07 (-3%)        2.18   (+1.8%)
> DeleteC00T01 (1 thread)   1.34           1.28 (-4.5%)	   1.29   (-3.7%)
> UpdateC00T03 (1 thread)   4.72           4.18 (-12%)	   4.53   (-5%)
>
> V2:
>
> - Move from x86 to generic code (Paolo/Christian).
> - Add auto-tuning logic (Paolo).
> - Add MSR to disable host side polling (Paolo).
>
>
>
First of all, please CC power management patches (including cpuidle, 
cpufreq etc) to linux-pm@vger.kernel.org (there are people on that list 
who may want to see your changes before they go in) and CC cpuidle 
material (in particular) to Peter Zijlstra.

Second, I'm not a big fan of this approach to be honest, as it kind of 
is a driver trying to play the role of a governor.

We have a "polling state" already that could be used here in principle 
so I wonder what would be wrong with that.  Also note that there seems 
to be at least some code duplication between your code and the "polling 
state" implementation, so maybe it would be possible to do some things 
in a common way?




* Re: [patch 1/3] drivers/cpuidle: add cpuidle-haltpoll driver
       [not found] ` <20190603225254.212931277@amt.cnet>
@ 2019-06-07  9:54   ` Rafael J. Wysocki
  0 siblings, 0 replies; 10+ messages in thread
From: Rafael J. Wysocki @ 2019-06-07  9:54 UTC
  To: Marcelo Tosatti
  Cc: kvm-devel, Paolo Bonzini,
	Radim Krčmář,
	Andrea Arcangeli, Peter Zijlstra, Wanpeng Li,
	Konrad Rzeszutek Wilk, Raslan KarimAllah, Boris Ostrovsky,
	Ankur Arora, Christian Borntraeger, Linux PM

On 6/4/2019 12:52 AM, Marcelo Tosatti wrote:

It is rather inconvenient to comment on patches posted as attachments.

Anyway, IMO adding trace events at this point is premature; please add
them in a separate patch at the end of the series.




* Re: [patch 0/3] cpuidle-haltpoll driver (v2)
  2019-06-07  9:49 ` [patch 0/3] cpuidle-haltpoll driver (v2) Rafael J. Wysocki
@ 2019-06-07 17:16   ` Marcelo Tosatti
  2019-06-07 18:22     ` Paolo Bonzini
  2019-06-10 14:59   ` Marcelo Tosatti
  1 sibling, 1 reply; 10+ messages in thread
From: Marcelo Tosatti @ 2019-06-07 17:16 UTC
  To: Rafael J. Wysocki
  Cc: kvm-devel, Paolo Bonzini, Radim Krčmář,
	Andrea Arcangeli, Peter Zijlstra, Wanpeng Li,
	Konrad Rzeszutek Wilk, Raslan KarimAllah, Boris Ostrovsky,
	Ankur Arora, Christian Borntraeger, Linux PM

On Fri, Jun 07, 2019 at 11:49:51AM +0200, Rafael J. Wysocki wrote:
> On 6/4/2019 12:52 AM, Marcelo Tosatti wrote:
> >The cpuidle-haltpoll driver allows the guest vcpus to poll for a specified
> >amount of time before halting. This provides the following benefits
> >to host side polling:
> >
> >         1) The POLL flag is set while polling is performed, which allows
> >            a remote vCPU to avoid sending an IPI (and the associated
> >            cost of handling the IPI) when performing a wakeup.
> >
> >         2) The HLT VM-exit cost can be avoided.
> >
> >The downside of guest side polling is that polling is performed
> >even with other runnable tasks in the host.
> >
> >Results comparing halt_poll_ns and server/client application
> >where a small packet is ping-ponged:
> >
> >host                                        --> 31.33
> >halt_poll_ns=300000 / no guest busy spin    --> 33.40   (93.8%)
> >halt_poll_ns=0 / guest_halt_poll_ns=300000  --> 32.73   (95.7%)
> >
> >For the SAP HANA benchmarks (where idle_spin is a parameter
> >of the previous version of the patch, results should be the
> >same):
> >
> >hpns == halt_poll_ns
> >
> >                           idle_spin=0/   idle_spin=800/    idle_spin=0/
> >                           hpns=200000    hpns=0            hpns=800000
> >DeleteC06T03 (100 thread) 1.76           1.71 (-3%)        1.78   (+1%)
> >InsertC16T02 (100 thread) 2.14           2.07 (-3%)        2.18   (+1.8%)
> >DeleteC00T01 (1 thread)   1.34           1.28 (-4.5%)	   1.29   (-3.7%)
> >UpdateC00T03 (1 thread)   4.72           4.18 (-12%)	   4.53   (-5%)
> >
> >V2:
> >
> >- Move from x86 to generic code (Paolo/Christian).
> >- Add auto-tuning logic (Paolo).
> >- Add MSR to disable host side polling (Paolo).
> >
> >
> >
> First of all, please CC power management patches (including cpuidle,
> cpufreq etc) to linux-pm@vger.kernel.org (there are people on that
> list who may want to see your changes before they go in) and CC
> cpuidle material (in particular) to Peter Zijlstra.

OK, Peter is CC'ed; I will include linux-pm@vger in the next patches.

> Second, I'm not a big fan of this approach to be honest, as it kind
> of is a driver trying to play the role of a governor.

Well, it's not trying to choose which idle state to enter, because
there is only one idle state to enter when virtualized (HLT).

> We have a "polling state" already that could be used here in
> principle so I wonder what would be wrong with that.  

There is no "target residency" concept in the virtualized use-case 
(which is what poll_state.c uses to calculate the poll time).

Moreover, the cpuidle-haltpoll driver uses adaptive logic to
tune the poll time (which apparently does not make sense for poll_state).

The only thing they share is the main loop structure:

"while (!need_resched()) {
	cpu_relax();
	now = ktime_get();
}"

> Also note that
> there seems to be at least some code duplication between your code
> and the "polling state" implementation, so maybe it would be
> possible to do some things in a common way?

Again, it's just the main loop structure that is shared:
there is no target residency in the virtualized case, 
and we want an adaptive scheme.

Let's think about deduplication: you would have a cpuidle driver
with a fake "target residency".

Now, it makes no sense to use a governor for the virtualized case
(again, there is only one idle state, HLT; the host governor is
used for the actual idle state decision in the host).

So I fail to see how I would go about integrating these two,
or what the advantages of doing so would be.




* Re: [patch 0/3] cpuidle-haltpoll driver (v2)
  2019-06-07 17:16   ` Marcelo Tosatti
@ 2019-06-07 18:22     ` Paolo Bonzini
  2019-06-07 21:38       ` Marcelo Tosatti
  0 siblings, 1 reply; 10+ messages in thread
From: Paolo Bonzini @ 2019-06-07 18:22 UTC
  To: Marcelo Tosatti, Rafael J. Wysocki
  Cc: kvm-devel, Radim Krčmář,
	Andrea Arcangeli, Peter Zijlstra, Wanpeng Li,
	Konrad Rzeszutek Wilk, Raslan KarimAllah, Boris Ostrovsky,
	Ankur Arora, Christian Borntraeger, Linux PM

On 07/06/19 19:16, Marcelo Tosatti wrote:
> There is no "target residency" concept in the virtualized use-case 
> (which is what poll_state.c uses to calculate the poll time).

Actually there is: it is the cost of a vmexit, and it can be calibrated with
a very short CPUID loop (e.g. run 100 CPUID instructions and take the
smallest TSC interval---it should take less than 50 microseconds, and
less than a millisecond even on nested virt).
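
Something along these lines would do (an untested sketch; the function
name is made up, but native_cpuid() and rdtsc() are the usual x86
helpers):

/* Illustrative only: estimate the VM-exit round trip from the guest. */
static u64 estimate_vmexit_cost_tsc(void)
{
	u64 best = U64_MAX;
	int i;

	for (i = 0; i < 100; i++) {
		unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
		u64 t0, t1;

		t0 = rdtsc();
		native_cpuid(&eax, &ebx, &ecx, &edx);	/* always VM-exits */
		t1 = rdtsc();
		if (t1 - t0 < best)
			best = t1 - t0;
	}
	return best;	/* smallest observed round trip, in TSC cycles */
}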

I think it would make sense to improve poll_state.c to use an adaptive
algorithm similar to the one you implemented, which includes optionally
allowing polling for an interval larger than the target residency.

Paolo


* Re: [patch 0/3] cpuidle-haltpoll driver (v2)
  2019-06-07 18:22     ` Paolo Bonzini
@ 2019-06-07 21:38       ` Marcelo Tosatti
  0 siblings, 0 replies; 10+ messages in thread
From: Marcelo Tosatti @ 2019-06-07 21:38 UTC
  To: Paolo Bonzini
  Cc: Rafael J. Wysocki, kvm-devel, Radim Krčmář,
	Andrea Arcangeli, Peter Zijlstra, Wanpeng Li,
	Konrad Rzeszutek Wilk, Raslan KarimAllah, Boris Ostrovsky,
	Ankur Arora, Christian Borntraeger, Linux PM

On Fri, Jun 07, 2019 at 08:22:35PM +0200, Paolo Bonzini wrote:
> On 07/06/19 19:16, Marcelo Tosatti wrote:
> > There is no "target residency" concept in the virtualized use-case 
> > (which is what poll_state.c uses to calculate the poll time).
> 
> Actually there is: it is the cost of a vmexit, and it can be calibrated with
> a very short CPUID loop (e.g. run 100 CPUID instructions and take the
> smallest TSC interval---it should take less than 50 microseconds, and
> less than a millisecond even on nested virt).

For a given application, you want to configure the poll time to the
maximum time at which an event may arrive after the idle procedure
starts. For SAP HANA, that value is between 200us and 800us (most tests
require less than 800us, but some require the full 800us to avoid a
significant number of IPIs).

"The target residency is the minimum time the hardware must spend in the
given state, including the time needed to enter it (which may be
substantial), in order to save more energy than it would save by
entering one of the shallower idle states instead."

Clearly these are two different things...

> I think it would make sense to improve poll_state.c to use an adaptive
> algorithm similar to the one you implemented, which includes optionally
> allowing polling for an interval larger than the target residency.
> 
> Paolo

OK, so I'll move the adaptive code to poll_state.c, where
the driver selects whether to use target_residency or the
adaptive value (based on module parameters).

I am not sure whether an adaptive value is desirable for
the non-virtualized case.




* Re: [patch 0/3] cpuidle-haltpoll driver (v2)
  2019-06-07  9:49 ` [patch 0/3] cpuidle-haltpoll driver (v2) Rafael J. Wysocki
  2019-06-07 17:16   ` Marcelo Tosatti
@ 2019-06-10 14:59   ` Marcelo Tosatti
  2019-06-10 22:03     ` Rafael J. Wysocki
  1 sibling, 1 reply; 10+ messages in thread
From: Marcelo Tosatti @ 2019-06-10 14:59 UTC
  To: Rafael J. Wysocki
  Cc: kvm-devel, Paolo Bonzini, Radim Krčmář,
	Andrea Arcangeli, Peter Zijlstra, Wanpeng Li,
	Konrad Rzeszutek Wilk, Raslan KarimAllah, Boris Ostrovsky,
	Ankur Arora, Christian Borntraeger, Linux PM

On Fri, Jun 07, 2019 at 11:49:51AM +0200, Rafael J. Wysocki wrote:
> On 6/4/2019 12:52 AM, Marcelo Tosatti wrote:
> >The cpuidle-haltpoll driver allows the guest vcpus to poll for a specified
> >amount of time before halting. This provides the following benefits
> >to host side polling:
> >
> >         1) The POLL flag is set while polling is performed, which allows
> >            a remote vCPU to avoid sending an IPI (and the associated
> >            cost of handling the IPI) when performing a wakeup.
> >
> >         2) The HLT VM-exit cost can be avoided.
> >
> >The downside of guest side polling is that polling is performed
> >even with other runnable tasks in the host.
> >
> >Results comparing halt_poll_ns and server/client application
> >where a small packet is ping-ponged:
> >
> >host                                        --> 31.33
> >halt_poll_ns=300000 / no guest busy spin    --> 33.40   (93.8%)
> >halt_poll_ns=0 / guest_halt_poll_ns=300000  --> 32.73   (95.7%)
> >
> >For the SAP HANA benchmarks (where idle_spin is a parameter
> >of the previous version of the patch, results should be the
> >same):
> >
> >hpns == halt_poll_ns
> >
> >                           idle_spin=0/   idle_spin=800/    idle_spin=0/
> >                           hpns=200000    hpns=0            hpns=800000
> >DeleteC06T03 (100 thread) 1.76           1.71 (-3%)        1.78   (+1%)
> >InsertC16T02 (100 thread) 2.14           2.07 (-3%)        2.18   (+1.8%)
> >DeleteC00T01 (1 thread)   1.34           1.28 (-4.5%)	   1.29   (-3.7%)
> >UpdateC00T03 (1 thread)   4.72           4.18 (-12%)	   4.53   (-5%)
> >
> >V2:
> >
> >- Move from x86 to generic code (Paolo/Christian).
> >- Add auto-tuning logic (Paolo).
> >- Add MSR to disable host side polling (Paolo).
> >
> >
> >
> First of all, please CC power management patches (including cpuidle,
> cpufreq etc) to linux-pm@vger.kernel.org (there are people on that
> list who may want to see your changes before they go in) and CC
> cpuidle material (in particular) to Peter Zijlstra.
> 
> Second, I'm not a big fan of this approach to be honest, as it kind
> of is a driver trying to play the role of a governor.
> 
> We have a "polling state" already that could be used here in
> principle so I wonder what would be wrong with that.  Also note that
> there seems to be at least some code duplication between your code
> and the "polling state" implementation, so maybe it would be
> possible to do some things in a common way?

Hi Rafael,

After modifying poll_state.c to use a generic "poll time" driver 
callback [1] (since using a variable "target_residency" for that 
looks really ugly), we would need a governor which does:

haltpoll_governor_select_next_state()
	if (prev_state was poll and evt happened on prev poll window) -> POLL.
	if (prev_state == HLT)	-> POLL
	otherwise 		-> HLT

And a "default_idle" cpuidle driver that:

defaultidle_idle()
	if (current_clr_polling_and_test()) {
		local_irq_enable();
		return index;
	}
	default_idle(); 
	return index;
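
Concretely, the governor side could look something like this (a rough
sketch only; haltpoll_select() and haltpoll_reflect() are made-up names,
and it assumes the poll state sets a dev->poll_time_limit flag when the
poll window expires without an event):

static int last_idx;	/* per-CPU in real code */

static int haltpoll_select(struct cpuidle_driver *drv,
			   struct cpuidle_device *dev, bool *stop_tick)
{
	*stop_tick = false;	/* tick handling elided in this sketch */

	/* Last poll caught an event before timing out: poll again. */
	if (last_idx == 0 && !dev->poll_time_limit)
		return 0;

	/* Coming out of HLT: give polling another chance. */
	if (last_idx == 1)
		return 0;

	return 1;	/* otherwise, HLT */
}

static void haltpoll_reflect(struct cpuidle_device *dev, int index)
{
	last_idx = index;	/* remember what we entered last */
}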

Using such a governor with any other cpuidle driver would
be pointless (since it would only ever enter the first state
and therefore not save power).

Not certain about using the default_idle driver with 
other governors: one would rather use a driver that 
supports all states on a given machine.

This governor/driver combination, introduced just for the sake of
sharing the idle loop, seems awkward to me. It also defeats the
governor/driver separation: in practice, one would always use the two
as a pair.

But I have no problem with it, so I'll proceed with that.

Let me know otherwise.

Thanks.


* Re: [patch 0/3] cpuidle-haltpoll driver (v2)
  2019-06-10 14:59   ` Marcelo Tosatti
@ 2019-06-10 22:03     ` Rafael J. Wysocki
  2019-06-11 14:26       ` Marcelo Tosatti
  0 siblings, 1 reply; 10+ messages in thread
From: Rafael J. Wysocki @ 2019-06-10 22:03 UTC
  To: Marcelo Tosatti
  Cc: Rafael J. Wysocki, kvm-devel, Paolo Bonzini,
	Radim Krčmář,
	Andrea Arcangeli, Peter Zijlstra, Wanpeng Li,
	Konrad Rzeszutek Wilk, Raslan KarimAllah, Boris Ostrovsky,
	Ankur Arora, Christian Borntraeger, Linux PM

On Mon, Jun 10, 2019 at 5:00 PM Marcelo Tosatti <mtosatti@redhat.com> wrote:
>
> On Fri, Jun 07, 2019 at 11:49:51AM +0200, Rafael J. Wysocki wrote:
> > On 6/4/2019 12:52 AM, Marcelo Tosatti wrote:
> > >The cpuidle-haltpoll driver allows the guest vcpus to poll for a specified
> > >amount of time before halting. This provides the following benefits
> > >to host side polling:
> > >
> > >         1) The POLL flag is set while polling is performed, which allows
> > >            a remote vCPU to avoid sending an IPI (and the associated
> > >            cost of handling the IPI) when performing a wakeup.
> > >
> > >         2) The HLT VM-exit cost can be avoided.
> > >
> > >The downside of guest side polling is that polling is performed
> > >even with other runnable tasks in the host.
> > >
> > >Results comparing halt_poll_ns and server/client application
> > >where a small packet is ping-ponged:
> > >
> > >host                                        --> 31.33
> > >halt_poll_ns=300000 / no guest busy spin    --> 33.40   (93.8%)
> > >halt_poll_ns=0 / guest_halt_poll_ns=300000  --> 32.73   (95.7%)
> > >
> > >For the SAP HANA benchmarks (where idle_spin is a parameter
> > >of the previous version of the patch, results should be the
> > >same):
> > >
> > >hpns == halt_poll_ns
> > >
> > >                           idle_spin=0/   idle_spin=800/    idle_spin=0/
> > >                           hpns=200000    hpns=0            hpns=800000
> > >DeleteC06T03 (100 thread) 1.76           1.71 (-3%)        1.78   (+1%)
> > >InsertC16T02 (100 thread) 2.14           2.07 (-3%)        2.18   (+1.8%)
> > >DeleteC00T01 (1 thread)   1.34           1.28 (-4.5%)           1.29   (-3.7%)
> > >UpdateC00T03 (1 thread)   4.72           4.18 (-12%)    4.53   (-5%)
> > >
> > >V2:
> > >
> > >- Move from x86 to generic code (Paolo/Christian).
> > >- Add auto-tuning logic (Paolo).
> > >- Add MSR to disable host side polling (Paolo).
> > >
> > >
> > >
> > First of all, please CC power management patches (including cpuidle,
> > cpufreq etc) to linux-pm@vger.kernel.org (there are people on that
> > list who may want to see your changes before they go in) and CC
> > cpuidle material (in particular) to Peter Zijlstra.
> >
> > Second, I'm not a big fan of this approach to be honest, as it kind
> > of is a driver trying to play the role of a governor.
> >
> > We have a "polling state" already that could be used here in
> > principle so I wonder what would be wrong with that.  Also note that
> > there seems to be at least some code duplication between your code
> > and the "polling state" implementation, so maybe it would be
> > possible to do some things in a common way?
>
> Hi Rafael,
>
> After modifying poll_state.c to use a generic "poll time" driver
> callback [1] (since using a variable "target_residency" for that
> looks really ugly), we would need a governor which does:
>
> haltpoll_governor_select_next_state()
>         if (prev_state was poll and evt happened on prev poll window) -> POLL.
>         if (prev_state == HLT)  -> POLL
>         otherwise               -> HLT
>
> And a "default_idle" cpuidle driver that:
>
> defaultidle_idle()
>         if (current_clr_polling_and_test()) {
>                 local_irq_enable();
>                 return index;
>         }
>         default_idle();
>         return index;
>
> Using such a governor with any other cpuidle driver would
> be pointless (since it would only ever enter the first state
> and therefore not save power).
>
> Not certain about using the default_idle driver with
> other governors: one would rather use a driver that
> supports all states on a given machine.
>
> This governor/driver combination, introduced just for the sake of
> sharing the idle loop, seems awkward to me. It also defeats the
> governor/driver separation: in practice, one would always use the two
> as a pair.
>
> But I have no problem with it, so I'll proceed with that.
>
> Let me know otherwise.

If my understanding of your argumentation is correct, it is only
necessary to take the default_idle_call() branch of
cpuidle_idle_call() in the VM case, so it should be sufficient to
provide a suitable default_idle_call() which is what you seem to be
trying to do.

I might have been confused by the terminology used in the patch series
if that's the case.

Also, if that's the case, this is not really a cpuidle matter.  It is a
matter of providing a better default_idle_call() for the arch at hand.

Thanks,
Rafael


* Re: [patch 0/3] cpuidle-haltpoll driver (v2)
  2019-06-10 22:03     ` Rafael J. Wysocki
@ 2019-06-11 14:26       ` Marcelo Tosatti
  2019-06-11 21:24         ` Rafael J. Wysocki
  0 siblings, 1 reply; 10+ messages in thread
From: Marcelo Tosatti @ 2019-06-11 14:26 UTC
  To: Rafael J. Wysocki
  Cc: Rafael J. Wysocki, kvm-devel, Paolo Bonzini,
	Radim Krčmář,
	Andrea Arcangeli, Peter Zijlstra, Wanpeng Li,
	Konrad Rzeszutek Wilk, Raslan KarimAllah, Boris Ostrovsky,
	Ankur Arora, Christian Borntraeger, Linux PM

On Tue, Jun 11, 2019 at 12:03:26AM +0200, Rafael J. Wysocki wrote:
> On Mon, Jun 10, 2019 at 5:00 PM Marcelo Tosatti <mtosatti@redhat.com> wrote:
> >
> > On Fri, Jun 07, 2019 at 11:49:51AM +0200, Rafael J. Wysocki wrote:
> > > On 6/4/2019 12:52 AM, Marcelo Tosatti wrote:
> > > >The cpuidle-haltpoll driver allows the guest vcpus to poll for a specified
> > > >amount of time before halting. This provides the following benefits
> > > >to host side polling:
> > > >
> > > >         1) The POLL flag is set while polling is performed, which allows
> > > >            a remote vCPU to avoid sending an IPI (and the associated
> > > >            cost of handling the IPI) when performing a wakeup.
> > > >
> > > >         2) The HLT VM-exit cost can be avoided.
> > > >
> > > >The downside of guest side polling is that polling is performed
> > > >even with other runnable tasks in the host.
> > > >
> > > >Results comparing halt_poll_ns and server/client application
> > > >where a small packet is ping-ponged:
> > > >
> > > >host                                        --> 31.33
> > > >halt_poll_ns=300000 / no guest busy spin    --> 33.40   (93.8%)
> > > >halt_poll_ns=0 / guest_halt_poll_ns=300000  --> 32.73   (95.7%)
> > > >
> > > >For the SAP HANA benchmarks (where idle_spin is a parameter
> > > >of the previous version of the patch, results should be the
> > > >same):
> > > >
> > > >hpns == halt_poll_ns
> > > >
> > > >                           idle_spin=0/   idle_spin=800/    idle_spin=0/
> > > >                           hpns=200000    hpns=0            hpns=800000
> > > >DeleteC06T03 (100 thread) 1.76           1.71 (-3%)        1.78   (+1%)
> > > >InsertC16T02 (100 thread) 2.14           2.07 (-3%)        2.18   (+1.8%)
> > > >DeleteC00T01 (1 thread)   1.34           1.28 (-4.5%)           1.29   (-3.7%)
> > > >UpdateC00T03 (1 thread)   4.72           4.18 (-12%)    4.53   (-5%)
> > > >
> > > >V2:
> > > >
> > > >- Move from x86 to generic code (Paolo/Christian).
> > > >- Add auto-tuning logic (Paolo).
> > > >- Add MSR to disable host side polling (Paolo).
> > > >
> > > >
> > > >
> > > First of all, please CC power management patches (including cpuidle,
> > > cpufreq etc) to linux-pm@vger.kernel.org (there are people on that
> > > list who may want to see your changes before they go in) and CC
> > > cpuidle material (in particular) to Peter Zijlstra.
> > >
> > > Second, I'm not a big fan of this approach to be honest, as it kind
> > > of is a driver trying to play the role of a governor.
> > >
> > > We have a "polling state" already that could be used here in
> > > principle so I wonder what would be wrong with that.  Also note that
> > > there seems to be at least some code duplication between your code
> > > and the "polling state" implementation, so maybe it would be
> > > possible to do some things in a common way?
> >
> > Hi Rafael,
> >
> > After modifying poll_state.c to use a generic "poll time" driver
> > callback [1] (since using a variable "target_residency" for that
> > looks really ugly), we would need a governor which does:
> >
> > haltpoll_governor_select_next_state()
> >         if (prev_state was poll and evt happened on prev poll window) -> POLL.
> >         if (prev_state == HLT)  -> POLL
> >         otherwise               -> HLT
> >
> > And a "default_idle" cpuidle driver that:
> >
> > defaultidle_idle()
> >         if (current_clr_polling_and_test()) {
> >                 local_irq_enable();
> >                 return index;
> >         }
> >         default_idle();
> >         return index;
> >
> > Using such a governor with any other cpuidle driver would
> > be pointless (since it would only ever enter the first state
> > and therefore not save power).
> >
> > Not certain about using the default_idle driver with
> > other governors: one would rather use a driver that
> > supports all states on a given machine.
> >
> > This governor/driver combination, introduced just for the sake of
> > sharing the idle loop, seems awkward to me. It also defeats the
> > governor/driver separation: in practice, one would always use the two
> > as a pair.
> >
> > But I have no problem with it, so I'll proceed with that.
> >
> > Let me know otherwise.
> 
> If my understanding of your argumentation is correct, it is only
> necessary to take the default_idle_call() branch of
> cpuidle_idle_call() in the VM case, so it should be sufficient to
> provide a suitable default_idle_call() which is what you seem to be
> trying to do.

In the VM case, we need to poll before actually halting: it is tricky
to implement MWAIT in guests, so polling for some amount of time allows
the IPI avoidance optimization (see trace_sched_wake_idle_without_ipi)
to take place.
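
The relevant scheduler wakeup path is roughly the following (paraphrased
and simplified from kernel/sched/core.c):

void resched_curr(struct rq *rq)
{
	struct task_struct *curr = rq->curr;
	int cpu = cpu_of(rq);

	if (test_tsk_need_resched(curr))
		return;

	if (cpu == smp_processor_id()) {
		set_tsk_need_resched(curr);
		return;
	}

	/* set_nr_and_not_polling() sets TIF_NEED_RESCHED and returns
	 * true only if the remote task was not polling, so the IPI is
	 * sent only when the target is not in the polling loop. */
	if (set_nr_and_not_polling(curr))
		smp_send_reschedule(cpu);
	else
		trace_sched_wake_idle_without_ipi(cpu);
}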

The amount of time we poll is variable and adjusted (see adjust_haltpoll_ns 
in the patchset).
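
The adjustment works along these lines (an illustrative sketch only;
the names and constants below are not necessarily the ones the
patchset's adjust_haltpoll_ns() uses):

/* Illustrative values, not the patchset's defaults. */
static unsigned int guest_halt_poll_ns = 200000;	/* window cap */
static unsigned int grow = 2, shrink = 2;		/* tuning factors */

/* Called after each idle period, with the time it took for the wakeup
 * event to arrive. */
static void adjust_poll_ns(unsigned int *poll_ns, u64 block_ns)
{
	if (block_ns <= *poll_ns)
		return;		/* the poll window caught the event */

	if (block_ns < guest_halt_poll_ns)
		/* We woke up shortly after halting: grow the window. */
		*poll_ns = max(*poll_ns * grow, 10000u);
	else
		/* Long sleep: the polling was pure waste, shrink it. */
		*poll_ns /= shrink;
}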

> I might have been confused by the terminology used in the patch series
> if that's the case.
> 
> Also, if that's the case, this is not really a cpuidle matter.  It is a
> matter of providing a better default_idle_call() for the arch at hand.

Peter Zijlstra suggested a cpuidle driver for this. 

Also, other architectures will use the same "poll before exiting the VM"
logic (so we'd rather avoid duplicating this code): PPC, x86, S/390,
MIPS... So in my POV it makes sense to unify this.

So, back to your initial suggestion: 

Q) "Can you unify code with poll_state.c?" 
A) Yes, but it requires a new governor, which seems overkill and unfit
for the purpose.

Moreover, the logic in menu to decide whether it is necessary to stop
the sched tick is useful for us (so a default_idle_call is not
sufficient), because the cost of enabling/disabling the sched tick is
high on VMs.

So I'll address the comments on the cpuidle driver (which everyone seems
to agree with, except for your understandable distaste for it) and repost.





* Re: [patch 0/3] cpuidle-haltpoll driver (v2)
  2019-06-11 14:26       ` Marcelo Tosatti
@ 2019-06-11 21:24         ` Rafael J. Wysocki
  2019-06-17 15:57           ` Peter Zijlstra
  0 siblings, 1 reply; 10+ messages in thread
From: Rafael J. Wysocki @ 2019-06-11 21:24 UTC
  To: Marcelo Tosatti
  Cc: Rafael J. Wysocki, Rafael J. Wysocki, kvm-devel, Paolo Bonzini,
	Radim Krčmář,
	Andrea Arcangeli, Peter Zijlstra, Wanpeng Li,
	Konrad Rzeszutek Wilk, Raslan KarimAllah, Boris Ostrovsky,
	Ankur Arora, Christian Borntraeger, Linux PM

On Tue, Jun 11, 2019 at 4:27 PM Marcelo Tosatti <mtosatti@redhat.com> wrote:
>
> On Tue, Jun 11, 2019 at 12:03:26AM +0200, Rafael J. Wysocki wrote:
> > On Mon, Jun 10, 2019 at 5:00 PM Marcelo Tosatti <mtosatti@redhat.com> wrote:
> > >
> > > On Fri, Jun 07, 2019 at 11:49:51AM +0200, Rafael J. Wysocki wrote:
> > > > On 6/4/2019 12:52 AM, Marcelo Tosatti wrote:
> > > > >The cpuidle-haltpoll driver allows the guest vcpus to poll for a specified
> > > > >amount of time before halting. This provides the following benefits
> > > > >to host side polling:
> > > > >
> > > > >         1) The POLL flag is set while polling is performed, which allows
> > > > >            a remote vCPU to avoid sending an IPI (and the associated
> > > > >            cost of handling the IPI) when performing a wakeup.
> > > > >
> > > > >         2) The HLT VM-exit cost can be avoided.
> > > > >
> > > > >The downside of guest side polling is that polling is performed
> > > > >even with other runnable tasks in the host.
> > > > >
> > > > >Results comparing halt_poll_ns and server/client application
> > > > >where a small packet is ping-ponged:
> > > > >
> > > > >host                                        --> 31.33
> > > > >halt_poll_ns=300000 / no guest busy spin    --> 33.40   (93.8%)
> > > > >halt_poll_ns=0 / guest_halt_poll_ns=300000  --> 32.73   (95.7%)
> > > > >
> > > > >For the SAP HANA benchmarks (where idle_spin is a parameter
> > > > >of the previous version of the patch, results should be the
> > > > >same):
> > > > >
> > > > >hpns == halt_poll_ns
> > > > >
> > > > >                           idle_spin=0/   idle_spin=800/    idle_spin=0/
> > > > >                           hpns=200000    hpns=0            hpns=800000
> > > > >DeleteC06T03 (100 thread) 1.76           1.71 (-3%)        1.78   (+1%)
> > > > >InsertC16T02 (100 thread) 2.14           2.07 (-3%)        2.18   (+1.8%)
> > > > >DeleteC00T01 (1 thread)   1.34           1.28 (-4.5%)           1.29   (-3.7%)
> > > > >UpdateC00T03 (1 thread)   4.72           4.18 (-12%)    4.53   (-5%)
> > > > >
> > > > >V2:
> > > > >
> > > > >- Move from x86 to generic code (Paolo/Christian).
> > > > >- Add auto-tuning logic (Paolo).
> > > > >- Add MSR to disable host side polling (Paolo).
> > > > >
> > > > >
> > > > >
> > > > First of all, please CC power management patches (including cpuidle,
> > > > cpufreq etc) to linux-pm@vger.kernel.org (there are people on that
> > > > list who may want to see your changes before they go in) and CC
> > > > cpuidle material (in particular) to Peter Zijlstra.
> > > >
> > > > Second, I'm not a big fan of this approach to be honest, as it kind
> > > > of is a driver trying to play the role of a governor.
> > > >
> > > > We have a "polling state" already that could be used here in
> > > > principle so I wonder what would be wrong with that.  Also note that
> > > > there seems to be at least some code duplication between your code
> > > > and the "polling state" implementation, so maybe it would be
> > > > possible to do some things in a common way?
> > >
> > > Hi Rafael,
> > >
> > > After modifying poll_state.c to use a generic "poll time" driver
> > > callback [1] (since using a variable "target_residency" for that
> > > looks really ugly), we would need a governor which does:
> > >
> > > haltpoll_governor_select_next_state()
> > >         if (prev_state was poll and evt happened on prev poll window) -> POLL.
> > >         if (prev_state == HLT)  -> POLL
> > >         otherwise               -> HLT
> > >
> > > And a "default_idle" cpuidle driver that:
> > >
> > > defaultidle_idle()
> > >         if (current_clr_polling_and_test()) {
> > >                 local_irq_enable();
> > >                 return index;
> > >         }
> > >         default_idle();
> > >         return index;
> > >
> > > Using such a governor with any other cpuidle driver would
> > > be pointless (since it would only ever enter the first state
> > > and therefore not save power).
> > >
> > > Not certain about using the default_idle driver with
> > > other governors: one would rather use a driver that
> > > supports all states on a given machine.
> > >
> > > This governor/driver combination, introduced just for the sake of
> > > sharing the idle loop, seems awkward to me. It also defeats the
> > > governor/driver separation: in practice, one would always use the two
> > > as a pair.
> > >
> > > But I have no problem with it, so I'll proceed with that.
> > >
> > > Let me know otherwise.
> >
> > If my understanding of your argumentation is correct, it is only
> > necessary to take the default_idle_call() branch of
> > cpuidle_idle_call() in the VM case, so it should be sufficient to
> > provide a suitable default_idle_call() which is what you seem to be
> > trying to do.
>
> In the VM case, we need to poll before actually halting: it is tricky
> to implement MWAIT in guests, so polling for some amount of time allows
> the IPI avoidance optimization (see trace_sched_wake_idle_without_ipi)
> to take place.
>
> The amount of time we poll is variable and adjusted (see adjust_haltpoll_ns
> in the patchset).
>
> > I might have been confused by the terminology used in the patch series
> > if that's the case.
> >
> > Also, if that's the case, this is not really a cpuidle matter.  It is a
> > matter of providing a better default_idle_call() for the arch at hand.
>
> Peter Zijlstra suggested a cpuidle driver for this.

So I wonder what his rationale was.

> Also, other architectures will use the same "poll before exiting the VM"
> logic (so we'd rather avoid duplicating this code): PPC, x86, S/390,
> MIPS... So in my POV it makes sense to unify this.

The logic is fine IMO, but the implementation here is questionable.

> So, back to your initial suggestion:
>
> Q) "Can you unify code with poll_state.c?"
> A) Yes, but it requires a new governor, which seems overkill and unfit
> for the purpose.
>
> Moreover, the logic in menu to decide whether it is necessary to stop
> the sched tick is useful for us (so a default_idle_call is not
> sufficient), because the cost of enabling/disabling the sched tick is
> high on VMs.

So in fact you need a governor, but you really only need it to decide
whether or not to stop the tick for you.

menu has quite a high overhead for that. :-)

> So I'll address the comments on the cpuidle driver (which everyone seems
> to agree with, except for your understandable distaste for it) and repost.


* Re: [patch 0/3] cpuidle-haltpoll driver (v2)
  2019-06-11 21:24         ` Rafael J. Wysocki
@ 2019-06-17 15:57           ` Peter Zijlstra
  0 siblings, 0 replies; 10+ messages in thread
From: Peter Zijlstra @ 2019-06-17 15:57 UTC
  To: Rafael J. Wysocki
  Cc: Marcelo Tosatti, Rafael J. Wysocki, kvm-devel, Paolo Bonzini,
	Radim Krčmář,
	Andrea Arcangeli, Wanpeng Li, Konrad Rzeszutek Wilk,
	Raslan KarimAllah, Boris Ostrovsky, Ankur Arora,
	Christian Borntraeger, Linux PM

On Tue, Jun 11, 2019 at 11:24:39PM +0200, Rafael J. Wysocki wrote:
> > Peter Zijlstra suggested a cpuidle driver for this.
> 
> So I wonder what his rationale was.

I was thinking we don't need this hard-coded in the idle loop when virt
can load a special driver.

