* [PATCH V4 0/5] powerpc/cpuidle: Generic POWERPC cpuidle driver enabled for POWER and POWERNV platforms
@ 2013-08-22  5:29 Deepthi Dharwar
  2013-08-22  5:30 ` [PATCH V4 1/5] pseries/cpuidle: Remove dependency of pseries.h file Deepthi Dharwar
                   ` (4 more replies)
  0 siblings, 5 replies; 8+ messages in thread
From: Deepthi Dharwar @ 2013-08-22  5:29 UTC (permalink / raw)
  To: linux-pm, linuxppc-dev, linux-kernel
  Cc: rjw, daniel.lezcano, dongsheng.wang, preeti, srivatsa.bhat, scottwood

This patch series consolidates the backend cpuidle driver for pSeries
and powernv platforms with minimal code duplication.

The existing backend driver for pseries has been moved to drivers/cpuidle
and extended to accommodate the powernv idle power management states.
As seen in V1 of this patch series, having a separate powernv backend driver
results in too much code duplication, which is less elegant and can pose
maintenance problems going forward.

Using the cpuidle framework to manage platform low-power idle states lets
us take advantage of the advanced heuristics, tunables and features provided
by the framework. The statistics and tracing infrastructure provided by the
cpuidle framework also helps enable power management tools and makes it
easier to tune the system and applications.
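
As a side note, the per-state counters exported by cpuidle can be read
straight out of sysfs; a minimal user-space sketch in C (assuming the
standard /sys/devices/system/cpu/cpuN/cpuidle/stateM/ layout; not part of
this series) would look like:

	#include <stdio.h>

	int main(void)
	{
		char path[128], name[64];
		unsigned long long usage;
		int state;
		FILE *f;

		for (state = 0; ; state++) {
			/* State name, e.g. "snooze" or "CEDE" */
			snprintf(path, sizeof(path),
				 "/sys/devices/system/cpu/cpu0/cpuidle/state%d/name",
				 state);
			f = fopen(path, "r");
			if (!f)
				break;	/* no more states */
			if (fscanf(f, "%63s", name) != 1)
				name[0] = '\0';
			fclose(f);

			/* Number of times this state was entered */
			snprintf(path, sizeof(path),
				 "/sys/devices/system/cpu/cpu0/cpuidle/state%d/usage",
				 state);
			f = fopen(path, "r");
			if (!f)
				break;
			if (fscanf(f, "%llu", &usage) != 1)
				usage = 0;
			fclose(f);

			printf("state%d (%s): entered %llu times\n",
			       state, name, usage);
		}
		return 0;
	}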

Earlier, in the 3.3 kernel, pSeries idle state management was modified to
exploit the cpuidle framework, and the end goal of this patch series is to
have the powernv platform also hook its idle states into cpuidle with minimal
code duplication between the two platforms. The result is a generic powerpc
backend driver, currently enabled for the pseries and powernv platforms, which
can be extended to accommodate other powerpc platforms in the future.

This series aims to maintain compatibility and functionality with the existing
pseries and powernv idle cpu management code. No new functions or idle states
are added as part of this series; the existing framework can be extended later
by adding more states.
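
As a rough illustration (not part of this series), adding a state would
mostly mean bumping MAX_IDLE_STATE_COUNT and appending another entry to the
relevant state table; the state name, latency and residency values below are
purely hypothetical, and the enter handler would still have to be written:

	{ /* Hypothetical deeper idle state -- illustrative values only */
		.name = "deep",
		.desc = "hypothetical deeper idle state",
		.flags = CPUIDLE_FLAG_TIME_VALID,
		.exit_latency = 100,		/* us, made up */
		.target_residency = 1000,	/* us, made up */
		.enter = &deep_idle_loop },	/* handler to be implemented */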

With this patch series, the powernv cpuidle functionality is on par with
pSeries idle management.

V1 -> http://lkml.org/lkml/2013/7/23/143
V2 -> https://lkml.org/lkml/2013/7/30/872
V3 -> http://comments.gmane.org/gmane.linux.ports.ppc.embedded/63093 

Changes in V4:
=============

* This patch series includes generic backend cpuidle driver cleanups,
  including replacing the separate driver and device initialisation
  routines with the cpuidle_register() function (a rough before/after
  sketch follows this list).

* Enable CPUIDLE framework only for POWER and POWERNV platforms.
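
Roughly, the cleanup replaces the old two-step registration done in
processor_idle.c with the combined helper; a trimmed before/after sketch,
with error handling dropped for brevity:

	/* Before: register the driver, then a device for each possible cpu */
	retval = cpuidle_register_driver(&powerpc_idle_driver);
	for_each_possible_cpu(i) {
		dev = per_cpu_ptr(powerpc_cpuidle_devices, i);
		dev->cpu = i;
		cpuidle_register_device(dev);
	}

	/* After: one call registers the driver and a device for each cpu */
	retval = cpuidle_register(&powerpc_idle_driver, NULL);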

Changes in V3:
=============

* This patch series does not include the smt-snooze-delay fixes.
  Those will be taken up later.

* Integrated the POWERPC driver into drivers/cpuidle and enabled it for
  all POWERPC platforms; it currently has PSERIES and POWERNV support.
  There are no compile-time flags in the .c file: this is one consolidated
  binary that does run-time platform detection and takes decisions
  accordingly (see the probe sketch after this list).

* Enabled the CPUIDLE framework for all of PPC64.
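
The run-time detection boils down to a probe along these lines (sketch
based on the driver code in patches 3/5 and 4/5):

	if (firmware_has_feature(FW_FEATURE_SPLPAR)) {
		/* pseries LPAR: pick shared vs. dedicated processor states */
		if (get_lppaca_is_shared_proc())
			cpuidle_state_table = shared_states;
		else
			cpuidle_state_table = dedicated_states;
	} else if (firmware_has_feature(FW_FEATURE_OPALv3)) {
		/* powernv: snooze/nap states */
		cpuidle_state_table = powernv_states;
	} else {
		return -ENODEV;
	}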

Changes in V2:
=============

* Merged the backend driver posted for powernv in V1 with the pSeries
  driver to create a single powerpc driver, but that version still relied
  on compile-time flags.

 Deepthi Dharwar (5):
      pseries/cpuidle: Remove dependency of pseries.h file
      pseries: Move plpar_wrapper.h to powerpc common include/asm location.
      powerpc/cpuidle: Generic powerpc backend cpuidle driver.
      powerpc/cpuidle: Enable powernv cpuidle support.
      powernv/cpuidle: Enable idle powernv cpu to call into the cpuidle framework.


 arch/powerpc/include/asm/paca.h                 |   23 +
 arch/powerpc/include/asm/plpar_wrappers.h       |  325 +++++++++++++++++++++
 arch/powerpc/include/asm/processor.h            |    2 
 arch/powerpc/platforms/powernv/setup.c          |   14 +
 arch/powerpc/platforms/pseries/Kconfig          |    9 -
 arch/powerpc/platforms/pseries/Makefile         |    1 
 arch/powerpc/platforms/pseries/cmm.c            |    3 
 arch/powerpc/platforms/pseries/dtl.c            |    3 
 arch/powerpc/platforms/pseries/hotplug-cpu.c    |    3 
 arch/powerpc/platforms/pseries/hvconsole.c      |    2 
 arch/powerpc/platforms/pseries/iommu.c          |    3 
 arch/powerpc/platforms/pseries/kexec.c          |    2 
 arch/powerpc/platforms/pseries/lpar.c           |    2 
 arch/powerpc/platforms/pseries/plpar_wrappers.h |  324 ---------------------
 arch/powerpc/platforms/pseries/processor_idle.c |  362 -----------------------
 arch/powerpc/platforms/pseries/pseries.h        |    3 
 arch/powerpc/platforms/pseries/setup.c          |    2 
 arch/powerpc/platforms/pseries/smp.c            |    2 
 drivers/cpuidle/Kconfig                         |    7 
 drivers/cpuidle/Makefile                        |    2 
 drivers/cpuidle/cpuidle-powerpc.c               |  335 +++++++++++++++++++++
 21 files changed, 716 insertions(+), 713 deletions(-)
 create mode 100644 arch/powerpc/include/asm/plpar_wrappers.h
 delete mode 100644 arch/powerpc/platforms/pseries/plpar_wrappers.h
 delete mode 100644 arch/powerpc/platforms/pseries/processor_idle.c
 create mode 100644 drivers/cpuidle/cpuidle-powerpc.c


-- Deepthi


* [PATCH V4 1/5] pseries/cpuidle: Remove dependency of pseries.h file
  2013-08-22  5:29 [PATCH V4 0/5] powerpc/cpuidle: Generic POWERPC cpuidle driver enabled for POWER and POWERNV platforms Deepthi Dharwar
@ 2013-08-22  5:30 ` Deepthi Dharwar
  2013-08-22  5:30 ` [PATCH V4 2/5] pseries: Move plpar_wrapper.h to powerpc common include/asm location Deepthi Dharwar
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Deepthi Dharwar @ 2013-08-22  5:30 UTC (permalink / raw)
  To: linux-pm, linuxppc-dev, linux-kernel
  Cc: rjw, daniel.lezcano, dongsheng.wang, preeti, srivatsa.bhat, scottwood

As part of the pseries_idle cleanup to make the backend driver code common
to both pseries and powernv, remove the non-essential smt_snooze_delay
declaration from the pseries.h header file and drop the inclusion of
pseries.h from pseries/processor_idle.c.

Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
---
 arch/powerpc/platforms/pseries/processor_idle.c |    1 -
 arch/powerpc/platforms/pseries/pseries.h        |    3 ---
 2 files changed, 4 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/processor_idle.c b/arch/powerpc/platforms/pseries/processor_idle.c
index 4644efa0..ca70279 100644
--- a/arch/powerpc/platforms/pseries/processor_idle.c
+++ b/arch/powerpc/platforms/pseries/processor_idle.c
@@ -20,7 +20,6 @@
 #include <asm/runlatch.h>
 
 #include "plpar_wrappers.h"
-#include "pseries.h"
 
 struct cpuidle_driver pseries_idle_driver = {
 	.name             = "pseries_idle",
diff --git a/arch/powerpc/platforms/pseries/pseries.h b/arch/powerpc/platforms/pseries/pseries.h
index c2a3a25..d1b07e6 100644
--- a/arch/powerpc/platforms/pseries/pseries.h
+++ b/arch/powerpc/platforms/pseries/pseries.h
@@ -60,9 +60,6 @@ extern struct device_node *dlpar_configure_connector(u32);
 extern int dlpar_attach_node(struct device_node *);
 extern int dlpar_detach_node(struct device_node *);
 
-/* Snooze Delay, pseries_idle */
-DECLARE_PER_CPU(long, smt_snooze_delay);
-
 /* PCI root bridge prepare function override for pseries */
 struct pci_host_bridge;
 int pseries_root_bridge_prepare(struct pci_host_bridge *bridge);


* [PATCH V4 2/5] pseries: Move plpar_wrapper.h to powerpc common include/asm location.
  2013-08-22  5:29 [PATCH V4 0/5] powerpc/cpuidle: Generic POWERPC cpuidle driver enabled for POWER and POWERNV platforms Deepthi Dharwar
  2013-08-22  5:30 ` [PATCH V4 1/5] pseries/cpuidle: Remove dependency of pseries.h file Deepthi Dharwar
@ 2013-08-22  5:30 ` Deepthi Dharwar
  2013-08-22  5:30 ` [PATCH V4 3/5] powerpc/cpuidle: Generic powerpc backend cpuidle driver Deepthi Dharwar
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Deepthi Dharwar @ 2013-08-22  5:30 UTC (permalink / raw)
  To: linux-pm, linuxppc-dev, linux-kernel
  Cc: rjw, daniel.lezcano, dongsheng.wang, preeti, srivatsa.bhat, scottwood

As part of the pseries_idle backend driver cleanup to make the code
common to both the pseries and powernv platforms, it is necessary to move
the backend driver code to drivers/cpuidle.

As a prerequisite for that, it is essential to move plpar_wrappers.h
to include/asm.

Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/plpar_wrappers.h       |  325 +++++++++++++++++++++++
 arch/powerpc/platforms/pseries/cmm.c            |    3 
 arch/powerpc/platforms/pseries/dtl.c            |    3 
 arch/powerpc/platforms/pseries/hotplug-cpu.c    |    3 
 arch/powerpc/platforms/pseries/hvconsole.c      |    2 
 arch/powerpc/platforms/pseries/iommu.c          |    3 
 arch/powerpc/platforms/pseries/kexec.c          |    2 
 arch/powerpc/platforms/pseries/lpar.c           |    2 
 arch/powerpc/platforms/pseries/plpar_wrappers.h |  324 -----------------------
 arch/powerpc/platforms/pseries/processor_idle.c |    3 
 arch/powerpc/platforms/pseries/setup.c          |    2 
 arch/powerpc/platforms/pseries/smp.c            |    2 
 12 files changed, 336 insertions(+), 338 deletions(-)
 create mode 100644 arch/powerpc/include/asm/plpar_wrappers.h
 delete mode 100644 arch/powerpc/platforms/pseries/plpar_wrappers.h

diff --git a/arch/powerpc/include/asm/plpar_wrappers.h b/arch/powerpc/include/asm/plpar_wrappers.h
new file mode 100644
index 0000000..e2f84d6
--- /dev/null
+++ b/arch/powerpc/include/asm/plpar_wrappers.h
@@ -0,0 +1,325 @@
+#ifndef _PSERIES_PLPAR_WRAPPERS_H
+#define _PSERIES_PLPAR_WRAPPERS_H
+
+#include <linux/string.h>
+#include <linux/irqflags.h>
+
+#include <asm/hvcall.h>
+#include <asm/paca.h>
+#include <asm/page.h>
+
+/* Get state of physical CPU from query_cpu_stopped */
+int smp_query_cpu_stopped(unsigned int pcpu);
+#define QCSS_STOPPED 0
+#define QCSS_STOPPING 1
+#define QCSS_NOT_STOPPED 2
+#define QCSS_HARDWARE_ERROR -1
+#define QCSS_HARDWARE_BUSY -2
+
+static inline long poll_pending(void)
+{
+	return plpar_hcall_norets(H_POLL_PENDING);
+}
+
+static inline u8 get_cede_latency_hint(void)
+{
+	return get_lppaca()->cede_latency_hint;
+}
+
+static inline void set_cede_latency_hint(u8 latency_hint)
+{
+	get_lppaca()->cede_latency_hint = latency_hint;
+}
+
+static inline long cede_processor(void)
+{
+	return plpar_hcall_norets(H_CEDE);
+}
+
+static inline long extended_cede_processor(unsigned long latency_hint)
+{
+	long rc;
+	u8 old_latency_hint = get_cede_latency_hint();
+
+	set_cede_latency_hint(latency_hint);
+
+	rc = cede_processor();
+#ifdef CONFIG_TRACE_IRQFLAGS
+		/* Ensure that H_CEDE returns with IRQs on */
+		if (WARN_ON(!(mfmsr() & MSR_EE)))
+			__hard_irq_enable();
+#endif
+
+	set_cede_latency_hint(old_latency_hint);
+
+	return rc;
+}
+
+static inline long vpa_call(unsigned long flags, unsigned long cpu,
+		unsigned long vpa)
+{
+	flags = flags << H_VPA_FUNC_SHIFT;
+
+	return plpar_hcall_norets(H_REGISTER_VPA, flags, cpu, vpa);
+}
+
+static inline long unregister_vpa(unsigned long cpu)
+{
+	return vpa_call(H_VPA_DEREG_VPA, cpu, 0);
+}
+
+static inline long register_vpa(unsigned long cpu, unsigned long vpa)
+{
+	return vpa_call(H_VPA_REG_VPA, cpu, vpa);
+}
+
+static inline long unregister_slb_shadow(unsigned long cpu)
+{
+	return vpa_call(H_VPA_DEREG_SLB, cpu, 0);
+}
+
+static inline long register_slb_shadow(unsigned long cpu, unsigned long vpa)
+{
+	return vpa_call(H_VPA_REG_SLB, cpu, vpa);
+}
+
+static inline long unregister_dtl(unsigned long cpu)
+{
+	return vpa_call(H_VPA_DEREG_DTL, cpu, 0);
+}
+
+static inline long register_dtl(unsigned long cpu, unsigned long vpa)
+{
+	return vpa_call(H_VPA_REG_DTL, cpu, vpa);
+}
+
+static inline long plpar_page_set_loaned(unsigned long vpa)
+{
+	unsigned long cmo_page_sz = cmo_get_page_size();
+	long rc = 0;
+	int i;
+
+	for (i = 0; !rc && i < PAGE_SIZE; i += cmo_page_sz)
+		rc = plpar_hcall_norets(H_PAGE_INIT, H_PAGE_SET_LOANED, vpa + i, 0);
+
+	for (i -= cmo_page_sz; rc && i != 0; i -= cmo_page_sz)
+		plpar_hcall_norets(H_PAGE_INIT, H_PAGE_SET_ACTIVE,
+				   vpa + i - cmo_page_sz, 0);
+
+	return rc;
+}
+
+static inline long plpar_page_set_active(unsigned long vpa)
+{
+	unsigned long cmo_page_sz = cmo_get_page_size();
+	long rc = 0;
+	int i;
+
+	for (i = 0; !rc && i < PAGE_SIZE; i += cmo_page_sz)
+		rc = plpar_hcall_norets(H_PAGE_INIT, H_PAGE_SET_ACTIVE, vpa + i, 0);
+
+	for (i -= cmo_page_sz; rc && i != 0; i -= cmo_page_sz)
+		plpar_hcall_norets(H_PAGE_INIT, H_PAGE_SET_LOANED,
+				   vpa + i - cmo_page_sz, 0);
+
+	return rc;
+}
+
+extern void vpa_init(int cpu);
+
+static inline long plpar_pte_enter(unsigned long flags,
+		unsigned long hpte_group, unsigned long hpte_v,
+		unsigned long hpte_r, unsigned long *slot)
+{
+	long rc;
+	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
+
+	rc = plpar_hcall(H_ENTER, retbuf, flags, hpte_group, hpte_v, hpte_r);
+
+	*slot = retbuf[0];
+
+	return rc;
+}
+
+static inline long plpar_pte_remove(unsigned long flags, unsigned long ptex,
+		unsigned long avpn, unsigned long *old_pteh_ret,
+		unsigned long *old_ptel_ret)
+{
+	long rc;
+	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
+
+	rc = plpar_hcall(H_REMOVE, retbuf, flags, ptex, avpn);
+
+	*old_pteh_ret = retbuf[0];
+	*old_ptel_ret = retbuf[1];
+
+	return rc;
+}
+
+/* plpar_pte_remove_raw can be called in real mode. It calls plpar_hcall_raw */
+static inline long plpar_pte_remove_raw(unsigned long flags, unsigned long ptex,
+		unsigned long avpn, unsigned long *old_pteh_ret,
+		unsigned long *old_ptel_ret)
+{
+	long rc;
+	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
+
+	rc = plpar_hcall_raw(H_REMOVE, retbuf, flags, ptex, avpn);
+
+	*old_pteh_ret = retbuf[0];
+	*old_ptel_ret = retbuf[1];
+
+	return rc;
+}
+
+static inline long plpar_pte_read(unsigned long flags, unsigned long ptex,
+		unsigned long *old_pteh_ret, unsigned long *old_ptel_ret)
+{
+	long rc;
+	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
+
+	rc = plpar_hcall(H_READ, retbuf, flags, ptex);
+
+	*old_pteh_ret = retbuf[0];
+	*old_ptel_ret = retbuf[1];
+
+	return rc;
+}
+
+/* plpar_pte_read_raw can be called in real mode. It calls plpar_hcall_raw */
+static inline long plpar_pte_read_raw(unsigned long flags, unsigned long ptex,
+		unsigned long *old_pteh_ret, unsigned long *old_ptel_ret)
+{
+	long rc;
+	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
+
+	rc = plpar_hcall_raw(H_READ, retbuf, flags, ptex);
+
+	*old_pteh_ret = retbuf[0];
+	*old_ptel_ret = retbuf[1];
+
+	return rc;
+}
+
+/*
+ * plpar_pte_read_4_raw can be called in real mode.
+ * ptes must be 8*sizeof(unsigned long)
+ */
+static inline long plpar_pte_read_4_raw(unsigned long flags, unsigned long ptex,
+					unsigned long *ptes)
+
+{
+	long rc;
+	unsigned long retbuf[PLPAR_HCALL9_BUFSIZE];
+
+	rc = plpar_hcall9_raw(H_READ, retbuf, flags | H_READ_4, ptex);
+
+	memcpy(ptes, retbuf, 8*sizeof(unsigned long));
+
+	return rc;
+}
+
+static inline long plpar_pte_protect(unsigned long flags, unsigned long ptex,
+		unsigned long avpn)
+{
+	return plpar_hcall_norets(H_PROTECT, flags, ptex, avpn);
+}
+
+static inline long plpar_tce_get(unsigned long liobn, unsigned long ioba,
+		unsigned long *tce_ret)
+{
+	long rc;
+	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
+
+	rc = plpar_hcall(H_GET_TCE, retbuf, liobn, ioba);
+
+	*tce_ret = retbuf[0];
+
+	return rc;
+}
+
+static inline long plpar_tce_put(unsigned long liobn, unsigned long ioba,
+		unsigned long tceval)
+{
+	return plpar_hcall_norets(H_PUT_TCE, liobn, ioba, tceval);
+}
+
+static inline long plpar_tce_put_indirect(unsigned long liobn,
+		unsigned long ioba, unsigned long page, unsigned long count)
+{
+	return plpar_hcall_norets(H_PUT_TCE_INDIRECT, liobn, ioba, page, count);
+}
+
+static inline long plpar_tce_stuff(unsigned long liobn, unsigned long ioba,
+		unsigned long tceval, unsigned long count)
+{
+	return plpar_hcall_norets(H_STUFF_TCE, liobn, ioba, tceval, count);
+}
+
+static inline long plpar_get_term_char(unsigned long termno,
+		unsigned long *len_ret, char *buf_ret)
+{
+	long rc;
+	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
+	unsigned long *lbuf = (unsigned long *)buf_ret;	/* TODO: alignment? */
+
+	rc = plpar_hcall(H_GET_TERM_CHAR, retbuf, termno);
+
+	*len_ret = retbuf[0];
+	lbuf[0] = retbuf[1];
+	lbuf[1] = retbuf[2];
+
+	return rc;
+}
+
+static inline long plpar_put_term_char(unsigned long termno, unsigned long len,
+		const char *buffer)
+{
+	unsigned long *lbuf = (unsigned long *)buffer;	/* TODO: alignment? */
+	return plpar_hcall_norets(H_PUT_TERM_CHAR, termno, len, lbuf[0],
+			lbuf[1]);
+}
+
+/* Set various resource mode parameters */
+static inline long plpar_set_mode(unsigned long mflags, unsigned long resource,
+		unsigned long value1, unsigned long value2)
+{
+	return plpar_hcall_norets(H_SET_MODE, mflags, resource, value1, value2);
+}
+
+/*
+ * Enable relocation on exceptions on this partition
+ *
+ * Note: this call has a partition wide scope and can take a while to complete.
+ * If it returns H_LONG_BUSY_* it should be retried periodically until it
+ * returns H_SUCCESS.
+ */
+static inline long enable_reloc_on_exceptions(void)
+{
+	/* mflags = 3: Exceptions at 0xC000000000004000 */
+	return plpar_set_mode(3, 3, 0, 0);
+}
+
+/*
+ * Disable relocation on exceptions on this partition
+ *
+ * Note: this call has a partition wide scope and can take a while to complete.
+ * If it returns H_LONG_BUSY_* it should be retried periodically until it
+ * returns H_SUCCESS.
+ */
+static inline long disable_reloc_on_exceptions(void)
+{
+	return plpar_set_mode(0, 3, 0, 0);
+}
+
+static inline long plapr_set_ciabr(unsigned long ciabr)
+{
+	return plpar_set_mode(0, 1, ciabr, 0);
+}
+
+static inline long plapr_set_watchpoint0(unsigned long dawr0, unsigned long dawrx0)
+{
+	return plpar_set_mode(0, 2, dawr0, dawrx0);
+}
+
+#endif /* _PSERIES_PLPAR_WRAPPERS_H */
diff --git a/arch/powerpc/platforms/pseries/cmm.c b/arch/powerpc/platforms/pseries/cmm.c
index c638535..1e561be 100644
--- a/arch/powerpc/platforms/pseries/cmm.c
+++ b/arch/powerpc/platforms/pseries/cmm.c
@@ -40,8 +40,7 @@
 #include <asm/pgalloc.h>
 #include <asm/uaccess.h>
 #include <linux/memory.h>
-
-#include "plpar_wrappers.h"
+#include <asm/plpar_wrappers.h>
 
 #define CMM_DRIVER_VERSION	"1.0.0"
 #define CMM_DEFAULT_DELAY	1
diff --git a/arch/powerpc/platforms/pseries/dtl.c b/arch/powerpc/platforms/pseries/dtl.c
index 0cc0ac0..f6cb051 100644
--- a/arch/powerpc/platforms/pseries/dtl.c
+++ b/arch/powerpc/platforms/pseries/dtl.c
@@ -29,8 +29,7 @@
 #include <asm/firmware.h>
 #include <asm/lppaca.h>
 #include <asm/debug.h>
-
-#include "plpar_wrappers.h"
+#include <asm/plpar_wrappers.h>
 
 struct dtl {
 	struct dtl_entry	*buf;
diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
index 217ca5c..a8ef932 100644
--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
+++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
@@ -30,7 +30,8 @@
 #include <asm/machdep.h>
 #include <asm/vdso_datapage.h>
 #include <asm/xics.h>
-#include "plpar_wrappers.h"
+#include <asm/plpar_wrappers.h>
+
 #include "offline_states.h"
 
 /* This version can't take the spinlock, because it never returns */
diff --git a/arch/powerpc/platforms/pseries/hvconsole.c b/arch/powerpc/platforms/pseries/hvconsole.c
index b344f94..f3f108b 100644
--- a/arch/powerpc/platforms/pseries/hvconsole.c
+++ b/arch/powerpc/platforms/pseries/hvconsole.c
@@ -28,7 +28,7 @@
 #include <linux/errno.h>
 #include <asm/hvcall.h>
 #include <asm/hvconsole.h>
-#include "plpar_wrappers.h"
+#include <asm/plpar_wrappers.h>
 
 /**
  * hvc_get_chars - retrieve characters from firmware for denoted vterm adatper
diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index 23fc1dc..4821933 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -48,8 +48,7 @@
 #include <asm/ppc-pci.h>
 #include <asm/udbg.h>
 #include <asm/mmzone.h>
-
-#include "plpar_wrappers.h"
+#include <asm/plpar_wrappers.h>
 
 
 static void tce_invalidate_pSeries_sw(struct iommu_table *tbl,
diff --git a/arch/powerpc/platforms/pseries/kexec.c b/arch/powerpc/platforms/pseries/kexec.c
index 7d94bdc..13fa95b3 100644
--- a/arch/powerpc/platforms/pseries/kexec.c
+++ b/arch/powerpc/platforms/pseries/kexec.c
@@ -17,9 +17,9 @@
 #include <asm/mpic.h>
 #include <asm/xics.h>
 #include <asm/smp.h>
+#include <asm/plpar_wrappers.h>
 
 #include "pseries.h"
-#include "plpar_wrappers.h"
 
 static void pseries_kexec_cpu_down(int crash_shutdown, int secondary)
 {
diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 8bad880..e1873bc 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -41,8 +41,8 @@
 #include <asm/smp.h>
 #include <asm/trace.h>
 #include <asm/firmware.h>
+#include <asm/plpar_wrappers.h>
 
-#include "plpar_wrappers.h"
 #include "pseries.h"
 
 /* Flag bits for H_BULK_REMOVE */
diff --git a/arch/powerpc/platforms/pseries/plpar_wrappers.h b/arch/powerpc/platforms/pseries/plpar_wrappers.h
deleted file mode 100644
index f35787b..0000000
--- a/arch/powerpc/platforms/pseries/plpar_wrappers.h
+++ /dev/null
@@ -1,324 +0,0 @@
-#ifndef _PSERIES_PLPAR_WRAPPERS_H
-#define _PSERIES_PLPAR_WRAPPERS_H
-
-#include <linux/string.h>
-#include <linux/irqflags.h>
-
-#include <asm/hvcall.h>
-#include <asm/paca.h>
-#include <asm/page.h>
-
-/* Get state of physical CPU from query_cpu_stopped */
-int smp_query_cpu_stopped(unsigned int pcpu);
-#define QCSS_STOPPED 0
-#define QCSS_STOPPING 1
-#define QCSS_NOT_STOPPED 2
-#define QCSS_HARDWARE_ERROR -1
-#define QCSS_HARDWARE_BUSY -2
-
-static inline long poll_pending(void)
-{
-	return plpar_hcall_norets(H_POLL_PENDING);
-}
-
-static inline u8 get_cede_latency_hint(void)
-{
-	return get_lppaca()->cede_latency_hint;
-}
-
-static inline void set_cede_latency_hint(u8 latency_hint)
-{
-	get_lppaca()->cede_latency_hint = latency_hint;
-}
-
-static inline long cede_processor(void)
-{
-	return plpar_hcall_norets(H_CEDE);
-}
-
-static inline long extended_cede_processor(unsigned long latency_hint)
-{
-	long rc;
-	u8 old_latency_hint = get_cede_latency_hint();
-
-	set_cede_latency_hint(latency_hint);
-
-	rc = cede_processor();
-#ifdef CONFIG_TRACE_IRQFLAGS
-		/* Ensure that H_CEDE returns with IRQs on */
-		if (WARN_ON(!(mfmsr() & MSR_EE)))
-			__hard_irq_enable();
-#endif
-
-	set_cede_latency_hint(old_latency_hint);
-
-	return rc;
-}
-
-static inline long vpa_call(unsigned long flags, unsigned long cpu,
-		unsigned long vpa)
-{
-	flags = flags << H_VPA_FUNC_SHIFT;
-
-	return plpar_hcall_norets(H_REGISTER_VPA, flags, cpu, vpa);
-}
-
-static inline long unregister_vpa(unsigned long cpu)
-{
-	return vpa_call(H_VPA_DEREG_VPA, cpu, 0);
-}
-
-static inline long register_vpa(unsigned long cpu, unsigned long vpa)
-{
-	return vpa_call(H_VPA_REG_VPA, cpu, vpa);
-}
-
-static inline long unregister_slb_shadow(unsigned long cpu)
-{
-	return vpa_call(H_VPA_DEREG_SLB, cpu, 0);
-}
-
-static inline long register_slb_shadow(unsigned long cpu, unsigned long vpa)
-{
-	return vpa_call(H_VPA_REG_SLB, cpu, vpa);
-}
-
-static inline long unregister_dtl(unsigned long cpu)
-{
-	return vpa_call(H_VPA_DEREG_DTL, cpu, 0);
-}
-
-static inline long register_dtl(unsigned long cpu, unsigned long vpa)
-{
-	return vpa_call(H_VPA_REG_DTL, cpu, vpa);
-}
-
-static inline long plpar_page_set_loaned(unsigned long vpa)
-{
-	unsigned long cmo_page_sz = cmo_get_page_size();
-	long rc = 0;
-	int i;
-
-	for (i = 0; !rc && i < PAGE_SIZE; i += cmo_page_sz)
-		rc = plpar_hcall_norets(H_PAGE_INIT, H_PAGE_SET_LOANED, vpa + i, 0);
-
-	for (i -= cmo_page_sz; rc && i != 0; i -= cmo_page_sz)
-		plpar_hcall_norets(H_PAGE_INIT, H_PAGE_SET_ACTIVE,
-				   vpa + i - cmo_page_sz, 0);
-
-	return rc;
-}
-
-static inline long plpar_page_set_active(unsigned long vpa)
-{
-	unsigned long cmo_page_sz = cmo_get_page_size();
-	long rc = 0;
-	int i;
-
-	for (i = 0; !rc && i < PAGE_SIZE; i += cmo_page_sz)
-		rc = plpar_hcall_norets(H_PAGE_INIT, H_PAGE_SET_ACTIVE, vpa + i, 0);
-
-	for (i -= cmo_page_sz; rc && i != 0; i -= cmo_page_sz)
-		plpar_hcall_norets(H_PAGE_INIT, H_PAGE_SET_LOANED,
-				   vpa + i - cmo_page_sz, 0);
-
-	return rc;
-}
-
-extern void vpa_init(int cpu);
-
-static inline long plpar_pte_enter(unsigned long flags,
-		unsigned long hpte_group, unsigned long hpte_v,
-		unsigned long hpte_r, unsigned long *slot)
-{
-	long rc;
-	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
-
-	rc = plpar_hcall(H_ENTER, retbuf, flags, hpte_group, hpte_v, hpte_r);
-
-	*slot = retbuf[0];
-
-	return rc;
-}
-
-static inline long plpar_pte_remove(unsigned long flags, unsigned long ptex,
-		unsigned long avpn, unsigned long *old_pteh_ret,
-		unsigned long *old_ptel_ret)
-{
-	long rc;
-	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
-
-	rc = plpar_hcall(H_REMOVE, retbuf, flags, ptex, avpn);
-
-	*old_pteh_ret = retbuf[0];
-	*old_ptel_ret = retbuf[1];
-
-	return rc;
-}
-
-/* plpar_pte_remove_raw can be called in real mode. It calls plpar_hcall_raw */
-static inline long plpar_pte_remove_raw(unsigned long flags, unsigned long ptex,
-		unsigned long avpn, unsigned long *old_pteh_ret,
-		unsigned long *old_ptel_ret)
-{
-	long rc;
-	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
-
-	rc = plpar_hcall_raw(H_REMOVE, retbuf, flags, ptex, avpn);
-
-	*old_pteh_ret = retbuf[0];
-	*old_ptel_ret = retbuf[1];
-
-	return rc;
-}
-
-static inline long plpar_pte_read(unsigned long flags, unsigned long ptex,
-		unsigned long *old_pteh_ret, unsigned long *old_ptel_ret)
-{
-	long rc;
-	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
-
-	rc = plpar_hcall(H_READ, retbuf, flags, ptex);
-
-	*old_pteh_ret = retbuf[0];
-	*old_ptel_ret = retbuf[1];
-
-	return rc;
-}
-
-/* plpar_pte_read_raw can be called in real mode. It calls plpar_hcall_raw */
-static inline long plpar_pte_read_raw(unsigned long flags, unsigned long ptex,
-		unsigned long *old_pteh_ret, unsigned long *old_ptel_ret)
-{
-	long rc;
-	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
-
-	rc = plpar_hcall_raw(H_READ, retbuf, flags, ptex);
-
-	*old_pteh_ret = retbuf[0];
-	*old_ptel_ret = retbuf[1];
-
-	return rc;
-}
-
-/*
- * plpar_pte_read_4_raw can be called in real mode.
- * ptes must be 8*sizeof(unsigned long)
- */
-static inline long plpar_pte_read_4_raw(unsigned long flags, unsigned long ptex,
-					unsigned long *ptes)
-
-{
-	long rc;
-	unsigned long retbuf[PLPAR_HCALL9_BUFSIZE];
-
-	rc = plpar_hcall9_raw(H_READ, retbuf, flags | H_READ_4, ptex);
-
-	memcpy(ptes, retbuf, 8*sizeof(unsigned long));
-
-	return rc;
-}
-
-static inline long plpar_pte_protect(unsigned long flags, unsigned long ptex,
-		unsigned long avpn)
-{
-	return plpar_hcall_norets(H_PROTECT, flags, ptex, avpn);
-}
-
-static inline long plpar_tce_get(unsigned long liobn, unsigned long ioba,
-		unsigned long *tce_ret)
-{
-	long rc;
-	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
-
-	rc = plpar_hcall(H_GET_TCE, retbuf, liobn, ioba);
-
-	*tce_ret = retbuf[0];
-
-	return rc;
-}
-
-static inline long plpar_tce_put(unsigned long liobn, unsigned long ioba,
-		unsigned long tceval)
-{
-	return plpar_hcall_norets(H_PUT_TCE, liobn, ioba, tceval);
-}
-
-static inline long plpar_tce_put_indirect(unsigned long liobn,
-		unsigned long ioba, unsigned long page, unsigned long count)
-{
-	return plpar_hcall_norets(H_PUT_TCE_INDIRECT, liobn, ioba, page, count);
-}
-
-static inline long plpar_tce_stuff(unsigned long liobn, unsigned long ioba,
-		unsigned long tceval, unsigned long count)
-{
-	return plpar_hcall_norets(H_STUFF_TCE, liobn, ioba, tceval, count);
-}
-
-static inline long plpar_get_term_char(unsigned long termno,
-		unsigned long *len_ret, char *buf_ret)
-{
-	long rc;
-	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
-	unsigned long *lbuf = (unsigned long *)buf_ret;	/* TODO: alignment? */
-
-	rc = plpar_hcall(H_GET_TERM_CHAR, retbuf, termno);
-
-	*len_ret = retbuf[0];
-	lbuf[0] = retbuf[1];
-	lbuf[1] = retbuf[2];
-
-	return rc;
-}
-
-static inline long plpar_put_term_char(unsigned long termno, unsigned long len,
-		const char *buffer)
-{
-	unsigned long *lbuf = (unsigned long *)buffer;	/* TODO: alignment? */
-	return plpar_hcall_norets(H_PUT_TERM_CHAR, termno, len, lbuf[0],
-			lbuf[1]);
-}
-
-/* Set various resource mode parameters */
-static inline long plpar_set_mode(unsigned long mflags, unsigned long resource,
-		unsigned long value1, unsigned long value2)
-{
-	return plpar_hcall_norets(H_SET_MODE, mflags, resource, value1, value2);
-}
-
-/*
- * Enable relocation on exceptions on this partition
- *
- * Note: this call has a partition wide scope and can take a while to complete.
- * If it returns H_LONG_BUSY_* it should be retried periodically until it
- * returns H_SUCCESS.
- */
-static inline long enable_reloc_on_exceptions(void)
-{
-	/* mflags = 3: Exceptions at 0xC000000000004000 */
-	return plpar_set_mode(3, 3, 0, 0);
-}
-
-/*
- * Disable relocation on exceptions on this partition
- *
- * Note: this call has a partition wide scope and can take a while to complete.
- * If it returns H_LONG_BUSY_* it should be retried periodically until it
- * returns H_SUCCESS.
- */
-static inline long disable_reloc_on_exceptions(void) {
-	return plpar_set_mode(0, 3, 0, 0);
-}
-
-static inline long plapr_set_ciabr(unsigned long ciabr)
-{
-	return plpar_set_mode(0, 1, ciabr, 0);
-}
-
-static inline long plapr_set_watchpoint0(unsigned long dawr0, unsigned long dawrx0)
-{
-	return plpar_set_mode(0, 2, dawr0, dawrx0);
-}
-
-#endif /* _PSERIES_PLPAR_WRAPPERS_H */
diff --git a/arch/powerpc/platforms/pseries/processor_idle.c b/arch/powerpc/platforms/pseries/processor_idle.c
index ca70279..c905b99 100644
--- a/arch/powerpc/platforms/pseries/processor_idle.c
+++ b/arch/powerpc/platforms/pseries/processor_idle.c
@@ -18,8 +18,7 @@
 #include <asm/machdep.h>
 #include <asm/firmware.h>
 #include <asm/runlatch.h>
-
-#include "plpar_wrappers.h"
+#include <asm/plpar_wrappers.h>
 
 struct cpuidle_driver pseries_idle_driver = {
 	.name             = "pseries_idle",
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index c11c823..4291589 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -66,8 +66,8 @@
 #include <asm/firmware.h>
 #include <asm/eeh.h>
 #include <asm/reg.h>
+#include <asm/plpar_wrappers.h>
 
-#include "plpar_wrappers.h"
 #include "pseries.h"
 
 int CMO_PrPSP = -1;
diff --git a/arch/powerpc/platforms/pseries/smp.c b/arch/powerpc/platforms/pseries/smp.c
index 306643c..1c79af7 100644
--- a/arch/powerpc/platforms/pseries/smp.c
+++ b/arch/powerpc/platforms/pseries/smp.c
@@ -43,8 +43,8 @@
 #include <asm/cputhreads.h>
 #include <asm/xics.h>
 #include <asm/dbell.h>
+#include <asm/plpar_wrappers.h>
 
-#include "plpar_wrappers.h"
 #include "pseries.h"
 #include "offline_states.h"
 


* [PATCH V4 3/5] powerpc/cpuidle: Generic powerpc backend cpuidle driver.
  2013-08-22  5:29 [PATCH V4 0/5] powerpc/cpuidle: Generic POWERPC cpuidle driver enabled for POWER and POWERNV platforms Deepthi Dharwar
  2013-08-22  5:30 ` [PATCH V4 1/5] pseries/cpuidle: Remove dependency of pseries.h file Deepthi Dharwar
  2013-08-22  5:30 ` [PATCH V4 2/5] pseries: Move plpar_wrapper.h to powerpc common include/asm location Deepthi Dharwar
@ 2013-08-22  5:30 ` Deepthi Dharwar
  2013-08-22 10:56   ` Bartlomiej Zolnierkiewicz
  2013-08-22  5:30 ` [PATCH V4 4/5] powerpc/cpuidle: Enable powernv cpuidle support Deepthi Dharwar
  2013-08-22  5:30 ` [PATCH V4 5/5] powernv/cpuidle: Enable idle powernv cpu to call into the cpuidle framework Deepthi Dharwar
  4 siblings, 1 reply; 8+ messages in thread
From: Deepthi Dharwar @ 2013-08-22  5:30 UTC (permalink / raw)
  To: linux-pm, linuxppc-dev, linux-kernel
  Cc: rjw, daniel.lezcano, dongsheng.wang, preeti, srivatsa.bhat, scottwood

This patch moves the current pseries_idle backend driver code from
pseries/processor_idle.c to drivers/cpuidle/cpuidle-powerpc.c, and makes
the backend code generic enough to be extended to both powernv and pseries.

It enables support for the pseries platform such that, going forward, the
same code can be re-used with minimal effort as a common driver on powernv,
and can be further extended to support cpuidle idle state management on the
rest of the powerpc platforms in the future. This removes a lot of code
duplication and makes the code easier to maintain.

Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/paca.h                 |   23 +
 arch/powerpc/include/asm/processor.h            |    2 
 arch/powerpc/platforms/pseries/Kconfig          |    9 -
 arch/powerpc/platforms/pseries/Makefile         |    1 
 arch/powerpc/platforms/pseries/processor_idle.c |  360 -----------------------
 drivers/cpuidle/Kconfig                         |    7 
 drivers/cpuidle/Makefile                        |    2 
 drivers/cpuidle/cpuidle-powerpc.c               |  304 +++++++++++++++++++
 8 files changed, 337 insertions(+), 371 deletions(-)
 delete mode 100644 arch/powerpc/platforms/pseries/processor_idle.c
 create mode 100644 drivers/cpuidle/cpuidle-powerpc.c

diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index 77c91e7..7bd83ff 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -175,6 +175,29 @@ extern void setup_paca(struct paca_struct *new_paca);
 extern void allocate_pacas(void);
 extern void free_unused_pacas(void);
 
+#ifdef CONFIG_PPC_BOOK3S
+#define get_lppaca_is_shared_proc()  get_paca()->lppaca_ptr->shared_proc
+static inline void set_lppaca_idle(u8  idle)
+{
+	get_paca()->lppaca_ptr->idle = idle;
+}
+
+static inline void add_lppaca_wait_state(u64 cycles)
+{
+	get_paca()->lppaca_ptr->wait_state_cycles += cycles;
+}
+
+static inline void set_lppaca_donate_dedicated_cpu(u8 value)
+{
+	get_paca()->lppaca_ptr->donate_dedicated_cpu = value;
+}
+#else
+#define get_lppaca_is_shared_proc()	-1
+static inline void set_lppaca_idle(u8 idle) { }
+static inline void  add_lppaca_wait_state(u64 cycles) { }
+static inline void  set_lppaca_donate_dedicated_cpu(u8 value) { }
+#endif
+
 #else /* CONFIG_PPC64 */
 
 static inline void allocate_pacas(void) { };
diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index e378ccc..5f57c56 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -430,7 +430,7 @@ enum idle_boot_override {IDLE_NO_OVERRIDE = 0, IDLE_POWERSAVE_OFF};
 extern int powersave_nap;	/* set if nap mode can be used in idle loop */
 extern void power7_nap(void);
 
-#ifdef CONFIG_PSERIES_IDLE
+#ifdef CONFIG_CPU_IDLE_POWERPC
 extern void update_smt_snooze_delay(int cpu, int residency);
 #else
 static inline void update_smt_snooze_delay(int cpu, int residency) {}
diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index 62b4f80..bb59bb0 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -119,12 +119,3 @@ config DTL
 	  which are accessible through a debugfs file.
 
 	  Say N if you are unsure.
-
-config PSERIES_IDLE
-	bool "Cpuidle driver for pSeries platforms"
-	depends on CPU_IDLE
-	depends on PPC_PSERIES
-	default y
-	help
-	  Select this option to enable processor idle state management
-	  through cpuidle subsystem.
diff --git a/arch/powerpc/platforms/pseries/Makefile b/arch/powerpc/platforms/pseries/Makefile
index 8ae0103..4b22379 100644
--- a/arch/powerpc/platforms/pseries/Makefile
+++ b/arch/powerpc/platforms/pseries/Makefile
@@ -21,7 +21,6 @@ obj-$(CONFIG_HCALL_STATS)	+= hvCall_inst.o
 obj-$(CONFIG_CMM)		+= cmm.o
 obj-$(CONFIG_DTL)		+= dtl.o
 obj-$(CONFIG_IO_EVENT_IRQ)	+= io_event_irq.o
-obj-$(CONFIG_PSERIES_IDLE)	+= processor_idle.o
 
 ifeq ($(CONFIG_PPC_PSERIES),y)
 obj-$(CONFIG_SUSPEND)		+= suspend.o
diff --git a/arch/powerpc/platforms/pseries/processor_idle.c b/arch/powerpc/platforms/pseries/processor_idle.c
deleted file mode 100644
index c905b99..0000000
--- a/arch/powerpc/platforms/pseries/processor_idle.c
+++ /dev/null
@@ -1,360 +0,0 @@
-/*
- *  processor_idle - idle state cpuidle driver.
- *  Adapted from drivers/idle/intel_idle.c and
- *  drivers/acpi/processor_idle.c
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/moduleparam.h>
-#include <linux/cpuidle.h>
-#include <linux/cpu.h>
-#include <linux/notifier.h>
-
-#include <asm/paca.h>
-#include <asm/reg.h>
-#include <asm/machdep.h>
-#include <asm/firmware.h>
-#include <asm/runlatch.h>
-#include <asm/plpar_wrappers.h>
-
-struct cpuidle_driver pseries_idle_driver = {
-	.name             = "pseries_idle",
-	.owner            = THIS_MODULE,
-};
-
-#define MAX_IDLE_STATE_COUNT	2
-
-static int max_idle_state = MAX_IDLE_STATE_COUNT - 1;
-static struct cpuidle_device __percpu *pseries_cpuidle_devices;
-static struct cpuidle_state *cpuidle_state_table;
-
-static inline void idle_loop_prolog(unsigned long *in_purr)
-{
-	*in_purr = mfspr(SPRN_PURR);
-	/*
-	 * Indicate to the HV that we are idle. Now would be
-	 * a good time to find other work to dispatch.
-	 */
-	get_lppaca()->idle = 1;
-}
-
-static inline void idle_loop_epilog(unsigned long in_purr)
-{
-	get_lppaca()->wait_state_cycles += mfspr(SPRN_PURR) - in_purr;
-	get_lppaca()->idle = 0;
-}
-
-static int snooze_loop(struct cpuidle_device *dev,
-			struct cpuidle_driver *drv,
-			int index)
-{
-	unsigned long in_purr;
-	int cpu = dev->cpu;
-
-	idle_loop_prolog(&in_purr);
-	local_irq_enable();
-	set_thread_flag(TIF_POLLING_NRFLAG);
-
-	while ((!need_resched()) && cpu_online(cpu)) {
-		ppc64_runlatch_off();
-		HMT_low();
-		HMT_very_low();
-	}
-
-	HMT_medium();
-	clear_thread_flag(TIF_POLLING_NRFLAG);
-	smp_mb();
-
-	idle_loop_epilog(in_purr);
-
-	return index;
-}
-
-static void check_and_cede_processor(void)
-{
-	/*
-	 * Ensure our interrupt state is properly tracked,
-	 * also checks if no interrupt has occurred while we
-	 * were soft-disabled
-	 */
-	if (prep_irq_for_idle()) {
-		cede_processor();
-#ifdef CONFIG_TRACE_IRQFLAGS
-		/* Ensure that H_CEDE returns with IRQs on */
-		if (WARN_ON(!(mfmsr() & MSR_EE)))
-			__hard_irq_enable();
-#endif
-	}
-}
-
-static int dedicated_cede_loop(struct cpuidle_device *dev,
-				struct cpuidle_driver *drv,
-				int index)
-{
-	unsigned long in_purr;
-
-	idle_loop_prolog(&in_purr);
-	get_lppaca()->donate_dedicated_cpu = 1;
-
-	ppc64_runlatch_off();
-	HMT_medium();
-	check_and_cede_processor();
-
-	get_lppaca()->donate_dedicated_cpu = 0;
-
-	idle_loop_epilog(in_purr);
-
-	return index;
-}
-
-static int shared_cede_loop(struct cpuidle_device *dev,
-			struct cpuidle_driver *drv,
-			int index)
-{
-	unsigned long in_purr;
-
-	idle_loop_prolog(&in_purr);
-
-	/*
-	 * Yield the processor to the hypervisor.  We return if
-	 * an external interrupt occurs (which are driven prior
-	 * to returning here) or if a prod occurs from another
-	 * processor. When returning here, external interrupts
-	 * are enabled.
-	 */
-	check_and_cede_processor();
-
-	idle_loop_epilog(in_purr);
-
-	return index;
-}
-
-/*
- * States for dedicated partition case.
- */
-static struct cpuidle_state dedicated_states[MAX_IDLE_STATE_COUNT] = {
-	{ /* Snooze */
-		.name = "snooze",
-		.desc = "snooze",
-		.flags = CPUIDLE_FLAG_TIME_VALID,
-		.exit_latency = 0,
-		.target_residency = 0,
-		.enter = &snooze_loop },
-	{ /* CEDE */
-		.name = "CEDE",
-		.desc = "CEDE",
-		.flags = CPUIDLE_FLAG_TIME_VALID,
-		.exit_latency = 10,
-		.target_residency = 100,
-		.enter = &dedicated_cede_loop },
-};
-
-/*
- * States for shared partition case.
- */
-static struct cpuidle_state shared_states[MAX_IDLE_STATE_COUNT] = {
-	{ /* Shared Cede */
-		.name = "Shared Cede",
-		.desc = "Shared Cede",
-		.flags = CPUIDLE_FLAG_TIME_VALID,
-		.exit_latency = 0,
-		.target_residency = 0,
-		.enter = &shared_cede_loop },
-};
-
-void update_smt_snooze_delay(int cpu, int residency)
-{
-	struct cpuidle_driver *drv = cpuidle_get_driver();
-	struct cpuidle_device *dev = per_cpu(cpuidle_devices, cpu);
-
-	if (cpuidle_state_table != dedicated_states)
-		return;
-
-	if (residency < 0) {
-		/* Disable the Nap state on that cpu */
-		if (dev)
-			dev->states_usage[1].disable = 1;
-	} else
-		if (drv)
-			drv->states[1].target_residency = residency;
-}
-
-static int pseries_cpuidle_add_cpu_notifier(struct notifier_block *n,
-			unsigned long action, void *hcpu)
-{
-	int hotcpu = (unsigned long)hcpu;
-	struct cpuidle_device *dev =
-			per_cpu_ptr(pseries_cpuidle_devices, hotcpu);
-
-	if (dev && cpuidle_get_driver()) {
-		switch (action) {
-		case CPU_ONLINE:
-		case CPU_ONLINE_FROZEN:
-			cpuidle_pause_and_lock();
-			cpuidle_enable_device(dev);
-			cpuidle_resume_and_unlock();
-			break;
-
-		case CPU_DEAD:
-		case CPU_DEAD_FROZEN:
-			cpuidle_pause_and_lock();
-			cpuidle_disable_device(dev);
-			cpuidle_resume_and_unlock();
-			break;
-
-		default:
-			return NOTIFY_DONE;
-		}
-	}
-	return NOTIFY_OK;
-}
-
-static struct notifier_block setup_hotplug_notifier = {
-	.notifier_call = pseries_cpuidle_add_cpu_notifier,
-};
-
-/*
- * pseries_cpuidle_driver_init()
- */
-static int pseries_cpuidle_driver_init(void)
-{
-	int idle_state;
-	struct cpuidle_driver *drv = &pseries_idle_driver;
-
-	drv->state_count = 0;
-
-	for (idle_state = 0; idle_state < MAX_IDLE_STATE_COUNT; ++idle_state) {
-
-		if (idle_state > max_idle_state)
-			break;
-
-		/* is the state not enabled? */
-		if (cpuidle_state_table[idle_state].enter == NULL)
-			continue;
-
-		drv->states[drv->state_count] =	/* structure copy */
-			cpuidle_state_table[idle_state];
-
-		drv->state_count += 1;
-	}
-
-	return 0;
-}
-
-/* pseries_idle_devices_uninit(void)
- * unregister cpuidle devices and de-allocate memory
- */
-static void pseries_idle_devices_uninit(void)
-{
-	int i;
-	struct cpuidle_device *dev;
-
-	for_each_possible_cpu(i) {
-		dev = per_cpu_ptr(pseries_cpuidle_devices, i);
-		cpuidle_unregister_device(dev);
-	}
-
-	free_percpu(pseries_cpuidle_devices);
-	return;
-}
-
-/* pseries_idle_devices_init()
- * allocate, initialize and register cpuidle device
- */
-static int pseries_idle_devices_init(void)
-{
-	int i;
-	struct cpuidle_driver *drv = &pseries_idle_driver;
-	struct cpuidle_device *dev;
-
-	pseries_cpuidle_devices = alloc_percpu(struct cpuidle_device);
-	if (pseries_cpuidle_devices == NULL)
-		return -ENOMEM;
-
-	for_each_possible_cpu(i) {
-		dev = per_cpu_ptr(pseries_cpuidle_devices, i);
-		dev->state_count = drv->state_count;
-		dev->cpu = i;
-		if (cpuidle_register_device(dev)) {
-			printk(KERN_DEBUG \
-				"cpuidle_register_device %d failed!\n", i);
-			return -EIO;
-		}
-	}
-
-	return 0;
-}
-
-/*
- * pseries_idle_probe()
- * Choose state table for shared versus dedicated partition
- */
-static int pseries_idle_probe(void)
-{
-
-	if (!firmware_has_feature(FW_FEATURE_SPLPAR))
-		return -ENODEV;
-
-	if (cpuidle_disable != IDLE_NO_OVERRIDE)
-		return -ENODEV;
-
-	if (max_idle_state == 0) {
-		printk(KERN_DEBUG "pseries processor idle disabled.\n");
-		return -EPERM;
-	}
-
-	if (get_lppaca()->shared_proc)
-		cpuidle_state_table = shared_states;
-	else
-		cpuidle_state_table = dedicated_states;
-
-	return 0;
-}
-
-static int __init pseries_processor_idle_init(void)
-{
-	int retval;
-
-	retval = pseries_idle_probe();
-	if (retval)
-		return retval;
-
-	pseries_cpuidle_driver_init();
-	retval = cpuidle_register_driver(&pseries_idle_driver);
-	if (retval) {
-		printk(KERN_DEBUG "Registration of pseries driver failed.\n");
-		return retval;
-	}
-
-	retval = pseries_idle_devices_init();
-	if (retval) {
-		pseries_idle_devices_uninit();
-		cpuidle_unregister_driver(&pseries_idle_driver);
-		return retval;
-	}
-
-	register_cpu_notifier(&setup_hotplug_notifier);
-	printk(KERN_DEBUG "pseries_idle_driver registered\n");
-
-	return 0;
-}
-
-static void __exit pseries_processor_idle_exit(void)
-{
-
-	unregister_cpu_notifier(&setup_hotplug_notifier);
-	pseries_idle_devices_uninit();
-	cpuidle_unregister_driver(&pseries_idle_driver);
-
-	return;
-}
-
-module_init(pseries_processor_idle_init);
-module_exit(pseries_processor_idle_exit);
-
-MODULE_AUTHOR("Deepthi Dharwar <deepthi@linux.vnet.ibm.com>");
-MODULE_DESCRIPTION("Cpuidle driver for POWER");
-MODULE_LICENSE("GPL");
diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig
index 0e2cd5c..53ce03d 100644
--- a/drivers/cpuidle/Kconfig
+++ b/drivers/cpuidle/Kconfig
@@ -42,6 +42,13 @@ config CPU_IDLE_ZYNQ
 	help
 	  Select this to enable cpuidle on Xilinx Zynq processors.
 
+config CPU_IDLE_POWERPC
+	bool "CPU Idle driver for POWERPC platforms"
+	depends on PPC_PSERIES || PPC_POWERNV
+	default y
+        help
+          Select this option to enable processor idle state management
+	  for POWERPC platform.
 endif
 
 config ARCH_NEEDS_CPU_IDLE_COUPLED
diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile
index 8767a7b..d12e205 100644
--- a/drivers/cpuidle/Makefile
+++ b/drivers/cpuidle/Makefile
@@ -8,3 +8,5 @@ obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o
 obj-$(CONFIG_CPU_IDLE_CALXEDA) += cpuidle-calxeda.o
 obj-$(CONFIG_ARCH_KIRKWOOD) += cpuidle-kirkwood.o
 obj-$(CONFIG_CPU_IDLE_ZYNQ) += cpuidle-zynq.o
+
+obj-$(CONFIG_CPU_IDLE_POWERPC) += cpuidle-powerpc.o
diff --git a/drivers/cpuidle/cpuidle-powerpc.c b/drivers/cpuidle/cpuidle-powerpc.c
new file mode 100644
index 0000000..3662aba
--- /dev/null
+++ b/drivers/cpuidle/cpuidle-powerpc.c
@@ -0,0 +1,304 @@
+/*
+ *  processor_idle - idle state cpuidle driver.
+ *  Adapted from drivers/idle/intel_idle.c and
+ *  drivers/acpi/processor_idle.c
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/moduleparam.h>
+#include <linux/cpuidle.h>
+#include <linux/cpu.h>
+#include <linux/notifier.h>
+
+#include <asm/paca.h>
+#include <asm/reg.h>
+#include <asm/machdep.h>
+#include <asm/firmware.h>
+#include <asm/runlatch.h>
+#include <asm/plpar_wrappers.h>
+
+struct cpuidle_driver powerpc_idle_driver = {
+	.name             = "powerpc_idle",
+	.owner            = THIS_MODULE,
+};
+
+#define MAX_IDLE_STATE_COUNT	2
+
+static int max_idle_state = MAX_IDLE_STATE_COUNT - 1;
+static struct cpuidle_state *cpuidle_state_table;
+
+static inline void idle_loop_prolog(unsigned long *in_purr)
+{
+	*in_purr = mfspr(SPRN_PURR);
+	/*
+	 * Indicate to the HV that we are idle. Now would be
+	 * a good time to find other work to dispatch.
+	 */
+	set_lppaca_idle(1);
+}
+
+static inline void idle_loop_epilog(unsigned long in_purr)
+{
+	add_lppaca_wait_state(mfspr(SPRN_PURR) - in_purr);
+	set_lppaca_idle(0);
+}
+
+static int snooze_loop(struct cpuidle_device *dev,
+			struct cpuidle_driver *drv,
+			int index)
+{
+	unsigned long in_purr;
+
+	idle_loop_prolog(&in_purr);
+	local_irq_enable();
+	set_thread_flag(TIF_POLLING_NRFLAG);
+
+	while (!need_resched()) {
+		ppc64_runlatch_off();
+		HMT_low();
+		HMT_very_low();
+	}
+
+	HMT_medium();
+	clear_thread_flag(TIF_POLLING_NRFLAG);
+	smp_mb();
+
+	idle_loop_epilog(in_purr);
+
+	return index;
+}
+
+static void check_and_cede_processor(void)
+{
+	/*
+	 * Ensure our interrupt state is properly tracked,
+	 * also checks if no interrupt has occurred while we
+	 * were soft-disabled
+	 */
+	if (prep_irq_for_idle()) {
+		cede_processor();
+#ifdef CONFIG_TRACE_IRQFLAGS
+		/* Ensure that H_CEDE returns with IRQs on */
+		if (WARN_ON(!(mfmsr() & MSR_EE)))
+			__hard_irq_enable();
+#endif
+	}
+}
+
+static int dedicated_cede_loop(struct cpuidle_device *dev,
+				struct cpuidle_driver *drv,
+				int index)
+{
+	unsigned long in_purr;
+
+	idle_loop_prolog(&in_purr);
+	set_lppaca_donate_dedicated_cpu(1);
+
+	ppc64_runlatch_off();
+	HMT_medium();
+	check_and_cede_processor();
+
+	set_lppaca_donate_dedicated_cpu(0);
+	idle_loop_epilog(in_purr);
+
+	return index;
+}
+
+static int shared_cede_loop(struct cpuidle_device *dev,
+			struct cpuidle_driver *drv,
+			int index)
+{
+	unsigned long in_purr;
+
+	idle_loop_prolog(&in_purr);
+
+	/*
+	 * Yield the processor to the hypervisor.  We return if
+	 * an external interrupt occurs (which are driven prior
+	 * to returning here) or if a prod occurs from another
+	 * processor. When returning here, external interrupts
+	 * are enabled.
+	 */
+	check_and_cede_processor();
+
+	idle_loop_epilog(in_purr);
+
+	return index;
+}
+
+/*
+ * States for dedicated partition case.
+ */
+static struct cpuidle_state dedicated_states[MAX_IDLE_STATE_COUNT] = {
+	{ /* Snooze */
+		.name = "snooze",
+		.desc = "snooze",
+		.flags = CPUIDLE_FLAG_TIME_VALID,
+		.exit_latency = 0,
+		.target_residency = 0,
+		.enter = &snooze_loop },
+	{ /* CEDE */
+		.name = "CEDE",
+		.desc = "CEDE",
+		.flags = CPUIDLE_FLAG_TIME_VALID,
+		.exit_latency = 10,
+		.target_residency = 100,
+		.enter = &dedicated_cede_loop },
+};
+
+/*
+ * States for shared partition case.
+ */
+static struct cpuidle_state shared_states[MAX_IDLE_STATE_COUNT] = {
+	{ /* Shared Cede */
+		.name = "Shared Cede",
+		.desc = "Shared Cede",
+		.flags = CPUIDLE_FLAG_TIME_VALID,
+		.exit_latency = 0,
+		.target_residency = 0,
+		.enter = &shared_cede_loop },
+};
+
+void update_smt_snooze_delay(int cpu, int residency)
+{
+	struct cpuidle_driver *drv = cpuidle_get_driver();
+	struct cpuidle_device *dev = per_cpu(cpuidle_devices, cpu);
+
+	if (cpuidle_state_table != dedicated_states)
+		return;
+
+	if (residency < 0) {
+		/* Disable the Nap state on that cpu */
+		if (dev)
+			dev->states_usage[1].disable = 1;
+	} else
+		if (drv)
+			drv->states[1].target_residency = residency;
+}
+
+static int powerpc_cpuidle_add_cpu_notifier(struct notifier_block *n,
+			unsigned long action, void *hcpu)
+{
+	int hotcpu = (unsigned long)hcpu;
+	struct cpuidle_device *dev =
+			per_cpu_ptr(cpuidle_devices, hotcpu);
+
+	if (dev && cpuidle_get_driver()) {
+		switch (action) {
+		case CPU_ONLINE:
+		case CPU_ONLINE_FROZEN:
+			cpuidle_pause_and_lock();
+			cpuidle_enable_device(dev);
+			cpuidle_resume_and_unlock();
+			break;
+
+		case CPU_DEAD:
+		case CPU_DEAD_FROZEN:
+			cpuidle_pause_and_lock();
+			cpuidle_disable_device(dev);
+			cpuidle_resume_and_unlock();
+			break;
+
+		default:
+			return NOTIFY_DONE;
+		}
+	}
+	return NOTIFY_OK;
+}
+
+static struct notifier_block setup_hotplug_notifier = {
+	.notifier_call = powerpc_cpuidle_add_cpu_notifier,
+};
+
+/*
+ * powerpc_cpuidle_driver_init()
+ */
+static int powerpc_cpuidle_driver_init(void)
+{
+	int idle_state;
+	struct cpuidle_driver *drv = &powerpc_idle_driver;
+
+	drv->state_count = 0;
+
+	for (idle_state = 0; idle_state < MAX_IDLE_STATE_COUNT; ++idle_state) {
+
+		if (idle_state > max_idle_state)
+			break;
+
+		/* is the state not enabled? */
+		if (cpuidle_state_table[idle_state].enter == NULL)
+			continue;
+
+		drv->states[drv->state_count] =	/* structure copy */
+			cpuidle_state_table[idle_state];
+
+		drv->state_count += 1;
+	}
+
+	return 0;
+}
+
+/*
+ * powerpc_idle_probe()
+ * Choose state table for shared versus dedicated partition
+ */
+static int powerpc_idle_probe(void)
+{
+
+	if (cpuidle_disable != IDLE_NO_OVERRIDE)
+		return -ENODEV;
+
+	if (max_idle_state == 0) {
+		printk(KERN_DEBUG "powerpc processor idle disabled.\n");
+		return -EPERM;
+	}
+
+	if (firmware_has_feature(FW_FEATURE_SPLPAR)) {
+		if (get_lppaca_is_shared_proc() == 1)
+			cpuidle_state_table = shared_states;
+		else if (get_lppaca_is_shared_proc() == 0)
+			cpuidle_state_table = dedicated_states;
+	} else
+		return -ENODEV;
+
+	return 0;
+}
+
+static int __init powerpc_processor_idle_init(void)
+{
+	int retval;
+
+	retval = powerpc_idle_probe();
+	if (retval)
+		return retval;
+
+	powerpc_cpuidle_driver_init();
+	retval = cpuidle_register(&powerpc_idle_driver, NULL);
+	if (retval) {
+		printk(KERN_DEBUG "Registration of powerpc driver failed.\n");
+		return retval;
+	}
+
+	register_cpu_notifier(&setup_hotplug_notifier);
+	printk(KERN_DEBUG "powerpc_idle_driver registered\n");
+
+	return 0;
+}
+
+static void __exit powerpc_processor_idle_exit(void)
+{
+
+	unregister_cpu_notifier(&setup_hotplug_notifier);
+	cpuidle_unregister(&powerpc_idle_driver);
+	return;
+}
+
+module_init(powerpc_processor_idle_init);
+module_exit(powerpc_processor_idle_exit);
+
+MODULE_AUTHOR("Deepthi Dharwar <deepthi@linux.vnet.ibm.com>");
+MODULE_DESCRIPTION("Cpuidle driver for powerpc");
+MODULE_LICENSE("GPL");


* [PATCH V4 4/5] powerpc/cpuidle: Enable powernv cpuidle support.
  2013-08-22  5:29 [PATCH V4 0/5] powerpc/cpuidle: Generic POWERPC cpuidle driver enabled for POWER and POWERNV platforms Deepthi Dharwar
                   ` (2 preceding siblings ...)
  2013-08-22  5:30 ` [PATCH V4 3/5] powerpc/cpuidle: Generic powerpc backend cpuidle driver Deepthi Dharwar
@ 2013-08-22  5:30 ` Deepthi Dharwar
  2013-08-22  5:30 ` [PATCH V4 5/5] powernv/cpuidle: Enable idle powernv cpu to call into the cpuidle framework Deepthi Dharwar
  4 siblings, 0 replies; 8+ messages in thread
From: Deepthi Dharwar @ 2013-08-22  5:30 UTC (permalink / raw)
  To: linux-pm, linuxppc-dev, linux-kernel
  Cc: rjw, daniel.lezcano, dongsheng.wang, preeti, srivatsa.bhat, scottwood

The following patch extends the current powerpc backend
idle driver to the powernv platform.

Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
---
 drivers/cpuidle/cpuidle-powerpc.c |   35 +++++++++++++++++++++++++++++++++--
 1 file changed, 33 insertions(+), 2 deletions(-)

diff --git a/drivers/cpuidle/cpuidle-powerpc.c b/drivers/cpuidle/cpuidle-powerpc.c
index 3662aba..973de6a 100644
--- a/drivers/cpuidle/cpuidle-powerpc.c
+++ b/drivers/cpuidle/cpuidle-powerpc.c
@@ -52,7 +52,9 @@ static int snooze_loop(struct cpuidle_device *dev,
 {
 	unsigned long in_purr;
 
-	idle_loop_prolog(&in_purr);
+	if (firmware_has_feature(FW_FEATURE_SPLPAR))
+		idle_loop_prolog(&in_purr);
+
 	local_irq_enable();
 	set_thread_flag(TIF_POLLING_NRFLAG);
 
@@ -66,7 +68,8 @@ static int snooze_loop(struct cpuidle_device *dev,
 	clear_thread_flag(TIF_POLLING_NRFLAG);
 	smp_mb();
 
-	idle_loop_epilog(in_purr);
+	if (firmware_has_feature(FW_FEATURE_SPLPAR))
+		idle_loop_epilog(in_purr);
 
 	return index;
 }
@@ -129,6 +132,15 @@ static int shared_cede_loop(struct cpuidle_device *dev,
 	return index;
 }
 
+static int nap_loop(struct cpuidle_device *dev,
+			struct cpuidle_driver *drv,
+			int index)
+{
+	ppc64_runlatch_off();
+	power7_idle();
+	return index;
+}
+
 /*
  * States for dedicated partition case.
  */
@@ -162,6 +174,23 @@ static struct cpuidle_state shared_states[MAX_IDLE_STATE_COUNT] = {
 		.enter = &shared_cede_loop },
 };
 
+static struct cpuidle_state powernv_states[MAX_IDLE_STATE_COUNT] = {
+	{ /* Snooze */
+		.name = "snooze",
+		.desc = "snooze",
+		.flags = CPUIDLE_FLAG_TIME_VALID,
+		.exit_latency = 0,
+		.target_residency = 0,
+		.enter = &snooze_loop },
+	{ /* NAP */
+		.name = "NAP",
+		.desc = "NAP",
+		.flags = CPUIDLE_FLAG_TIME_VALID,
+		.exit_latency = 10,
+		.target_residency = 100,
+		.enter = &nap_loop },
+};
+
 void update_smt_snooze_delay(int cpu, int residency)
 {
 	struct cpuidle_driver *drv = cpuidle_get_driver();
@@ -261,6 +290,8 @@ static int powerpc_idle_probe(void)
 			cpuidle_state_table = shared_states;
 		else if (get_lppaca_is_shared_proc() == 0)
 			cpuidle_state_table = dedicated_states;
+	} else if (firmware_has_feature(FW_FEATURE_OPALv3)) {
+			cpuidle_state_table = powernv_states;
 	} else
 		return -ENODEV;
 


* [PATCH V4 5/5] powernv/cpuidle: Enable idle powernv cpu to call into the cpuidle framework.
  2013-08-22  5:29 [PATCH V4 0/5] powerpc/cpuidle: Generic POWERPC cpuidle driver enabled for POWER and POWERNV platforms Deepthi Dharwar
                   ` (3 preceding siblings ...)
  2013-08-22  5:30 ` [PATCH V4 4/5] powerpc/cpuidle: Enable powernv cpuidle support Deepthi Dharwar
@ 2013-08-22  5:30 ` Deepthi Dharwar
  4 siblings, 0 replies; 8+ messages in thread
From: Deepthi Dharwar @ 2013-08-22  5:30 UTC (permalink / raw)
  To: linux-pm, linuxppc-dev, linux-kernel
  Cc: rjw, daniel.lezcano, dongsheng.wang, preeti, srivatsa.bhat, scottwood

This patch enables an idle cpu on the powernv platform to hook into the
cpuidle framework, if available, and to fall back to the default platform
idle code otherwise.

Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
---
 arch/powerpc/platforms/powernv/setup.c |   14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
index 84438af..fc62f21 100644
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -25,6 +25,7 @@
 #include <linux/of.h>
 #include <linux/interrupt.h>
 #include <linux/bug.h>
+#include <linux/cpuidle.h>
 
 #include <asm/machdep.h>
 #include <asm/firmware.h>
@@ -175,6 +176,17 @@ static void __init pnv_setup_machdep_rtas(void)
 }
 #endif /* CONFIG_PPC_POWERNV_RTAS */
 
+void powernv_idle(void)
+{
+	/* Hook into the cpuidle framework if available, else
+	 * fall back to the default platform idle code.
+	 */
+	if (cpuidle_idle_call()) {
+		HMT_low();
+		HMT_very_low();
+	}
+}
+
 static int __init pnv_probe(void)
 {
 	unsigned long root = of_get_flat_dt_root();
@@ -205,7 +217,7 @@ define_machine(powernv) {
 	.show_cpuinfo		= pnv_show_cpuinfo,
 	.progress		= pnv_progress,
 	.machine_shutdown	= pnv_shutdown,
-	.power_save             = power7_idle,
+	.power_save             = powernv_idle,
 	.calibrate_decr		= generic_calibrate_decr,
 #ifdef CONFIG_KEXEC
 	.kexec_cpu_down		= pnv_kexec_cpu_down,

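Taken together with the previous patch, the resulting powernv idle path can
be summarised as below. This is only an illustrative sketch of the call flow
implied by the two patches; the generic idle loop invoking ppc_md.power_save
is assumed here, since it is not part of this series.

/*
 * Sketch of the powernv idle path after patches 4/5 and 5/5:
 *
 *   generic idle loop
 *     -> ppc_md.power_save()            (== powernv_idle(), set above)
 *          -> cpuidle_idle_call()
 *               returns 0:        a state from powernv_states was entered,
 *                                 i.e. snooze_loop() or nap_loop()
 *                                 (ppc64_runlatch_off() + power7_idle())
 *               returns non-zero: cpuidle is not available (no driver or
 *                                 device registered), so drop SMT priority
 *                                 via HMT_low()/HMT_very_low() until the
 *                                 next interrupt
 */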
^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH V4 3/5] powerpc/cpuidle: Generic powerpc backend cpuidle driver.
  2013-08-22  5:30 ` [PATCH V4 3/5] powerpc/cpuidle: Generic powerpc backend cpuidle driver Deepthi Dharwar
@ 2013-08-22 10:56   ` Bartlomiej Zolnierkiewicz
  2013-08-23 10:19     ` Deepthi Dharwar
  0 siblings, 1 reply; 8+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2013-08-22 10:56 UTC (permalink / raw)
  To: Deepthi Dharwar
  Cc: rjw, daniel.lezcano, linux-kernel, dongsheng.wang, preeti,
	srivatsa.bhat, scottwood, linux-pm, linuxppc-dev


Hi,

On Thursday, August 22, 2013 11:00:29 AM Deepthi Dharwar wrote:
> This patch involves moving the current pseries_idle backend driver code
> from pseries/processor_idle.c to drivers/cpuidle/cpuidle-powerpc.c,
> and making the backend code generic enough to be able to extend this
> driver code for both powernv and pseries.
> 
> It enables support for the pseries platform, such that going forward the
> same code can be re-used with minimal effort for a common driver on powernv
> and can be further extended to support cpuidle idle state management for
> the rest of the powerpc platforms in the future. This removes a lot of code
> duplication, making the code cleaner and easier to maintain.

This patch mixes the code movement with the actual code changes, which is
not a good practice as it makes review more difficult and is generally bad
from the long-term maintenance POV.

Please split this patch into code movement and code change parts.

V4 of this patch now also seems to contain changes which I posted on
Tuesday as part of the dev->state_count removal patchset:

http://permalink.gmane.org/gmane.linux.power-management.general/37392
http://permalink.gmane.org/gmane.linux.power-management.general/37393

so some work probably got duplicated. :(

Best regards,
--
Bartlomiej Zolnierkiewicz
Samsung R&D Institute Poland
Samsung Electronics

> Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/paca.h                 |   23 +
>  arch/powerpc/include/asm/processor.h            |    2 
>  arch/powerpc/platforms/pseries/Kconfig          |    9 -
>  arch/powerpc/platforms/pseries/Makefile         |    1 
>  arch/powerpc/platforms/pseries/processor_idle.c |  360 -----------------------
>  drivers/cpuidle/Kconfig                         |    7 
>  drivers/cpuidle/Makefile                        |    2 
>  drivers/cpuidle/cpuidle-powerpc.c               |  304 +++++++++++++++++++
>  8 files changed, 337 insertions(+), 371 deletions(-)
>  delete mode 100644 arch/powerpc/platforms/pseries/processor_idle.c
>  create mode 100644 drivers/cpuidle/cpuidle-powerpc.c
> 
> diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
> index 77c91e7..7bd83ff 100644
> --- a/arch/powerpc/include/asm/paca.h
> +++ b/arch/powerpc/include/asm/paca.h
> @@ -175,6 +175,29 @@ extern void setup_paca(struct paca_struct *new_paca);
>  extern void allocate_pacas(void);
>  extern void free_unused_pacas(void);
>  
> +#ifdef CONFIG_PPC_BOOK3S
> +#define get_lppaca_is_shared_proc()  get_paca()->lppaca_ptr->shared_proc
> +static inline void set_lppaca_idle(u8  idle)
> +{
> +	get_paca()->lppaca_ptr->idle = idle;
> +}
> +
> +static inline void add_lppaca_wait_state(u64 cycles)
> +{
> +	get_paca()->lppaca_ptr->wait_state_cycles += cycles;
> +}
> +
> +static inline void set_lppaca_donate_dedicated_cpu(u8 value)
> +{
> +	get_paca()->lppaca_ptr->donate_dedicated_cpu = value;
> +}
> +#else
> +#define get_lppaca_is_shared_proc()	-1
> +static inline void set_lppaca_idle(u8 idle) { }
> +static inline void  add_lppaca_wait_state(u64 cycles) { }
> +static inline void  set_lppaca_donate_dedicated_cpu(u8 value) { }
> +#endif
> +
>  #else /* CONFIG_PPC64 */
>  
>  static inline void allocate_pacas(void) { };
> diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
> index e378ccc..5f57c56 100644
> --- a/arch/powerpc/include/asm/processor.h
> +++ b/arch/powerpc/include/asm/processor.h
> @@ -430,7 +430,7 @@ enum idle_boot_override {IDLE_NO_OVERRIDE = 0, IDLE_POWERSAVE_OFF};
>  extern int powersave_nap;	/* set if nap mode can be used in idle loop */
>  extern void power7_nap(void);
>  
> -#ifdef CONFIG_PSERIES_IDLE
> +#ifdef CONFIG_CPU_IDLE_POWERPC
>  extern void update_smt_snooze_delay(int cpu, int residency);
>  #else
>  static inline void update_smt_snooze_delay(int cpu, int residency) {}
> diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
> index 62b4f80..bb59bb0 100644
> --- a/arch/powerpc/platforms/pseries/Kconfig
> +++ b/arch/powerpc/platforms/pseries/Kconfig
> @@ -119,12 +119,3 @@ config DTL
>  	  which are accessible through a debugfs file.
>  
>  	  Say N if you are unsure.
> -
> -config PSERIES_IDLE
> -	bool "Cpuidle driver for pSeries platforms"
> -	depends on CPU_IDLE
> -	depends on PPC_PSERIES
> -	default y
> -	help
> -	  Select this option to enable processor idle state management
> -	  through cpuidle subsystem.
> diff --git a/arch/powerpc/platforms/pseries/Makefile b/arch/powerpc/platforms/pseries/Makefile
> index 8ae0103..4b22379 100644
> --- a/arch/powerpc/platforms/pseries/Makefile
> +++ b/arch/powerpc/platforms/pseries/Makefile
> @@ -21,7 +21,6 @@ obj-$(CONFIG_HCALL_STATS)	+= hvCall_inst.o
>  obj-$(CONFIG_CMM)		+= cmm.o
>  obj-$(CONFIG_DTL)		+= dtl.o
>  obj-$(CONFIG_IO_EVENT_IRQ)	+= io_event_irq.o
> -obj-$(CONFIG_PSERIES_IDLE)	+= processor_idle.o
>  
>  ifeq ($(CONFIG_PPC_PSERIES),y)
>  obj-$(CONFIG_SUSPEND)		+= suspend.o
> diff --git a/arch/powerpc/platforms/pseries/processor_idle.c b/arch/powerpc/platforms/pseries/processor_idle.c
> deleted file mode 100644
> index c905b99..0000000
> --- a/arch/powerpc/platforms/pseries/processor_idle.c
> +++ /dev/null
> @@ -1,360 +0,0 @@
> -/*
> - *  processor_idle - idle state cpuidle driver.
> - *  Adapted from drivers/idle/intel_idle.c and
> - *  drivers/acpi/processor_idle.c
> - *
> - */
> -
> -#include <linux/kernel.h>
> -#include <linux/module.h>
> -#include <linux/init.h>
> -#include <linux/moduleparam.h>
> -#include <linux/cpuidle.h>
> -#include <linux/cpu.h>
> -#include <linux/notifier.h>
> -
> -#include <asm/paca.h>
> -#include <asm/reg.h>
> -#include <asm/machdep.h>
> -#include <asm/firmware.h>
> -#include <asm/runlatch.h>
> -#include <asm/plpar_wrappers.h>
> -
> -struct cpuidle_driver pseries_idle_driver = {
> -	.name             = "pseries_idle",
> -	.owner            = THIS_MODULE,
> -};
> -
> -#define MAX_IDLE_STATE_COUNT	2
> -
> -static int max_idle_state = MAX_IDLE_STATE_COUNT - 1;
> -static struct cpuidle_device __percpu *pseries_cpuidle_devices;
> -static struct cpuidle_state *cpuidle_state_table;
> -
> -static inline void idle_loop_prolog(unsigned long *in_purr)
> -{
> -	*in_purr = mfspr(SPRN_PURR);
> -	/*
> -	 * Indicate to the HV that we are idle. Now would be
> -	 * a good time to find other work to dispatch.
> -	 */
> -	get_lppaca()->idle = 1;
> -}
> -
> -static inline void idle_loop_epilog(unsigned long in_purr)
> -{
> -	get_lppaca()->wait_state_cycles += mfspr(SPRN_PURR) - in_purr;
> -	get_lppaca()->idle = 0;
> -}
> -
> -static int snooze_loop(struct cpuidle_device *dev,
> -			struct cpuidle_driver *drv,
> -			int index)
> -{
> -	unsigned long in_purr;
> -	int cpu = dev->cpu;
> -
> -	idle_loop_prolog(&in_purr);
> -	local_irq_enable();
> -	set_thread_flag(TIF_POLLING_NRFLAG);
> -
> -	while ((!need_resched()) && cpu_online(cpu)) {
> -		ppc64_runlatch_off();
> -		HMT_low();
> -		HMT_very_low();
> -	}
> -
> -	HMT_medium();
> -	clear_thread_flag(TIF_POLLING_NRFLAG);
> -	smp_mb();
> -
> -	idle_loop_epilog(in_purr);
> -
> -	return index;
> -}
> -
> -static void check_and_cede_processor(void)
> -{
> -	/*
> -	 * Ensure our interrupt state is properly tracked,
> -	 * also checks if no interrupt has occurred while we
> -	 * were soft-disabled
> -	 */
> -	if (prep_irq_for_idle()) {
> -		cede_processor();
> -#ifdef CONFIG_TRACE_IRQFLAGS
> -		/* Ensure that H_CEDE returns with IRQs on */
> -		if (WARN_ON(!(mfmsr() & MSR_EE)))
> -			__hard_irq_enable();
> -#endif
> -	}
> -}
> -
> -static int dedicated_cede_loop(struct cpuidle_device *dev,
> -				struct cpuidle_driver *drv,
> -				int index)
> -{
> -	unsigned long in_purr;
> -
> -	idle_loop_prolog(&in_purr);
> -	get_lppaca()->donate_dedicated_cpu = 1;
> -
> -	ppc64_runlatch_off();
> -	HMT_medium();
> -	check_and_cede_processor();
> -
> -	get_lppaca()->donate_dedicated_cpu = 0;
> -
> -	idle_loop_epilog(in_purr);
> -
> -	return index;
> -}
> -
> -static int shared_cede_loop(struct cpuidle_device *dev,
> -			struct cpuidle_driver *drv,
> -			int index)
> -{
> -	unsigned long in_purr;
> -
> -	idle_loop_prolog(&in_purr);
> -
> -	/*
> -	 * Yield the processor to the hypervisor.  We return if
> -	 * an external interrupt occurs (which are driven prior
> -	 * to returning here) or if a prod occurs from another
> -	 * processor. When returning here, external interrupts
> -	 * are enabled.
> -	 */
> -	check_and_cede_processor();
> -
> -	idle_loop_epilog(in_purr);
> -
> -	return index;
> -}
> -
> -/*
> - * States for dedicated partition case.
> - */
> -static struct cpuidle_state dedicated_states[MAX_IDLE_STATE_COUNT] = {
> -	{ /* Snooze */
> -		.name = "snooze",
> -		.desc = "snooze",
> -		.flags = CPUIDLE_FLAG_TIME_VALID,
> -		.exit_latency = 0,
> -		.target_residency = 0,
> -		.enter = &snooze_loop },
> -	{ /* CEDE */
> -		.name = "CEDE",
> -		.desc = "CEDE",
> -		.flags = CPUIDLE_FLAG_TIME_VALID,
> -		.exit_latency = 10,
> -		.target_residency = 100,
> -		.enter = &dedicated_cede_loop },
> -};
> -
> -/*
> - * States for shared partition case.
> - */
> -static struct cpuidle_state shared_states[MAX_IDLE_STATE_COUNT] = {
> -	{ /* Shared Cede */
> -		.name = "Shared Cede",
> -		.desc = "Shared Cede",
> -		.flags = CPUIDLE_FLAG_TIME_VALID,
> -		.exit_latency = 0,
> -		.target_residency = 0,
> -		.enter = &shared_cede_loop },
> -};
> -
> -void update_smt_snooze_delay(int cpu, int residency)
> -{
> -	struct cpuidle_driver *drv = cpuidle_get_driver();
> -	struct cpuidle_device *dev = per_cpu(cpuidle_devices, cpu);
> -
> -	if (cpuidle_state_table != dedicated_states)
> -		return;
> -
> -	if (residency < 0) {
> -		/* Disable the Nap state on that cpu */
> -		if (dev)
> -			dev->states_usage[1].disable = 1;
> -	} else
> -		if (drv)
> -			drv->states[1].target_residency = residency;
> -}
> -
> -static int pseries_cpuidle_add_cpu_notifier(struct notifier_block *n,
> -			unsigned long action, void *hcpu)
> -{
> -	int hotcpu = (unsigned long)hcpu;
> -	struct cpuidle_device *dev =
> -			per_cpu_ptr(pseries_cpuidle_devices, hotcpu);
> -
> -	if (dev && cpuidle_get_driver()) {
> -		switch (action) {
> -		case CPU_ONLINE:
> -		case CPU_ONLINE_FROZEN:
> -			cpuidle_pause_and_lock();
> -			cpuidle_enable_device(dev);
> -			cpuidle_resume_and_unlock();
> -			break;
> -
> -		case CPU_DEAD:
> -		case CPU_DEAD_FROZEN:
> -			cpuidle_pause_and_lock();
> -			cpuidle_disable_device(dev);
> -			cpuidle_resume_and_unlock();
> -			break;
> -
> -		default:
> -			return NOTIFY_DONE;
> -		}
> -	}
> -	return NOTIFY_OK;
> -}
> -
> -static struct notifier_block setup_hotplug_notifier = {
> -	.notifier_call = pseries_cpuidle_add_cpu_notifier,
> -};
> -
> -/*
> - * pseries_cpuidle_driver_init()
> - */
> -static int pseries_cpuidle_driver_init(void)
> -{
> -	int idle_state;
> -	struct cpuidle_driver *drv = &pseries_idle_driver;
> -
> -	drv->state_count = 0;
> -
> -	for (idle_state = 0; idle_state < MAX_IDLE_STATE_COUNT; ++idle_state) {
> -
> -		if (idle_state > max_idle_state)
> -			break;
> -
> -		/* is the state not enabled? */
> -		if (cpuidle_state_table[idle_state].enter == NULL)
> -			continue;
> -
> -		drv->states[drv->state_count] =	/* structure copy */
> -			cpuidle_state_table[idle_state];
> -
> -		drv->state_count += 1;
> -	}
> -
> -	return 0;
> -}
> -
> -/* pseries_idle_devices_uninit(void)
> - * unregister cpuidle devices and de-allocate memory
> - */
> -static void pseries_idle_devices_uninit(void)
> -{
> -	int i;
> -	struct cpuidle_device *dev;
> -
> -	for_each_possible_cpu(i) {
> -		dev = per_cpu_ptr(pseries_cpuidle_devices, i);
> -		cpuidle_unregister_device(dev);
> -	}
> -
> -	free_percpu(pseries_cpuidle_devices);
> -	return;
> -}
> -
> -/* pseries_idle_devices_init()
> - * allocate, initialize and register cpuidle device
> - */
> -static int pseries_idle_devices_init(void)
> -{
> -	int i;
> -	struct cpuidle_driver *drv = &pseries_idle_driver;
> -	struct cpuidle_device *dev;
> -
> -	pseries_cpuidle_devices = alloc_percpu(struct cpuidle_device);
> -	if (pseries_cpuidle_devices == NULL)
> -		return -ENOMEM;
> -
> -	for_each_possible_cpu(i) {
> -		dev = per_cpu_ptr(pseries_cpuidle_devices, i);
> -		dev->state_count = drv->state_count;
> -		dev->cpu = i;
> -		if (cpuidle_register_device(dev)) {
> -			printk(KERN_DEBUG \
> -				"cpuidle_register_device %d failed!\n", i);
> -			return -EIO;
> -		}
> -	}
> -
> -	return 0;
> -}
> -
> -/*
> - * pseries_idle_probe()
> - * Choose state table for shared versus dedicated partition
> - */
> -static int pseries_idle_probe(void)
> -{
> -
> -	if (!firmware_has_feature(FW_FEATURE_SPLPAR))
> -		return -ENODEV;
> -
> -	if (cpuidle_disable != IDLE_NO_OVERRIDE)
> -		return -ENODEV;
> -
> -	if (max_idle_state == 0) {
> -		printk(KERN_DEBUG "pseries processor idle disabled.\n");
> -		return -EPERM;
> -	}
> -
> -	if (get_lppaca()->shared_proc)
> -		cpuidle_state_table = shared_states;
> -	else
> -		cpuidle_state_table = dedicated_states;
> -
> -	return 0;
> -}
> -
> -static int __init pseries_processor_idle_init(void)
> -{
> -	int retval;
> -
> -	retval = pseries_idle_probe();
> -	if (retval)
> -		return retval;
> -
> -	pseries_cpuidle_driver_init();
> -	retval = cpuidle_register_driver(&pseries_idle_driver);
> -	if (retval) {
> -		printk(KERN_DEBUG "Registration of pseries driver failed.\n");
> -		return retval;
> -	}
> -
> -	retval = pseries_idle_devices_init();
> -	if (retval) {
> -		pseries_idle_devices_uninit();
> -		cpuidle_unregister_driver(&pseries_idle_driver);
> -		return retval;
> -	}
> -
> -	register_cpu_notifier(&setup_hotplug_notifier);
> -	printk(KERN_DEBUG "pseries_idle_driver registered\n");
> -
> -	return 0;
> -}
> -
> -static void __exit pseries_processor_idle_exit(void)
> -{
> -
> -	unregister_cpu_notifier(&setup_hotplug_notifier);
> -	pseries_idle_devices_uninit();
> -	cpuidle_unregister_driver(&pseries_idle_driver);
> -
> -	return;
> -}
> -
> -module_init(pseries_processor_idle_init);
> -module_exit(pseries_processor_idle_exit);
> -
> -MODULE_AUTHOR("Deepthi Dharwar <deepthi@linux.vnet.ibm.com>");
> -MODULE_DESCRIPTION("Cpuidle driver for POWER");
> -MODULE_LICENSE("GPL");
> diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig
> index 0e2cd5c..53ce03d 100644
> --- a/drivers/cpuidle/Kconfig
> +++ b/drivers/cpuidle/Kconfig
> @@ -42,6 +42,13 @@ config CPU_IDLE_ZYNQ
>  	help
>  	  Select this to enable cpuidle on Xilinx Zynq processors.
>  
> +config CPU_IDLE_POWERPC
> +	bool "CPU Idle driver for POWERPC platforms"
> +	depends on PPC_PSERIES || PPC_POWERNV
> +	default y
> +	help
> +	  Select this option to enable processor idle state management
> +	  for POWERPC platforms.
>  endif
>  
>  config ARCH_NEEDS_CPU_IDLE_COUPLED
> diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile
> index 8767a7b..d12e205 100644
> --- a/drivers/cpuidle/Makefile
> +++ b/drivers/cpuidle/Makefile
> @@ -8,3 +8,5 @@ obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o
>  obj-$(CONFIG_CPU_IDLE_CALXEDA) += cpuidle-calxeda.o
>  obj-$(CONFIG_ARCH_KIRKWOOD) += cpuidle-kirkwood.o
>  obj-$(CONFIG_CPU_IDLE_ZYNQ) += cpuidle-zynq.o
> +
> +obj-$(CONFIG_CPU_IDLE_POWERPC) += cpuidle-powerpc.o
> diff --git a/drivers/cpuidle/cpuidle-powerpc.c b/drivers/cpuidle/cpuidle-powerpc.c
> new file mode 100644
> index 0000000..3662aba
> --- /dev/null
> +++ b/drivers/cpuidle/cpuidle-powerpc.c
> @@ -0,0 +1,304 @@
> +/*
> + *  processor_idle - idle state cpuidle driver.
> + *  Adapted from drivers/idle/intel_idle.c and
> + *  drivers/acpi/processor_idle.c
> + *
> + */
> +
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/init.h>
> +#include <linux/moduleparam.h>
> +#include <linux/cpuidle.h>
> +#include <linux/cpu.h>
> +#include <linux/notifier.h>
> +
> +#include <asm/paca.h>
> +#include <asm/reg.h>
> +#include <asm/machdep.h>
> +#include <asm/firmware.h>
> +#include <asm/runlatch.h>
> +#include <asm/plpar_wrappers.h>
> +
> +struct cpuidle_driver powerpc_idle_driver = {
> +	.name             = "powerpc_idle",
> +	.owner            = THIS_MODULE,
> +};
> +
> +#define MAX_IDLE_STATE_COUNT	2
> +
> +static int max_idle_state = MAX_IDLE_STATE_COUNT - 1;
> +static struct cpuidle_state *cpuidle_state_table;
> +
> +static inline void idle_loop_prolog(unsigned long *in_purr)
> +{
> +	*in_purr = mfspr(SPRN_PURR);
> +	/*
> +	 * Indicate to the HV that we are idle. Now would be
> +	 * a good time to find other work to dispatch.
> +	 */
> +	set_lppaca_idle(1);
> +}
> +
> +static inline void idle_loop_epilog(unsigned long in_purr)
> +{
> +	add_lppaca_wait_state(mfspr(SPRN_PURR) - in_purr);
> +	set_lppaca_idle(0);
> +}
> +
> +static int snooze_loop(struct cpuidle_device *dev,
> +			struct cpuidle_driver *drv,
> +			int index)
> +{
> +	unsigned long in_purr;
> +
> +	idle_loop_prolog(&in_purr);
> +	local_irq_enable();
> +	set_thread_flag(TIF_POLLING_NRFLAG);
> +
> +	while (!need_resched()) {
> +		ppc64_runlatch_off();
> +		HMT_low();
> +		HMT_very_low();
> +	}
> +
> +	HMT_medium();
> +	clear_thread_flag(TIF_POLLING_NRFLAG);
> +	smp_mb();
> +
> +	idle_loop_epilog(in_purr);
> +
> +	return index;
> +}
> +
> +static void check_and_cede_processor(void)
> +{
> +	/*
> +	 * Ensure our interrupt state is properly tracked,
> +	 * also checks if no interrupt has occurred while we
> +	 * were soft-disabled
> +	 */
> +	if (prep_irq_for_idle()) {
> +		cede_processor();
> +#ifdef CONFIG_TRACE_IRQFLAGS
> +		/* Ensure that H_CEDE returns with IRQs on */
> +		if (WARN_ON(!(mfmsr() & MSR_EE)))
> +			__hard_irq_enable();
> +#endif
> +	}
> +}
> +
> +static int dedicated_cede_loop(struct cpuidle_device *dev,
> +				struct cpuidle_driver *drv,
> +				int index)
> +{
> +	unsigned long in_purr;
> +
> +	idle_loop_prolog(&in_purr);
> +	set_lppaca_donate_dedicated_cpu(1);
> +
> +	ppc64_runlatch_off();
> +	HMT_medium();
> +	check_and_cede_processor();
> +
> +	set_lppaca_donate_dedicated_cpu(0);
> +	idle_loop_epilog(in_purr);
> +
> +	return index;
> +}
> +
> +static int shared_cede_loop(struct cpuidle_device *dev,
> +			struct cpuidle_driver *drv,
> +			int index)
> +{
> +	unsigned long in_purr;
> +
> +	idle_loop_prolog(&in_purr);
> +
> +	/*
> +	 * Yield the processor to the hypervisor.  We return if
> +	 * an external interrupt occurs (which are driven prior
> +	 * to returning here) or if a prod occurs from another
> +	 * processor. When returning here, external interrupts
> +	 * are enabled.
> +	 */
> +	check_and_cede_processor();
> +
> +	idle_loop_epilog(in_purr);
> +
> +	return index;
> +}
> +
> +/*
> + * States for dedicated partition case.
> + */
> +static struct cpuidle_state dedicated_states[MAX_IDLE_STATE_COUNT] = {
> +	{ /* Snooze */
> +		.name = "snooze",
> +		.desc = "snooze",
> +		.flags = CPUIDLE_FLAG_TIME_VALID,
> +		.exit_latency = 0,
> +		.target_residency = 0,
> +		.enter = &snooze_loop },
> +	{ /* CEDE */
> +		.name = "CEDE",
> +		.desc = "CEDE",
> +		.flags = CPUIDLE_FLAG_TIME_VALID,
> +		.exit_latency = 10,
> +		.target_residency = 100,
> +		.enter = &dedicated_cede_loop },
> +};
> +
> +/*
> + * States for shared partition case.
> + */
> +static struct cpuidle_state shared_states[MAX_IDLE_STATE_COUNT] = {
> +	{ /* Shared Cede */
> +		.name = "Shared Cede",
> +		.desc = "Shared Cede",
> +		.flags = CPUIDLE_FLAG_TIME_VALID,
> +		.exit_latency = 0,
> +		.target_residency = 0,
> +		.enter = &shared_cede_loop },
> +};
> +
> +void update_smt_snooze_delay(int cpu, int residency)
> +{
> +	struct cpuidle_driver *drv = cpuidle_get_driver();
> +	struct cpuidle_device *dev = per_cpu(cpuidle_devices, cpu);
> +
> +	if (cpuidle_state_table != dedicated_states)
> +		return;
> +
> +	if (residency < 0) {
> +		/* Disable the Nap state on that cpu */
> +		if (dev)
> +			dev->states_usage[1].disable = 1;
> +	} else
> +		if (drv)
> +			drv->states[1].target_residency = residency;
> +}
> +
> +static int powerpc_cpuidle_add_cpu_notifier(struct notifier_block *n,
> +			unsigned long action, void *hcpu)
> +{
> +	int hotcpu = (unsigned long)hcpu;
> +	struct cpuidle_device *dev =
> +			per_cpu_ptr(cpuidle_devices, hotcpu);
> +
> +	if (dev && cpuidle_get_driver()) {
> +		switch (action) {
> +		case CPU_ONLINE:
> +		case CPU_ONLINE_FROZEN:
> +			cpuidle_pause_and_lock();
> +			cpuidle_enable_device(dev);
> +			cpuidle_resume_and_unlock();
> +			break;
> +
> +		case CPU_DEAD:
> +		case CPU_DEAD_FROZEN:
> +			cpuidle_pause_and_lock();
> +			cpuidle_disable_device(dev);
> +			cpuidle_resume_and_unlock();
> +			break;
> +
> +		default:
> +			return NOTIFY_DONE;
> +		}
> +	}
> +	return NOTIFY_OK;
> +}
> +
> +static struct notifier_block setup_hotplug_notifier = {
> +	.notifier_call = powerpc_cpuidle_add_cpu_notifier,
> +};
> +
> +/*
> + * powerpc_cpuidle_driver_init()
> + */
> +static int powerpc_cpuidle_driver_init(void)
> +{
> +	int idle_state;
> +	struct cpuidle_driver *drv = &powerpc_idle_driver;
> +
> +	drv->state_count = 0;
> +
> +	for (idle_state = 0; idle_state < MAX_IDLE_STATE_COUNT; ++idle_state) {
> +
> +		if (idle_state > max_idle_state)
> +			break;
> +
> +		/* is the state not enabled? */
> +		if (cpuidle_state_table[idle_state].enter == NULL)
> +			continue;
> +
> +		drv->states[drv->state_count] =	/* structure copy */
> +			cpuidle_state_table[idle_state];
> +
> +		drv->state_count += 1;
> +	}
> +
> +	return 0;
> +}
> +
> +/*
> + * powerpc_idle_probe()
> + * Choose state table for shared versus dedicated partition
> + */
> +static int powerpc_idle_probe(void)
> +{
> +
> +	if (cpuidle_disable != IDLE_NO_OVERRIDE)
> +		return -ENODEV;
> +
> +	if (max_idle_state == 0) {
> +		printk(KERN_DEBUG "powerpc processor idle disabled.\n");
> +		return -EPERM;
> +	}
> +
> +	if (firmware_has_feature(FW_FEATURE_SPLPAR)) {
> +		if (get_lppaca_is_shared_proc() == 1)
> +			cpuidle_state_table = shared_states;
> +		else if (get_lppaca_is_shared_proc() == 0)
> +			cpuidle_state_table = dedicated_states;
> +	} else
> +		return -ENODEV;
> +
> +	return 0;
> +}
> +
> +static int __init powerpc_processor_idle_init(void)
> +{
> +	int retval;
> +
> +	retval = powerpc_idle_probe();
> +	if (retval)
> +		return retval;
> +
> +	powerpc_cpuidle_driver_init();
> +	retval = cpuidle_register(&powerpc_idle_driver, NULL);
> +	if (retval) {
> +		printk(KERN_DEBUG "Registration of powerpc driver failed.\n");
> +		return retval;
> +	}
> +
> +	register_cpu_notifier(&setup_hotplug_notifier);
> +	printk(KERN_DEBUG "powerpc_idle_driver registered\n");
> +
> +	return 0;
> +}
> +
> +static void __exit powerpc_processor_idle_exit(void)
> +{
> +
> +	unregister_cpu_notifier(&setup_hotplug_notifier);
> +	cpuidle_unregister(&powerpc_idle_driver);
> +	return;
> +}
> +
> +module_init(powerpc_processor_idle_init);
> +module_exit(powerpc_processor_idle_exit);
> +
> +MODULE_AUTHOR("Deepthi Dharwar <deepthi@linux.vnet.ibm.com>");
> +MODULE_DESCRIPTION("Cpuidle driver for powerpc");
> +MODULE_LICENSE("GPL");

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH V4 3/5] powerpc/cpuidle: Generic powerpc backend cpuidle driver.
  2013-08-22 10:56   ` Bartlomiej Zolnierkiewicz
@ 2013-08-23 10:19     ` Deepthi Dharwar
  0 siblings, 0 replies; 8+ messages in thread
From: Deepthi Dharwar @ 2013-08-23 10:19 UTC (permalink / raw)
  To: Bartlomiej Zolnierkiewicz
  Cc: daniel.lezcano, linux-kernel, dongsheng.wang, rjw, scottwood,
	srivatsa.bhat, preeti, linux-pm, linuxppc-dev

Hi Bartlomiej,

Thanks for the review.

On 08/22/2013 04:26 PM, Bartlomiej Zolnierkiewicz wrote:
> 
> Hi,
> 
> On Thursday, August 22, 2013 11:00:29 AM Deepthi Dharwar wrote:
>> This patch involves moving the current pseries_idle backend driver code
>> from pseries/processor_idle.c to drivers/cpuidle/cpuidle-powerpc.c,
>> and making the backend code generic enough to be able to extend this
>> driver code for both powernv and pseries.
>>
>> It enables support for the pseries platform, such that going forward the
>> same code can be re-used with minimal effort for a common driver on powernv
>> and can be further extended to support cpuidle idle state management for
>> the rest of the powerpc platforms in the future. This removes a lot of code
>> duplication, making the code cleaner and easier to maintain.
> 
> This patch mixes the code movement with the actual code changes, which is
> not a good practice as it makes review more difficult and is generally bad
> from the long-term maintenance POV.
> 
> Please split this patch into code movement and code change parts.

Sure. I shall do so.

> V4 of this patch now also seems to contain changes which I posted on
> Tuesday as part of the dev->state_count removal patchset:
> 
> http://permalink.gmane.org/gmane.linux.power-management.general/37392
> http://permalink.gmane.org/gmane.linux.power-management.general/37393
> 
> so some work probably got duplicated. :(
> 

Sorry about that. I have been re-writing this driver over the last few
weeks, and this cleanup has been on my to-do list since V1, as pointed out
by Daniel. I missed seeing your cleanup.

Thanks for the patch!

Regards,
Deepthi


> Best regards,
> --
> Bartlomiej Zolnierkiewicz
> Samsung R&D Institute Poland
> Samsung Electronics
> 

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2013-08-23 10:20 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-08-22  5:29 [PATCH V4 0/5] powerpc/cpuidle: Generic POWERPC cpuidle driver enabled for POWER and POWERNV platforms Deepthi Dharwar
2013-08-22  5:30 ` [PATCH V4 1/5] pseries/cpuidle: Remove dependency of pseries.h file Deepthi Dharwar
2013-08-22  5:30 ` [PATCH V4 2/5] pseries: Move plpar_wrapper.h to powerpc common include/asm location Deepthi Dharwar
2013-08-22  5:30 ` [PATCH V4 3/5] powerpc/cpuidle: Generic powerpc backend cpuidle driver Deepthi Dharwar
2013-08-22 10:56   ` Bartlomiej Zolnierkiewicz
2013-08-23 10:19     ` Deepthi Dharwar
2013-08-22  5:30 ` [PATCH V4 4/5] powerpc/cpuidle: Enable powernv cpuidle support Deepthi Dharwar
2013-08-22  5:30 ` [PATCH V4 5/5] powernv/cpuidle: Enable idle powernv cpu to call into the cpuidle framework Deepthi Dharwar
