* [PATCH v11 0/8] RISC-V CPU Idle Support
@ 2022-02-10  5:49 Anup Patel
  2022-02-10  5:49 ` [PATCH v11 1/8] RISC-V: Enable CPU_IDLE drivers Anup Patel
                   ` (8 more replies)
  0 siblings, 9 replies; 25+ messages in thread
From: Anup Patel @ 2022-02-10  5:49 UTC (permalink / raw)
  To: Palmer Dabbelt, Paul Walmsley, Albert Ou, Daniel Lezcano,
	Ulf Hansson, Rafael J . Wysocki, Pavel Machek, Rob Herring
  Cc: Sandeep Tripathy, Atish Patra, Alistair Francis, Liush,
	Anup Patel, devicetree, linux-riscv, linux-kernel, linux-pm,
	linux-arm-kernel, kvm-riscv

From: Anup Patel <anup.patel@wdc.com>

This series adds RISC-V CPU Idle support using the SBI HSM suspend function.
The RISC-V SBI CPU idle driver added by this series is heavily inspired
by the ARM PSCI CPU idle driver.

At a high level, this series includes the following changes:
1) Preparatory arch/riscv patches (Patches 1 to 3)
2) Defines for RISC-V SBI HSM suspend (Patch 4)
3) Preparatory patch to share code between RISC-V SBI CPU idle driver
   and ARM PSCI CPU idle driver (Patch 5)
4) RISC-V SBI CPU idle driver and related DT bindings (Patches 6 to 7)
5) Enable the RISC-V SBI CPU idle driver for the QEMU virt machine (Patch 8)

These patches can be found in the riscv_sbi_hsm_suspend_v11 branch of
https://github.com/avpatel/linux.git

Special thanks to Sandeep Tripathy for providing early feedback on SBI HSM
support in all the above projects (RISC-V SBI specification, OpenSBI, and
Linux RISC-V).

Changes since v10:
 - Rebased on Linux-5.17-rc3
 - Typo fix in commit description of PATCH6

Changes since v9:
 - Rebased on Linux-5.17-rc1

Changes since v8:
 - Rebased on Linux-5.15-rc5
 - Fixed DT schema check errors in PATCH7

Changes since v7:
 - Rebased on Linux-5.15-rc3
 - Renamed cpuidle-sbi.c to cpuidle-riscv-sbi.c in PATCH6

Changes since v6:
 - Fixed error reported by "make DT_CHECKER_FLAGS=-m dt_binding_check"

Changes since v5:
 - Rebased on Linux-5.13-rc5
 - Removed unnecessary exports from PATCH5
 - Removed stray ";" from PATCH5
 - Moved sbi_cpuidle_pd_power_off() under "#ifdef CONFIG_DT_IDLE_GENPD"
   in PATCH6

Changes since v4:
 - Rebased on Linux-5.13-rc2
 - Renamed all dt_idle_genpd functions to have "dt_idle_" prefix
 - Added MAINTAINERS file entry for dt_idle_genpd

Changes since v3:
 - Rebased on Linux-5.13-rc2
 - Fixed __cpu_resume_enter() which was broken due to XIP kernel support
 - Removed "struct dt_idle_genpd_ops" abstraction which simplifies code
   sharing between ARM PSCI and RISC-V SBI drivers in PATCH5

Changes since v2:
 - Rebased on Linux-5.12-rc3
 - Updated PATCH7 to add common DT bindings for both ARM and RISC-V
   idle states
 - Added "additionalProperties = false" for both idle-states node and
   child nodes in PATCH7

Changes since v1:
 - Fixed minor typo in PATCH1
 - Use just "idle-states" as DT node name for CPU idle states
 - Added documentation for "cpu-idle-states" DT property in
   devicetree/bindings/riscv/cpus.yaml
 - Added documentation for "riscv,sbi-suspend-param" DT property in
   devicetree/bindings/riscv/idle-states.yaml

Anup Patel (8):
  RISC-V: Enable CPU_IDLE drivers
  RISC-V: Rename relocate() and make it global
  RISC-V: Add arch functions for non-retentive suspend entry/exit
  RISC-V: Add SBI HSM suspend related defines
  cpuidle: Factor-out power domain related code from PSCI domain driver
  cpuidle: Add RISC-V SBI CPU idle driver
  dt-bindings: Add common bindings for ARM and RISC-V idle states
  RISC-V: Enable RISC-V SBI CPU Idle driver for QEMU virt machine

 .../bindings/arm/msm/qcom,idle-state.txt      |   2 +-
 .../devicetree/bindings/arm/psci.yaml         |   2 +-
 .../bindings/{arm => cpu}/idle-states.yaml    | 228 ++++++-
 .../devicetree/bindings/riscv/cpus.yaml       |   6 +
 MAINTAINERS                                   |  14 +
 arch/riscv/Kconfig                            |   7 +
 arch/riscv/Kconfig.socs                       |   3 +
 arch/riscv/configs/defconfig                  |   2 +
 arch/riscv/configs/rv32_defconfig             |   2 +
 arch/riscv/include/asm/asm.h                  |  27 +
 arch/riscv/include/asm/cpuidle.h              |  24 +
 arch/riscv/include/asm/sbi.h                  |  27 +-
 arch/riscv/include/asm/suspend.h              |  36 +
 arch/riscv/kernel/Makefile                    |   2 +
 arch/riscv/kernel/asm-offsets.c               |   3 +
 arch/riscv/kernel/cpu_ops_sbi.c               |   2 +-
 arch/riscv/kernel/head.S                      |  28 +-
 arch/riscv/kernel/process.c                   |   3 +-
 arch/riscv/kernel/suspend.c                   |  87 +++
 arch/riscv/kernel/suspend_entry.S             | 124 ++++
 arch/riscv/kvm/vcpu_sbi_hsm.c                 |   4 +-
 drivers/cpuidle/Kconfig                       |   9 +
 drivers/cpuidle/Kconfig.arm                   |   1 +
 drivers/cpuidle/Kconfig.riscv                 |  15 +
 drivers/cpuidle/Makefile                      |   5 +
 drivers/cpuidle/cpuidle-psci-domain.c         | 138 +---
 drivers/cpuidle/cpuidle-psci.h                |  15 +-
 drivers/cpuidle/cpuidle-riscv-sbi.c           | 627 ++++++++++++++++++
 drivers/cpuidle/dt_idle_genpd.c               | 178 +++++
 drivers/cpuidle/dt_idle_genpd.h               |  50 ++
 30 files changed, 1484 insertions(+), 187 deletions(-)
 rename Documentation/devicetree/bindings/{arm => cpu}/idle-states.yaml (74%)
 create mode 100644 arch/riscv/include/asm/cpuidle.h
 create mode 100644 arch/riscv/include/asm/suspend.h
 create mode 100644 arch/riscv/kernel/suspend.c
 create mode 100644 arch/riscv/kernel/suspend_entry.S
 create mode 100644 drivers/cpuidle/Kconfig.riscv
 create mode 100644 drivers/cpuidle/cpuidle-riscv-sbi.c
 create mode 100644 drivers/cpuidle/dt_idle_genpd.c
 create mode 100644 drivers/cpuidle/dt_idle_genpd.h

-- 
2.25.1


* [PATCH v11 1/8] RISC-V: Enable CPU_IDLE drivers
  2022-02-10  5:49 [PATCH v11 0/8] RISC-V CPU Idle Support Anup Patel
@ 2022-02-10  5:49 ` Anup Patel
  2022-02-12 11:43   ` Pavel Machek
  2022-02-16  0:50   ` Atish Patra
  2022-02-10  5:49 ` [PATCH v11 2/8] RISC-V: Rename relocate() and make it global Anup Patel
                   ` (7 subsequent siblings)
  8 siblings, 2 replies; 25+ messages in thread
From: Anup Patel @ 2022-02-10  5:49 UTC (permalink / raw)
  To: Palmer Dabbelt, Paul Walmsley, Albert Ou, Daniel Lezcano,
	Ulf Hansson, Rafael J . Wysocki, Pavel Machek, Rob Herring
  Cc: Sandeep Tripathy, Atish Patra, Alistair Francis, Liush,
	Anup Patel, devicetree, linux-riscv, linux-kernel, linux-pm,
	linux-arm-kernel, kvm-riscv, Guo Ren

From: Anup Patel <anup.patel@wdc.com>

We force select CPU_PM and provide asm/cpuidle.h so that CPU idle
drivers can be used with the Linux RISC-V kernel.
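
For illustration only (not part of this patch), the reason CPU_PM is
needed is that cpuidle drivers bracket deeper idle states with the CPU
PM notifier chain. A minimal sketch of such an enter callback, where
example_firmware_suspend() is a hypothetical placeholder for the actual
firmware request:

#include <linux/cpu_pm.h>
#include <linux/cpuidle.h>
#include <asm/cpuidle.h>

static int example_enter_idle(struct cpuidle_device *dev,
			      struct cpuidle_driver *drv, int idx)
{
	int ret;

	/* Shallowest state: plain WFI, no notifiers needed. */
	if (idx == 0) {
		cpu_do_idle();
		return idx;
	}

	/* Deeper states: let CPU PM listeners (timers, PMU, ...) save state. */
	ret = cpu_pm_enter();
	if (ret)
		return -1;

	ret = example_firmware_suspend(idx);	/* hypothetical */

	cpu_pm_exit();

	return ret ? -1 : idx;
}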

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Guo Ren <guoren@kernel.org>
---
 arch/riscv/Kconfig                |  7 +++++++
 arch/riscv/configs/defconfig      |  1 +
 arch/riscv/configs/rv32_defconfig |  1 +
 arch/riscv/include/asm/cpuidle.h  | 24 ++++++++++++++++++++++++
 arch/riscv/kernel/process.c       |  3 ++-
 5 files changed, 35 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/include/asm/cpuidle.h

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 5adcbd9b5e88..76976d12b463 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -46,6 +46,7 @@ config RISCV
 	select CLONE_BACKWARDS
 	select CLINT_TIMER if !MMU
 	select COMMON_CLK
+	select CPU_PM if CPU_IDLE
 	select EDAC_SUPPORT
 	select GENERIC_ARCH_TOPOLOGY if SMP
 	select GENERIC_ATOMIC64 if !64BIT
@@ -547,4 +548,10 @@ source "kernel/power/Kconfig"
 
 endmenu
 
+menu "CPU Power Management"
+
+source "drivers/cpuidle/Kconfig"
+
+endmenu
+
 source "arch/riscv/kvm/Kconfig"
diff --git a/arch/riscv/configs/defconfig b/arch/riscv/configs/defconfig
index f120fcc43d0a..a5e0482a4969 100644
--- a/arch/riscv/configs/defconfig
+++ b/arch/riscv/configs/defconfig
@@ -20,6 +20,7 @@ CONFIG_SOC_SIFIVE=y
 CONFIG_SOC_VIRT=y
 CONFIG_SMP=y
 CONFIG_HOTPLUG_CPU=y
+CONFIG_CPU_IDLE=y
 CONFIG_VIRTUALIZATION=y
 CONFIG_KVM=m
 CONFIG_JUMP_LABEL=y
diff --git a/arch/riscv/configs/rv32_defconfig b/arch/riscv/configs/rv32_defconfig
index 8b56a7f1eb06..d1b87db54d68 100644
--- a/arch/riscv/configs/rv32_defconfig
+++ b/arch/riscv/configs/rv32_defconfig
@@ -20,6 +20,7 @@ CONFIG_SOC_VIRT=y
 CONFIG_ARCH_RV32I=y
 CONFIG_SMP=y
 CONFIG_HOTPLUG_CPU=y
+CONFIG_CPU_IDLE=y
 CONFIG_VIRTUALIZATION=y
 CONFIG_KVM=m
 CONFIG_JUMP_LABEL=y
diff --git a/arch/riscv/include/asm/cpuidle.h b/arch/riscv/include/asm/cpuidle.h
new file mode 100644
index 000000000000..71fdc607d4bc
--- /dev/null
+++ b/arch/riscv/include/asm/cpuidle.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Allwinner Ltd
+ * Copyright (C) 2021 Western Digital Corporation or its affiliates.
+ */
+
+#ifndef _ASM_RISCV_CPUIDLE_H
+#define _ASM_RISCV_CPUIDLE_H
+
+#include <asm/barrier.h>
+#include <asm/processor.h>
+
+static inline void cpu_do_idle(void)
+{
+	/*
+	 * Add mb() here to ensure that all
+	 * IO/MEM accesses are completed prior
+	 * to entering WFI.
+	 */
+	mb();
+	wait_for_interrupt();
+}
+
+#endif
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index 03ac3aa611f5..504b496787aa 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -23,6 +23,7 @@
 #include <asm/string.h>
 #include <asm/switch_to.h>
 #include <asm/thread_info.h>
+#include <asm/cpuidle.h>
 
 register unsigned long gp_in_global __asm__("gp");
 
@@ -37,7 +38,7 @@ extern asmlinkage void ret_from_kernel_thread(void);
 
 void arch_cpu_idle(void)
 {
-	wait_for_interrupt();
+	cpu_do_idle();
 	raw_local_irq_enable();
 }
 
-- 
2.25.1


* [PATCH v11 2/8] RISC-V: Rename relocate() and make it global
  2022-02-10  5:49 [PATCH v11 0/8] RISC-V CPU Idle Support Anup Patel
  2022-02-10  5:49 ` [PATCH v11 1/8] RISC-V: Enable CPU_IDLE drivers Anup Patel
@ 2022-02-10  5:49 ` Anup Patel
  2022-02-16  0:57   ` Atish Patra
  2022-02-10  5:49 ` [PATCH v11 3/8] RISC-V: Add arch functions for non-retentive suspend entry/exit Anup Patel
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 25+ messages in thread
From: Anup Patel @ 2022-02-10  5:49 UTC (permalink / raw)
  To: Palmer Dabbelt, Paul Walmsley, Albert Ou, Daniel Lezcano,
	Ulf Hansson, Rafael J . Wysocki, Pavel Machek, Rob Herring
  Cc: Sandeep Tripathy, Atish Patra, Alistair Francis, Liush,
	Anup Patel, devicetree, linux-riscv, linux-kernel, linux-pm,
	linux-arm-kernel, kvm-riscv, Anup Patel, Guo Ren

From: Anup Patel <anup.patel@wdc.com>

The low-level relocate() function enables the MMU and relocates
execution to the link-time addresses. We rename relocate() to
relocate_enable_mmu(), which is more informative.

Also, relocate_enable_mmu() will be used in the resume path when a
CPU wakes up from a non-retentive suspend, so we make it a global
symbol.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Guo Ren <guoren@kernel.org>
---
 arch/riscv/kernel/head.S | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
index 2363b43312fc..5f4c6b6c4974 100644
--- a/arch/riscv/kernel/head.S
+++ b/arch/riscv/kernel/head.S
@@ -90,7 +90,8 @@ pe_head_start:
 
 .align 2
 #ifdef CONFIG_MMU
-relocate:
+	.global relocate_enable_mmu
+relocate_enable_mmu:
 	/* Relocate return address */
 	la a1, kernel_map
 	XIP_FIXUP_OFFSET a1
@@ -185,7 +186,7 @@ secondary_start_sbi:
 	/* Enable virtual memory and relocate to virtual address */
 	la a0, swapper_pg_dir
 	XIP_FIXUP_OFFSET a0
-	call relocate
+	call relocate_enable_mmu
 #endif
 	call setup_trap_vector
 	tail smp_callin
@@ -329,7 +330,7 @@ clear_bss_done:
 #ifdef CONFIG_MMU
 	la a0, early_pg_dir
 	XIP_FIXUP_OFFSET a0
-	call relocate
+	call relocate_enable_mmu
 #endif /* CONFIG_MMU */
 
 	call setup_trap_vector
-- 
2.25.1


* [PATCH v11 3/8] RISC-V: Add arch functions for non-retentive suspend entry/exit
  2022-02-10  5:49 [PATCH v11 0/8] RISC-V CPU Idle Support Anup Patel
  2022-02-10  5:49 ` [PATCH v11 1/8] RISC-V: Enable CPU_IDLE drivers Anup Patel
  2022-02-10  5:49 ` [PATCH v11 2/8] RISC-V: Rename relocate() and make it global Anup Patel
@ 2022-02-10  5:49 ` Anup Patel
  2022-02-10  5:49 ` [PATCH v11 4/8] RISC-V: Add SBI HSM suspend related defines Anup Patel
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 25+ messages in thread
From: Anup Patel @ 2022-02-10  5:49 UTC (permalink / raw)
  To: Palmer Dabbelt, Paul Walmsley, Albert Ou, Daniel Lezcano,
	Ulf Hansson, Rafael J . Wysocki, Pavel Machek, Rob Herring
  Cc: Sandeep Tripathy, Atish Patra, Alistair Francis, Liush,
	Anup Patel, devicetree, linux-riscv, linux-kernel, linux-pm,
	linux-arm-kernel, kvm-riscv, Anup Patel, Guo Ren

From: Anup Patel <anup.patel@wdc.com>

The hart registers and CSRs are not preserved in the non-retentive
suspend state, so we provide arch-specific helper functions which
save/restore the hart context upon entry to and exit from the
non-retentive suspend state. These helper functions can be used by
cpuidle drivers for non-retentive suspend entry/exit.
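
For illustration only (not part of this patch), a cpuidle driver would
typically use these helpers as follows. This is a rough sketch where
example_firmware_suspend() is a hypothetical placeholder for the actual
firmware call that powers the hart down:

#include <asm/suspend.h>

/*
 * The finisher never returns on success: the hart resumes at
 * __cpu_resume_enter() and comes back through cpu_suspend().
 * Returning from the finisher means the suspend request failed.
 */
static int example_finisher(unsigned long arg, unsigned long entry,
			    unsigned long context)
{
	return example_firmware_suspend(arg, entry, context); /* hypothetical */
}

static int example_enter_nonret_state(unsigned long state)
{
	/*
	 * cpu_suspend() saves the hart context on the stack and passes
	 * the physical resume address and context pointer to the finisher.
	 */
	return cpu_suspend(state, example_finisher);
}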

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Guo Ren <guoren@kernel.org>
---
 arch/riscv/include/asm/asm.h      |  27 +++++++
 arch/riscv/include/asm/suspend.h  |  36 +++++++++
 arch/riscv/kernel/Makefile        |   2 +
 arch/riscv/kernel/asm-offsets.c   |   3 +
 arch/riscv/kernel/head.S          |  21 -----
 arch/riscv/kernel/suspend.c       |  87 +++++++++++++++++++++
 arch/riscv/kernel/suspend_entry.S | 124 ++++++++++++++++++++++++++++++
 7 files changed, 279 insertions(+), 21 deletions(-)
 create mode 100644 arch/riscv/include/asm/suspend.h
 create mode 100644 arch/riscv/kernel/suspend.c
 create mode 100644 arch/riscv/kernel/suspend_entry.S

diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
index 618d7c5af1a2..48b4baa4d706 100644
--- a/arch/riscv/include/asm/asm.h
+++ b/arch/riscv/include/asm/asm.h
@@ -67,4 +67,31 @@
 #error "Unexpected __SIZEOF_SHORT__"
 #endif
 
+#ifdef __ASSEMBLY__
+
+/* Common assembly source macros */
+
+#ifdef CONFIG_XIP_KERNEL
+.macro XIP_FIXUP_OFFSET reg
+	REG_L t0, _xip_fixup
+	add \reg, \reg, t0
+.endm
+.macro XIP_FIXUP_FLASH_OFFSET reg
+	la t1, __data_loc
+	li t0, XIP_OFFSET_MASK
+	and t1, t1, t0
+	li t1, XIP_OFFSET
+	sub t0, t0, t1
+	sub \reg, \reg, t0
+.endm
+_xip_fixup: .dword CONFIG_PHYS_RAM_BASE - CONFIG_XIP_PHYS_ADDR - XIP_OFFSET
+#else
+.macro XIP_FIXUP_OFFSET reg
+.endm
+.macro XIP_FIXUP_FLASH_OFFSET reg
+.endm
+#endif /* CONFIG_XIP_KERNEL */
+
+#endif /* __ASSEMBLY__ */
+
 #endif /* _ASM_RISCV_ASM_H */
diff --git a/arch/riscv/include/asm/suspend.h b/arch/riscv/include/asm/suspend.h
new file mode 100644
index 000000000000..8be391c2aecb
--- /dev/null
+++ b/arch/riscv/include/asm/suspend.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
+ * Copyright (c) 2022 Ventana Micro Systems Inc.
+ */
+
+#ifndef _ASM_RISCV_SUSPEND_H
+#define _ASM_RISCV_SUSPEND_H
+
+#include <asm/ptrace.h>
+
+struct suspend_context {
+	/* Saved and restored by low-level functions */
+	struct pt_regs regs;
+	/* Saved and restored by high-level functions */
+	unsigned long scratch;
+	unsigned long tvec;
+	unsigned long ie;
+#ifdef CONFIG_MMU
+	unsigned long satp;
+#endif
+};
+
+/* Low-level CPU suspend entry function */
+int __cpu_suspend_enter(struct suspend_context *context);
+
+/* High-level CPU suspend which will save context and call finish() */
+int cpu_suspend(unsigned long arg,
+		int (*finish)(unsigned long arg,
+			      unsigned long entry,
+			      unsigned long context));
+
+/* Low-level CPU resume entry function */
+int __cpu_resume_enter(unsigned long hartid, unsigned long context);
+
+#endif
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index 612556faa527..13fa5733f5e7 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -48,6 +48,8 @@ obj-$(CONFIG_RISCV_BOOT_SPINWAIT) += cpu_ops_spinwait.o
 obj-$(CONFIG_MODULES)		+= module.o
 obj-$(CONFIG_MODULE_SECTIONS)	+= module-sections.o
 
+obj-$(CONFIG_CPU_PM)		+= suspend_entry.o suspend.o
+
 obj-$(CONFIG_FUNCTION_TRACER)	+= mcount.o ftrace.o
 obj-$(CONFIG_DYNAMIC_FTRACE)	+= mcount-dyn.o
 
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
index df0519a64eaf..df9444397908 100644
--- a/arch/riscv/kernel/asm-offsets.c
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -13,6 +13,7 @@
 #include <asm/thread_info.h>
 #include <asm/ptrace.h>
 #include <asm/cpu_ops_sbi.h>
+#include <asm/suspend.h>
 
 void asm_offsets(void);
 
@@ -113,6 +114,8 @@ void asm_offsets(void)
 	OFFSET(PT_BADADDR, pt_regs, badaddr);
 	OFFSET(PT_CAUSE, pt_regs, cause);
 
+	OFFSET(SUSPEND_CONTEXT_REGS, suspend_context, regs);
+
 	OFFSET(KVM_ARCH_GUEST_ZERO, kvm_vcpu_arch, guest_context.zero);
 	OFFSET(KVM_ARCH_GUEST_RA, kvm_vcpu_arch, guest_context.ra);
 	OFFSET(KVM_ARCH_GUEST_SP, kvm_vcpu_arch, guest_context.sp);
diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
index 5f4c6b6c4974..893b8bb69391 100644
--- a/arch/riscv/kernel/head.S
+++ b/arch/riscv/kernel/head.S
@@ -16,27 +16,6 @@
 #include <asm/image.h>
 #include "efi-header.S"
 
-#ifdef CONFIG_XIP_KERNEL
-.macro XIP_FIXUP_OFFSET reg
-	REG_L t0, _xip_fixup
-	add \reg, \reg, t0
-.endm
-.macro XIP_FIXUP_FLASH_OFFSET reg
-	la t1, __data_loc
-	li t0, XIP_OFFSET_MASK
-	and t1, t1, t0
-	li t1, XIP_OFFSET
-	sub t0, t0, t1
-	sub \reg, \reg, t0
-.endm
-_xip_fixup: .dword CONFIG_PHYS_RAM_BASE - CONFIG_XIP_PHYS_ADDR - XIP_OFFSET
-#else
-.macro XIP_FIXUP_OFFSET reg
-.endm
-.macro XIP_FIXUP_FLASH_OFFSET reg
-.endm
-#endif /* CONFIG_XIP_KERNEL */
-
 __HEAD
 ENTRY(_start)
 	/*
diff --git a/arch/riscv/kernel/suspend.c b/arch/riscv/kernel/suspend.c
new file mode 100644
index 000000000000..9ba24fb8cc93
--- /dev/null
+++ b/arch/riscv/kernel/suspend.c
@@ -0,0 +1,87 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
+ * Copyright (c) 2022 Ventana Micro Systems Inc.
+ */
+
+#include <linux/ftrace.h>
+#include <asm/csr.h>
+#include <asm/suspend.h>
+
+static void suspend_save_csrs(struct suspend_context *context)
+{
+	context->scratch = csr_read(CSR_SCRATCH);
+	context->tvec = csr_read(CSR_TVEC);
+	context->ie = csr_read(CSR_IE);
+
+	/*
+	 * No need to save/restore IP CSR (i.e. MIP or SIP) because:
+	 *
+	 * 1. For no-MMU (M-mode) kernel, the bits in MIP are set by
+	 *    external devices (such as interrupt controller, timer, etc).
+	 * 2. For MMU (S-mode) kernel, the bits in SIP are set by
+	 *    M-mode firmware and external devices (such as interrupt
+	 *    controller, etc).
+	 */
+
+#ifdef CONFIG_MMU
+	context->satp = csr_read(CSR_SATP);
+#endif
+}
+
+static void suspend_restore_csrs(struct suspend_context *context)
+{
+	csr_write(CSR_SCRATCH, context->scratch);
+	csr_write(CSR_TVEC, context->tvec);
+	csr_write(CSR_IE, context->ie);
+
+#ifdef CONFIG_MMU
+	csr_write(CSR_SATP, context->satp);
+#endif
+}
+
+int cpu_suspend(unsigned long arg,
+		int (*finish)(unsigned long arg,
+			      unsigned long entry,
+			      unsigned long context))
+{
+	int rc = 0;
+	struct suspend_context context = { 0 };
+
+	/* Finisher should be non-NULL */
+	if (!finish)
+		return -EINVAL;
+
+	/* Save additional CSRs*/
+	suspend_save_csrs(&context);
+
+	/*
+	 * Function graph tracer state gets inconsistent when the kernel
+	 * calls functions that never return (aka finishers) hence disable
+	 * graph tracing during their execution.
+	 */
+	pause_graph_tracing();
+
+	/* Save context on stack */
+	if (__cpu_suspend_enter(&context)) {
+		/* Call the finisher */
+		rc = finish(arg, __pa_symbol(__cpu_resume_enter),
+			    (ulong)&context);
+
+		/*
+		 * Should never reach here, unless the suspend finisher
+		 * fails. Successful cpu_suspend() should return from
+		 * __cpu_resume_enter()
+		 */
+		if (!rc)
+			rc = -EOPNOTSUPP;
+	}
+
+	/* Enable function graph tracer */
+	unpause_graph_tracing();
+
+	/* Restore additional CSRs */
+	suspend_restore_csrs(&context);
+
+	return rc;
+}
diff --git a/arch/riscv/kernel/suspend_entry.S b/arch/riscv/kernel/suspend_entry.S
new file mode 100644
index 000000000000..4b07b809a2b8
--- /dev/null
+++ b/arch/riscv/kernel/suspend_entry.S
@@ -0,0 +1,124 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
+ * Copyright (c) 2022 Ventana Micro Systems Inc.
+ */
+
+#include <linux/linkage.h>
+#include <asm/asm.h>
+#include <asm/asm-offsets.h>
+#include <asm/csr.h>
+
+	.text
+	.altmacro
+	.option norelax
+
+ENTRY(__cpu_suspend_enter)
+	/* Save registers (except A0 and T0-T6) */
+	REG_S	ra, (SUSPEND_CONTEXT_REGS + PT_RA)(a0)
+	REG_S	sp, (SUSPEND_CONTEXT_REGS + PT_SP)(a0)
+	REG_S	gp, (SUSPEND_CONTEXT_REGS + PT_GP)(a0)
+	REG_S	tp, (SUSPEND_CONTEXT_REGS + PT_TP)(a0)
+	REG_S	s0, (SUSPEND_CONTEXT_REGS + PT_S0)(a0)
+	REG_S	s1, (SUSPEND_CONTEXT_REGS + PT_S1)(a0)
+	REG_S	a1, (SUSPEND_CONTEXT_REGS + PT_A1)(a0)
+	REG_S	a2, (SUSPEND_CONTEXT_REGS + PT_A2)(a0)
+	REG_S	a3, (SUSPEND_CONTEXT_REGS + PT_A3)(a0)
+	REG_S	a4, (SUSPEND_CONTEXT_REGS + PT_A4)(a0)
+	REG_S	a5, (SUSPEND_CONTEXT_REGS + PT_A5)(a0)
+	REG_S	a6, (SUSPEND_CONTEXT_REGS + PT_A6)(a0)
+	REG_S	a7, (SUSPEND_CONTEXT_REGS + PT_A7)(a0)
+	REG_S	s2, (SUSPEND_CONTEXT_REGS + PT_S2)(a0)
+	REG_S	s3, (SUSPEND_CONTEXT_REGS + PT_S3)(a0)
+	REG_S	s4, (SUSPEND_CONTEXT_REGS + PT_S4)(a0)
+	REG_S	s5, (SUSPEND_CONTEXT_REGS + PT_S5)(a0)
+	REG_S	s6, (SUSPEND_CONTEXT_REGS + PT_S6)(a0)
+	REG_S	s7, (SUSPEND_CONTEXT_REGS + PT_S7)(a0)
+	REG_S	s8, (SUSPEND_CONTEXT_REGS + PT_S8)(a0)
+	REG_S	s9, (SUSPEND_CONTEXT_REGS + PT_S9)(a0)
+	REG_S	s10, (SUSPEND_CONTEXT_REGS + PT_S10)(a0)
+	REG_S	s11, (SUSPEND_CONTEXT_REGS + PT_S11)(a0)
+
+	/* Save CSRs */
+	csrr	t0, CSR_EPC
+	REG_S	t0, (SUSPEND_CONTEXT_REGS + PT_EPC)(a0)
+	csrr	t0, CSR_STATUS
+	REG_S	t0, (SUSPEND_CONTEXT_REGS + PT_STATUS)(a0)
+	csrr	t0, CSR_TVAL
+	REG_S	t0, (SUSPEND_CONTEXT_REGS + PT_BADADDR)(a0)
+	csrr	t0, CSR_CAUSE
+	REG_S	t0, (SUSPEND_CONTEXT_REGS + PT_CAUSE)(a0)
+
+	/* Return non-zero value */
+	li	a0, 1
+
+	/* Return to C code */
+	ret
+END(__cpu_suspend_enter)
+
+ENTRY(__cpu_resume_enter)
+	/* Load the global pointer */
+	.option push
+	.option norelax
+		la gp, __global_pointer$
+	.option pop
+
+#ifdef CONFIG_MMU
+	/* Save A0 and A1 */
+	add	t0, a0, zero
+	add	t1, a1, zero
+
+	/* Enable MMU */
+	la	a0, swapper_pg_dir
+	XIP_FIXUP_OFFSET a0
+	call	relocate_enable_mmu
+
+	/* Restore A0 and A1 */
+	add	a0, t0, zero
+	add	a1, t1, zero
+#endif
+
+	/* Make A0 point to suspend context */
+	add	a0, a1, zero
+
+	/* Restore CSRs */
+	REG_L	t0, (SUSPEND_CONTEXT_REGS + PT_EPC)(a0)
+	csrw	CSR_EPC, t0
+	REG_L	t0, (SUSPEND_CONTEXT_REGS + PT_STATUS)(a0)
+	csrw	CSR_STATUS, t0
+	REG_L	t0, (SUSPEND_CONTEXT_REGS + PT_BADADDR)(a0)
+	csrw	CSR_TVAL, t0
+	REG_L	t0, (SUSPEND_CONTEXT_REGS + PT_CAUSE)(a0)
+	csrw	CSR_CAUSE, t0
+
+	/* Restore registers (except A0 and T0-T6) */
+	REG_L	ra, (SUSPEND_CONTEXT_REGS + PT_RA)(a0)
+	REG_L	sp, (SUSPEND_CONTEXT_REGS + PT_SP)(a0)
+	REG_L	gp, (SUSPEND_CONTEXT_REGS + PT_GP)(a0)
+	REG_L	tp, (SUSPEND_CONTEXT_REGS + PT_TP)(a0)
+	REG_L	s0, (SUSPEND_CONTEXT_REGS + PT_S0)(a0)
+	REG_L	s1, (SUSPEND_CONTEXT_REGS + PT_S1)(a0)
+	REG_L	a1, (SUSPEND_CONTEXT_REGS + PT_A1)(a0)
+	REG_L	a2, (SUSPEND_CONTEXT_REGS + PT_A2)(a0)
+	REG_L	a3, (SUSPEND_CONTEXT_REGS + PT_A3)(a0)
+	REG_L	a4, (SUSPEND_CONTEXT_REGS + PT_A4)(a0)
+	REG_L	a5, (SUSPEND_CONTEXT_REGS + PT_A5)(a0)
+	REG_L	a6, (SUSPEND_CONTEXT_REGS + PT_A6)(a0)
+	REG_L	a7, (SUSPEND_CONTEXT_REGS + PT_A7)(a0)
+	REG_L	s2, (SUSPEND_CONTEXT_REGS + PT_S2)(a0)
+	REG_L	s3, (SUSPEND_CONTEXT_REGS + PT_S3)(a0)
+	REG_L	s4, (SUSPEND_CONTEXT_REGS + PT_S4)(a0)
+	REG_L	s5, (SUSPEND_CONTEXT_REGS + PT_S5)(a0)
+	REG_L	s6, (SUSPEND_CONTEXT_REGS + PT_S6)(a0)
+	REG_L	s7, (SUSPEND_CONTEXT_REGS + PT_S7)(a0)
+	REG_L	s8, (SUSPEND_CONTEXT_REGS + PT_S8)(a0)
+	REG_L	s9, (SUSPEND_CONTEXT_REGS + PT_S9)(a0)
+	REG_L	s10, (SUSPEND_CONTEXT_REGS + PT_S10)(a0)
+	REG_L	s11, (SUSPEND_CONTEXT_REGS + PT_S11)(a0)
+
+	/* Return zero value */
+	add	a0, zero, zero
+
+	/* Return to C code */
+	ret
+END(__cpu_resume_enter)
-- 
2.25.1


* [PATCH v11 4/8] RISC-V: Add SBI HSM suspend related defines
  2022-02-10  5:49 [PATCH v11 0/8] RISC-V CPU Idle Support Anup Patel
                   ` (2 preceding siblings ...)
  2022-02-10  5:49 ` [PATCH v11 3/8] RISC-V: Add arch functions for non-retentive suspend entry/exit Anup Patel
@ 2022-02-10  5:49 ` Anup Patel
  2022-02-16  7:57   ` Atish Patra
  2022-02-23  7:02   ` Anup Patel
  2022-02-10  5:49 ` [PATCH v11 5/8] cpuidle: Factor-out power domain related code from PSCI domain driver Anup Patel
                   ` (4 subsequent siblings)
  8 siblings, 2 replies; 25+ messages in thread
From: Anup Patel @ 2022-02-10  5:49 UTC (permalink / raw)
  To: Palmer Dabbelt, Paul Walmsley, Albert Ou, Daniel Lezcano,
	Ulf Hansson, Rafael J . Wysocki, Pavel Machek, Rob Herring
  Cc: Sandeep Tripathy, Atish Patra, Alistair Francis, Liush,
	Anup Patel, devicetree, linux-riscv, linux-kernel, linux-pm,
	linux-arm-kernel, kvm-riscv, Anup Patel, Guo Ren

From: Anup Patel <anup.patel@wdc.com>

We add defines related to the SBI HSM suspend call and also update
the HSM state names as per the latest SBI specification.
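
For illustration only (not part of this patch), the encoding of the
suspend type parameter can be decoded with these defines. A minimal
sketch:

#include <linux/types.h>
#include <asm/sbi.h>

/* Bit 31 marks a non-retentive suspend type (hart context is lost). */
static bool example_is_non_retentive(u32 suspend_type)
{
	return !!(suspend_type & SBI_HSM_SUSP_NON_RET_BIT);
}

/*
 * Within the SBI_HSM_SUSP_BASE_MASK range, values at or above
 * SBI_HSM_SUSP_PLAT_BASE are platform-specific suspend types.
 */
static bool example_is_platform_specific(u32 suspend_type)
{
	return (suspend_type & SBI_HSM_SUSP_BASE_MASK) >= SBI_HSM_SUSP_PLAT_BASE;
}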

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Guo Ren <guoren@kernel.org>
---
 arch/riscv/include/asm/sbi.h    | 27 ++++++++++++++++++++++-----
 arch/riscv/kernel/cpu_ops_sbi.c |  2 +-
 arch/riscv/kvm/vcpu_sbi_hsm.c   |  4 ++--
 3 files changed, 25 insertions(+), 8 deletions(-)

diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
index d1c37479d828..06133b4f8e20 100644
--- a/arch/riscv/include/asm/sbi.h
+++ b/arch/riscv/include/asm/sbi.h
@@ -71,15 +71,32 @@ enum sbi_ext_hsm_fid {
 	SBI_EXT_HSM_HART_START = 0,
 	SBI_EXT_HSM_HART_STOP,
 	SBI_EXT_HSM_HART_STATUS,
+	SBI_EXT_HSM_HART_SUSPEND,
 };
 
-enum sbi_hsm_hart_status {
-	SBI_HSM_HART_STATUS_STARTED = 0,
-	SBI_HSM_HART_STATUS_STOPPED,
-	SBI_HSM_HART_STATUS_START_PENDING,
-	SBI_HSM_HART_STATUS_STOP_PENDING,
+enum sbi_hsm_hart_state {
+	SBI_HSM_STATE_STARTED = 0,
+	SBI_HSM_STATE_STOPPED,
+	SBI_HSM_STATE_START_PENDING,
+	SBI_HSM_STATE_STOP_PENDING,
+	SBI_HSM_STATE_SUSPENDED,
+	SBI_HSM_STATE_SUSPEND_PENDING,
+	SBI_HSM_STATE_RESUME_PENDING,
 };
 
+#define SBI_HSM_SUSP_BASE_MASK			0x7fffffff
+#define SBI_HSM_SUSP_NON_RET_BIT		0x80000000
+#define SBI_HSM_SUSP_PLAT_BASE			0x10000000
+
+#define SBI_HSM_SUSPEND_RET_DEFAULT		0x00000000
+#define SBI_HSM_SUSPEND_RET_PLATFORM		SBI_HSM_SUSP_PLAT_BASE
+#define SBI_HSM_SUSPEND_RET_LAST		SBI_HSM_SUSP_BASE_MASK
+#define SBI_HSM_SUSPEND_NON_RET_DEFAULT		SBI_HSM_SUSP_NON_RET_BIT
+#define SBI_HSM_SUSPEND_NON_RET_PLATFORM	(SBI_HSM_SUSP_NON_RET_BIT | \
+						 SBI_HSM_SUSP_PLAT_BASE)
+#define SBI_HSM_SUSPEND_NON_RET_LAST		(SBI_HSM_SUSP_NON_RET_BIT | \
+						 SBI_HSM_SUSP_BASE_MASK)
+
 enum sbi_ext_srst_fid {
 	SBI_EXT_SRST_RESET = 0,
 };
diff --git a/arch/riscv/kernel/cpu_ops_sbi.c b/arch/riscv/kernel/cpu_ops_sbi.c
index dae29cbfe550..2e16f6732cdf 100644
--- a/arch/riscv/kernel/cpu_ops_sbi.c
+++ b/arch/riscv/kernel/cpu_ops_sbi.c
@@ -111,7 +111,7 @@ static int sbi_cpu_is_stopped(unsigned int cpuid)
 
 	rc = sbi_hsm_hart_get_status(hartid);
 
-	if (rc == SBI_HSM_HART_STATUS_STOPPED)
+	if (rc == SBI_HSM_STATE_STOPPED)
 		return 0;
 	return rc;
 }
diff --git a/arch/riscv/kvm/vcpu_sbi_hsm.c b/arch/riscv/kvm/vcpu_sbi_hsm.c
index 2e383687fa48..1ac4b2e8e4ec 100644
--- a/arch/riscv/kvm/vcpu_sbi_hsm.c
+++ b/arch/riscv/kvm/vcpu_sbi_hsm.c
@@ -60,9 +60,9 @@ static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu)
 	if (!target_vcpu)
 		return -EINVAL;
 	if (!target_vcpu->arch.power_off)
-		return SBI_HSM_HART_STATUS_STARTED;
+		return SBI_HSM_STATE_STARTED;
 	else
-		return SBI_HSM_HART_STATUS_STOPPED;
+		return SBI_HSM_STATE_STOPPED;
 }
 
 static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
-- 
2.25.1


* [PATCH v11 5/8] cpuidle: Factor-out power domain related code from PSCI domain driver
  2022-02-10  5:49 [PATCH v11 0/8] RISC-V CPU Idle Support Anup Patel
                   ` (3 preceding siblings ...)
  2022-02-10  5:49 ` [PATCH v11 4/8] RISC-V: Add SBI HSM suspend related defines Anup Patel
@ 2022-02-10  5:49 ` Anup Patel
  2022-02-10  5:49 ` [PATCH v11 6/8] cpuidle: Add RISC-V SBI CPU idle driver Anup Patel
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 25+ messages in thread
From: Anup Patel @ 2022-02-10  5:49 UTC (permalink / raw)
  To: Palmer Dabbelt, Paul Walmsley, Albert Ou, Daniel Lezcano,
	Ulf Hansson, Rafael J . Wysocki, Pavel Machek, Rob Herring
  Cc: Sandeep Tripathy, Atish Patra, Alistair Francis, Liush,
	Anup Patel, devicetree, linux-riscv, linux-kernel, linux-pm,
	linux-arm-kernel, kvm-riscv, Anup Patel

From: Anup Patel <anup.patel@wdc.com>

The generic power-domain related code in the PSCI domain driver is largely
independent of PSCI and can be shared with the RISC-V SBI domain driver,
so we factor this code out into dt_idle_genpd.c and dt_idle_genpd.h.
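
For illustration only (not part of this patch), a firmware-specific
cpuidle driver can use the factored-out helpers roughly as below; the
"example,suspend-param" property, the "example" domain name, and the
omitted genpd setup are hypothetical placeholders:

#include <linux/errno.h>
#include <linux/of.h>
#include <linux/pm_domain.h>

#include "dt_idle_genpd.h"

/*
 * The parse_state callback extracts the firmware-specific parameter
 * from a domain-idle-state DT node; dt_idle_pd_alloc() stores one such
 * value per genpd power state.
 */
static int example_parse_state(struct device_node *np, u32 *state)
{
	return of_property_read_u32(np, "example,suspend-param", state);
}

static int example_pd_init(struct device_node *np)
{
	struct generic_pm_domain *pd;

	pd = dt_idle_pd_alloc(np, example_parse_state);
	if (!pd)
		return -ENOMEM;

	/* ... pm_genpd_init(), of_genpd_add_provider_simple(), ... */
	return 0;
}

/* Per-CPU devices attach to their PM domain by name. */
static struct device *example_attach_cpu(int cpu)
{
	return dt_idle_attach_cpu(cpu, "example");
}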

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
---
 MAINTAINERS                           |   7 +
 drivers/cpuidle/Kconfig               |   4 +
 drivers/cpuidle/Kconfig.arm           |   1 +
 drivers/cpuidle/Makefile              |   1 +
 drivers/cpuidle/cpuidle-psci-domain.c | 138 +-------------------
 drivers/cpuidle/cpuidle-psci.h        |  15 ++-
 drivers/cpuidle/dt_idle_genpd.c       | 178 ++++++++++++++++++++++++++
 drivers/cpuidle/dt_idle_genpd.h       |  50 ++++++++
 8 files changed, 259 insertions(+), 135 deletions(-)
 create mode 100644 drivers/cpuidle/dt_idle_genpd.c
 create mode 100644 drivers/cpuidle/dt_idle_genpd.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 69a2935daf6c..39ece23e8d93 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -5051,6 +5051,13 @@ S:	Supported
 F:	drivers/cpuidle/cpuidle-psci.h
 F:	drivers/cpuidle/cpuidle-psci-domain.c
 
+CPUIDLE DRIVER - DT IDLE PM DOMAIN
+M:	Ulf Hansson <ulf.hansson@linaro.org>
+L:	linux-pm@vger.kernel.org
+S:	Supported
+F:	drivers/cpuidle/dt_idle_genpd.c
+F:	drivers/cpuidle/dt_idle_genpd.h
+
 CRAMFS FILESYSTEM
 M:	Nicolas Pitre <nico@fluxnic.net>
 S:	Maintained
diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig
index c0aeedd66f02..f1afe7ab6b54 100644
--- a/drivers/cpuidle/Kconfig
+++ b/drivers/cpuidle/Kconfig
@@ -47,6 +47,10 @@ config CPU_IDLE_GOV_HALTPOLL
 config DT_IDLE_STATES
 	bool
 
+config DT_IDLE_GENPD
+	depends on PM_GENERIC_DOMAINS_OF
+	bool
+
 menu "ARM CPU Idle Drivers"
 depends on ARM || ARM64
 source "drivers/cpuidle/Kconfig.arm"
diff --git a/drivers/cpuidle/Kconfig.arm b/drivers/cpuidle/Kconfig.arm
index 15d6c46c0a47..be7f512109f7 100644
--- a/drivers/cpuidle/Kconfig.arm
+++ b/drivers/cpuidle/Kconfig.arm
@@ -27,6 +27,7 @@ config ARM_PSCI_CPUIDLE_DOMAIN
 	bool "PSCI CPU idle Domain"
 	depends on ARM_PSCI_CPUIDLE
 	depends on PM_GENERIC_DOMAINS_OF
+	select DT_IDLE_GENPD
 	default y
 	help
 	  Select this to enable the PSCI based CPUidle driver to use PM domains,
diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile
index 26bbc5e74123..11a26cef279f 100644
--- a/drivers/cpuidle/Makefile
+++ b/drivers/cpuidle/Makefile
@@ -6,6 +6,7 @@
 obj-y += cpuidle.o driver.o governor.o sysfs.o governors/
 obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o
 obj-$(CONFIG_DT_IDLE_STATES)		  += dt_idle_states.o
+obj-$(CONFIG_DT_IDLE_GENPD)		  += dt_idle_genpd.o
 obj-$(CONFIG_ARCH_HAS_CPU_RELAX)	  += poll_state.o
 obj-$(CONFIG_HALTPOLL_CPUIDLE)		  += cpuidle-haltpoll.o
 
diff --git a/drivers/cpuidle/cpuidle-psci-domain.c b/drivers/cpuidle/cpuidle-psci-domain.c
index ff2c3f8e4668..755bbdfc5b82 100644
--- a/drivers/cpuidle/cpuidle-psci-domain.c
+++ b/drivers/cpuidle/cpuidle-psci-domain.c
@@ -47,73 +47,14 @@ static int psci_pd_power_off(struct generic_pm_domain *pd)
 	return 0;
 }
 
-static int psci_pd_parse_state_nodes(struct genpd_power_state *states,
-				     int state_count)
-{
-	int i, ret;
-	u32 psci_state, *psci_state_buf;
-
-	for (i = 0; i < state_count; i++) {
-		ret = psci_dt_parse_state_node(to_of_node(states[i].fwnode),
-					&psci_state);
-		if (ret)
-			goto free_state;
-
-		psci_state_buf = kmalloc(sizeof(u32), GFP_KERNEL);
-		if (!psci_state_buf) {
-			ret = -ENOMEM;
-			goto free_state;
-		}
-		*psci_state_buf = psci_state;
-		states[i].data = psci_state_buf;
-	}
-
-	return 0;
-
-free_state:
-	i--;
-	for (; i >= 0; i--)
-		kfree(states[i].data);
-	return ret;
-}
-
-static int psci_pd_parse_states(struct device_node *np,
-			struct genpd_power_state **states, int *state_count)
-{
-	int ret;
-
-	/* Parse the domain idle states. */
-	ret = of_genpd_parse_idle_states(np, states, state_count);
-	if (ret)
-		return ret;
-
-	/* Fill out the PSCI specifics for each found state. */
-	ret = psci_pd_parse_state_nodes(*states, *state_count);
-	if (ret)
-		kfree(*states);
-
-	return ret;
-}
-
-static void psci_pd_free_states(struct genpd_power_state *states,
-				unsigned int state_count)
-{
-	int i;
-
-	for (i = 0; i < state_count; i++)
-		kfree(states[i].data);
-	kfree(states);
-}
-
 static int psci_pd_init(struct device_node *np, bool use_osi)
 {
 	struct generic_pm_domain *pd;
 	struct psci_pd_provider *pd_provider;
 	struct dev_power_governor *pd_gov;
-	struct genpd_power_state *states = NULL;
 	int ret = -ENOMEM, state_count = 0;
 
-	pd = kzalloc(sizeof(*pd), GFP_KERNEL);
+	pd = dt_idle_pd_alloc(np, psci_dt_parse_state_node);
 	if (!pd)
 		goto out;
 
@@ -121,22 +62,6 @@ static int psci_pd_init(struct device_node *np, bool use_osi)
 	if (!pd_provider)
 		goto free_pd;
 
-	pd->name = kasprintf(GFP_KERNEL, "%pOF", np);
-	if (!pd->name)
-		goto free_pd_prov;
-
-	/*
-	 * Parse the domain idle states and let genpd manage the state selection
-	 * for those being compatible with "domain-idle-state".
-	 */
-	ret = psci_pd_parse_states(np, &states, &state_count);
-	if (ret)
-		goto free_name;
-
-	pd->free_states = psci_pd_free_states;
-	pd->name = kbasename(pd->name);
-	pd->states = states;
-	pd->state_count = state_count;
 	pd->flags |= GENPD_FLAG_IRQ_SAFE | GENPD_FLAG_CPU_DOMAIN;
 
 	/* Allow power off when OSI has been successfully enabled. */
@@ -149,10 +74,8 @@ static int psci_pd_init(struct device_node *np, bool use_osi)
 	pd_gov = state_count > 0 ? &pm_domain_cpu_gov : NULL;
 
 	ret = pm_genpd_init(pd, pd_gov, false);
-	if (ret) {
-		psci_pd_free_states(states, state_count);
-		goto free_name;
-	}
+	if (ret)
+		goto free_pd_prov;
 
 	ret = of_genpd_add_provider_simple(np, pd);
 	if (ret)
@@ -166,12 +89,10 @@ static int psci_pd_init(struct device_node *np, bool use_osi)
 
 remove_pd:
 	pm_genpd_remove(pd);
-free_name:
-	kfree(pd->name);
 free_pd_prov:
 	kfree(pd_provider);
 free_pd:
-	kfree(pd);
+	dt_idle_pd_free(pd);
 out:
 	pr_err("failed to init PM domain ret=%d %pOF\n", ret, np);
 	return ret;
@@ -195,30 +116,6 @@ static void psci_pd_remove(void)
 	}
 }
 
-static int psci_pd_init_topology(struct device_node *np)
-{
-	struct device_node *node;
-	struct of_phandle_args child, parent;
-	int ret;
-
-	for_each_child_of_node(np, node) {
-		if (of_parse_phandle_with_args(node, "power-domains",
-					"#power-domain-cells", 0, &parent))
-			continue;
-
-		child.np = node;
-		child.args_count = 0;
-		ret = of_genpd_add_subdomain(&parent, &child);
-		of_node_put(parent.np);
-		if (ret) {
-			of_node_put(node);
-			return ret;
-		}
-	}
-
-	return 0;
-}
-
 static bool psci_pd_try_set_osi_mode(void)
 {
 	int ret;
@@ -282,7 +179,7 @@ static int psci_cpuidle_domain_probe(struct platform_device *pdev)
 		goto no_pd;
 
 	/* Link genpd masters/subdomains to model the CPU topology. */
-	ret = psci_pd_init_topology(np);
+	ret = dt_idle_pd_init_topology(np);
 	if (ret)
 		goto remove_pd;
 
@@ -314,28 +211,3 @@ static int __init psci_idle_init_domains(void)
 	return platform_driver_register(&psci_cpuidle_domain_driver);
 }
 subsys_initcall(psci_idle_init_domains);
-
-struct device *psci_dt_attach_cpu(int cpu)
-{
-	struct device *dev;
-
-	dev = dev_pm_domain_attach_by_name(get_cpu_device(cpu), "psci");
-	if (IS_ERR_OR_NULL(dev))
-		return dev;
-
-	pm_runtime_irq_safe(dev);
-	if (cpu_online(cpu))
-		pm_runtime_get_sync(dev);
-
-	dev_pm_syscore_device(dev, true);
-
-	return dev;
-}
-
-void psci_dt_detach_cpu(struct device *dev)
-{
-	if (IS_ERR_OR_NULL(dev))
-		return;
-
-	dev_pm_domain_detach(dev, false);
-}
diff --git a/drivers/cpuidle/cpuidle-psci.h b/drivers/cpuidle/cpuidle-psci.h
index d8e925e84c27..4e132640ed64 100644
--- a/drivers/cpuidle/cpuidle-psci.h
+++ b/drivers/cpuidle/cpuidle-psci.h
@@ -10,8 +10,19 @@ void psci_set_domain_state(u32 state);
 int psci_dt_parse_state_node(struct device_node *np, u32 *state);
 
 #ifdef CONFIG_ARM_PSCI_CPUIDLE_DOMAIN
-struct device *psci_dt_attach_cpu(int cpu);
-void psci_dt_detach_cpu(struct device *dev);
+
+#include "dt_idle_genpd.h"
+
+static inline struct device *psci_dt_attach_cpu(int cpu)
+{
+	return dt_idle_attach_cpu(cpu, "psci");
+}
+
+static inline void psci_dt_detach_cpu(struct device *dev)
+{
+	dt_idle_detach_cpu(dev);
+}
+
 #else
 static inline struct device *psci_dt_attach_cpu(int cpu) { return NULL; }
 static inline void psci_dt_detach_cpu(struct device *dev) { }
diff --git a/drivers/cpuidle/dt_idle_genpd.c b/drivers/cpuidle/dt_idle_genpd.c
new file mode 100644
index 000000000000..b37165514d4e
--- /dev/null
+++ b/drivers/cpuidle/dt_idle_genpd.c
@@ -0,0 +1,178 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * PM domains for CPUs via genpd.
+ *
+ * Copyright (C) 2019 Linaro Ltd.
+ * Author: Ulf Hansson <ulf.hansson@linaro.org>
+ *
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
+ * Copyright (c) 2022 Ventana Micro Systems Inc.
+ */
+
+#define pr_fmt(fmt) "dt-idle-genpd: " fmt
+
+#include <linux/cpu.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/pm_domain.h>
+#include <linux/pm_runtime.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+
+#include "dt_idle_genpd.h"
+
+static int pd_parse_state_nodes(
+			int (*parse_state)(struct device_node *, u32 *),
+			struct genpd_power_state *states, int state_count)
+{
+	int i, ret;
+	u32 state, *state_buf;
+
+	for (i = 0; i < state_count; i++) {
+		ret = parse_state(to_of_node(states[i].fwnode), &state);
+		if (ret)
+			goto free_state;
+
+		state_buf = kmalloc(sizeof(u32), GFP_KERNEL);
+		if (!state_buf) {
+			ret = -ENOMEM;
+			goto free_state;
+		}
+		*state_buf = state;
+		states[i].data = state_buf;
+	}
+
+	return 0;
+
+free_state:
+	i--;
+	for (; i >= 0; i--)
+		kfree(states[i].data);
+	return ret;
+}
+
+static int pd_parse_states(struct device_node *np,
+			   int (*parse_state)(struct device_node *, u32 *),
+			   struct genpd_power_state **states,
+			   int *state_count)
+{
+	int ret;
+
+	/* Parse the domain idle states. */
+	ret = of_genpd_parse_idle_states(np, states, state_count);
+	if (ret)
+		return ret;
+
+	/* Fill out the dt specifics for each found state. */
+	ret = pd_parse_state_nodes(parse_state, *states, *state_count);
+	if (ret)
+		kfree(*states);
+
+	return ret;
+}
+
+static void pd_free_states(struct genpd_power_state *states,
+			    unsigned int state_count)
+{
+	int i;
+
+	for (i = 0; i < state_count; i++)
+		kfree(states[i].data);
+	kfree(states);
+}
+
+void dt_idle_pd_free(struct generic_pm_domain *pd)
+{
+	pd_free_states(pd->states, pd->state_count);
+	kfree(pd->name);
+	kfree(pd);
+}
+
+struct generic_pm_domain *dt_idle_pd_alloc(struct device_node *np,
+			int (*parse_state)(struct device_node *, u32 *))
+{
+	struct generic_pm_domain *pd;
+	struct genpd_power_state *states = NULL;
+	int ret, state_count = 0;
+
+	pd = kzalloc(sizeof(*pd), GFP_KERNEL);
+	if (!pd)
+		goto out;
+
+	pd->name = kasprintf(GFP_KERNEL, "%pOF", np);
+	if (!pd->name)
+		goto free_pd;
+
+	/*
+	 * Parse the domain idle states and let genpd manage the state selection
+	 * for those being compatible with "domain-idle-state".
+	 */
+	ret = pd_parse_states(np, parse_state, &states, &state_count);
+	if (ret)
+		goto free_name;
+
+	pd->free_states = pd_free_states;
+	pd->name = kbasename(pd->name);
+	pd->states = states;
+	pd->state_count = state_count;
+
+	pr_debug("alloc PM domain %s\n", pd->name);
+	return pd;
+
+free_name:
+	kfree(pd->name);
+free_pd:
+	kfree(pd);
+out:
+	pr_err("failed to alloc PM domain %pOF\n", np);
+	return NULL;
+}
+
+int dt_idle_pd_init_topology(struct device_node *np)
+{
+	struct device_node *node;
+	struct of_phandle_args child, parent;
+	int ret;
+
+	for_each_child_of_node(np, node) {
+		if (of_parse_phandle_with_args(node, "power-domains",
+					"#power-domain-cells", 0, &parent))
+			continue;
+
+		child.np = node;
+		child.args_count = 0;
+		ret = of_genpd_add_subdomain(&parent, &child);
+		of_node_put(parent.np);
+		if (ret) {
+			of_node_put(node);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+struct device *dt_idle_attach_cpu(int cpu, const char *name)
+{
+	struct device *dev;
+
+	dev = dev_pm_domain_attach_by_name(get_cpu_device(cpu), name);
+	if (IS_ERR_OR_NULL(dev))
+		return dev;
+
+	pm_runtime_irq_safe(dev);
+	if (cpu_online(cpu))
+		pm_runtime_get_sync(dev);
+
+	dev_pm_syscore_device(dev, true);
+
+	return dev;
+}
+
+void dt_idle_detach_cpu(struct device *dev)
+{
+	if (IS_ERR_OR_NULL(dev))
+		return;
+
+	dev_pm_domain_detach(dev, false);
+}
diff --git a/drivers/cpuidle/dt_idle_genpd.h b/drivers/cpuidle/dt_idle_genpd.h
new file mode 100644
index 000000000000..a95483d08a02
--- /dev/null
+++ b/drivers/cpuidle/dt_idle_genpd.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __DT_IDLE_GENPD
+#define __DT_IDLE_GENPD
+
+struct device_node;
+struct generic_pm_domain;
+
+#ifdef CONFIG_DT_IDLE_GENPD
+
+void dt_idle_pd_free(struct generic_pm_domain *pd);
+
+struct generic_pm_domain *dt_idle_pd_alloc(struct device_node *np,
+			int (*parse_state)(struct device_node *, u32 *));
+
+int dt_idle_pd_init_topology(struct device_node *np);
+
+struct device *dt_idle_attach_cpu(int cpu, const char *name);
+
+void dt_idle_detach_cpu(struct device *dev);
+
+#else
+
+static inline void dt_idle_pd_free(struct generic_pm_domain *pd)
+{
+}
+
+static inline struct generic_pm_domain *dt_idle_pd_alloc(
+			struct device_node *np,
+			int (*parse_state)(struct device_node *, u32 *))
+{
+	return NULL;
+}
+
+static inline int dt_idle_pd_init_topology(struct device_node *np)
+{
+	return 0;
+}
+
+static inline struct device *dt_idle_attach_cpu(int cpu, const char *name)
+{
+	return NULL;
+}
+
+static inline void dt_idle_detach_cpu(struct device *dev)
+{
+}
+
+#endif
+
+#endif
-- 
2.25.1


* [PATCH v11 6/8] cpuidle: Add RISC-V SBI CPU idle driver
  2022-02-10  5:49 [PATCH v11 0/8] RISC-V CPU Idle Support Anup Patel
                   ` (4 preceding siblings ...)
  2022-02-10  5:49 ` [PATCH v11 5/8] cpuidle: Factor-out power domain related code from PSCI domain driver Anup Patel
@ 2022-02-10  5:49 ` Anup Patel
  2022-02-16  8:09   ` Atish Patra
                     ` (2 more replies)
  2022-02-10  5:49 ` [PATCH v11 7/8] dt-bindings: Add common bindings for ARM and RISC-V idle states Anup Patel
                   ` (2 subsequent siblings)
  8 siblings, 3 replies; 25+ messages in thread
From: Anup Patel @ 2022-02-10  5:49 UTC (permalink / raw)
  To: Palmer Dabbelt, Paul Walmsley, Albert Ou, Daniel Lezcano,
	Ulf Hansson, Rafael J . Wysocki, Pavel Machek, Rob Herring
  Cc: Sandeep Tripathy, Atish Patra, Alistair Francis, Liush,
	Anup Patel, devicetree, linux-riscv, linux-kernel, linux-pm,
	linux-arm-kernel, kvm-riscv, Anup Patel

From: Anup Patel <anup.patel@wdc.com>

The RISC-V SBI HSM extension provides an HSM suspend call which can
be used by Linux RISC-V to enter platform-specific low-power states.

This patch adds a CPU idle driver based on RISC-V SBI calls which
populates idle states from the device tree and uses SBI calls to
enter these idle states.
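
At a glance, the idle entry path added below boils down to the
following condensed sketch (a restatement of sbi_suspend() from this
patch, for orientation only; the finisher argument stands in for
sbi_suspend_finisher() defined below):

#include <asm/sbi.h>
#include <asm/suspend.h>

/*
 * Each cpuidle state index maps to an SBI suspend type parsed from the
 * "riscv,sbi-suspend-param" DT property (state index 0 is plain WFI).
 * Non-retentive types lose hart context, so they go through
 * cpu_suspend() which supplies the resume address and context pointer;
 * retentive types are a direct HSM suspend ecall.
 */
static int example_sbi_suspend(u32 state,
			       int (*finisher)(unsigned long, unsigned long,
					       unsigned long))
{
	struct sbiret ret;

	if (state & SBI_HSM_SUSP_NON_RET_BIT)
		return cpu_suspend(state, finisher);

	ret = sbi_ecall(SBI_EXT_HSM, SBI_EXT_HSM_HART_SUSPEND,
			state, 0, 0, 0, 0, 0);
	return ret.error ? sbi_err_map_linux_errno(ret.error) : 0;
}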

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
 MAINTAINERS                         |   7 +
 drivers/cpuidle/Kconfig             |   5 +
 drivers/cpuidle/Kconfig.riscv       |  15 +
 drivers/cpuidle/Makefile            |   4 +
 drivers/cpuidle/cpuidle-riscv-sbi.c | 627 ++++++++++++++++++++++++++++
 5 files changed, 658 insertions(+)
 create mode 100644 drivers/cpuidle/Kconfig.riscv
 create mode 100644 drivers/cpuidle/cpuidle-riscv-sbi.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 39ece23e8d93..2ff0055a26a7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -5058,6 +5058,13 @@ S:	Supported
 F:	drivers/cpuidle/dt_idle_genpd.c
 F:	drivers/cpuidle/dt_idle_genpd.h
 
+CPUIDLE DRIVER - RISC-V SBI
+M:	Anup Patel <anup@brainfault.org>
+L:	linux-pm@vger.kernel.org
+L:	linux-riscv@lists.infradead.org
+S:	Maintained
+F:	drivers/cpuidle/cpuidle-riscv-sbi.c
+
 CRAMFS FILESYSTEM
 M:	Nicolas Pitre <nico@fluxnic.net>
 S:	Maintained
diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig
index f1afe7ab6b54..ff71dd662880 100644
--- a/drivers/cpuidle/Kconfig
+++ b/drivers/cpuidle/Kconfig
@@ -66,6 +66,11 @@ depends on PPC
 source "drivers/cpuidle/Kconfig.powerpc"
 endmenu
 
+menu "RISC-V CPU Idle Drivers"
+depends on RISCV
+source "drivers/cpuidle/Kconfig.riscv"
+endmenu
+
 config HALTPOLL_CPUIDLE
 	tristate "Halt poll cpuidle driver"
 	depends on X86 && KVM_GUEST
diff --git a/drivers/cpuidle/Kconfig.riscv b/drivers/cpuidle/Kconfig.riscv
new file mode 100644
index 000000000000..78518c26af74
--- /dev/null
+++ b/drivers/cpuidle/Kconfig.riscv
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# RISC-V CPU Idle drivers
+#
+
+config RISCV_SBI_CPUIDLE
+	bool "RISC-V SBI CPU idle Driver"
+	depends on RISCV_SBI
+	select DT_IDLE_STATES
+	select CPU_IDLE_MULTIPLE_DRIVERS
+	select DT_IDLE_GENPD if PM_GENERIC_DOMAINS_OF
+	help
+	  Select this option to enable the RISC-V SBI firmware-based CPU
+	  idle driver for RISC-V systems. This driver also supports the
+	  hierarchical DT-based layout of idle states.
diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile
index 11a26cef279f..d103342b7cfc 100644
--- a/drivers/cpuidle/Makefile
+++ b/drivers/cpuidle/Makefile
@@ -35,3 +35,7 @@ obj-$(CONFIG_MIPS_CPS_CPUIDLE)		+= cpuidle-cps.o
 # POWERPC drivers
 obj-$(CONFIG_PSERIES_CPUIDLE)		+= cpuidle-pseries.o
 obj-$(CONFIG_POWERNV_CPUIDLE)		+= cpuidle-powernv.o
+
+###############################################################################
+# RISC-V drivers
+obj-$(CONFIG_RISCV_SBI_CPUIDLE)		+= cpuidle-riscv-sbi.o
diff --git a/drivers/cpuidle/cpuidle-riscv-sbi.c b/drivers/cpuidle/cpuidle-riscv-sbi.c
new file mode 100644
index 000000000000..b459eda2cd37
--- /dev/null
+++ b/drivers/cpuidle/cpuidle-riscv-sbi.c
@@ -0,0 +1,627 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * RISC-V SBI CPU idle driver.
+ *
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
+ * Copyright (c) 2022 Ventana Micro Systems Inc.
+ */
+
+#define pr_fmt(fmt) "cpuidle-riscv-sbi: " fmt
+
+#include <linux/cpuidle.h>
+#include <linux/cpumask.h>
+#include <linux/cpu_pm.h>
+#include <linux/cpu_cooling.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/slab.h>
+#include <linux/platform_device.h>
+#include <linux/pm_domain.h>
+#include <linux/pm_runtime.h>
+#include <asm/cpuidle.h>
+#include <asm/sbi.h>
+#include <asm/suspend.h>
+
+#include "dt_idle_states.h"
+#include "dt_idle_genpd.h"
+
+struct sbi_cpuidle_data {
+	u32 *states;
+	struct device *dev;
+};
+
+struct sbi_domain_state {
+	bool available;
+	u32 state;
+};
+
+static DEFINE_PER_CPU_READ_MOSTLY(struct sbi_cpuidle_data, sbi_cpuidle_data);
+static DEFINE_PER_CPU(struct sbi_domain_state, domain_state);
+static bool sbi_cpuidle_use_osi;
+static bool sbi_cpuidle_use_cpuhp;
+static bool sbi_cpuidle_pd_allow_domain_state;
+
+static inline void sbi_set_domain_state(u32 state)
+{
+	struct sbi_domain_state *data = this_cpu_ptr(&domain_state);
+
+	data->available = true;
+	data->state = state;
+}
+
+static inline u32 sbi_get_domain_state(void)
+{
+	struct sbi_domain_state *data = this_cpu_ptr(&domain_state);
+
+	return data->state;
+}
+
+static inline void sbi_clear_domain_state(void)
+{
+	struct sbi_domain_state *data = this_cpu_ptr(&domain_state);
+
+	data->available = false;
+}
+
+static inline bool sbi_is_domain_state_available(void)
+{
+	struct sbi_domain_state *data = this_cpu_ptr(&domain_state);
+
+	return data->available;
+}
+
+static int sbi_suspend_finisher(unsigned long suspend_type,
+				unsigned long resume_addr,
+				unsigned long opaque)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_HSM, SBI_EXT_HSM_HART_SUSPEND,
+			suspend_type, resume_addr, opaque, 0, 0, 0);
+
+	return (ret.error) ? sbi_err_map_linux_errno(ret.error) : 0;
+}
+
+static int sbi_suspend(u32 state)
+{
+	if (state & SBI_HSM_SUSP_NON_RET_BIT)
+		return cpu_suspend(state, sbi_suspend_finisher);
+	else
+		return sbi_suspend_finisher(state, 0, 0);
+}
+
+static int sbi_cpuidle_enter_state(struct cpuidle_device *dev,
+				   struct cpuidle_driver *drv, int idx)
+{
+	u32 *states = __this_cpu_read(sbi_cpuidle_data.states);
+
+	return CPU_PM_CPU_IDLE_ENTER_PARAM(sbi_suspend, idx, states[idx]);
+}
+
+static int __sbi_enter_domain_idle_state(struct cpuidle_device *dev,
+					  struct cpuidle_driver *drv, int idx,
+					  bool s2idle)
+{
+	struct sbi_cpuidle_data *data = this_cpu_ptr(&sbi_cpuidle_data);
+	u32 *states = data->states;
+	struct device *pd_dev = data->dev;
+	u32 state;
+	int ret;
+
+	ret = cpu_pm_enter();
+	if (ret)
+		return -1;
+
+	/* Do runtime PM to manage a hierarchical CPU topology. */
+	rcu_irq_enter_irqson();
+	if (s2idle)
+		dev_pm_genpd_suspend(pd_dev);
+	else
+		pm_runtime_put_sync_suspend(pd_dev);
+	rcu_irq_exit_irqson();
+
+	if (sbi_is_domain_state_available())
+		state = sbi_get_domain_state();
+	else
+		state = states[idx];
+
+	ret = sbi_suspend(state) ? -1 : idx;
+
+	rcu_irq_enter_irqson();
+	if (s2idle)
+		dev_pm_genpd_resume(pd_dev);
+	else
+		pm_runtime_get_sync(pd_dev);
+	rcu_irq_exit_irqson();
+
+	cpu_pm_exit();
+
+	/* Clear the domain state to start fresh when back from idle. */
+	sbi_clear_domain_state();
+	return ret;
+}
+
+static int sbi_enter_domain_idle_state(struct cpuidle_device *dev,
+				       struct cpuidle_driver *drv, int idx)
+{
+	return __sbi_enter_domain_idle_state(dev, drv, idx, false);
+}
+
+static int sbi_enter_s2idle_domain_idle_state(struct cpuidle_device *dev,
+					      struct cpuidle_driver *drv,
+					      int idx)
+{
+	return __sbi_enter_domain_idle_state(dev, drv, idx, true);
+}
+
+static int sbi_cpuidle_cpuhp_up(unsigned int cpu)
+{
+	struct device *pd_dev = __this_cpu_read(sbi_cpuidle_data.dev);
+
+	if (pd_dev)
+		pm_runtime_get_sync(pd_dev);
+
+	return 0;
+}
+
+static int sbi_cpuidle_cpuhp_down(unsigned int cpu)
+{
+	struct device *pd_dev = __this_cpu_read(sbi_cpuidle_data.dev);
+
+	if (pd_dev) {
+		pm_runtime_put_sync(pd_dev);
+		/* Clear domain state to start fresh at next online. */
+		sbi_clear_domain_state();
+	}
+
+	return 0;
+}
+
+static void sbi_idle_init_cpuhp(void)
+{
+	int err;
+
+	if (!sbi_cpuidle_use_cpuhp)
+		return;
+
+	err = cpuhp_setup_state_nocalls(CPUHP_AP_CPU_PM_STARTING,
+					"cpuidle/sbi:online",
+					sbi_cpuidle_cpuhp_up,
+					sbi_cpuidle_cpuhp_down);
+	if (err)
+		pr_warn("Failed %d while setting up cpuhp state\n", err);
+}
+
+static const struct of_device_id sbi_cpuidle_state_match[] = {
+	{ .compatible = "riscv,idle-state",
+	  .data = sbi_cpuidle_enter_state },
+	{ },
+};
+
+static bool sbi_suspend_state_is_valid(u32 state)
+{
+	if (state > SBI_HSM_SUSPEND_RET_DEFAULT &&
+	    state < SBI_HSM_SUSPEND_RET_PLATFORM)
+		return false;
+	if (state > SBI_HSM_SUSPEND_NON_RET_DEFAULT &&
+	    state < SBI_HSM_SUSPEND_NON_RET_PLATFORM)
+		return false;
+	return true;
+}
+
+static int sbi_dt_parse_state_node(struct device_node *np, u32 *state)
+{
+	int err = of_property_read_u32(np, "riscv,sbi-suspend-param", state);
+
+	if (err) {
+		pr_warn("%pOF missing riscv,sbi-suspend-param property\n", np);
+		return err;
+	}
+
+	if (!sbi_suspend_state_is_valid(*state)) {
+		pr_warn("Invalid SBI suspend state %#x\n", *state);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int sbi_dt_cpu_init_topology(struct cpuidle_driver *drv,
+				     struct sbi_cpuidle_data *data,
+				     unsigned int state_count, int cpu)
+{
+	/* Currently limit the hierarchical topology to be used in OSI mode. */
+	if (!sbi_cpuidle_use_osi)
+		return 0;
+
+	data->dev = dt_idle_attach_cpu(cpu, "sbi");
+	if (IS_ERR_OR_NULL(data->dev))
+		return PTR_ERR_OR_ZERO(data->dev);
+
+	/*
+	 * Using the deepest state for the CPU to trigger a potential selection
+	 * of a shared state for the domain, assumes the domain states are all
+	 * deeper states.
+	 */
+	drv->states[state_count - 1].enter = sbi_enter_domain_idle_state;
+	drv->states[state_count - 1].enter_s2idle =
+					sbi_enter_s2idle_domain_idle_state;
+	sbi_cpuidle_use_cpuhp = true;
+
+	return 0;
+}
+
+static int sbi_cpuidle_dt_init_states(struct device *dev,
+					struct cpuidle_driver *drv,
+					unsigned int cpu,
+					unsigned int state_count)
+{
+	struct sbi_cpuidle_data *data = per_cpu_ptr(&sbi_cpuidle_data, cpu);
+	struct device_node *state_node;
+	struct device_node *cpu_node;
+	u32 *states;
+	int i, ret;
+
+	cpu_node = of_cpu_device_node_get(cpu);
+	if (!cpu_node)
+		return -ENODEV;
+
+	states = devm_kcalloc(dev, state_count, sizeof(*states), GFP_KERNEL);
+	if (!states) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	/* Parse SBI specific details from state DT nodes */
+	for (i = 1; i < state_count; i++) {
+		state_node = of_get_cpu_state_node(cpu_node, i - 1);
+		if (!state_node)
+			break;
+
+		ret = sbi_dt_parse_state_node(state_node, &states[i]);
+		of_node_put(state_node);
+
+		if (ret)
+			return ret;
+
+		pr_debug("sbi-state %#x index %d\n", states[i], i);
+	}
+	if (i != state_count) {
+		ret = -ENODEV;
+		goto fail;
+	}
+
+	/* Initialize optional data, used for the hierarchical topology. */
+	ret = sbi_dt_cpu_init_topology(drv, data, state_count, cpu);
+	if (ret < 0)
+		return ret;
+
+	/* Store states in the per-cpu struct. */
+	data->states = states;
+
+fail:
+	of_node_put(cpu_node);
+
+	return ret;
+}
+
+static void sbi_cpuidle_deinit_cpu(int cpu)
+{
+	struct sbi_cpuidle_data *data = per_cpu_ptr(&sbi_cpuidle_data, cpu);
+
+	dt_idle_detach_cpu(data->dev);
+	sbi_cpuidle_use_cpuhp = false;
+}
+
+static int sbi_cpuidle_init_cpu(struct device *dev, int cpu)
+{
+	struct cpuidle_driver *drv;
+	unsigned int state_count = 0;
+	int ret = 0;
+
+	drv = devm_kzalloc(dev, sizeof(*drv), GFP_KERNEL);
+	if (!drv)
+		return -ENOMEM;
+
+	drv->name = "sbi_cpuidle";
+	drv->owner = THIS_MODULE;
+	drv->cpumask = (struct cpumask *)cpumask_of(cpu);
+
+	/* RISC-V architectural WFI to be represented as state index 0. */
+	drv->states[0].enter = sbi_cpuidle_enter_state;
+	drv->states[0].exit_latency = 1;
+	drv->states[0].target_residency = 1;
+	drv->states[0].power_usage = UINT_MAX;
+	strcpy(drv->states[0].name, "WFI");
+	strcpy(drv->states[0].desc, "RISC-V WFI");
+
+	/*
+	 * If no DT idle states are detected (ret == 0) let the driver
+	 * initialization fail accordingly since there is no reason to
+	 * initialize the idle driver if only wfi is supported; the
+	 * default architectural back-end already executes wfi
+	 * on idle entry.
+	 */
+	ret = dt_init_idle_driver(drv, sbi_cpuidle_state_match, 1);
+	if (ret <= 0) {
+		pr_debug("HART%ld: failed to parse DT idle states\n",
+			 cpuid_to_hartid_map(cpu));
+		return ret ? : -ENODEV;
+	}
+	state_count = ret + 1; /* Include WFI state as well */
+
+	/* Initialize idle states from DT. */
+	ret = sbi_cpuidle_dt_init_states(dev, drv, cpu, state_count);
+	if (ret) {
+		pr_err("HART%ld: failed to init idle states\n",
+		       cpuid_to_hartid_map(cpu));
+		return ret;
+	}
+
+	ret = cpuidle_register(drv, NULL);
+	if (ret)
+		goto deinit;
+
+	cpuidle_cooling_register(drv);
+
+	return 0;
+deinit:
+	sbi_cpuidle_deinit_cpu(cpu);
+	return ret;
+}
+
+static void sbi_cpuidle_domain_sync_state(struct device *dev)
+{
+	/*
+	 * All devices have now been attached/probed to the PM domain
+	 * topology, hence it's fine to allow domain states to be picked.
+	 */
+	sbi_cpuidle_pd_allow_domain_state = true;
+}
+
+#ifdef CONFIG_DT_IDLE_GENPD
+
+static int sbi_cpuidle_pd_power_off(struct generic_pm_domain *pd)
+{
+	struct genpd_power_state *state = &pd->states[pd->state_idx];
+	u32 *pd_state;
+
+	if (!state->data)
+		return 0;
+
+	if (!sbi_cpuidle_pd_allow_domain_state)
+		return -EBUSY;
+
+	/* OSI mode is enabled, set the corresponding domain state. */
+	pd_state = state->data;
+	sbi_set_domain_state(*pd_state);
+
+	return 0;
+}
+
+struct sbi_pd_provider {
+	struct list_head link;
+	struct device_node *node;
+};
+
+static LIST_HEAD(sbi_pd_providers);
+
+static int sbi_pd_init(struct device_node *np)
+{
+	struct generic_pm_domain *pd;
+	struct sbi_pd_provider *pd_provider;
+	struct dev_power_governor *pd_gov;
+	int ret = -ENOMEM, state_count = 0;
+
+	pd = dt_idle_pd_alloc(np, sbi_dt_parse_state_node);
+	if (!pd)
+		goto out;
+
+	pd_provider = kzalloc(sizeof(*pd_provider), GFP_KERNEL);
+	if (!pd_provider)
+		goto free_pd;
+
+	pd->flags |= GENPD_FLAG_IRQ_SAFE | GENPD_FLAG_CPU_DOMAIN;
+
+	/* Allow power off when OSI is available. */
+	if (sbi_cpuidle_use_osi)
+		pd->power_off = sbi_cpuidle_pd_power_off;
+	else
+		pd->flags |= GENPD_FLAG_ALWAYS_ON;
+
+	/* Use governor for CPU PM domains if it has some states to manage. */
+	pd_gov = state_count > 0 ? &pm_domain_cpu_gov : NULL;
+
+	ret = pm_genpd_init(pd, pd_gov, false);
+	if (ret)
+		goto free_pd_prov;
+
+	ret = of_genpd_add_provider_simple(np, pd);
+	if (ret)
+		goto remove_pd;
+
+	pd_provider->node = of_node_get(np);
+	list_add(&pd_provider->link, &sbi_pd_providers);
+
+	pr_debug("init PM domain %s\n", pd->name);
+	return 0;
+
+remove_pd:
+	pm_genpd_remove(pd);
+free_pd_prov:
+	kfree(pd_provider);
+free_pd:
+	dt_idle_pd_free(pd);
+out:
+	pr_err("failed to init PM domain ret=%d %pOF\n", ret, np);
+	return ret;
+}
+
+static void sbi_pd_remove(void)
+{
+	struct sbi_pd_provider *pd_provider, *it;
+	struct generic_pm_domain *genpd;
+
+	list_for_each_entry_safe(pd_provider, it, &sbi_pd_providers, link) {
+		of_genpd_del_provider(pd_provider->node);
+
+		genpd = of_genpd_remove_last(pd_provider->node);
+		if (!IS_ERR(genpd))
+			kfree(genpd);
+
+		of_node_put(pd_provider->node);
+		list_del(&pd_provider->link);
+		kfree(pd_provider);
+	}
+}
+
+static int sbi_genpd_probe(struct device_node *np)
+{
+	struct device_node *node;
+	int ret = 0, pd_count = 0;
+
+	if (!np)
+		return -ENODEV;
+
+	/*
+	 * Parse child nodes for the "#power-domain-cells" property and
+	 * initialize a genpd/genpd-of-provider pair when it's found.
+	 */
+	for_each_child_of_node(np, node) {
+		if (!of_find_property(node, "#power-domain-cells", NULL))
+			continue;
+
+		ret = sbi_pd_init(node);
+		if (ret)
+			goto put_node;
+
+		pd_count++;
+	}
+
+	/* Bail out if not using the hierarchical CPU topology. */
+	if (!pd_count)
+		goto no_pd;
+
+	/* Link genpd masters/subdomains to model the CPU topology. */
+	ret = dt_idle_pd_init_topology(np);
+	if (ret)
+		goto remove_pd;
+
+	return 0;
+
+put_node:
+	of_node_put(node);
+remove_pd:
+	sbi_pd_remove();
+	pr_err("failed to create CPU PM domains ret=%d\n", ret);
+no_pd:
+	return ret;
+}
+
+#else
+
+static inline int sbi_genpd_probe(struct device_node *np)
+{
+	return 0;
+}
+
+#endif
+
+static int sbi_cpuidle_probe(struct platform_device *pdev)
+{
+	int cpu, ret;
+	struct cpuidle_driver *drv;
+	struct cpuidle_device *dev;
+	struct device_node *np, *pds_node;
+
+	/* Detect OSI support based on CPU DT nodes */
+	sbi_cpuidle_use_osi = true;
+	for_each_possible_cpu(cpu) {
+		np = of_cpu_device_node_get(cpu);
+		if (np &&
+		    of_find_property(np, "power-domains", NULL) &&
+		    of_find_property(np, "power-domain-names", NULL)) {
+			continue;
+		} else {
+			sbi_cpuidle_use_osi = false;
+			break;
+		}
+	}
+
+	/* Populate generic power domains from DT nodes */
+	pds_node = of_find_node_by_path("/cpus/power-domains");
+	if (pds_node) {
+		ret = sbi_genpd_probe(pds_node);
+		of_node_put(pds_node);
+		if (ret)
+			return ret;
+	}
+
+	/* Initialize CPU idle driver for each CPU */
+	for_each_possible_cpu(cpu) {
+		ret = sbi_cpuidle_init_cpu(&pdev->dev, cpu);
+		if (ret) {
+			pr_debug("HART%ld: idle driver init failed\n",
+				 cpuid_to_hartid_map(cpu));
+			goto out_fail;
+		}
+	}
+
+	/* Setup CPU hotplug notifiers */
+	sbi_idle_init_cpuhp();
+
+	pr_info("idle driver registered for all CPUs\n");
+
+	return 0;
+
+out_fail:
+	while (--cpu >= 0) {
+		dev = per_cpu(cpuidle_devices, cpu);
+		drv = cpuidle_get_cpu_driver(dev);
+		cpuidle_unregister(drv);
+		sbi_cpuidle_deinit_cpu(cpu);
+	}
+
+	return ret;
+}
+
+static struct platform_driver sbi_cpuidle_driver = {
+	.probe = sbi_cpuidle_probe,
+	.driver = {
+		.name = "sbi-cpuidle",
+		.sync_state = sbi_cpuidle_domain_sync_state,
+	},
+};
+
+static int __init sbi_cpuidle_init(void)
+{
+	int ret;
+	struct platform_device *pdev;
+
+	/*
+	 * The SBI HSM suspend function is only available when:
+	 * 1) SBI version is 0.3 or higher
+	 * 2) SBI HSM extension is available
+	 */
+	if ((sbi_spec_version < sbi_mk_version(0, 3)) ||
+	    sbi_probe_extension(SBI_EXT_HSM) <= 0) {
+		pr_info("HSM suspend not available\n");
+		return 0;
+	}
+
+	ret = platform_driver_register(&sbi_cpuidle_driver);
+	if (ret)
+		return ret;
+
+	pdev = platform_device_register_simple("sbi-cpuidle",
+						-1, NULL, 0);
+	if (IS_ERR(pdev)) {
+		platform_driver_unregister(&sbi_cpuidle_driver);
+		return PTR_ERR(pdev);
+	}
+
+	return 0;
+}
+device_initcall(sbi_cpuidle_init);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v11 7/8] dt-bindings: Add common bindings for ARM and RISC-V idle states
  2022-02-10  5:49 [PATCH v11 0/8] RISC-V CPU Idle Support Anup Patel
                   ` (5 preceding siblings ...)
  2022-02-10  5:49 ` [PATCH v11 6/8] cpuidle: Add RISC-V SBI CPU idle driver Anup Patel
@ 2022-02-10  5:49 ` Anup Patel
  2022-02-10  5:49 ` [PATCH v11 8/8] RISC-V: Enable RISC-V SBI CPU Idle driver for QEMU virt machine Anup Patel
  2022-03-31  0:16 ` [PATCH v11 0/8] RISC-V CPU Idle Support Palmer Dabbelt
  8 siblings, 0 replies; 25+ messages in thread
From: Anup Patel @ 2022-02-10  5:49 UTC (permalink / raw)
  To: Palmer Dabbelt, Paul Walmsley, Albert Ou, Daniel Lezcano,
	Ulf Hansson, Rafael J . Wysocki, Pavel Machek, Rob Herring
  Cc: Sandeep Tripathy, Atish Patra, Alistair Francis, Liush,
	Anup Patel, devicetree, linux-riscv, linux-kernel, linux-pm,
	linux-arm-kernel, kvm-riscv, Anup Patel, Rob Herring, Guo Ren

From: Anup Patel <anup.patel@wdc.com>

The RISC-V CPU idle states will be described under the
/cpus/idle-states DT node in the same way as ARM CPU idle
states.

This patch adds common bindings documentation for both ARM
and RISC-V idle states.
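
For illustration only, a minimal sketch of what this looks like for a
single hart (the suspend-param and latency numbers below are made-up
placeholders, not values from this series; the complete, multi-cluster
example is in the idle-states.yaml changes further down):

    cpus {
        #address-cells = <1>;
        #size-cells = <0>;

        cpu@0 {
            device_type = "cpu";
            compatible = "riscv";
            reg = <0x0>;
            riscv,isa = "rv64imafdc";
            cpu-idle-states = <&CPU_RET_0>;

            interrupt-controller {
                #interrupt-cells = <1>;
                compatible = "riscv,cpu-intc";
                interrupt-controller;
            };
        };

        idle-states {
            /* Default retentive suspend (suspend_type 0x00000000) */
            CPU_RET_0: cpu-retentive-0 {
                compatible = "riscv,idle-state";
                riscv,sbi-suspend-param = <0x00000000>;
                entry-latency-us = <10>;
                exit-latency-us = <20>;
                min-residency-us = <50>;
            };
        };
    };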

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Reviewed-by: Guo Ren <guoren@kernel.org>
---
 .../bindings/arm/msm/qcom,idle-state.txt      |   2 +-
 .../devicetree/bindings/arm/psci.yaml         |   2 +-
 .../bindings/{arm => cpu}/idle-states.yaml    | 228 ++++++++++++++++--
 .../devicetree/bindings/riscv/cpus.yaml       |   6 +
 4 files changed, 219 insertions(+), 19 deletions(-)
 rename Documentation/devicetree/bindings/{arm => cpu}/idle-states.yaml (74%)

diff --git a/Documentation/devicetree/bindings/arm/msm/qcom,idle-state.txt b/Documentation/devicetree/bindings/arm/msm/qcom,idle-state.txt
index 6ce0b212ec6d..606b4b1b709d 100644
--- a/Documentation/devicetree/bindings/arm/msm/qcom,idle-state.txt
+++ b/Documentation/devicetree/bindings/arm/msm/qcom,idle-state.txt
@@ -81,4 +81,4 @@ Example:
 		};
 	};
 
-[1]. Documentation/devicetree/bindings/arm/idle-states.yaml
+[1]. Documentation/devicetree/bindings/cpu/idle-states.yaml
diff --git a/Documentation/devicetree/bindings/arm/psci.yaml b/Documentation/devicetree/bindings/arm/psci.yaml
index 8b77cf83a095..dd83ef278af0 100644
--- a/Documentation/devicetree/bindings/arm/psci.yaml
+++ b/Documentation/devicetree/bindings/arm/psci.yaml
@@ -101,7 +101,7 @@ properties:
       bindings in [1]) must specify this property.
 
       [1] Kernel documentation - ARM idle states bindings
-        Documentation/devicetree/bindings/arm/idle-states.yaml
+        Documentation/devicetree/bindings/cpu/idle-states.yaml
 
 patternProperties:
   "^power-domain-":
diff --git a/Documentation/devicetree/bindings/arm/idle-states.yaml b/Documentation/devicetree/bindings/cpu/idle-states.yaml
similarity index 74%
rename from Documentation/devicetree/bindings/arm/idle-states.yaml
rename to Documentation/devicetree/bindings/cpu/idle-states.yaml
index 52bce5dbb11f..95506ffb816c 100644
--- a/Documentation/devicetree/bindings/arm/idle-states.yaml
+++ b/Documentation/devicetree/bindings/cpu/idle-states.yaml
@@ -1,25 +1,30 @@
 # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
 %YAML 1.2
 ---
-$id: http://devicetree.org/schemas/arm/idle-states.yaml#
+$id: http://devicetree.org/schemas/cpu/idle-states.yaml#
 $schema: http://devicetree.org/meta-schemas/core.yaml#
 
-title: ARM idle states binding description
+title: Idle states binding description
 
 maintainers:
   - Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
+  - Anup Patel <anup@brainfault.org>
 
 description: |+
   ==========================================
   1 - Introduction
   ==========================================
 
-  ARM systems contain HW capable of managing power consumption dynamically,
-  where cores can be put in different low-power states (ranging from simple wfi
-  to power gating) according to OS PM policies. The CPU states representing the
-  range of dynamic idle states that a processor can enter at run-time, can be
-  specified through device tree bindings representing the parameters required to
-  enter/exit specific idle states on a given processor.
+  ARM and RISC-V systems contain HW capable of managing power consumption
+  dynamically, where cores can be put in different low-power states (ranging
+  from simple wfi to power gating) according to OS PM policies. The CPU states
+  representing the range of dynamic idle states that a processor can enter at
+  run-time, can be specified through device tree bindings representing the
+  parameters required to enter/exit specific idle states on a given processor.
+
+  ==========================================
+  2 - ARM idle states
+  ==========================================
 
   According to the Server Base System Architecture document (SBSA, [3]), the
   power states an ARM CPU can be put into are identified by the following list:
@@ -43,8 +48,23 @@ description: |+
   The device tree binding definition for ARM idle states is the subject of this
   document.
 
+  ==========================================
+  3 - RISC-V idle states
+  ==========================================
+
+  On RISC-V systems, the HARTs (or CPUs) [6] can be put in platform specific
+  suspend (or idle) states (ranging from simple WFI to power gating). The
+  RISC-V SBI v0.3 (or higher) [7] hart state management extension provides a
+  standard mechanism for the OS to request HART state transitions.
+
+  The platform specific suspend (or idle) states of a hart can be either
+  retentive or non-retentive in nature. A retentive suspend state will
+  preserve HART registers and CSR values for all privilege modes whereas
+  a non-retentive suspend state will not preserve HART registers and CSR
+  values.
+
   ===========================================
-  2 - idle-states definitions
+  4 - idle-states definitions
   ===========================================
 
   Idle states are characterized for a specific system through a set of
@@ -211,10 +231,10 @@ description: |+
   properties specification that is the subject of the following sections.
 
   ===========================================
-  3 - idle-states node
+  5 - idle-states node
   ===========================================
 
-  ARM processor idle states are defined within the idle-states node, which is
+  The processor idle states are defined within the idle-states node, which is
   a direct child of the cpus node [1] and provides a container where the
   processor idle states, defined as device tree nodes, are listed.
 
@@ -223,7 +243,7 @@ description: |+
   just supports idle_standby, an idle-states node is not required.
 
   ===========================================
-  4 - References
+  6 - References
   ===========================================
 
   [1] ARM Linux Kernel documentation - CPUs bindings
@@ -238,9 +258,15 @@ description: |+
   [4] ARM Architecture Reference Manuals
       http://infocenter.arm.com/help/index.jsp
 
-  [6] ARM Linux Kernel documentation - Booting AArch64 Linux
+  [5] ARM Linux Kernel documentation - Booting AArch64 Linux
       Documentation/arm64/booting.rst
 
+  [6] RISC-V Linux Kernel documentation - CPUs bindings
+      Documentation/devicetree/bindings/riscv/cpus.yaml
+
+  [7] RISC-V Supervisor Binary Interface (SBI)
+      http://github.com/riscv/riscv-sbi-doc/riscv-sbi.adoc
+
 properties:
   $nodename:
     const: idle-states
@@ -253,7 +279,7 @@ properties:
       On ARM 32-bit systems this property is optional
 
       This assumes that the "enable-method" property is set to "psci" in the cpu
-      node[6] that is responsible for setting up CPU idle management in the OS
+      node[5] that is responsible for setting up CPU idle management in the OS
       implementation.
     const: psci
 
@@ -265,8 +291,8 @@ patternProperties:
       as follows.
 
       The idle state entered by executing the wfi instruction (idle_standby
-      SBSA,[3][4]) is considered standard on all ARM platforms and therefore
-      must not be listed.
+      SBSA,[3][4]) is considered standard on all ARM and RISC-V platforms and
+      therefore must not be listed.
 
       In addition to the properties listed above, a state node may require
       additional properties specific to the entry-method defined in the
@@ -275,7 +301,27 @@ patternProperties:
 
     properties:
       compatible:
-        const: arm,idle-state
+        enum:
+          - arm,idle-state
+          - riscv,idle-state
+
+      arm,psci-suspend-param:
+        $ref: /schemas/types.yaml#/definitions/uint32
+        description: |
+          power_state parameter to pass to the ARM PSCI suspend call.
+
+          Device tree nodes that require usage of PSCI CPU_SUSPEND function
+          (i.e. idle state nodes with the entry-method property set to "psci")
+          must specify this property.
+
+      riscv,sbi-suspend-param:
+        $ref: /schemas/types.yaml#/definitions/uint32
+        description: |
+          suspend_type parameter to pass to the RISC-V SBI HSM suspend call.
+
+          This property is required in the idle state nodes of a device tree
+          meant for RISC-V systems. For more details on the suspend_type
+          parameter, refer to the SBI specification v0.3 (or higher) [7].
 
       local-timer-stop:
         description:
@@ -317,6 +363,8 @@ patternProperties:
         description:
           A string used as a descriptive name for the idle state.
 
+    additionalProperties: false
+
     required:
       - compatible
       - entry-latency-us
@@ -658,4 +706,150 @@ examples:
         };
     };
 
+  - |
+    // Example 3 (RISC-V 64-bit, 4-cpu systems, two clusters):
+
+    cpus {
+        #size-cells = <0>;
+        #address-cells = <1>;
+
+        cpu@0 {
+            device_type = "cpu";
+            compatible = "riscv";
+            reg = <0x0>;
+            riscv,isa = "rv64imafdc";
+            mmu-type = "riscv,sv48";
+            cpu-idle-states = <&CPU_RET_0_0 &CPU_NONRET_0_0
+                            &CLUSTER_RET_0 &CLUSTER_NONRET_0>;
+
+            cpu_intc0: interrupt-controller {
+                #interrupt-cells = <1>;
+                compatible = "riscv,cpu-intc";
+                interrupt-controller;
+            };
+        };
+
+        cpu@1 {
+            device_type = "cpu";
+            compatible = "riscv";
+            reg = <0x1>;
+            riscv,isa = "rv64imafdc";
+            mmu-type = "riscv,sv48";
+            cpu-idle-states = <&CPU_RET_0_0 &CPU_NONRET_0_0
+                            &CLUSTER_RET_0 &CLUSTER_NONRET_0>;
+
+            cpu_intc1: interrupt-controller {
+                #interrupt-cells = <1>;
+                compatible = "riscv,cpu-intc";
+                interrupt-controller;
+            };
+        };
+
+        cpu@10 {
+            device_type = "cpu";
+            compatible = "riscv";
+            reg = <0x10>;
+            riscv,isa = "rv64imafdc";
+            mmu-type = "riscv,sv48";
+            cpu-idle-states = <&CPU_RET_1_0 &CPU_NONRET_1_0
+                            &CLUSTER_RET_1 &CLUSTER_NONRET_1>;
+
+            cpu_intc10: interrupt-controller {
+                #interrupt-cells = <1>;
+                compatible = "riscv,cpu-intc";
+                interrupt-controller;
+            };
+        };
+
+        cpu@11 {
+            device_type = "cpu";
+            compatible = "riscv";
+            reg = <0x11>;
+            riscv,isa = "rv64imafdc";
+            mmu-type = "riscv,sv48";
+            cpu-idle-states = <&CPU_RET_1_0 &CPU_NONRET_1_0
+                            &CLUSTER_RET_1 &CLUSTER_NONRET_1>;
+
+            cpu_intc11: interrupt-controller {
+                #interrupt-cells = <1>;
+                compatible = "riscv,cpu-intc";
+                interrupt-controller;
+            };
+        };
+
+        idle-states {
+            CPU_RET_0_0: cpu-retentive-0-0 {
+                compatible = "riscv,idle-state";
+                riscv,sbi-suspend-param = <0x10000000>;
+                entry-latency-us = <20>;
+                exit-latency-us = <40>;
+                min-residency-us = <80>;
+            };
+
+            CPU_NONRET_0_0: cpu-nonretentive-0-0 {
+                compatible = "riscv,idle-state";
+                riscv,sbi-suspend-param = <0x90000000>;
+                entry-latency-us = <250>;
+                exit-latency-us = <500>;
+                min-residency-us = <950>;
+            };
+
+            CLUSTER_RET_0: cluster-retentive-0 {
+                compatible = "riscv,idle-state";
+                riscv,sbi-suspend-param = <0x11000000>;
+                local-timer-stop;
+                entry-latency-us = <50>;
+                exit-latency-us = <100>;
+                min-residency-us = <250>;
+                wakeup-latency-us = <130>;
+            };
+
+            CLUSTER_NONRET_0: cluster-nonretentive-0 {
+                compatible = "riscv,idle-state";
+                riscv,sbi-suspend-param = <0x91000000>;
+                local-timer-stop;
+                entry-latency-us = <600>;
+                exit-latency-us = <1100>;
+                min-residency-us = <2700>;
+                wakeup-latency-us = <1500>;
+            };
+
+            CPU_RET_1_0: cpu-retentive-1-0 {
+                compatible = "riscv,idle-state";
+                riscv,sbi-suspend-param = <0x10000010>;
+                entry-latency-us = <20>;
+                exit-latency-us = <40>;
+                min-residency-us = <80>;
+            };
+
+            CPU_NONRET_1_0: cpu-nonretentive-1-0 {
+                compatible = "riscv,idle-state";
+                riscv,sbi-suspend-param = <0x90000010>;
+                entry-latency-us = <250>;
+                exit-latency-us = <500>;
+                min-residency-us = <950>;
+            };
+
+            CLUSTER_RET_1: cluster-retentive-1 {
+                compatible = "riscv,idle-state";
+                riscv,sbi-suspend-param = <0x11000010>;
+                local-timer-stop;
+                entry-latency-us = <50>;
+                exit-latency-us = <100>;
+                min-residency-us = <250>;
+                wakeup-latency-us = <130>;
+            };
+
+            CLUSTER_NONRET_1: cluster-nonretentive-1 {
+                compatible = "riscv,idle-state";
+                riscv,sbi-suspend-param = <0x91000010>;
+                local-timer-stop;
+                entry-latency-us = <600>;
+                exit-latency-us = <1100>;
+                min-residency-us = <2700>;
+                wakeup-latency-us = <1500>;
+            };
+        };
+    };
+
 ...
diff --git a/Documentation/devicetree/bindings/riscv/cpus.yaml b/Documentation/devicetree/bindings/riscv/cpus.yaml
index aa5fb64d57eb..f62f646bc695 100644
--- a/Documentation/devicetree/bindings/riscv/cpus.yaml
+++ b/Documentation/devicetree/bindings/riscv/cpus.yaml
@@ -99,6 +99,12 @@ properties:
       - compatible
       - interrupt-controller
 
+  cpu-idle-states:
+    $ref: '/schemas/types.yaml#/definitions/phandle-array'
+    description: |
+      List of phandles to idle state nodes supported
+      by this hart (see ./idle-states.yaml).
+
 required:
   - riscv,isa
   - interrupt-controller
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v11 8/8] RISC-V: Enable RISC-V SBI CPU Idle driver for QEMU virt machine
  2022-02-10  5:49 [PATCH v11 0/8] RISC-V CPU Idle Support Anup Patel
                   ` (6 preceding siblings ...)
  2022-02-10  5:49 ` [PATCH v11 7/8] dt-bindings: Add common bindings for ARM and RISC-V idle states Anup Patel
@ 2022-02-10  5:49 ` Anup Patel
  2022-03-31  0:16 ` [PATCH v11 0/8] RISC-V CPU Idle Support Palmer Dabbelt
  8 siblings, 0 replies; 25+ messages in thread
From: Anup Patel @ 2022-02-10  5:49 UTC (permalink / raw)
  To: Palmer Dabbelt, Paul Walmsley, Albert Ou, Daniel Lezcano,
	Ulf Hansson, Rafael J . Wysocki, Pavel Machek, Rob Herring
  Cc: Sandeep Tripathy, Atish Patra, Alistair Francis, Liush,
	Anup Patel, devicetree, linux-riscv, linux-kernel, linux-pm,
	linux-arm-kernel, kvm-riscv, Anup Patel

From: Anup Patel <anup.patel@wdc.com>

We enable the RISC-V SBI CPU Idle driver for the QEMU virt machine to
test SBI HSM Suspend on QEMU.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
 arch/riscv/Kconfig.socs           | 3 +++
 arch/riscv/configs/defconfig      | 1 +
 arch/riscv/configs/rv32_defconfig | 1 +
 3 files changed, 5 insertions(+)

diff --git a/arch/riscv/Kconfig.socs b/arch/riscv/Kconfig.socs
index 6ec44a22278a..f4097a815201 100644
--- a/arch/riscv/Kconfig.socs
+++ b/arch/riscv/Kconfig.socs
@@ -36,6 +36,9 @@ config SOC_VIRT
 	select GOLDFISH
 	select RTC_DRV_GOLDFISH if RTC_CLASS
 	select SIFIVE_PLIC
+	select PM_GENERIC_DOMAINS if PM
+	select PM_GENERIC_DOMAINS_OF if PM && OF
+	select RISCV_SBI_CPUIDLE if CPU_IDLE
 	help
 	  This enables support for QEMU Virt Machine.
 
diff --git a/arch/riscv/configs/defconfig b/arch/riscv/configs/defconfig
index a5e0482a4969..b8c882b70b02 100644
--- a/arch/riscv/configs/defconfig
+++ b/arch/riscv/configs/defconfig
@@ -20,6 +20,7 @@ CONFIG_SOC_SIFIVE=y
 CONFIG_SOC_VIRT=y
 CONFIG_SMP=y
 CONFIG_HOTPLUG_CPU=y
+CONFIG_PM=y
 CONFIG_CPU_IDLE=y
 CONFIG_VIRTUALIZATION=y
 CONFIG_KVM=m
diff --git a/arch/riscv/configs/rv32_defconfig b/arch/riscv/configs/rv32_defconfig
index d1b87db54d68..6f9a7c89bff9 100644
--- a/arch/riscv/configs/rv32_defconfig
+++ b/arch/riscv/configs/rv32_defconfig
@@ -20,6 +20,7 @@ CONFIG_SOC_VIRT=y
 CONFIG_ARCH_RV32I=y
 CONFIG_SMP=y
 CONFIG_HOTPLUG_CPU=y
+CONFIG_PM=y
 CONFIG_CPU_IDLE=y
 CONFIG_VIRTUALIZATION=y
 CONFIG_KVM=m
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PATCH v11 1/8] RISC-V: Enable CPU_IDLE drivers
  2022-02-10  5:49 ` [PATCH v11 1/8] RISC-V: Enable CPU_IDLE drivers Anup Patel
@ 2022-02-12 11:43   ` Pavel Machek
  2022-02-12 12:49     ` Anup Patel
  2022-02-16  0:50   ` Atish Patra
  1 sibling, 1 reply; 25+ messages in thread
From: Pavel Machek @ 2022-02-12 11:43 UTC (permalink / raw)
  To: Anup Patel
  Cc: Palmer Dabbelt, Paul Walmsley, Albert Ou, Daniel Lezcano,
	Ulf Hansson, Rafael J . Wysocki, Rob Herring, Sandeep Tripathy,
	Atish Patra, Alistair Francis, Liush, Anup Patel, devicetree,
	linux-riscv, linux-kernel, linux-pm, linux-arm-kernel, kvm-riscv,
	Guo Ren

[-- Attachment #1: Type: text/plain, Size: 430 bytes --]

Hi!

> From: Anup Patel <anup.patel@wdc.com>
> 
> We force select CPU_PM and provide asm/cpuidle.h so that we can
> use CPU IDLE drivers for Linux RISC-V kernel.
> 
> Signed-off-by: Anup Patel <anup.patel@wdc.com>
> Signed-off-by: Anup Patel <apatel@vetanamicro.com>

This is quite... interesting. Normally we have one signoff per
person...

Best regards,
							Pavel
-- 
http://www.livejournal.com/~pavelmachek

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 195 bytes --]

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v11 1/8] RISC-V: Enable CPU_IDLE drivers
  2022-02-12 11:43   ` Pavel Machek
@ 2022-02-12 12:49     ` Anup Patel
  2022-03-10 18:43       ` Palmer Dabbelt
  0 siblings, 1 reply; 25+ messages in thread
From: Anup Patel @ 2022-02-12 12:49 UTC (permalink / raw)
  To: Pavel Machek
  Cc: Palmer Dabbelt, Paul Walmsley, Albert Ou, Daniel Lezcano,
	Ulf Hansson, Rafael J . Wysocki, Rob Herring, Sandeep Tripathy,
	Atish Patra, Alistair Francis, Liush, Anup Patel, devicetree,
	linux-riscv, linux-kernel@vger.kernel.org List,
	open list:THERMAL, Linux ARM, kvm-riscv, Guo Ren

On Sat, Feb 12, 2022 at 5:13 PM Pavel Machek <pavel@ucw.cz> wrote:
>
> Hi!
>
> > From: Anup Patel <anup.patel@wdc.com>
> >
> > We force select CPU_PM and provide asm/cpuidle.h so that we can
> > use CPU IDLE drivers for Linux RISC-V kernel.
> >
> > Signed-off-by: Anup Patel <anup.patel@wdc.com>
> > Signed-off-by: Anup Patel <apatel@vetanamicro.com>
>
> This is quite... interesting. Normally we have one signoff per
> person...

I was working for Western Digital (WDC) when I first submitted this
series and recently I joined Ventana Micro Systems.

Regards,
Anup

>
> Best regards,
>                                                         Pavel
> --
> http://www.livejournal.com/~pavelmachek

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v11 1/8] RISC-V: Enable CPU_IDLE drivers
  2022-02-10  5:49 ` [PATCH v11 1/8] RISC-V: Enable CPU_IDLE drivers Anup Patel
  2022-02-12 11:43   ` Pavel Machek
@ 2022-02-16  0:50   ` Atish Patra
  1 sibling, 0 replies; 25+ messages in thread
From: Atish Patra @ 2022-02-16  0:50 UTC (permalink / raw)
  To: Anup Patel
  Cc: Palmer Dabbelt, Paul Walmsley, Albert Ou, Daniel Lezcano,
	Ulf Hansson, Rafael J . Wysocki, Pavel Machek, Rob Herring,
	Sandeep Tripathy, Alistair Francis, Liush, Anup Patel,
	devicetree, linux-riscv, linux-kernel@vger.kernel.org List,
	open list:THERMAL, linux-arm-kernel, kvm-riscv, Guo Ren

On Wed, Feb 9, 2022 at 9:50 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> From: Anup Patel <anup.patel@wdc.com>
>
> We force select CPU_PM and provide asm/cpuidle.h so that we can
> use CPU IDLE drivers for Linux RISC-V kernel.
>
> Signed-off-by: Anup Patel <anup.patel@wdc.com>
> Signed-off-by: Anup Patel <apatel@vetanamicro.com>
> Reviewed-by: Guo Ren <guoren@kernel.org>
> ---
>  arch/riscv/Kconfig                |  7 +++++++
>  arch/riscv/configs/defconfig      |  1 +
>  arch/riscv/configs/rv32_defconfig |  1 +
>  arch/riscv/include/asm/cpuidle.h  | 24 ++++++++++++++++++++++++
>  arch/riscv/kernel/process.c       |  3 ++-
>  5 files changed, 35 insertions(+), 1 deletion(-)
>  create mode 100644 arch/riscv/include/asm/cpuidle.h
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 5adcbd9b5e88..76976d12b463 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -46,6 +46,7 @@ config RISCV
>         select CLONE_BACKWARDS
>         select CLINT_TIMER if !MMU
>         select COMMON_CLK
> +       select CPU_PM if CPU_IDLE
>         select EDAC_SUPPORT
>         select GENERIC_ARCH_TOPOLOGY if SMP
>         select GENERIC_ATOMIC64 if !64BIT
> @@ -547,4 +548,10 @@ source "kernel/power/Kconfig"
>
>  endmenu
>
> +menu "CPU Power Management"
> +
> +source "drivers/cpuidle/Kconfig"
> +
> +endmenu
> +
>  source "arch/riscv/kvm/Kconfig"
> diff --git a/arch/riscv/configs/defconfig b/arch/riscv/configs/defconfig
> index f120fcc43d0a..a5e0482a4969 100644
> --- a/arch/riscv/configs/defconfig
> +++ b/arch/riscv/configs/defconfig
> @@ -20,6 +20,7 @@ CONFIG_SOC_SIFIVE=y
>  CONFIG_SOC_VIRT=y
>  CONFIG_SMP=y
>  CONFIG_HOTPLUG_CPU=y
> +CONFIG_CPU_IDLE=y
>  CONFIG_VIRTUALIZATION=y
>  CONFIG_KVM=m
>  CONFIG_JUMP_LABEL=y
> diff --git a/arch/riscv/configs/rv32_defconfig b/arch/riscv/configs/rv32_defconfig
> index 8b56a7f1eb06..d1b87db54d68 100644
> --- a/arch/riscv/configs/rv32_defconfig
> +++ b/arch/riscv/configs/rv32_defconfig
> @@ -20,6 +20,7 @@ CONFIG_SOC_VIRT=y
>  CONFIG_ARCH_RV32I=y
>  CONFIG_SMP=y
>  CONFIG_HOTPLUG_CPU=y
> +CONFIG_CPU_IDLE=y
>  CONFIG_VIRTUALIZATION=y
>  CONFIG_KVM=m
>  CONFIG_JUMP_LABEL=y
> diff --git a/arch/riscv/include/asm/cpuidle.h b/arch/riscv/include/asm/cpuidle.h
> new file mode 100644
> index 000000000000..71fdc607d4bc
> --- /dev/null
> +++ b/arch/riscv/include/asm/cpuidle.h
> @@ -0,0 +1,24 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2021 Allwinner Ltd
> + * Copyright (C) 2021 Western Digital Corporation or its affiliates.
> + */
> +
> +#ifndef _ASM_RISCV_CPUIDLE_H
> +#define _ASM_RISCV_CPUIDLE_H
> +
> +#include <asm/barrier.h>
> +#include <asm/processor.h>
> +
> +static inline void cpu_do_idle(void)
> +{
> +       /*
> +        * Add mb() here to ensure that all
> +        * IO/MEM accesses are completed prior
> +        * to entering WFI.
> +        */
> +       mb();
> +       wait_for_interrupt();
> +}
> +
> +#endif
> diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
> index 03ac3aa611f5..504b496787aa 100644
> --- a/arch/riscv/kernel/process.c
> +++ b/arch/riscv/kernel/process.c
> @@ -23,6 +23,7 @@
>  #include <asm/string.h>
>  #include <asm/switch_to.h>
>  #include <asm/thread_info.h>
> +#include <asm/cpuidle.h>
>
>  register unsigned long gp_in_global __asm__("gp");
>
> @@ -37,7 +38,7 @@ extern asmlinkage void ret_from_kernel_thread(void);
>
>  void arch_cpu_idle(void)
>  {
> -       wait_for_interrupt();
> +       cpu_do_idle();
>         raw_local_irq_enable();
>  }
>
> --
> 2.25.1
>

Reviewed-by: Atish Patra <atishp@rivosinc.com>


-- 
Regards,
Atish

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v11 2/8] RISC-V: Rename relocate() and make it global
  2022-02-10  5:49 ` [PATCH v11 2/8] RISC-V: Rename relocate() and make it global Anup Patel
@ 2022-02-16  0:57   ` Atish Patra
  0 siblings, 0 replies; 25+ messages in thread
From: Atish Patra @ 2022-02-16  0:57 UTC (permalink / raw)
  To: Anup Patel
  Cc: Palmer Dabbelt, Paul Walmsley, Albert Ou, Daniel Lezcano,
	Ulf Hansson, Rafael J . Wysocki, Pavel Machek, Rob Herring,
	Sandeep Tripathy, Alistair Francis, Liush, Anup Patel,
	devicetree, linux-riscv, linux-kernel@vger.kernel.org List,
	open list:THERMAL, linux-arm-kernel, kvm-riscv, Guo Ren

On Wed, Feb 9, 2022 at 9:50 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> From: Anup Patel <anup.patel@wdc.com>
>
> The low-level relocate() function enables mmu and relocates
> execution to link-time addresses. We rename relocate() function
> to relocate_enable_mmu() function which is more informative.
>
> Also, the relocate_enable_mmu() function will be used in the
> resume path when a CPU wakes-up from a non-retentive suspend
> so we make it global symbol.
>
> Signed-off-by: Anup Patel <anup.patel@wdc.com>
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> Reviewed-by: Guo Ren <guoren@kernel.org>
> ---
>  arch/riscv/kernel/head.S | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
> index 2363b43312fc..5f4c6b6c4974 100644
> --- a/arch/riscv/kernel/head.S
> +++ b/arch/riscv/kernel/head.S
> @@ -90,7 +90,8 @@ pe_head_start:
>
>  .align 2
>  #ifdef CONFIG_MMU
> -relocate:
> +       .global relocate_enable_mmu
> +relocate_enable_mmu:
>         /* Relocate return address */
>         la a1, kernel_map
>         XIP_FIXUP_OFFSET a1
> @@ -185,7 +186,7 @@ secondary_start_sbi:
>         /* Enable virtual memory and relocate to virtual address */
>         la a0, swapper_pg_dir
>         XIP_FIXUP_OFFSET a0
> -       call relocate
> +       call relocate_enable_mmu
>  #endif
>         call setup_trap_vector
>         tail smp_callin
> @@ -329,7 +330,7 @@ clear_bss_done:
>  #ifdef CONFIG_MMU
>         la a0, early_pg_dir
>         XIP_FIXUP_OFFSET a0
> -       call relocate
> +       call relocate_enable_mmu
>  #endif /* CONFIG_MMU */
>
>         call setup_trap_vector
> --
> 2.25.1
>



Reviewed-by: Atish Patra <atishp@rivosinc.com>

--
Regards,
Atish

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v11 4/8] RISC-V: Add SBI HSM suspend related defines
  2022-02-10  5:49 ` [PATCH v11 4/8] RISC-V: Add SBI HSM suspend related defines Anup Patel
@ 2022-02-16  7:57   ` Atish Patra
  2022-02-23  7:02   ` Anup Patel
  1 sibling, 0 replies; 25+ messages in thread
From: Atish Patra @ 2022-02-16  7:57 UTC (permalink / raw)
  To: Anup Patel
  Cc: Palmer Dabbelt, Paul Walmsley, Albert Ou, Daniel Lezcano,
	Ulf Hansson, Rafael J . Wysocki, Pavel Machek, Rob Herring,
	Sandeep Tripathy, Alistair Francis, Liush, Anup Patel,
	devicetree, linux-riscv, linux-kernel@vger.kernel.org List,
	open list:THERMAL, linux-arm-kernel, kvm-riscv, Guo Ren

On Wed, Feb 9, 2022 at 9:50 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> From: Anup Patel <anup.patel@wdc.com>
>
> We add defines related to SBI HSM suspend call and also
> update HSM states naming as-per latest SBI specification.
>
> Signed-off-by: Anup Patel <anup.patel@wdc.com>
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> Reviewed-by: Guo Ren <guoren@kernel.org>
> ---
>  arch/riscv/include/asm/sbi.h    | 27 ++++++++++++++++++++++-----
>  arch/riscv/kernel/cpu_ops_sbi.c |  2 +-
>  arch/riscv/kvm/vcpu_sbi_hsm.c   |  4 ++--
>  3 files changed, 25 insertions(+), 8 deletions(-)
>
> diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
> index d1c37479d828..06133b4f8e20 100644
> --- a/arch/riscv/include/asm/sbi.h
> +++ b/arch/riscv/include/asm/sbi.h
> @@ -71,15 +71,32 @@ enum sbi_ext_hsm_fid {
>         SBI_EXT_HSM_HART_START = 0,
>         SBI_EXT_HSM_HART_STOP,
>         SBI_EXT_HSM_HART_STATUS,
> +       SBI_EXT_HSM_HART_SUSPEND,
>  };
>
> -enum sbi_hsm_hart_status {
> -       SBI_HSM_HART_STATUS_STARTED = 0,
> -       SBI_HSM_HART_STATUS_STOPPED,
> -       SBI_HSM_HART_STATUS_START_PENDING,
> -       SBI_HSM_HART_STATUS_STOP_PENDING,
> +enum sbi_hsm_hart_state {
> +       SBI_HSM_STATE_STARTED = 0,
> +       SBI_HSM_STATE_STOPPED,
> +       SBI_HSM_STATE_START_PENDING,
> +       SBI_HSM_STATE_STOP_PENDING,
> +       SBI_HSM_STATE_SUSPENDED,
> +       SBI_HSM_STATE_SUSPEND_PENDING,
> +       SBI_HSM_STATE_RESUME_PENDING,
>  };
>
> +#define SBI_HSM_SUSP_BASE_MASK                 0x7fffffff
> +#define SBI_HSM_SUSP_NON_RET_BIT               0x80000000
> +#define SBI_HSM_SUSP_PLAT_BASE                 0x10000000
> +
> +#define SBI_HSM_SUSPEND_RET_DEFAULT            0x00000000
> +#define SBI_HSM_SUSPEND_RET_PLATFORM           SBI_HSM_SUSP_PLAT_BASE
> +#define SBI_HSM_SUSPEND_RET_LAST               SBI_HSM_SUSP_BASE_MASK
> +#define SBI_HSM_SUSPEND_NON_RET_DEFAULT                SBI_HSM_SUSP_NON_RET_BIT
> +#define SBI_HSM_SUSPEND_NON_RET_PLATFORM       (SBI_HSM_SUSP_NON_RET_BIT | \
> +                                                SBI_HSM_SUSP_PLAT_BASE)
> +#define SBI_HSM_SUSPEND_NON_RET_LAST           (SBI_HSM_SUSP_NON_RET_BIT | \
> +                                                SBI_HSM_SUSP_BASE_MASK)
> +
>  enum sbi_ext_srst_fid {
>         SBI_EXT_SRST_RESET = 0,
>  };
> diff --git a/arch/riscv/kernel/cpu_ops_sbi.c b/arch/riscv/kernel/cpu_ops_sbi.c
> index dae29cbfe550..2e16f6732cdf 100644
> --- a/arch/riscv/kernel/cpu_ops_sbi.c
> +++ b/arch/riscv/kernel/cpu_ops_sbi.c
> @@ -111,7 +111,7 @@ static int sbi_cpu_is_stopped(unsigned int cpuid)
>
>         rc = sbi_hsm_hart_get_status(hartid);
>
> -       if (rc == SBI_HSM_HART_STATUS_STOPPED)
> +       if (rc == SBI_HSM_STATE_STOPPED)
>                 return 0;
>         return rc;
>  }
> diff --git a/arch/riscv/kvm/vcpu_sbi_hsm.c b/arch/riscv/kvm/vcpu_sbi_hsm.c
> index 2e383687fa48..1ac4b2e8e4ec 100644
> --- a/arch/riscv/kvm/vcpu_sbi_hsm.c
> +++ b/arch/riscv/kvm/vcpu_sbi_hsm.c
> @@ -60,9 +60,9 @@ static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu)
>         if (!target_vcpu)
>                 return -EINVAL;
>         if (!target_vcpu->arch.power_off)
> -               return SBI_HSM_HART_STATUS_STARTED;
> +               return SBI_HSM_STATE_STARTED;
>         else
> -               return SBI_HSM_HART_STATUS_STOPPED;
> +               return SBI_HSM_STATE_STOPPED;
>  }
>
>  static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
> --
> 2.25.1
>

Reviewed-by: Atish Patra <atishp@rivosinc.com>

-- 
Regards,
Atish

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v11 6/8] cpuidle: Add RISC-V SBI CPU idle driver
  2022-02-10  5:49 ` [PATCH v11 6/8] cpuidle: Add RISC-V SBI CPU idle driver Anup Patel
@ 2022-02-16  8:09   ` Atish Patra
  2022-02-16 13:45     ` Jessica Clarke
  2022-03-10 20:01   ` Palmer Dabbelt
  2022-03-12  8:34   ` Anup Patel
  2 siblings, 1 reply; 25+ messages in thread
From: Atish Patra @ 2022-02-16  8:09 UTC (permalink / raw)
  To: Anup Patel
  Cc: Palmer Dabbelt, Paul Walmsley, Albert Ou, Daniel Lezcano,
	Ulf Hansson, Rafael J . Wysocki, Pavel Machek, Rob Herring,
	Sandeep Tripathy, Alistair Francis, Liush, Anup Patel,
	devicetree, linux-riscv, linux-kernel@vger.kernel.org List,
	open list:THERMAL, linux-arm-kernel, kvm-riscv

On Wed, Feb 9, 2022 at 9:51 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> From: Anup Patel <anup.patel@wdc.com>
>
> The RISC-V SBI HSM extension provides HSM suspend call which can
> be used by Linux RISC-V to enter platform specific low-power state.
>
> This patch adds a CPU idle driver based on RISC-V SBI calls which
> will populate idle states from device tree and use SBI calls to
> entry these idle states.
>
> Signed-off-by: Anup Patel <anup.patel@wdc.com>
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> ---
>  MAINTAINERS                         |   7 +
>  drivers/cpuidle/Kconfig             |   5 +
>  drivers/cpuidle/Kconfig.riscv       |  15 +
>  drivers/cpuidle/Makefile            |   4 +
>  drivers/cpuidle/cpuidle-riscv-sbi.c | 627 ++++++++++++++++++++++++++++
>  5 files changed, 658 insertions(+)
>  create mode 100644 drivers/cpuidle/Kconfig.riscv
>  create mode 100644 drivers/cpuidle/cpuidle-riscv-sbi.c
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 39ece23e8d93..2ff0055a26a7 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -5058,6 +5058,13 @@ S:       Supported
>  F:     drivers/cpuidle/dt_idle_genpd.c
>  F:     drivers/cpuidle/dt_idle_genpd.h
>
> +CPUIDLE DRIVER - RISC-V SBI
> +M:     Anup Patel <anup@brainfault.org>
> +L:     linux-pm@vger.kernel.org
> +L:     linux-riscv@lists.infradead.org
> +S:     Maintained
> +F:     drivers/cpuidle/cpuidle-riscv-sbi.c
> +
>  CRAMFS FILESYSTEM
>  M:     Nicolas Pitre <nico@fluxnic.net>
>  S:     Maintained
> diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig
> index f1afe7ab6b54..ff71dd662880 100644
> --- a/drivers/cpuidle/Kconfig
> +++ b/drivers/cpuidle/Kconfig
> @@ -66,6 +66,11 @@ depends on PPC
>  source "drivers/cpuidle/Kconfig.powerpc"
>  endmenu
>
> +menu "RISC-V CPU Idle Drivers"
> +depends on RISCV
> +source "drivers/cpuidle/Kconfig.riscv"
> +endmenu
> +
>  config HALTPOLL_CPUIDLE
>         tristate "Halt poll cpuidle driver"
>         depends on X86 && KVM_GUEST
> diff --git a/drivers/cpuidle/Kconfig.riscv b/drivers/cpuidle/Kconfig.riscv
> new file mode 100644
> index 000000000000..78518c26af74
> --- /dev/null
> +++ b/drivers/cpuidle/Kconfig.riscv
> @@ -0,0 +1,15 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +#
> +# RISC-V CPU Idle drivers
> +#
> +
> +config RISCV_SBI_CPUIDLE
> +       bool "RISC-V SBI CPU idle Driver"
> +       depends on RISCV_SBI
> +       select DT_IDLE_STATES
> +       select CPU_IDLE_MULTIPLE_DRIVERS
> +       select DT_IDLE_GENPD if PM_GENERIC_DOMAINS_OF
> +       help
> +         Select this option to enable RISC-V SBI firmware based CPU idle
> +         driver for RISC-V systems. This drivers also supports hierarchical
> +         DT based layout of the idle state.
> diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile
> index 11a26cef279f..d103342b7cfc 100644
> --- a/drivers/cpuidle/Makefile
> +++ b/drivers/cpuidle/Makefile
> @@ -35,3 +35,7 @@ obj-$(CONFIG_MIPS_CPS_CPUIDLE)                += cpuidle-cps.o
>  # POWERPC drivers
>  obj-$(CONFIG_PSERIES_CPUIDLE)          += cpuidle-pseries.o
>  obj-$(CONFIG_POWERNV_CPUIDLE)          += cpuidle-powernv.o
> +
> +###############################################################################
> +# RISC-V drivers
> +obj-$(CONFIG_RISCV_SBI_CPUIDLE)                += cpuidle-riscv-sbi.o
> diff --git a/drivers/cpuidle/cpuidle-riscv-sbi.c b/drivers/cpuidle/cpuidle-riscv-sbi.c
> new file mode 100644
> index 000000000000..b459eda2cd37
> --- /dev/null
> +++ b/drivers/cpuidle/cpuidle-riscv-sbi.c
> @@ -0,0 +1,627 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * RISC-V SBI CPU idle driver.
> + *
> + * Copyright (c) 2021 Western Digital Corporation or its affiliates.
> + * Copyright (c) 2022 Ventana Micro Systems Inc.
> + */
> +
> +#define pr_fmt(fmt) "cpuidle-riscv-sbi: " fmt
> +
> +#include <linux/cpuidle.h>
> +#include <linux/cpumask.h>
> +#include <linux/cpu_pm.h>
> +#include <linux/cpu_cooling.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/of_device.h>
> +#include <linux/slab.h>
> +#include <linux/platform_device.h>
> +#include <linux/pm_domain.h>
> +#include <linux/pm_runtime.h>
> +#include <asm/cpuidle.h>
> +#include <asm/sbi.h>
> +#include <asm/suspend.h>
> +
> +#include "dt_idle_states.h"
> +#include "dt_idle_genpd.h"
> +
> +struct sbi_cpuidle_data {
> +       u32 *states;
> +       struct device *dev;
> +};
> +
> +struct sbi_domain_state {
> +       bool available;
> +       u32 state;
> +};
> +
> +static DEFINE_PER_CPU_READ_MOSTLY(struct sbi_cpuidle_data, sbi_cpuidle_data);
> +static DEFINE_PER_CPU(struct sbi_domain_state, domain_state);
> +static bool sbi_cpuidle_use_osi;
> +static bool sbi_cpuidle_use_cpuhp;
> +static bool sbi_cpuidle_pd_allow_domain_state;
> +
> +static inline void sbi_set_domain_state(u32 state)
> +{
> +       struct sbi_domain_state *data = this_cpu_ptr(&domain_state);
> +
> +       data->available = true;
> +       data->state = state;
> +}
> +
> +static inline u32 sbi_get_domain_state(void)
> +{
> +       struct sbi_domain_state *data = this_cpu_ptr(&domain_state);
> +
> +       return data->state;
> +}
> +
> +static inline void sbi_clear_domain_state(void)
> +{
> +       struct sbi_domain_state *data = this_cpu_ptr(&domain_state);
> +
> +       data->available = false;
> +}
> +
> +static inline bool sbi_is_domain_state_available(void)
> +{
> +       struct sbi_domain_state *data = this_cpu_ptr(&domain_state);
> +
> +       return data->available;
> +}
> +
> +static int sbi_suspend_finisher(unsigned long suspend_type,
> +                               unsigned long resume_addr,
> +                               unsigned long opaque)
> +{
> +       struct sbiret ret;
> +
> +       ret = sbi_ecall(SBI_EXT_HSM, SBI_EXT_HSM_HART_SUSPEND,
> +                       suspend_type, resume_addr, opaque, 0, 0, 0);
> +
> +       return (ret.error) ? sbi_err_map_linux_errno(ret.error) : 0;
> +}
> +
> +static int sbi_suspend(u32 state)
> +{
> +       if (state & SBI_HSM_SUSP_NON_RET_BIT)
> +               return cpu_suspend(state, sbi_suspend_finisher);
> +       else
> +               return sbi_suspend_finisher(state, 0, 0);
> +}
> +
> +static int sbi_cpuidle_enter_state(struct cpuidle_device *dev,
> +                                  struct cpuidle_driver *drv, int idx)
> +{
> +       u32 *states = __this_cpu_read(sbi_cpuidle_data.states);
> +
> +       return CPU_PM_CPU_IDLE_ENTER_PARAM(sbi_suspend, idx, states[idx]);
> +}
> +
> +static int __sbi_enter_domain_idle_state(struct cpuidle_device *dev,
> +                                         struct cpuidle_driver *drv, int idx,
> +                                         bool s2idle)
> +{
> +       struct sbi_cpuidle_data *data = this_cpu_ptr(&sbi_cpuidle_data);
> +       u32 *states = data->states;
> +       struct device *pd_dev = data->dev;
> +       u32 state;
> +       int ret;
> +
> +       ret = cpu_pm_enter();
> +       if (ret)
> +               return -1;
> +
> +       /* Do runtime PM to manage a hierarchical CPU toplogy. */
> +       rcu_irq_enter_irqson();
> +       if (s2idle)
> +               dev_pm_genpd_suspend(pd_dev);
> +       else
> +               pm_runtime_put_sync_suspend(pd_dev);
> +       rcu_irq_exit_irqson();
> +
> +       if (sbi_is_domain_state_available())
> +               state = sbi_get_domain_state();
> +       else
> +               state = states[idx];
> +
> +       ret = sbi_suspend(state) ? -1 : idx;
> +
> +       rcu_irq_enter_irqson();
> +       if (s2idle)
> +               dev_pm_genpd_resume(pd_dev);
> +       else
> +               pm_runtime_get_sync(pd_dev);
> +       rcu_irq_exit_irqson();
> +
> +       cpu_pm_exit();
> +
> +       /* Clear the domain state to start fresh when back from idle. */
> +       sbi_clear_domain_state();
> +       return ret;
> +}
> +
> +static int sbi_enter_domain_idle_state(struct cpuidle_device *dev,
> +                                      struct cpuidle_driver *drv, int idx)
> +{
> +       return __sbi_enter_domain_idle_state(dev, drv, idx, false);
> +}
> +
> +static int sbi_enter_s2idle_domain_idle_state(struct cpuidle_device *dev,
> +                                             struct cpuidle_driver *drv,
> +                                             int idx)
> +{
> +       return __sbi_enter_domain_idle_state(dev, drv, idx, true);
> +}
> +
> +static int sbi_cpuidle_cpuhp_up(unsigned int cpu)
> +{
> +       struct device *pd_dev = __this_cpu_read(sbi_cpuidle_data.dev);
> +
> +       if (pd_dev)
> +               pm_runtime_get_sync(pd_dev);
> +
> +       return 0;
> +}
> +
> +static int sbi_cpuidle_cpuhp_down(unsigned int cpu)
> +{
> +       struct device *pd_dev = __this_cpu_read(sbi_cpuidle_data.dev);
> +
> +       if (pd_dev) {
> +               pm_runtime_put_sync(pd_dev);
> +               /* Clear domain state to start fresh at next online. */
> +               sbi_clear_domain_state();
> +       }
> +
> +       return 0;
> +}
> +
> +static void sbi_idle_init_cpuhp(void)
> +{
> +       int err;
> +
> +       if (!sbi_cpuidle_use_cpuhp)
> +               return;
> +
> +       err = cpuhp_setup_state_nocalls(CPUHP_AP_CPU_PM_STARTING,
> +                                       "cpuidle/sbi:online",
> +                                       sbi_cpuidle_cpuhp_up,
> +                                       sbi_cpuidle_cpuhp_down);
> +       if (err)
> +               pr_warn("Failed %d while setup cpuhp state\n", err);
> +}
> +
> +static const struct of_device_id sbi_cpuidle_state_match[] = {
> +       { .compatible = "riscv,idle-state",
> +         .data = sbi_cpuidle_enter_state },
> +       { },
> +};
> +
> +static bool sbi_suspend_state_is_valid(u32 state)
> +{
> +       if (state > SBI_HSM_SUSPEND_RET_DEFAULT &&
> +           state < SBI_HSM_SUSPEND_RET_PLATFORM)
> +               return false;
> +       if (state > SBI_HSM_SUSPEND_NON_RET_DEFAULT &&
> +           state < SBI_HSM_SUSPEND_NON_RET_PLATFORM)
> +               return false;
> +       return true;
> +}
> +
> +static int sbi_dt_parse_state_node(struct device_node *np, u32 *state)
> +{
> +       int err = of_property_read_u32(np, "riscv,sbi-suspend-param", state);
> +
> +       if (err) {
> +               pr_warn("%pOF missing riscv,sbi-suspend-param property\n", np);
> +               return err;
> +       }
> +
> +       if (!sbi_suspend_state_is_valid(*state)) {
> +               pr_warn("Invalid SBI suspend state %#x\n", *state);
> +               return -EINVAL;
> +       }
> +
> +       return 0;
> +}
> +
> +static int sbi_dt_cpu_init_topology(struct cpuidle_driver *drv,
> +                                    struct sbi_cpuidle_data *data,
> +                                    unsigned int state_count, int cpu)
> +{
> +       /* Currently limit the hierarchical topology to be used in OSI mode. */
> +       if (!sbi_cpuidle_use_osi)
> +               return 0;
> +
> +       data->dev = dt_idle_attach_cpu(cpu, "sbi");
> +       if (IS_ERR_OR_NULL(data->dev))
> +               return PTR_ERR_OR_ZERO(data->dev);
> +
> +       /*
> +        * Using the deepest state for the CPU to trigger a potential selection
> +        * of a shared state for the domain, assumes the domain states are all
> +        * deeper states.
> +        */
> +       drv->states[state_count - 1].enter = sbi_enter_domain_idle_state;
> +       drv->states[state_count - 1].enter_s2idle =
> +                                       sbi_enter_s2idle_domain_idle_state;
> +       sbi_cpuidle_use_cpuhp = true;
> +
> +       return 0;
> +}
> +
> +static int sbi_cpuidle_dt_init_states(struct device *dev,
> +                                       struct cpuidle_driver *drv,
> +                                       unsigned int cpu,
> +                                       unsigned int state_count)
> +{
> +       struct sbi_cpuidle_data *data = per_cpu_ptr(&sbi_cpuidle_data, cpu);
> +       struct device_node *state_node;
> +       struct device_node *cpu_node;
> +       u32 *states;
> +       int i, ret;
> +
> +       cpu_node = of_cpu_device_node_get(cpu);
> +       if (!cpu_node)
> +               return -ENODEV;
> +
> +       states = devm_kcalloc(dev, state_count, sizeof(*states), GFP_KERNEL);
> +       if (!states) {
> +               ret = -ENOMEM;
> +               goto fail;
> +       }
> +
> +       /* Parse SBI specific details from state DT nodes */
> +       for (i = 1; i < state_count; i++) {
> +               state_node = of_get_cpu_state_node(cpu_node, i - 1);
> +               if (!state_node)
> +                       break;
> +
> +               ret = sbi_dt_parse_state_node(state_node, &states[i]);
> +               of_node_put(state_node);
> +
> +               if (ret)
> +                       return ret;
> +
> +               pr_debug("sbi-state %#x index %d\n", states[i], i);
> +       }
> +       if (i != state_count) {
> +               ret = -ENODEV;
> +               goto fail;
> +       }
> +
> +       /* Initialize optional data, used for the hierarchical topology. */
> +       ret = sbi_dt_cpu_init_topology(drv, data, state_count, cpu);
> +       if (ret < 0)
> +               return ret;
> +
> +       /* Store states in the per-cpu struct. */
> +       data->states = states;
> +
> +fail:
> +       of_node_put(cpu_node);
> +
> +       return ret;
> +}
> +
> +static void sbi_cpuidle_deinit_cpu(int cpu)
> +{
> +       struct sbi_cpuidle_data *data = per_cpu_ptr(&sbi_cpuidle_data, cpu);
> +
> +       dt_idle_detach_cpu(data->dev);
> +       sbi_cpuidle_use_cpuhp = false;
> +}
> +
> +static int sbi_cpuidle_init_cpu(struct device *dev, int cpu)
> +{
> +       struct cpuidle_driver *drv;
> +       unsigned int state_count = 0;
> +       int ret = 0;
> +
> +       drv = devm_kzalloc(dev, sizeof(*drv), GFP_KERNEL);
> +       if (!drv)
> +               return -ENOMEM;
> +
> +       drv->name = "sbi_cpuidle";
> +       drv->owner = THIS_MODULE;
> +       drv->cpumask = (struct cpumask *)cpumask_of(cpu);
> +
> +       /* RISC-V architectural WFI to be represented as state index 0. */
> +       drv->states[0].enter = sbi_cpuidle_enter_state;
> +       drv->states[0].exit_latency = 1;
> +       drv->states[0].target_residency = 1;
> +       drv->states[0].power_usage = UINT_MAX;
> +       strcpy(drv->states[0].name, "WFI");
> +       strcpy(drv->states[0].desc, "RISC-V WFI");
> +
> +       /*
> +        * If no DT idle states are detected (ret == 0) let the driver
> +        * initialization fail accordingly since there is no reason to
> +        * initialize the idle driver if only wfi is supported, the
> +        * default architectural back-end already executes wfi
> +        * on idle entry.
> +        */
> +       ret = dt_init_idle_driver(drv, sbi_cpuidle_state_match, 1);
> +       if (ret <= 0) {
> +               pr_debug("HART%ld: failed to parse DT idle states\n",
> +                        cpuid_to_hartid_map(cpu));
> +               return ret ? : -ENODEV;
> +       }
> +       state_count = ret + 1; /* Include WFI state as well */
> +
> +       /* Initialize idle states from DT. */
> +       ret = sbi_cpuidle_dt_init_states(dev, drv, cpu, state_count);
> +       if (ret) {
> +               pr_err("HART%ld: failed to init idle states\n",
> +                      cpuid_to_hartid_map(cpu));
> +               return ret;
> +       }
> +
> +       ret = cpuidle_register(drv, NULL);
> +       if (ret)
> +               goto deinit;
> +
> +       cpuidle_cooling_register(drv);
> +
> +       return 0;
> +deinit:
> +       sbi_cpuidle_deinit_cpu(cpu);
> +       return ret;
> +}
> +
> +static void sbi_cpuidle_domain_sync_state(struct device *dev)
> +{
> +       /*
> +        * All devices have now been attached/probed to the PM domain
> +        * topology, hence it's fine to allow domain states to be picked.
> +        */
> +       sbi_cpuidle_pd_allow_domain_state = true;
> +}
> +
> +#ifdef CONFIG_DT_IDLE_GENPD
> +
> +static int sbi_cpuidle_pd_power_off(struct generic_pm_domain *pd)
> +{
> +       struct genpd_power_state *state = &pd->states[pd->state_idx];
> +       u32 *pd_state;
> +
> +       if (!state->data)
> +               return 0;
> +
> +       if (!sbi_cpuidle_pd_allow_domain_state)
> +               return -EBUSY;
> +
> +       /* OSI mode is enabled, set the corresponding domain state. */
> +       pd_state = state->data;
> +       sbi_set_domain_state(*pd_state);
> +
> +       return 0;
> +}
> +
> +struct sbi_pd_provider {
> +       struct list_head link;
> +       struct device_node *node;
> +};
> +
> +static LIST_HEAD(sbi_pd_providers);
> +
> +static int sbi_pd_init(struct device_node *np)
> +{
> +       struct generic_pm_domain *pd;
> +       struct sbi_pd_provider *pd_provider;
> +       struct dev_power_governor *pd_gov;
> +       int ret = -ENOMEM, state_count = 0;
> +
> +       pd = dt_idle_pd_alloc(np, sbi_dt_parse_state_node);
> +       if (!pd)
> +               goto out;
> +
> +       pd_provider = kzalloc(sizeof(*pd_provider), GFP_KERNEL);
> +       if (!pd_provider)
> +               goto free_pd;
> +
> +       pd->flags |= GENPD_FLAG_IRQ_SAFE | GENPD_FLAG_CPU_DOMAIN;
> +
> +       /* Allow power off when OSI is available. */
> +       if (sbi_cpuidle_use_osi)
> +               pd->power_off = sbi_cpuidle_pd_power_off;
> +       else
> +               pd->flags |= GENPD_FLAG_ALWAYS_ON;
> +
> +       /* Use governor for CPU PM domains if it has some states to manage. */
> +       pd_gov = state_count > 0 ? &pm_domain_cpu_gov : NULL;
> +
> +       ret = pm_genpd_init(pd, pd_gov, false);
> +       if (ret)
> +               goto free_pd_prov;
> +
> +       ret = of_genpd_add_provider_simple(np, pd);
> +       if (ret)
> +               goto remove_pd;
> +
> +       pd_provider->node = of_node_get(np);
> +       list_add(&pd_provider->link, &sbi_pd_providers);
> +
> +       pr_debug("init PM domain %s\n", pd->name);
> +       return 0;
> +
> +remove_pd:
> +       pm_genpd_remove(pd);
> +free_pd_prov:
> +       kfree(pd_provider);
> +free_pd:
> +       dt_idle_pd_free(pd);
> +out:
> +       pr_err("failed to init PM domain ret=%d %pOF\n", ret, np);
> +       return ret;
> +}
> +
> +static void sbi_pd_remove(void)
> +{
> +       struct sbi_pd_provider *pd_provider, *it;
> +       struct generic_pm_domain *genpd;
> +
> +       list_for_each_entry_safe(pd_provider, it, &sbi_pd_providers, link) {
> +               of_genpd_del_provider(pd_provider->node);
> +
> +               genpd = of_genpd_remove_last(pd_provider->node);
> +               if (!IS_ERR(genpd))
> +                       kfree(genpd);
> +
> +               of_node_put(pd_provider->node);
> +               list_del(&pd_provider->link);
> +               kfree(pd_provider);
> +       }
> +}
> +
> +static int sbi_genpd_probe(struct device_node *np)
> +{
> +       struct device_node *node;
> +       int ret = 0, pd_count = 0;
> +
> +       if (!np)
> +               return -ENODEV;
> +
> +       /*
> +        * Parse child nodes for the "#power-domain-cells" property and
> +        * initialize a genpd/genpd-of-provider pair when it's found.
> +        */
> +       for_each_child_of_node(np, node) {
> +               if (!of_find_property(node, "#power-domain-cells", NULL))
> +                       continue;
> +
> +               ret = sbi_pd_init(node);
> +               if (ret)
> +                       goto put_node;
> +
> +               pd_count++;
> +       }
> +
> +       /* Bail out if not using the hierarchical CPU topology. */
> +       if (!pd_count)
> +               goto no_pd;
> +
> +       /* Link genpd masters/subdomains to model the CPU topology. */
> +       ret = dt_idle_pd_init_topology(np);
> +       if (ret)
> +               goto remove_pd;
> +
> +       return 0;
> +
> +put_node:
> +       of_node_put(node);
> +remove_pd:
> +       sbi_pd_remove();
> +       pr_err("failed to create CPU PM domains ret=%d\n", ret);
> +no_pd:
> +       return ret;
> +}
> +
> +#else
> +
> +static inline int sbi_genpd_probe(struct device_node *np)
> +{
> +       return 0;
> +}
> +
> +#endif
> +
> +static int sbi_cpuidle_probe(struct platform_device *pdev)
> +{
> +       int cpu, ret;
> +       struct cpuidle_driver *drv;
> +       struct cpuidle_device *dev;
> +       struct device_node *np, *pds_node;
> +
> +       /* Detect OSI support based on CPU DT nodes */
> +       sbi_cpuidle_use_osi = true;
> +       for_each_possible_cpu(cpu) {
> +               np = of_cpu_device_node_get(cpu);
> +               if (np &&
> +                   of_find_property(np, "power-domains", NULL) &&
> +                   of_find_property(np, "power-domain-names", NULL)) {
> +                       continue;
> +               } else {
> +                       sbi_cpuidle_use_osi = false;
> +                       break;
> +               }
> +       }
> +
> +       /* Populate generic power domains from DT nodes */
> +       pds_node = of_find_node_by_path("/cpus/power-domains");
> +       if (pds_node) {
> +               ret = sbi_genpd_probe(pds_node);
> +               of_node_put(pds_node);
> +               if (ret)
> +                       return ret;
> +       }
> +
> +       /* Initialize CPU idle driver for each CPU */
> +       for_each_possible_cpu(cpu) {
> +               ret = sbi_cpuidle_init_cpu(&pdev->dev, cpu);
> +               if (ret) {
> +                       pr_debug("HART%ld: idle driver init failed\n",
> +                                cpuid_to_hartid_map(cpu));
> +                       goto out_fail;
> +               }
> +       }
> +
> +       /* Set up CPU hotplug notifiers */
> +       sbi_idle_init_cpuhp();
> +
> +       pr_info("idle driver registered for all CPUs\n");
> +
> +       return 0;
> +
> +out_fail:
> +       while (--cpu >= 0) {
> +               dev = per_cpu(cpuidle_devices, cpu);
> +               drv = cpuidle_get_cpu_driver(dev);
> +               cpuidle_unregister(drv);
> +               sbi_cpuidle_deinit_cpu(cpu);
> +       }
> +
> +       return ret;
> +}
> +
> +static struct platform_driver sbi_cpuidle_driver = {
> +       .probe = sbi_cpuidle_probe,
> +       .driver = {
> +               .name = "sbi-cpuidle",
> +               .sync_state = sbi_cpuidle_domain_sync_state,
> +       },
> +};
> +
> +static int __init sbi_cpuidle_init(void)
> +{
> +       int ret;
> +       struct platform_device *pdev;
> +
> +       /*
> +        * The SBI HSM suspend function is only available when:
> +        * 1) SBI version is 0.3 or higher
> +        * 2) SBI HSM extension is available
> +        */
> +       if ((sbi_spec_version < sbi_mk_version(0, 3)) ||
> +           sbi_probe_extension(SBI_EXT_HSM) <= 0) {
> +               pr_info("HSM suspend not available\n");
> +               return 0;
> +       }
> +
> +       ret = platform_driver_register(&sbi_cpuidle_driver);
> +       if (ret)
> +               return ret;
> +
> +       pdev = platform_device_register_simple("sbi-cpuidle",
> +                                               -1, NULL, 0);
> +       if (IS_ERR(pdev)) {
> +               platform_driver_unregister(&sbi_cpuidle_driver);
> +               return PTR_ERR(pdev);
> +       }
> +
> +       return 0;
> +}
> +device_initcall(sbi_cpuidle_init);
> --
> 2.25.1
>

For the SBI part,
Acked-by: Atish Patra <atishp@rivosinc.com>

FYI..
SBI HSM suspend was included in SBI v0.3. The current version of the
SBI specification (v1.0-rc2)
is already frozen as per the RVI guidelines. All the comments received
during the public review period
have been addressed as well.
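
For readers mapping the spec discussion back to the driver: the HSM suspend
types end up in the "riscv,sbi-suspend-param" property of the DT idle-state
nodes that the driver above parses. Below is a minimal, illustrative sketch of
the flat (non-hierarchical) layout; the compatible string and property names
follow the bindings patches in this series, while the node/label names and the
latency/residency numbers are invented for illustration only:

    cpus {
        cpu@0 {
            /* usual reg/compatible/riscv,isa properties elided */
            cpu-idle-states = <&cpu_ret>;
        };

        idle-states {
            cpu_ret: cpu-retentive {
                compatible = "riscv,idle-state";
                /* 0x0 = default retentive suspend (SBI_HSM_SUSPEND_RET_DEFAULT) */
                riscv,sbi-suspend-param = <0x00000000>;
                entry-latency-us = <20>;
                exit-latency-us = <40>;
                min-residency-us = <80>;
            };
        };
    };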


-- 
Regards,
Atish

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v11 6/8] cpuidle: Add RISC-V SBI CPU idle driver
  2022-02-16  8:09   ` Atish Patra
@ 2022-02-16 13:45     ` Jessica Clarke
  2022-02-16 21:21       ` Atish Patra
  0 siblings, 1 reply; 25+ messages in thread
From: Jessica Clarke @ 2022-02-16 13:45 UTC (permalink / raw)
  To: Atish Patra
  Cc: Anup Patel, Palmer Dabbelt, Paul Walmsley, Albert Ou,
	Daniel Lezcano, Ulf Hansson, Rafael J . Wysocki, Pavel Machek,
	Rob Herring, Sandeep Tripathy, Alistair Francis, Liush,
	Anup Patel, devicetree, linux-riscv,
	linux-kernel@vger.kernel.org List, open list:THERMAL,
	linux-arm-kernel, kvm-riscv

On 16 Feb 2022, at 08:09, Atish Patra <atishp@atishpatra.org> wrote:
> 
> On Wed, Feb 9, 2022 at 9:51 PM Anup Patel <apatel@ventanamicro.com> wrote:
>> [...]
> 
> For the SBI part,
> Acked-by: Atish Patra <atishp@rivosinc.com>
> 
> FYI..
> SBI HSM suspend was included in SBI v0.3. The current version of the
> SBI specification (v1.0-rc2)
> is already frozen as per the RVI guidelines. All the comments received
> during the public review period
> have been addressed as well.

Yet not all comments from *before* the public review period.

Jess


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v11 6/8] cpuidle: Add RISC-V SBI CPU idle driver
  2022-02-16 13:45     ` Jessica Clarke
@ 2022-02-16 21:21       ` Atish Patra
  0 siblings, 0 replies; 25+ messages in thread
From: Atish Patra @ 2022-02-16 21:21 UTC (permalink / raw)
  To: Jessica Clarke
  Cc: Anup Patel, Palmer Dabbelt, Paul Walmsley, Albert Ou,
	Daniel Lezcano, Ulf Hansson, Rafael J . Wysocki, Pavel Machek,
	Rob Herring, Sandeep Tripathy, Alistair Francis, Liush,
	Anup Patel, devicetree, linux-riscv,
	linux-kernel@vger.kernel.org List, open list:THERMAL,
	linux-arm-kernel, kvm-riscv

On Wed, Feb 16, 2022 at 5:45 AM Jessica Clarke <jrtc27@jrtc27.com> wrote:
>
> On 16 Feb 2022, at 08:09, Atish Patra <atishp@atishpatra.org> wrote:
> >
> > On Wed, Feb 9, 2022 at 9:51 PM Anup Patel <apatel@ventanamicro.com> wrote:
> >> [...]
> >
> > For the SBI part,
> > Acked-by: Atish Patra <atishp@rivosinc.com>
> >
> > FYI..
> > SBI HSM suspend was included in SBI v0.3. The current version of the
> > SBI specification (v1.0-rc2)
> > is already frozen as per the RVI guidelines. All the comments received
> > during the public review period
> > have been addressed as well.
>
> Yet not all comments from *before* the public review period.
>

I guess you are talking about the following issue:
https://github.com/riscv-non-isa/riscv-sbi-doc/issues/82

The issues raised there concern only the legacy version (v0.1), which is
disabled in the kernel in the latest release.
I have left detailed comments on why no changes to the spec are necessary.

> Jess
>


-- 
Regards,
Atish

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v11 4/8] RISC-V: Add SBI HSM suspend related defines
  2022-02-10  5:49 ` [PATCH v11 4/8] RISC-V: Add SBI HSM suspend related defines Anup Patel
  2022-02-16  7:57   ` Atish Patra
@ 2022-02-23  7:02   ` Anup Patel
  2022-03-08  6:04     ` Anup Patel
  1 sibling, 1 reply; 25+ messages in thread
From: Anup Patel @ 2022-02-23  7:02 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Paul Walmsley, Albert Ou, Daniel Lezcano, Anup Patel,
	Rafael J . Wysocki, Pavel Machek, Rob Herring, Ulf Hansson,
	Sandeep Tripathy, Atish Patra, Alistair Francis, Liush, DTML,
	linux-riscv, linux-kernel@vger.kernel.org List,
	open list:THERMAL, linux-arm-kernel, kvm-riscv, Guo Ren

Hi Palmer

On Thu, Feb 10, 2022 at 11:20 AM Anup Patel <apatel@ventanamicro.com> wrote:
>
> From: Anup Patel <anup.patel@wdc.com>
>
> We add defines related to the SBI HSM suspend call and also
> update the HSM state naming as per the latest SBI specification.
>
> Signed-off-by: Anup Patel <anup.patel@wdc.com>
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> Reviewed-by: Guo Ren <guoren@kernel.org>

This patch is shared with "KVM RISC-V SBI v0.3 support".
(https://lore.kernel.org/all/20220201082227.361967-2-apatel@ventanamicro.com/T/)

How do you want to handle this?

One option is that I take this patch through the KVM RISC-V tree
and you can send this series (minus this patch) for 5.18 after the
KVM RISC-V changes have been merged.

Regards,
Anup

> ---
>  arch/riscv/include/asm/sbi.h    | 27 ++++++++++++++++++++++-----
>  arch/riscv/kernel/cpu_ops_sbi.c |  2 +-
>  arch/riscv/kvm/vcpu_sbi_hsm.c   |  4 ++--
>  3 files changed, 25 insertions(+), 8 deletions(-)
>
> diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
> index d1c37479d828..06133b4f8e20 100644
> --- a/arch/riscv/include/asm/sbi.h
> +++ b/arch/riscv/include/asm/sbi.h
> @@ -71,15 +71,32 @@ enum sbi_ext_hsm_fid {
>         SBI_EXT_HSM_HART_START = 0,
>         SBI_EXT_HSM_HART_STOP,
>         SBI_EXT_HSM_HART_STATUS,
> +       SBI_EXT_HSM_HART_SUSPEND,
>  };
>
> -enum sbi_hsm_hart_status {
> -       SBI_HSM_HART_STATUS_STARTED = 0,
> -       SBI_HSM_HART_STATUS_STOPPED,
> -       SBI_HSM_HART_STATUS_START_PENDING,
> -       SBI_HSM_HART_STATUS_STOP_PENDING,
> +enum sbi_hsm_hart_state {
> +       SBI_HSM_STATE_STARTED = 0,
> +       SBI_HSM_STATE_STOPPED,
> +       SBI_HSM_STATE_START_PENDING,
> +       SBI_HSM_STATE_STOP_PENDING,
> +       SBI_HSM_STATE_SUSPENDED,
> +       SBI_HSM_STATE_SUSPEND_PENDING,
> +       SBI_HSM_STATE_RESUME_PENDING,
>  };
>
> +#define SBI_HSM_SUSP_BASE_MASK                 0x7fffffff
> +#define SBI_HSM_SUSP_NON_RET_BIT               0x80000000
> +#define SBI_HSM_SUSP_PLAT_BASE                 0x10000000
> +
> +#define SBI_HSM_SUSPEND_RET_DEFAULT            0x00000000
> +#define SBI_HSM_SUSPEND_RET_PLATFORM           SBI_HSM_SUSP_PLAT_BASE
> +#define SBI_HSM_SUSPEND_RET_LAST               SBI_HSM_SUSP_BASE_MASK
> +#define SBI_HSM_SUSPEND_NON_RET_DEFAULT                SBI_HSM_SUSP_NON_RET_BIT
> +#define SBI_HSM_SUSPEND_NON_RET_PLATFORM       (SBI_HSM_SUSP_NON_RET_BIT | \
> +                                                SBI_HSM_SUSP_PLAT_BASE)
> +#define SBI_HSM_SUSPEND_NON_RET_LAST           (SBI_HSM_SUSP_NON_RET_BIT | \
> +                                                SBI_HSM_SUSP_BASE_MASK)
> +
>  enum sbi_ext_srst_fid {
>         SBI_EXT_SRST_RESET = 0,
>  };
> diff --git a/arch/riscv/kernel/cpu_ops_sbi.c b/arch/riscv/kernel/cpu_ops_sbi.c
> index dae29cbfe550..2e16f6732cdf 100644
> --- a/arch/riscv/kernel/cpu_ops_sbi.c
> +++ b/arch/riscv/kernel/cpu_ops_sbi.c
> @@ -111,7 +111,7 @@ static int sbi_cpu_is_stopped(unsigned int cpuid)
>
>         rc = sbi_hsm_hart_get_status(hartid);
>
> -       if (rc == SBI_HSM_HART_STATUS_STOPPED)
> +       if (rc == SBI_HSM_STATE_STOPPED)
>                 return 0;
>         return rc;
>  }
> diff --git a/arch/riscv/kvm/vcpu_sbi_hsm.c b/arch/riscv/kvm/vcpu_sbi_hsm.c
> index 2e383687fa48..1ac4b2e8e4ec 100644
> --- a/arch/riscv/kvm/vcpu_sbi_hsm.c
> +++ b/arch/riscv/kvm/vcpu_sbi_hsm.c
> @@ -60,9 +60,9 @@ static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu)
>         if (!target_vcpu)
>                 return -EINVAL;
>         if (!target_vcpu->arch.power_off)
> -               return SBI_HSM_HART_STATUS_STARTED;
> +               return SBI_HSM_STATE_STARTED;
>         else
> -               return SBI_HSM_HART_STATUS_STOPPED;
> +               return SBI_HSM_STATE_STOPPED;
>  }
>
>  static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 25+ messages in thread
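A quick aside on the new defines (an illustrative sketch only, not part of the patch or the thread): the suspend_type value passed to SBI_EXT_HSM_HART_SUSPEND uses bit 31 (SBI_HSM_SUSP_NON_RET_BIT) to select non-retentive suspend, and within each half the values from SBI_HSM_SUSP_PLAT_BASE up to SBI_HSM_SUSP_BASE_MASK are platform-specific. The helper names and the stand-alone user-space framing below are invented for illustration; in the series itself this classification is done by sbi_suspend_state_is_valid() and sbi_suspend() in PATCH 6.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Values copied from the asm/sbi.h hunk quoted above. */
#define SBI_HSM_SUSP_BASE_MASK		0x7fffffffu
#define SBI_HSM_SUSP_NON_RET_BIT	0x80000000u
#define SBI_HSM_SUSP_PLAT_BASE		0x10000000u

/* Bit 31 set: the hart loses state and resumes at the resume_addr argument. */
static bool susp_is_non_retentive(uint32_t type)
{
	return type & SBI_HSM_SUSP_NON_RET_BIT;
}

/* Within each half, values from PLAT_BASE up to BASE_MASK are platform-specific. */
static bool susp_is_platform_specific(uint32_t type)
{
	return (type & SBI_HSM_SUSP_BASE_MASK) >= SBI_HSM_SUSP_PLAT_BASE;
}

int main(void)
{
	const uint32_t examples[] = {
		0x00000000,	/* SBI_HSM_SUSPEND_RET_DEFAULT */
		0x80000000,	/* SBI_HSM_SUSPEND_NON_RET_DEFAULT */
		0x90000000,	/* hypothetical platform-specific non-retentive state */
	};

	for (unsigned int i = 0; i < sizeof(examples) / sizeof(examples[0]); i++)
		printf("0x%08x: non-retentive=%d platform-specific=%d\n",
		       examples[i], susp_is_non_retentive(examples[i]),
		       susp_is_platform_specific(examples[i]));

	return 0;
}

The reserved ranges (the values sbi_suspend_state_is_valid() rejects) fall between the default and platform-specific values in each half.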

* Re: [PATCH v11 4/8] RISC-V: Add SBI HSM suspend related defines
  2022-02-23  7:02   ` Anup Patel
@ 2022-03-08  6:04     ` Anup Patel
  0 siblings, 0 replies; 25+ messages in thread
From: Anup Patel @ 2022-03-08  6:04 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Paul Walmsley, Albert Ou, Daniel Lezcano, Anup Patel,
	Rafael J . Wysocki, Pavel Machek, Rob Herring, Ulf Hansson,
	Sandeep Tripathy, Atish Patra, Alistair Francis, Liush, DTML,
	linux-riscv, linux-kernel@vger.kernel.org List,
	open list:THERMAL, linux-arm-kernel, kvm-riscv, Guo Ren

Hi Palmer,

On Wed, Feb 23, 2022 at 12:32 PM Anup Patel <anup@brainfault.org> wrote:
>
> Hi Palmer
>
> On Thu, Feb 10, 2022 at 11:20 AM Anup Patel <apatel@ventanamicro.com> wrote:
> >
> > From: Anup Patel <anup.patel@wdc.com>
> >
> > We add defines related to the SBI HSM suspend call and also
> > update the HSM state naming as per the latest SBI specification.
> >
> > Signed-off-by: Anup Patel <anup.patel@wdc.com>
> > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > Reviewed-by: Guo Ren <guoren@kernel.org>
>
> This patch is shared with "KVM RISC-V SBI v0.3 support".
> (https://lore.kernel.org/all/20220201082227.361967-2-apatel@ventanamicro.com/T/)
>
> How do you want to handle this?
>
> One option is that I take this patch through the KVM RISC-V tree
> and you can send this series (minus this patch) for 5.18 after the
> KVM RISC-V changes have been merged.

I have queued this patch for 5.18. Let me know if you want to
handle this patch differently.

Thanks,
Anup

>
> Regards,
> Anup
>
> > ---
> >  arch/riscv/include/asm/sbi.h    | 27 ++++++++++++++++++++++-----
> >  arch/riscv/kernel/cpu_ops_sbi.c |  2 +-
> >  arch/riscv/kvm/vcpu_sbi_hsm.c   |  4 ++--
> >  3 files changed, 25 insertions(+), 8 deletions(-)
> >
> > diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
> > index d1c37479d828..06133b4f8e20 100644
> > --- a/arch/riscv/include/asm/sbi.h
> > +++ b/arch/riscv/include/asm/sbi.h
> > @@ -71,15 +71,32 @@ enum sbi_ext_hsm_fid {
> >         SBI_EXT_HSM_HART_START = 0,
> >         SBI_EXT_HSM_HART_STOP,
> >         SBI_EXT_HSM_HART_STATUS,
> > +       SBI_EXT_HSM_HART_SUSPEND,
> >  };
> >
> > -enum sbi_hsm_hart_status {
> > -       SBI_HSM_HART_STATUS_STARTED = 0,
> > -       SBI_HSM_HART_STATUS_STOPPED,
> > -       SBI_HSM_HART_STATUS_START_PENDING,
> > -       SBI_HSM_HART_STATUS_STOP_PENDING,
> > +enum sbi_hsm_hart_state {
> > +       SBI_HSM_STATE_STARTED = 0,
> > +       SBI_HSM_STATE_STOPPED,
> > +       SBI_HSM_STATE_START_PENDING,
> > +       SBI_HSM_STATE_STOP_PENDING,
> > +       SBI_HSM_STATE_SUSPENDED,
> > +       SBI_HSM_STATE_SUSPEND_PENDING,
> > +       SBI_HSM_STATE_RESUME_PENDING,
> >  };
> >
> > +#define SBI_HSM_SUSP_BASE_MASK                 0x7fffffff
> > +#define SBI_HSM_SUSP_NON_RET_BIT               0x80000000
> > +#define SBI_HSM_SUSP_PLAT_BASE                 0x10000000
> > +
> > +#define SBI_HSM_SUSPEND_RET_DEFAULT            0x00000000
> > +#define SBI_HSM_SUSPEND_RET_PLATFORM           SBI_HSM_SUSP_PLAT_BASE
> > +#define SBI_HSM_SUSPEND_RET_LAST               SBI_HSM_SUSP_BASE_MASK
> > +#define SBI_HSM_SUSPEND_NON_RET_DEFAULT                SBI_HSM_SUSP_NON_RET_BIT
> > +#define SBI_HSM_SUSPEND_NON_RET_PLATFORM       (SBI_HSM_SUSP_NON_RET_BIT | \
> > +                                                SBI_HSM_SUSP_PLAT_BASE)
> > +#define SBI_HSM_SUSPEND_NON_RET_LAST           (SBI_HSM_SUSP_NON_RET_BIT | \
> > +                                                SBI_HSM_SUSP_BASE_MASK)
> > +
> >  enum sbi_ext_srst_fid {
> >         SBI_EXT_SRST_RESET = 0,
> >  };
> > diff --git a/arch/riscv/kernel/cpu_ops_sbi.c b/arch/riscv/kernel/cpu_ops_sbi.c
> > index dae29cbfe550..2e16f6732cdf 100644
> > --- a/arch/riscv/kernel/cpu_ops_sbi.c
> > +++ b/arch/riscv/kernel/cpu_ops_sbi.c
> > @@ -111,7 +111,7 @@ static int sbi_cpu_is_stopped(unsigned int cpuid)
> >
> >         rc = sbi_hsm_hart_get_status(hartid);
> >
> > -       if (rc == SBI_HSM_HART_STATUS_STOPPED)
> > +       if (rc == SBI_HSM_STATE_STOPPED)
> >                 return 0;
> >         return rc;
> >  }
> > diff --git a/arch/riscv/kvm/vcpu_sbi_hsm.c b/arch/riscv/kvm/vcpu_sbi_hsm.c
> > index 2e383687fa48..1ac4b2e8e4ec 100644
> > --- a/arch/riscv/kvm/vcpu_sbi_hsm.c
> > +++ b/arch/riscv/kvm/vcpu_sbi_hsm.c
> > @@ -60,9 +60,9 @@ static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu)
> >         if (!target_vcpu)
> >                 return -EINVAL;
> >         if (!target_vcpu->arch.power_off)
> > -               return SBI_HSM_HART_STATUS_STARTED;
> > +               return SBI_HSM_STATE_STARTED;
> >         else
> > -               return SBI_HSM_HART_STATUS_STOPPED;
> > +               return SBI_HSM_STATE_STOPPED;
> >  }
> >
> >  static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
> > --
> > 2.25.1
> >

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v11 1/8] RISC-V: Enable CPU_IDLE drivers
  2022-02-12 12:49     ` Anup Patel
@ 2022-03-10 18:43       ` Palmer Dabbelt
  0 siblings, 0 replies; 25+ messages in thread
From: Palmer Dabbelt @ 2022-03-10 18:43 UTC (permalink / raw)
  To: pavel, apatel
  Cc: pavel, Paul Walmsley, aou, daniel.lezcano, ulf.hansson, rjw,
	robh+dt, milun.tripathy, atishp, Alistair Francis, liush, anup,
	devicetree, linux-riscv, linux-kernel, linux-pm,
	linux-arm-kernel, kvm-riscv, guoren

On Sat, 12 Feb 2022 04:49:46 PST (-0800), apatel@ventanamicro.com wrote:
> On Sat, Feb 12, 2022 at 5:13 PM Pavel Machek <pavel@ucw.cz> wrote:
>>
>> Hi!
>>
>> > From: Anup Patel <anup.patel@wdc.com>
>> >
>> > We force select CPU_PM and provide asm/cpuidle.h so that we can
>> > use CPU IDLE drivers for the Linux RISC-V kernel.
>> >
>> > Signed-off-by: Anup Patel <anup.patel@wdc.com>
>> > Signed-off-by: Anup Patel <apatel@vetanamicro.com>
>>
>> This is quite... interesting. Normally we have one signoff per
>> person...
>
> I was working for Western Digital (WDC) when I first submitted this
> series and recently I joined Ventana Micro Systems.

IIUC that's the correct way to go about this, it's certainly what I'd 
do.

>
> Regards,
> Anup
>
>>
>> Best regards,
>>                                                         Pavel
>> --
>> http://www.livejournal.com/~pavelmachek

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v11 6/8] cpuidle: Add RISC-V SBI CPU idle driver
  2022-02-10  5:49 ` [PATCH v11 6/8] cpuidle: Add RISC-V SBI CPU idle driver Anup Patel
  2022-02-16  8:09   ` Atish Patra
@ 2022-03-10 20:01   ` Palmer Dabbelt
  2022-03-12  8:34   ` Anup Patel
  2 siblings, 0 replies; 25+ messages in thread
From: Palmer Dabbelt @ 2022-03-10 20:01 UTC (permalink / raw)
  To: apatel, rafael, daniel.lezcano
  Cc: Paul Walmsley, aou, daniel.lezcano, ulf.hansson, rjw, pavel,
	robh+dt, milun.tripathy, atishp, Alistair Francis, liush, anup,
	devicetree, linux-riscv, linux-kernel, linux-pm,
	linux-arm-kernel, kvm-riscv, apatel

On Wed, 09 Feb 2022 21:49:45 PST (-0800), apatel@ventanamicro.com wrote:
> From: Anup Patel <anup.patel@wdc.com>
>
> The RISC-V SBI HSM extension provides HSM suspend call which can
> be used by Linux RISC-V to enter platform specific low-power state.
>
> This patch adds a CPU idle driver based on RISC-V SBI calls which
> will populate idle states from the device tree and use SBI calls to
> enter these idle states.

This generally looks OK to me.

I'm happy to take it via the RISC-V tree, but I usually like to have at 
least an Ack from the subsystem maintainer before doing so.  I see some 
of the other patches got reviewed, not sure if you guys have issue with 
it or if it just slipped through the cracks.  Atish has confirmed the 
spec is frozen 
<https://lore.kernel.org/lkml/CAOnJCUKmwk=VbwCtkjS_rxArMWhVExeRp4QkkjDUmcvJ69Bqqg@mail.gmail.com/>, 
not sure if that's what folks were waiting for.

I've put the whole set on palmer/riscv-idle; there's one merge conflict 
with my current for-next:

diff --cc arch/riscv/kernel/head.S
index ec07f991866a,893b8bb69391..000000000000
--- a/arch/riscv/kernel/head.S
+++ b/arch/riscv/kernel/head.S
diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
index 48b4baa4d706..7b7ddfaaf671 100644
--- a/arch/riscv/include/asm/asm.h
+++ b/arch/riscv/include/asm/asm.h
@@ -85,6 +85,7 @@
        sub \reg, \reg, t0
 .endm
 _xip_fixup: .dword CONFIG_PHYS_RAM_BASE - CONFIG_XIP_PHYS_ADDR - XIP_OFFSET
+_xip_phys_offset: .dword CONFIG_XIP_PHYS_ADDR + XIP_OFFSET
 #else
 .macro XIP_FIXUP_OFFSET reg
 .endm

With that, the series is passing my tests, though I don't have anything 
specific for suspend yet.

I'll hold off on this until I get back around to my own message, to give 
the PM folks a chance to look if they want to.

Thanks!

>
> Signed-off-by: Anup Patel <anup.patel@wdc.com>
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> ---
>  MAINTAINERS                         |   7 +
>  drivers/cpuidle/Kconfig             |   5 +
>  drivers/cpuidle/Kconfig.riscv       |  15 +
>  drivers/cpuidle/Makefile            |   4 +
>  drivers/cpuidle/cpuidle-riscv-sbi.c | 627 ++++++++++++++++++++++++++++
>  5 files changed, 658 insertions(+)
>  create mode 100644 drivers/cpuidle/Kconfig.riscv
>  create mode 100644 drivers/cpuidle/cpuidle-riscv-sbi.c
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 39ece23e8d93..2ff0055a26a7 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -5058,6 +5058,13 @@ S:	Supported
>  F:	drivers/cpuidle/dt_idle_genpd.c
>  F:	drivers/cpuidle/dt_idle_genpd.h
>
> +CPUIDLE DRIVER - RISC-V SBI
> +M:	Anup Patel <anup@brainfault.org>
> +L:	linux-pm@vger.kernel.org
> +L:	linux-riscv@lists.infradead.org
> +S:	Maintained
> +F:	drivers/cpuidle/cpuidle-riscv-sbi.c
> +
>  CRAMFS FILESYSTEM
>  M:	Nicolas Pitre <nico@fluxnic.net>
>  S:	Maintained
> diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig
> index f1afe7ab6b54..ff71dd662880 100644
> --- a/drivers/cpuidle/Kconfig
> +++ b/drivers/cpuidle/Kconfig
> @@ -66,6 +66,11 @@ depends on PPC
>  source "drivers/cpuidle/Kconfig.powerpc"
>  endmenu
>
> +menu "RISC-V CPU Idle Drivers"
> +depends on RISCV
> +source "drivers/cpuidle/Kconfig.riscv"
> +endmenu
> +
>  config HALTPOLL_CPUIDLE
>  	tristate "Halt poll cpuidle driver"
>  	depends on X86 && KVM_GUEST
> diff --git a/drivers/cpuidle/Kconfig.riscv b/drivers/cpuidle/Kconfig.riscv
> new file mode 100644
> index 000000000000..78518c26af74
> --- /dev/null
> +++ b/drivers/cpuidle/Kconfig.riscv
> @@ -0,0 +1,15 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +#
> +# RISC-V CPU Idle drivers
> +#
> +
> +config RISCV_SBI_CPUIDLE
> +	bool "RISC-V SBI CPU idle Driver"
> +	depends on RISCV_SBI
> +	select DT_IDLE_STATES
> +	select CPU_IDLE_MULTIPLE_DRIVERS
> +	select DT_IDLE_GENPD if PM_GENERIC_DOMAINS_OF
> +	help
> +	  Select this option to enable RISC-V SBI firmware based CPU idle
> +	  driver for RISC-V systems. This driver also supports a hierarchical
> +	  DT based layout of the idle states.
> diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile
> index 11a26cef279f..d103342b7cfc 100644
> --- a/drivers/cpuidle/Makefile
> +++ b/drivers/cpuidle/Makefile
> @@ -35,3 +35,7 @@ obj-$(CONFIG_MIPS_CPS_CPUIDLE)		+= cpuidle-cps.o
>  # POWERPC drivers
>  obj-$(CONFIG_PSERIES_CPUIDLE)		+= cpuidle-pseries.o
>  obj-$(CONFIG_POWERNV_CPUIDLE)		+= cpuidle-powernv.o
> +
> +###############################################################################
> +# RISC-V drivers
> +obj-$(CONFIG_RISCV_SBI_CPUIDLE)		+= cpuidle-riscv-sbi.o
> diff --git a/drivers/cpuidle/cpuidle-riscv-sbi.c b/drivers/cpuidle/cpuidle-riscv-sbi.c
> new file mode 100644
> index 000000000000..b459eda2cd37
> --- /dev/null
> +++ b/drivers/cpuidle/cpuidle-riscv-sbi.c
> @@ -0,0 +1,627 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * RISC-V SBI CPU idle driver.
> + *
> + * Copyright (c) 2021 Western Digital Corporation or its affiliates.
> + * Copyright (c) 2022 Ventana Micro Systems Inc.
> + */
> +
> +#define pr_fmt(fmt) "cpuidle-riscv-sbi: " fmt
> +
> +#include <linux/cpuidle.h>
> +#include <linux/cpumask.h>
> +#include <linux/cpu_pm.h>
> +#include <linux/cpu_cooling.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/of_device.h>
> +#include <linux/slab.h>
> +#include <linux/platform_device.h>
> +#include <linux/pm_domain.h>
> +#include <linux/pm_runtime.h>
> +#include <asm/cpuidle.h>
> +#include <asm/sbi.h>
> +#include <asm/suspend.h>
> +
> +#include "dt_idle_states.h"
> +#include "dt_idle_genpd.h"
> +
> +struct sbi_cpuidle_data {
> +	u32 *states;
> +	struct device *dev;
> +};
> +
> +struct sbi_domain_state {
> +	bool available;
> +	u32 state;
> +};
> +
> +static DEFINE_PER_CPU_READ_MOSTLY(struct sbi_cpuidle_data, sbi_cpuidle_data);
> +static DEFINE_PER_CPU(struct sbi_domain_state, domain_state);
> +static bool sbi_cpuidle_use_osi;
> +static bool sbi_cpuidle_use_cpuhp;
> +static bool sbi_cpuidle_pd_allow_domain_state;
> +
> +static inline void sbi_set_domain_state(u32 state)
> +{
> +	struct sbi_domain_state *data = this_cpu_ptr(&domain_state);
> +
> +	data->available = true;
> +	data->state = state;
> +}
> +
> +static inline u32 sbi_get_domain_state(void)
> +{
> +	struct sbi_domain_state *data = this_cpu_ptr(&domain_state);
> +
> +	return data->state;
> +}
> +
> +static inline void sbi_clear_domain_state(void)
> +{
> +	struct sbi_domain_state *data = this_cpu_ptr(&domain_state);
> +
> +	data->available = false;
> +}
> +
> +static inline bool sbi_is_domain_state_available(void)
> +{
> +	struct sbi_domain_state *data = this_cpu_ptr(&domain_state);
> +
> +	return data->available;
> +}
> +
> +static int sbi_suspend_finisher(unsigned long suspend_type,
> +				unsigned long resume_addr,
> +				unsigned long opaque)
> +{
> +	struct sbiret ret;
> +
> +	ret = sbi_ecall(SBI_EXT_HSM, SBI_EXT_HSM_HART_SUSPEND,
> +			suspend_type, resume_addr, opaque, 0, 0, 0);
> +
> +	return (ret.error) ? sbi_err_map_linux_errno(ret.error) : 0;
> +}
> +
> +static int sbi_suspend(u32 state)
> +{
> +	if (state & SBI_HSM_SUSP_NON_RET_BIT)
> +		return cpu_suspend(state, sbi_suspend_finisher);
> +	else
> +		return sbi_suspend_finisher(state, 0, 0);
> +}
> +
> +static int sbi_cpuidle_enter_state(struct cpuidle_device *dev,
> +				   struct cpuidle_driver *drv, int idx)
> +{
> +	u32 *states = __this_cpu_read(sbi_cpuidle_data.states);
> +
> +	return CPU_PM_CPU_IDLE_ENTER_PARAM(sbi_suspend, idx, states[idx]);
> +}
> +
> +static int __sbi_enter_domain_idle_state(struct cpuidle_device *dev,
> +					  struct cpuidle_driver *drv, int idx,
> +					  bool s2idle)
> +{
> +	struct sbi_cpuidle_data *data = this_cpu_ptr(&sbi_cpuidle_data);
> +	u32 *states = data->states;
> +	struct device *pd_dev = data->dev;
> +	u32 state;
> +	int ret;
> +
> +	ret = cpu_pm_enter();
> +	if (ret)
> +		return -1;
> +
> +	/* Do runtime PM to manage a hierarchical CPU topology. */
> +	rcu_irq_enter_irqson();
> +	if (s2idle)
> +		dev_pm_genpd_suspend(pd_dev);
> +	else
> +		pm_runtime_put_sync_suspend(pd_dev);
> +	rcu_irq_exit_irqson();
> +
> +	if (sbi_is_domain_state_available())
> +		state = sbi_get_domain_state();
> +	else
> +		state = states[idx];
> +
> +	ret = sbi_suspend(state) ? -1 : idx;
> +
> +	rcu_irq_enter_irqson();
> +	if (s2idle)
> +		dev_pm_genpd_resume(pd_dev);
> +	else
> +		pm_runtime_get_sync(pd_dev);
> +	rcu_irq_exit_irqson();
> +
> +	cpu_pm_exit();
> +
> +	/* Clear the domain state to start fresh when back from idle. */
> +	sbi_clear_domain_state();
> +	return ret;
> +}
> +
> +static int sbi_enter_domain_idle_state(struct cpuidle_device *dev,
> +				       struct cpuidle_driver *drv, int idx)
> +{
> +	return __sbi_enter_domain_idle_state(dev, drv, idx, false);
> +}
> +
> +static int sbi_enter_s2idle_domain_idle_state(struct cpuidle_device *dev,
> +					      struct cpuidle_driver *drv,
> +					      int idx)
> +{
> +	return __sbi_enter_domain_idle_state(dev, drv, idx, true);
> +}
> +
> +static int sbi_cpuidle_cpuhp_up(unsigned int cpu)
> +{
> +	struct device *pd_dev = __this_cpu_read(sbi_cpuidle_data.dev);
> +
> +	if (pd_dev)
> +		pm_runtime_get_sync(pd_dev);
> +
> +	return 0;
> +}
> +
> +static int sbi_cpuidle_cpuhp_down(unsigned int cpu)
> +{
> +	struct device *pd_dev = __this_cpu_read(sbi_cpuidle_data.dev);
> +
> +	if (pd_dev) {
> +		pm_runtime_put_sync(pd_dev);
> +		/* Clear domain state to start fresh at next online. */
> +		sbi_clear_domain_state();
> +	}
> +
> +	return 0;
> +}
> +
> +static void sbi_idle_init_cpuhp(void)
> +{
> +	int err;
> +
> +	if (!sbi_cpuidle_use_cpuhp)
> +		return;
> +
> +	err = cpuhp_setup_state_nocalls(CPUHP_AP_CPU_PM_STARTING,
> +					"cpuidle/sbi:online",
> +					sbi_cpuidle_cpuhp_up,
> +					sbi_cpuidle_cpuhp_down);
> +	if (err)
> +		pr_warn("Failed %d while setting up cpuhp state\n", err);
> +}
> +
> +static const struct of_device_id sbi_cpuidle_state_match[] = {
> +	{ .compatible = "riscv,idle-state",
> +	  .data = sbi_cpuidle_enter_state },
> +	{ },
> +};
> +
> +static bool sbi_suspend_state_is_valid(u32 state)
> +{
> +	if (state > SBI_HSM_SUSPEND_RET_DEFAULT &&
> +	    state < SBI_HSM_SUSPEND_RET_PLATFORM)
> +		return false;
> +	if (state > SBI_HSM_SUSPEND_NON_RET_DEFAULT &&
> +	    state < SBI_HSM_SUSPEND_NON_RET_PLATFORM)
> +		return false;
> +	return true;
> +}
> +
> +static int sbi_dt_parse_state_node(struct device_node *np, u32 *state)
> +{
> +	int err = of_property_read_u32(np, "riscv,sbi-suspend-param", state);
> +
> +	if (err) {
> +		pr_warn("%pOF missing riscv,sbi-suspend-param property\n", np);
> +		return err;
> +	}
> +
> +	if (!sbi_suspend_state_is_valid(*state)) {
> +		pr_warn("Invalid SBI suspend state %#x\n", *state);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int sbi_dt_cpu_init_topology(struct cpuidle_driver *drv,
> +				     struct sbi_cpuidle_data *data,
> +				     unsigned int state_count, int cpu)
> +{
> +	/* Currently limit the hierarchical topology to be used in OSI mode. */
> +	if (!sbi_cpuidle_use_osi)
> +		return 0;
> +
> +	data->dev = dt_idle_attach_cpu(cpu, "sbi");
> +	if (IS_ERR_OR_NULL(data->dev))
> +		return PTR_ERR_OR_ZERO(data->dev);
> +
> +	/*
> +	 * Using the deepest state for the CPU to trigger a potential selection
> +	 * of a shared state for the domain, assumes the domain states are all
> +	 * deeper states.
> +	 */
> +	drv->states[state_count - 1].enter = sbi_enter_domain_idle_state;
> +	drv->states[state_count - 1].enter_s2idle =
> +					sbi_enter_s2idle_domain_idle_state;
> +	sbi_cpuidle_use_cpuhp = true;
> +
> +	return 0;
> +}
> +
> +static int sbi_cpuidle_dt_init_states(struct device *dev,
> +					struct cpuidle_driver *drv,
> +					unsigned int cpu,
> +					unsigned int state_count)
> +{
> +	struct sbi_cpuidle_data *data = per_cpu_ptr(&sbi_cpuidle_data, cpu);
> +	struct device_node *state_node;
> +	struct device_node *cpu_node;
> +	u32 *states;
> +	int i, ret;
> +
> +	cpu_node = of_cpu_device_node_get(cpu);
> +	if (!cpu_node)
> +		return -ENODEV;
> +
> +	states = devm_kcalloc(dev, state_count, sizeof(*states), GFP_KERNEL);
> +	if (!states) {
> +		ret = -ENOMEM;
> +		goto fail;
> +	}
> +
> +	/* Parse SBI specific details from state DT nodes */
> +	for (i = 1; i < state_count; i++) {
> +		state_node = of_get_cpu_state_node(cpu_node, i - 1);
> +		if (!state_node)
> +			break;
> +
> +		ret = sbi_dt_parse_state_node(state_node, &states[i]);
> +		of_node_put(state_node);
> +
> +		if (ret)
> +			return ret;
> +
> +		pr_debug("sbi-state %#x index %d\n", states[i], i);
> +	}
> +	if (i != state_count) {
> +		ret = -ENODEV;
> +		goto fail;
> +	}
> +
> +	/* Initialize optional data, used for the hierarchical topology. */
> +	ret = sbi_dt_cpu_init_topology(drv, data, state_count, cpu);
> +	if (ret < 0)
> +		return ret;
> +
> +	/* Store states in the per-cpu struct. */
> +	data->states = states;
> +
> +fail:
> +	of_node_put(cpu_node);
> +
> +	return ret;
> +}
> +
> +static void sbi_cpuidle_deinit_cpu(int cpu)
> +{
> +	struct sbi_cpuidle_data *data = per_cpu_ptr(&sbi_cpuidle_data, cpu);
> +
> +	dt_idle_detach_cpu(data->dev);
> +	sbi_cpuidle_use_cpuhp = false;
> +}
> +
> +static int sbi_cpuidle_init_cpu(struct device *dev, int cpu)
> +{
> +	struct cpuidle_driver *drv;
> +	unsigned int state_count = 0;
> +	int ret = 0;
> +
> +	drv = devm_kzalloc(dev, sizeof(*drv), GFP_KERNEL);
> +	if (!drv)
> +		return -ENOMEM;
> +
> +	drv->name = "sbi_cpuidle";
> +	drv->owner = THIS_MODULE;
> +	drv->cpumask = (struct cpumask *)cpumask_of(cpu);
> +
> +	/* RISC-V architectural WFI to be represented as state index 0. */
> +	drv->states[0].enter = sbi_cpuidle_enter_state;
> +	drv->states[0].exit_latency = 1;
> +	drv->states[0].target_residency = 1;
> +	drv->states[0].power_usage = UINT_MAX;
> +	strcpy(drv->states[0].name, "WFI");
> +	strcpy(drv->states[0].desc, "RISC-V WFI");
> +
> +	/*
> +	 * If no DT idle states are detected (ret == 0) let the driver
> +	 * initialization fail accordingly since there is no reason to
> +	 * initialize the idle driver if only wfi is supported, the
> +	 * default architectural back-end already executes wfi
> +	 * on idle entry.
> +	 */
> +	ret = dt_init_idle_driver(drv, sbi_cpuidle_state_match, 1);
> +	if (ret <= 0) {
> +		pr_debug("HART%ld: failed to parse DT idle states\n",
> +			 cpuid_to_hartid_map(cpu));
> +		return ret ? : -ENODEV;
> +	}
> +	state_count = ret + 1; /* Include WFI state as well */
> +
> +	/* Initialize idle states from DT. */
> +	ret = sbi_cpuidle_dt_init_states(dev, drv, cpu, state_count);
> +	if (ret) {
> +		pr_err("HART%ld: failed to init idle states\n",
> +		       cpuid_to_hartid_map(cpu));
> +		return ret;
> +	}
> +
> +	ret = cpuidle_register(drv, NULL);
> +	if (ret)
> +		goto deinit;
> +
> +	cpuidle_cooling_register(drv);
> +
> +	return 0;
> +deinit:
> +	sbi_cpuidle_deinit_cpu(cpu);
> +	return ret;
> +}
> +
> +static void sbi_cpuidle_domain_sync_state(struct device *dev)
> +{
> +	/*
> +	 * All devices have now been attached/probed to the PM domain
> +	 * topology, hence it's fine to allow domain states to be picked.
> +	 */
> +	sbi_cpuidle_pd_allow_domain_state = true;
> +}
> +
> +#ifdef CONFIG_DT_IDLE_GENPD
> +
> +static int sbi_cpuidle_pd_power_off(struct generic_pm_domain *pd)
> +{
> +	struct genpd_power_state *state = &pd->states[pd->state_idx];
> +	u32 *pd_state;
> +
> +	if (!state->data)
> +		return 0;
> +
> +	if (!sbi_cpuidle_pd_allow_domain_state)
> +		return -EBUSY;
> +
> +	/* OSI mode is enabled, set the corresponding domain state. */
> +	pd_state = state->data;
> +	sbi_set_domain_state(*pd_state);
> +
> +	return 0;
> +}
> +
> +struct sbi_pd_provider {
> +	struct list_head link;
> +	struct device_node *node;
> +};
> +
> +static LIST_HEAD(sbi_pd_providers);
> +
> +static int sbi_pd_init(struct device_node *np)
> +{
> +	struct generic_pm_domain *pd;
> +	struct sbi_pd_provider *pd_provider;
> +	struct dev_power_governor *pd_gov;
> +	int ret = -ENOMEM, state_count = 0;
> +
> +	pd = dt_idle_pd_alloc(np, sbi_dt_parse_state_node);
> +	if (!pd)
> +		goto out;
> +
> +	pd_provider = kzalloc(sizeof(*pd_provider), GFP_KERNEL);
> +	if (!pd_provider)
> +		goto free_pd;
> +
> +	pd->flags |= GENPD_FLAG_IRQ_SAFE | GENPD_FLAG_CPU_DOMAIN;
> +
> +	/* Allow power off when OSI is available. */
> +	if (sbi_cpuidle_use_osi)
> +		pd->power_off = sbi_cpuidle_pd_power_off;
> +	else
> +		pd->flags |= GENPD_FLAG_ALWAYS_ON;
> +
> +	/* Use governor for CPU PM domains if it has some states to manage. */
> +	pd_gov = state_count > 0 ? &pm_domain_cpu_gov : NULL;
> +
> +	ret = pm_genpd_init(pd, pd_gov, false);
> +	if (ret)
> +		goto free_pd_prov;
> +
> +	ret = of_genpd_add_provider_simple(np, pd);
> +	if (ret)
> +		goto remove_pd;
> +
> +	pd_provider->node = of_node_get(np);
> +	list_add(&pd_provider->link, &sbi_pd_providers);
> +
> +	pr_debug("init PM domain %s\n", pd->name);
> +	return 0;
> +
> +remove_pd:
> +	pm_genpd_remove(pd);
> +free_pd_prov:
> +	kfree(pd_provider);
> +free_pd:
> +	dt_idle_pd_free(pd);
> +out:
> +	pr_err("failed to init PM domain ret=%d %pOF\n", ret, np);
> +	return ret;
> +}
> +
> +static void sbi_pd_remove(void)
> +{
> +	struct sbi_pd_provider *pd_provider, *it;
> +	struct generic_pm_domain *genpd;
> +
> +	list_for_each_entry_safe(pd_provider, it, &sbi_pd_providers, link) {
> +		of_genpd_del_provider(pd_provider->node);
> +
> +		genpd = of_genpd_remove_last(pd_provider->node);
> +		if (!IS_ERR(genpd))
> +			kfree(genpd);
> +
> +		of_node_put(pd_provider->node);
> +		list_del(&pd_provider->link);
> +		kfree(pd_provider);
> +	}
> +}
> +
> +static int sbi_genpd_probe(struct device_node *np)
> +{
> +	struct device_node *node;
> +	int ret = 0, pd_count = 0;
> +
> +	if (!np)
> +		return -ENODEV;
> +
> +	/*
> +	 * Parse child nodes for the "#power-domain-cells" property and
> +	 * initialize a genpd/genpd-of-provider pair when it's found.
> +	 */
> +	for_each_child_of_node(np, node) {
> +		if (!of_find_property(node, "#power-domain-cells", NULL))
> +			continue;
> +
> +		ret = sbi_pd_init(node);
> +		if (ret)
> +			goto put_node;
> +
> +		pd_count++;
> +	}
> +
> +	/* Bail out if not using the hierarchical CPU topology. */
> +	if (!pd_count)
> +		goto no_pd;
> +
> +	/* Link genpd masters/subdomains to model the CPU topology. */
> +	ret = dt_idle_pd_init_topology(np);
> +	if (ret)
> +		goto remove_pd;
> +
> +	return 0;
> +
> +put_node:
> +	of_node_put(node);
> +remove_pd:
> +	sbi_pd_remove();
> +	pr_err("failed to create CPU PM domains ret=%d\n", ret);
> +no_pd:
> +	return ret;
> +}
> +
> +#else
> +
> +static inline int sbi_genpd_probe(struct device_node *np)
> +{
> +	return 0;
> +}
> +
> +#endif
> +
> +static int sbi_cpuidle_probe(struct platform_device *pdev)
> +{
> +	int cpu, ret;
> +	struct cpuidle_driver *drv;
> +	struct cpuidle_device *dev;
> +	struct device_node *np, *pds_node;
> +
> +	/* Detect OSI support based on CPU DT nodes */
> +	sbi_cpuidle_use_osi = true;
> +	for_each_possible_cpu(cpu) {
> +		np = of_cpu_device_node_get(cpu);
> +		if (np &&
> +		    of_find_property(np, "power-domains", NULL) &&
> +		    of_find_property(np, "power-domain-names", NULL)) {
> +			continue;
> +		} else {
> +			sbi_cpuidle_use_osi = false;
> +			break;
> +		}
> +	}
> +
> +	/* Populate generic power domains from DT nodes */
> +	pds_node = of_find_node_by_path("/cpus/power-domains");
> +	if (pds_node) {
> +		ret = sbi_genpd_probe(pds_node);
> +		of_node_put(pds_node);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	/* Initialize CPU idle driver for each CPU */
> +	for_each_possible_cpu(cpu) {
> +		ret = sbi_cpuidle_init_cpu(&pdev->dev, cpu);
> +		if (ret) {
> +			pr_debug("HART%ld: idle driver init failed\n",
> +				 cpuid_to_hartid_map(cpu));
> +			goto out_fail;
> +		}
> +	}
> +
> +	/* Setup CPU hotplug notifiers */
> +	sbi_idle_init_cpuhp();
> +
> +	pr_info("idle driver registered for all CPUs\n");
> +
> +	return 0;
> +
> +out_fail:
> +	while (--cpu >= 0) {
> +		dev = per_cpu(cpuidle_devices, cpu);
> +		drv = cpuidle_get_cpu_driver(dev);
> +		cpuidle_unregister(drv);
> +		sbi_cpuidle_deinit_cpu(cpu);
> +	}
> +
> +	return ret;
> +}
> +
> +static struct platform_driver sbi_cpuidle_driver = {
> +	.probe = sbi_cpuidle_probe,
> +	.driver = {
> +		.name = "sbi-cpuidle",
> +		.sync_state = sbi_cpuidle_domain_sync_state,
> +	},
> +};
> +
> +static int __init sbi_cpuidle_init(void)
> +{
> +	int ret;
> +	struct platform_device *pdev;
> +
> +	/*
> +	 * The SBI HSM suspend function is only available when:
> +	 * 1) SBI version is 0.3 or higher
> +	 * 2) SBI HSM extension is available
> +	 */
> +	if ((sbi_spec_version < sbi_mk_version(0, 3)) ||
> +	    sbi_probe_extension(SBI_EXT_HSM) <= 0) {
> +		pr_info("HSM suspend not available\n");
> +		return 0;
> +	}
> +
> +	ret = platform_driver_register(&sbi_cpuidle_driver);
> +	if (ret)
> +		return ret;
> +
> +	pdev = platform_device_register_simple("sbi-cpuidle",
> +						-1, NULL, 0);
> +	if (IS_ERR(pdev)) {
> +		platform_driver_unregister(&sbi_cpuidle_driver);
> +		return PTR_ERR(pdev);
> +	}
> +
> +	return 0;
> +}
> +device_initcall(sbi_cpuidle_init);

^ permalink raw reply related	[flat|nested] 25+ messages in thread
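A rough, user-space illustration (not from the patch) of the dispatch described in the commit message above: state index 0 is the plain architectural WFI, which the CPU_PM_CPU_IDLE_ENTER_PARAM() macro appears to handle without any SBI call; retentive states call the HSM suspend ecall directly; and non-retentive states go through cpu_suspend() so that the hart's context is saved first. The array contents and function names below are made up for illustration.

#include <stdint.h>
#include <stdio.h>

#define SBI_HSM_SUSP_NON_RET_BIT	0x80000000u

/*
 * Simulated dispatch for a selected cpuidle state index; "states" stands in
 * for the per-CPU array the driver fills from "riscv,sbi-suspend-param" DT
 * properties (index 0 is reserved for plain WFI).
 */
static void enter_idle(int idx, const uint32_t *states)
{
	if (idx == 0) {
		/* Architectural WFI for state 0 (no HSM suspend call in this sketch). */
		printf("idx 0: wfi\n");
	} else if (states[idx] & SBI_HSM_SUSP_NON_RET_BIT) {
		/* Non-retentive: save context first, then the HSM suspend ecall. */
		printf("idx %d: cpu_suspend() -> HSM HART_SUSPEND(0x%08x)\n",
		       idx, states[idx]);
	} else {
		/* Retentive: the hart keeps its state, so call SBI directly. */
		printf("idx %d: HSM HART_SUSPEND(0x%08x)\n", idx, states[idx]);
	}
}

int main(void)
{
	/* Hypothetical per-CPU values: retentive and non-retentive defaults. */
	const uint32_t states[] = { 0x0, 0x00000000, 0x80000000 };

	for (int idx = 0; idx < 3; idx++)
		enter_idle(idx, states);

	return 0;
}

In the driver itself the equivalent decisions live in sbi_cpuidle_enter_state() and sbi_suspend(), with the domain variants layering genpd/runtime-PM calls around the same suspend path.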

* Re: [PATCH v11 6/8] cpuidle: Add RISC-V SBI CPU idle driver
  2022-02-10  5:49 ` [PATCH v11 6/8] cpuidle: Add RISC-V SBI CPU idle driver Anup Patel
  2022-02-16  8:09   ` Atish Patra
  2022-03-10 20:01   ` Palmer Dabbelt
@ 2022-03-12  8:34   ` Anup Patel
  2 siblings, 0 replies; 25+ messages in thread
From: Anup Patel @ 2022-03-12  8:34 UTC (permalink / raw)
  To: Rafael J . Wysocki, Rafael J. Wysocki
  Cc: Palmer Dabbelt, Paul Walmsley, Albert Ou, Daniel Lezcano,
	Ulf Hansson, Pavel Machek, Rob Herring, Sandeep Tripathy,
	Atish Patra, Alistair Francis, Liush, DTML, linux-riscv,
	linux-kernel@vger.kernel.org List, open list:THERMAL,
	linux-arm-kernel, kvm-riscv, Anup Patel

Hi Rafael,

On Thu, Feb 10, 2022 at 11:21 AM Anup Patel <apatel@ventanamicro.com> wrote:
>
> From: Anup Patel <anup.patel@wdc.com>
>
> The RISC-V SBI HSM extension provides HSM suspend call which can
> be used by Linux RISC-V to enter platform specific low-power state.
>
> This patch adds a CPU idle driver based on RISC-V SBI calls which
> will populate idle states from the device tree and use SBI calls to
> enter these idle states.
>
> Signed-off-by: Anup Patel <anup.patel@wdc.com>
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>

Does this driver look okay to you?

Best Regards,
Anup Patel

> ---
>  MAINTAINERS                         |   7 +
>  drivers/cpuidle/Kconfig             |   5 +
>  drivers/cpuidle/Kconfig.riscv       |  15 +
>  drivers/cpuidle/Makefile            |   4 +
>  drivers/cpuidle/cpuidle-riscv-sbi.c | 627 ++++++++++++++++++++++++++++
>  5 files changed, 658 insertions(+)
>  create mode 100644 drivers/cpuidle/Kconfig.riscv
>  create mode 100644 drivers/cpuidle/cpuidle-riscv-sbi.c
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 39ece23e8d93..2ff0055a26a7 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -5058,6 +5058,13 @@ S:       Supported
>  F:     drivers/cpuidle/dt_idle_genpd.c
>  F:     drivers/cpuidle/dt_idle_genpd.h
>
> +CPUIDLE DRIVER - RISC-V SBI
> +M:     Anup Patel <anup@brainfault.org>
> +L:     linux-pm@vger.kernel.org
> +L:     linux-riscv@lists.infradead.org
> +S:     Maintained
> +F:     drivers/cpuidle/cpuidle-riscv-sbi.c
> +
>  CRAMFS FILESYSTEM
>  M:     Nicolas Pitre <nico@fluxnic.net>
>  S:     Maintained
> diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig
> index f1afe7ab6b54..ff71dd662880 100644
> --- a/drivers/cpuidle/Kconfig
> +++ b/drivers/cpuidle/Kconfig
> @@ -66,6 +66,11 @@ depends on PPC
>  source "drivers/cpuidle/Kconfig.powerpc"
>  endmenu
>
> +menu "RISC-V CPU Idle Drivers"
> +depends on RISCV
> +source "drivers/cpuidle/Kconfig.riscv"
> +endmenu
> +
>  config HALTPOLL_CPUIDLE
>         tristate "Halt poll cpuidle driver"
>         depends on X86 && KVM_GUEST
> diff --git a/drivers/cpuidle/Kconfig.riscv b/drivers/cpuidle/Kconfig.riscv
> new file mode 100644
> index 000000000000..78518c26af74
> --- /dev/null
> +++ b/drivers/cpuidle/Kconfig.riscv
> @@ -0,0 +1,15 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +#
> +# RISC-V CPU Idle drivers
> +#
> +
> +config RISCV_SBI_CPUIDLE
> +       bool "RISC-V SBI CPU idle Driver"
> +       depends on RISCV_SBI
> +       select DT_IDLE_STATES
> +       select CPU_IDLE_MULTIPLE_DRIVERS
> +       select DT_IDLE_GENPD if PM_GENERIC_DOMAINS_OF
> +       help
> +         Select this option to enable RISC-V SBI firmware based CPU idle
> +         driver for RISC-V systems. This driver also supports a hierarchical
> +         DT based layout of the idle states.
> diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile
> index 11a26cef279f..d103342b7cfc 100644
> --- a/drivers/cpuidle/Makefile
> +++ b/drivers/cpuidle/Makefile
> @@ -35,3 +35,7 @@ obj-$(CONFIG_MIPS_CPS_CPUIDLE)                += cpuidle-cps.o
>  # POWERPC drivers
>  obj-$(CONFIG_PSERIES_CPUIDLE)          += cpuidle-pseries.o
>  obj-$(CONFIG_POWERNV_CPUIDLE)          += cpuidle-powernv.o
> +
> +###############################################################################
> +# RISC-V drivers
> +obj-$(CONFIG_RISCV_SBI_CPUIDLE)                += cpuidle-riscv-sbi.o
> diff --git a/drivers/cpuidle/cpuidle-riscv-sbi.c b/drivers/cpuidle/cpuidle-riscv-sbi.c
> new file mode 100644
> index 000000000000..b459eda2cd37
> --- /dev/null
> +++ b/drivers/cpuidle/cpuidle-riscv-sbi.c
> @@ -0,0 +1,627 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * RISC-V SBI CPU idle driver.
> + *
> + * Copyright (c) 2021 Western Digital Corporation or its affiliates.
> + * Copyright (c) 2022 Ventana Micro Systems Inc.
> + */
> +
> +#define pr_fmt(fmt) "cpuidle-riscv-sbi: " fmt
> +
> +#include <linux/cpuidle.h>
> +#include <linux/cpumask.h>
> +#include <linux/cpu_pm.h>
> +#include <linux/cpu_cooling.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/of_device.h>
> +#include <linux/slab.h>
> +#include <linux/platform_device.h>
> +#include <linux/pm_domain.h>
> +#include <linux/pm_runtime.h>
> +#include <asm/cpuidle.h>
> +#include <asm/sbi.h>
> +#include <asm/suspend.h>
> +
> +#include "dt_idle_states.h"
> +#include "dt_idle_genpd.h"
> +
> +struct sbi_cpuidle_data {
> +       u32 *states;
> +       struct device *dev;
> +};
> +
> +struct sbi_domain_state {
> +       bool available;
> +       u32 state;
> +};
> +
> +static DEFINE_PER_CPU_READ_MOSTLY(struct sbi_cpuidle_data, sbi_cpuidle_data);
> +static DEFINE_PER_CPU(struct sbi_domain_state, domain_state);
> +static bool sbi_cpuidle_use_osi;
> +static bool sbi_cpuidle_use_cpuhp;
> +static bool sbi_cpuidle_pd_allow_domain_state;
> +
> +static inline void sbi_set_domain_state(u32 state)
> +{
> +       struct sbi_domain_state *data = this_cpu_ptr(&domain_state);
> +
> +       data->available = true;
> +       data->state = state;
> +}
> +
> +static inline u32 sbi_get_domain_state(void)
> +{
> +       struct sbi_domain_state *data = this_cpu_ptr(&domain_state);
> +
> +       return data->state;
> +}
> +
> +static inline void sbi_clear_domain_state(void)
> +{
> +       struct sbi_domain_state *data = this_cpu_ptr(&domain_state);
> +
> +       data->available = false;
> +}
> +
> +static inline bool sbi_is_domain_state_available(void)
> +{
> +       struct sbi_domain_state *data = this_cpu_ptr(&domain_state);
> +
> +       return data->available;
> +}
> +
> +static int sbi_suspend_finisher(unsigned long suspend_type,
> +                               unsigned long resume_addr,
> +                               unsigned long opaque)
> +{
> +       struct sbiret ret;
> +
> +       ret = sbi_ecall(SBI_EXT_HSM, SBI_EXT_HSM_HART_SUSPEND,
> +                       suspend_type, resume_addr, opaque, 0, 0, 0);
> +
> +       return (ret.error) ? sbi_err_map_linux_errno(ret.error) : 0;
> +}
> +
> +static int sbi_suspend(u32 state)
> +{
> +       if (state & SBI_HSM_SUSP_NON_RET_BIT)
> +               return cpu_suspend(state, sbi_suspend_finisher);
> +       else
> +               return sbi_suspend_finisher(state, 0, 0);
> +}
> +
> +static int sbi_cpuidle_enter_state(struct cpuidle_device *dev,
> +                                  struct cpuidle_driver *drv, int idx)
> +{
> +       u32 *states = __this_cpu_read(sbi_cpuidle_data.states);
> +
> +       return CPU_PM_CPU_IDLE_ENTER_PARAM(sbi_suspend, idx, states[idx]);
> +}
> +
> +static int __sbi_enter_domain_idle_state(struct cpuidle_device *dev,
> +                                         struct cpuidle_driver *drv, int idx,
> +                                         bool s2idle)
> +{
> +       struct sbi_cpuidle_data *data = this_cpu_ptr(&sbi_cpuidle_data);
> +       u32 *states = data->states;
> +       struct device *pd_dev = data->dev;
> +       u32 state;
> +       int ret;
> +
> +       ret = cpu_pm_enter();
> +       if (ret)
> +               return -1;
> +
> +       /* Do runtime PM to manage a hierarchical CPU topology. */
> +       rcu_irq_enter_irqson();
> +       if (s2idle)
> +               dev_pm_genpd_suspend(pd_dev);
> +       else
> +               pm_runtime_put_sync_suspend(pd_dev);
> +       rcu_irq_exit_irqson();
> +
> +       if (sbi_is_domain_state_available())
> +               state = sbi_get_domain_state();
> +       else
> +               state = states[idx];
> +
> +       ret = sbi_suspend(state) ? -1 : idx;
> +
> +       rcu_irq_enter_irqson();
> +       if (s2idle)
> +               dev_pm_genpd_resume(pd_dev);
> +       else
> +               pm_runtime_get_sync(pd_dev);
> +       rcu_irq_exit_irqson();
> +
> +       cpu_pm_exit();
> +
> +       /* Clear the domain state to start fresh when back from idle. */
> +       sbi_clear_domain_state();
> +       return ret;
> +}
> +
> +static int sbi_enter_domain_idle_state(struct cpuidle_device *dev,
> +                                      struct cpuidle_driver *drv, int idx)
> +{
> +       return __sbi_enter_domain_idle_state(dev, drv, idx, false);
> +}
> +
> +static int sbi_enter_s2idle_domain_idle_state(struct cpuidle_device *dev,
> +                                             struct cpuidle_driver *drv,
> +                                             int idx)
> +{
> +       return __sbi_enter_domain_idle_state(dev, drv, idx, true);
> +}
> +
> +static int sbi_cpuidle_cpuhp_up(unsigned int cpu)
> +{
> +       struct device *pd_dev = __this_cpu_read(sbi_cpuidle_data.dev);
> +
> +       if (pd_dev)
> +               pm_runtime_get_sync(pd_dev);
> +
> +       return 0;
> +}
> +
> +static int sbi_cpuidle_cpuhp_down(unsigned int cpu)
> +{
> +       struct device *pd_dev = __this_cpu_read(sbi_cpuidle_data.dev);
> +
> +       if (pd_dev) {
> +               pm_runtime_put_sync(pd_dev);
> +               /* Clear domain state to start fresh at next online. */
> +               sbi_clear_domain_state();
> +       }
> +
> +       return 0;
> +}
> +
> +static void sbi_idle_init_cpuhp(void)
> +{
> +       int err;
> +
> +       if (!sbi_cpuidle_use_cpuhp)
> +               return;
> +
> +       err = cpuhp_setup_state_nocalls(CPUHP_AP_CPU_PM_STARTING,
> +                                       "cpuidle/sbi:online",
> +                                       sbi_cpuidle_cpuhp_up,
> +                                       sbi_cpuidle_cpuhp_down);
> +       if (err)
> +               pr_warn("Failed %d while setting up cpuhp state\n", err);
> +}
> +
> +static const struct of_device_id sbi_cpuidle_state_match[] = {
> +       { .compatible = "riscv,idle-state",
> +         .data = sbi_cpuidle_enter_state },
> +       { },
> +};
> +
> +static bool sbi_suspend_state_is_valid(u32 state)
> +{
> +       if (state > SBI_HSM_SUSPEND_RET_DEFAULT &&
> +           state < SBI_HSM_SUSPEND_RET_PLATFORM)
> +               return false;
> +       if (state > SBI_HSM_SUSPEND_NON_RET_DEFAULT &&
> +           state < SBI_HSM_SUSPEND_NON_RET_PLATFORM)
> +               return false;
> +       return true;
> +}
> +
> +static int sbi_dt_parse_state_node(struct device_node *np, u32 *state)
> +{
> +       int err = of_property_read_u32(np, "riscv,sbi-suspend-param", state);
> +
> +       if (err) {
> +               pr_warn("%pOF missing riscv,sbi-suspend-param property\n", np);
> +               return err;
> +       }
> +
> +       if (!sbi_suspend_state_is_valid(*state)) {
> +               pr_warn("Invalid SBI suspend state %#x\n", *state);
> +               return -EINVAL;
> +       }
> +
> +       return 0;
> +}
> +
> +static int sbi_dt_cpu_init_topology(struct cpuidle_driver *drv,
> +                                    struct sbi_cpuidle_data *data,
> +                                    unsigned int state_count, int cpu)
> +{
> +       /* Currently limit the hierarchical topology to be used in OSI mode. */
> +       if (!sbi_cpuidle_use_osi)
> +               return 0;
> +
> +       data->dev = dt_idle_attach_cpu(cpu, "sbi");
> +       if (IS_ERR_OR_NULL(data->dev))
> +               return PTR_ERR_OR_ZERO(data->dev);
> +
> +       /*
> +        * Using the deepest state for the CPU to trigger a potential selection
> +        * of a shared state for the domain, assumes the domain states are all
> +        * deeper states.
> +        */
> +       drv->states[state_count - 1].enter = sbi_enter_domain_idle_state;
> +       drv->states[state_count - 1].enter_s2idle =
> +                                       sbi_enter_s2idle_domain_idle_state;
> +       sbi_cpuidle_use_cpuhp = true;
> +
> +       return 0;
> +}
> +
> +static int sbi_cpuidle_dt_init_states(struct device *dev,
> +                                       struct cpuidle_driver *drv,
> +                                       unsigned int cpu,
> +                                       unsigned int state_count)
> +{
> +       struct sbi_cpuidle_data *data = per_cpu_ptr(&sbi_cpuidle_data, cpu);
> +       struct device_node *state_node;
> +       struct device_node *cpu_node;
> +       u32 *states;
> +       int i, ret;
> +
> +       cpu_node = of_cpu_device_node_get(cpu);
> +       if (!cpu_node)
> +               return -ENODEV;
> +
> +       states = devm_kcalloc(dev, state_count, sizeof(*states), GFP_KERNEL);
> +       if (!states) {
> +               ret = -ENOMEM;
> +               goto fail;
> +       }
> +
> +       /* Parse SBI specific details from state DT nodes */
> +       for (i = 1; i < state_count; i++) {
> +               state_node = of_get_cpu_state_node(cpu_node, i - 1);
> +               if (!state_node)
> +                       break;
> +
> +               ret = sbi_dt_parse_state_node(state_node, &states[i]);
> +               of_node_put(state_node);
> +
> +               if (ret)
> +                       return ret;
> +
> +               pr_debug("sbi-state %#x index %d\n", states[i], i);
> +       }
> +       if (i != state_count) {
> +               ret = -ENODEV;
> +               goto fail;
> +       }
> +
> +       /* Initialize optional data, used for the hierarchical topology. */
> +       ret = sbi_dt_cpu_init_topology(drv, data, state_count, cpu);
> +       if (ret < 0)
> +               return ret;
> +
> +       /* Store states in the per-cpu struct. */
> +       data->states = states;
> +
> +fail:
> +       of_node_put(cpu_node);
> +
> +       return ret;
> +}
> +
> +static void sbi_cpuidle_deinit_cpu(int cpu)
> +{
> +       struct sbi_cpuidle_data *data = per_cpu_ptr(&sbi_cpuidle_data, cpu);
> +
> +       dt_idle_detach_cpu(data->dev);
> +       sbi_cpuidle_use_cpuhp = false;
> +}
> +
> +static int sbi_cpuidle_init_cpu(struct device *dev, int cpu)
> +{
> +       struct cpuidle_driver *drv;
> +       unsigned int state_count = 0;
> +       int ret = 0;
> +
> +       drv = devm_kzalloc(dev, sizeof(*drv), GFP_KERNEL);
> +       if (!drv)
> +               return -ENOMEM;
> +
> +       drv->name = "sbi_cpuidle";
> +       drv->owner = THIS_MODULE;
> +       drv->cpumask = (struct cpumask *)cpumask_of(cpu);
> +
> +       /* RISC-V architectural WFI to be represented as state index 0. */
> +       drv->states[0].enter = sbi_cpuidle_enter_state;
> +       drv->states[0].exit_latency = 1;
> +       drv->states[0].target_residency = 1;
> +       drv->states[0].power_usage = UINT_MAX;
> +       strcpy(drv->states[0].name, "WFI");
> +       strcpy(drv->states[0].desc, "RISC-V WFI");
> +
> +       /*
> +        * If no DT idle states are detected (ret == 0) let the driver
> +        * initialization fail accordingly since there is no reason to
> +        * initialize the idle driver if only wfi is supported, the
> +        * default architectural back-end already executes wfi
> +        * on idle entry.
> +        */
> +       ret = dt_init_idle_driver(drv, sbi_cpuidle_state_match, 1);
> +       if (ret <= 0) {
> +               pr_debug("HART%ld: failed to parse DT idle states\n",
> +                        cpuid_to_hartid_map(cpu));
> +               return ret ? : -ENODEV;
> +       }
> +       state_count = ret + 1; /* Include WFI state as well */
> +
> +       /* Initialize idle states from DT. */
> +       ret = sbi_cpuidle_dt_init_states(dev, drv, cpu, state_count);
> +       if (ret) {
> +               pr_err("HART%ld: failed to init idle states\n",
> +                      cpuid_to_hartid_map(cpu));
> +               return ret;
> +       }
> +
> +       ret = cpuidle_register(drv, NULL);
> +       if (ret)
> +               goto deinit;
> +
> +       cpuidle_cooling_register(drv);
> +
> +       return 0;
> +deinit:
> +       sbi_cpuidle_deinit_cpu(cpu);
> +       return ret;
> +}
> +
> +static void sbi_cpuidle_domain_sync_state(struct device *dev)
> +{
> +       /*
> +        * All devices have now been attached/probed to the PM domain
> +        * topology, hence it's fine to allow domain states to be picked.
> +        */
> +       sbi_cpuidle_pd_allow_domain_state = true;
> +}
> +
> +#ifdef CONFIG_DT_IDLE_GENPD
> +
> +static int sbi_cpuidle_pd_power_off(struct generic_pm_domain *pd)
> +{
> +       struct genpd_power_state *state = &pd->states[pd->state_idx];
> +       u32 *pd_state;
> +
> +       if (!state->data)
> +               return 0;
> +
> +       if (!sbi_cpuidle_pd_allow_domain_state)
> +               return -EBUSY;
> +
> +       /* OSI mode is enabled, set the corresponding domain state. */
> +       pd_state = state->data;
> +       sbi_set_domain_state(*pd_state);
> +
> +       return 0;
> +}
> +
> +struct sbi_pd_provider {
> +       struct list_head link;
> +       struct device_node *node;
> +};
> +
> +static LIST_HEAD(sbi_pd_providers);
> +
> +static int sbi_pd_init(struct device_node *np)
> +{
> +       struct generic_pm_domain *pd;
> +       struct sbi_pd_provider *pd_provider;
> +       struct dev_power_governor *pd_gov;
> +       int ret = -ENOMEM, state_count = 0;
> +
> +       pd = dt_idle_pd_alloc(np, sbi_dt_parse_state_node);
> +       if (!pd)
> +               goto out;
> +
> +       pd_provider = kzalloc(sizeof(*pd_provider), GFP_KERNEL);
> +       if (!pd_provider)
> +               goto free_pd;
> +
> +       pd->flags |= GENPD_FLAG_IRQ_SAFE | GENPD_FLAG_CPU_DOMAIN;
> +
> +       /* Allow power off when OSI is available. */
> +       if (sbi_cpuidle_use_osi)
> +               pd->power_off = sbi_cpuidle_pd_power_off;
> +       else
> +               pd->flags |= GENPD_FLAG_ALWAYS_ON;
> +
> +       /* Use governor for CPU PM domains if it has some states to manage. */
> +       pd_gov = state_count > 0 ? &pm_domain_cpu_gov : NULL;
> +
> +       ret = pm_genpd_init(pd, pd_gov, false);
> +       if (ret)
> +               goto free_pd_prov;
> +
> +       ret = of_genpd_add_provider_simple(np, pd);
> +       if (ret)
> +               goto remove_pd;
> +
> +       pd_provider->node = of_node_get(np);
> +       list_add(&pd_provider->link, &sbi_pd_providers);
> +
> +       pr_debug("init PM domain %s\n", pd->name);
> +       return 0;
> +
> +remove_pd:
> +       pm_genpd_remove(pd);
> +free_pd_prov:
> +       kfree(pd_provider);
> +free_pd:
> +       dt_idle_pd_free(pd);
> +out:
> +       pr_err("failed to init PM domain ret=%d %pOF\n", ret, np);
> +       return ret;
> +}
> +
> +static void sbi_pd_remove(void)
> +{
> +       struct sbi_pd_provider *pd_provider, *it;
> +       struct generic_pm_domain *genpd;
> +
> +       list_for_each_entry_safe(pd_provider, it, &sbi_pd_providers, link) {
> +               of_genpd_del_provider(pd_provider->node);
> +
> +               genpd = of_genpd_remove_last(pd_provider->node);
> +               if (!IS_ERR(genpd))
> +                       kfree(genpd);
> +
> +               of_node_put(pd_provider->node);
> +               list_del(&pd_provider->link);
> +               kfree(pd_provider);
> +       }
> +}
> +
> +static int sbi_genpd_probe(struct device_node *np)
> +{
> +       struct device_node *node;
> +       int ret = 0, pd_count = 0;
> +
> +       if (!np)
> +               return -ENODEV;
> +
> +       /*
> +        * Parse child nodes for the "#power-domain-cells" property and
> +        * initialize a genpd/genpd-of-provider pair when it's found.
> +        */
> +       for_each_child_of_node(np, node) {
> +               if (!of_find_property(node, "#power-domain-cells", NULL))
> +                       continue;
> +
> +               ret = sbi_pd_init(node);
> +               if (ret)
> +                       goto put_node;
> +
> +               pd_count++;
> +       }
> +
> +       /* Bail out if not using the hierarchical CPU topology. */
> +       if (!pd_count)
> +               goto no_pd;
> +
> +       /* Link genpd masters/subdomains to model the CPU topology. */
> +       ret = dt_idle_pd_init_topology(np);
> +       if (ret)
> +               goto remove_pd;
> +
> +       return 0;
> +
> +put_node:
> +       of_node_put(node);
> +remove_pd:
> +       sbi_pd_remove();
> +       pr_err("failed to create CPU PM domains ret=%d\n", ret);
> +no_pd:
> +       return ret;
> +}
> +
> +#else
> +
> +static inline int sbi_genpd_probe(struct device_node *np)
> +{
> +       return 0;
> +}
> +
> +#endif
> +
> +static int sbi_cpuidle_probe(struct platform_device *pdev)
> +{
> +       int cpu, ret;
> +       struct cpuidle_driver *drv;
> +       struct cpuidle_device *dev;
> +       struct device_node *np, *pds_node;
> +
> +       /* Detect OSI support based on CPU DT nodes */
> +       sbi_cpuidle_use_osi = true;
> +       for_each_possible_cpu(cpu) {
> +               np = of_cpu_device_node_get(cpu);
> +               if (np &&
> +                   of_find_property(np, "power-domains", NULL) &&
> +                   of_find_property(np, "power-domain-names", NULL)) {
> +                       continue;
> +               } else {
> +                       sbi_cpuidle_use_osi = false;
> +                       break;
> +               }
> +       }
> +
> +       /* Populate generic power domains from DT nodes */
> +       pds_node = of_find_node_by_path("/cpus/power-domains");
> +       if (pds_node) {
> +               ret = sbi_genpd_probe(pds_node);
> +               of_node_put(pds_node);
> +               if (ret)
> +                       return ret;
> +       }
> +
> +       /* Initialize CPU idle driver for each CPU */
> +       for_each_possible_cpu(cpu) {
> +               ret = sbi_cpuidle_init_cpu(&pdev->dev, cpu);
> +               if (ret) {
> +                       pr_debug("HART%ld: idle driver init failed\n",
> +                                cpuid_to_hartid_map(cpu));
> +                       goto out_fail;
> +               }
> +       }
> +
> +       /* Setup CPU hotplug notifiers */
> +       sbi_idle_init_cpuhp();
> +
> +       pr_info("idle driver registered for all CPUs\n");
> +
> +       return 0;
> +
> +out_fail:
> +       while (--cpu >= 0) {
> +               dev = per_cpu(cpuidle_devices, cpu);
> +               drv = cpuidle_get_cpu_driver(dev);
> +               cpuidle_unregister(drv);
> +               sbi_cpuidle_deinit_cpu(cpu);
> +       }
> +
> +       return ret;
> +}
> +
> +static struct platform_driver sbi_cpuidle_driver = {
> +       .probe = sbi_cpuidle_probe,
> +       .driver = {
> +               .name = "sbi-cpuidle",
> +               .sync_state = sbi_cpuidle_domain_sync_state,
> +       },
> +};
> +
> +static int __init sbi_cpuidle_init(void)
> +{
> +       int ret;
> +       struct platform_device *pdev;
> +
> +       /*
> +        * The SBI HSM suspend function is only available when:
> +        * 1) SBI version is 0.3 or higher
> +        * 2) SBI HSM extension is available
> +        */
> +       if ((sbi_spec_version < sbi_mk_version(0, 3)) ||
> +           sbi_probe_extension(SBI_EXT_HSM) <= 0) {
> +               pr_info("HSM suspend not available\n");
> +               return 0;
> +       }
> +
> +       ret = platform_driver_register(&sbi_cpuidle_driver);
> +       if (ret)
> +               return ret;
> +
> +       pdev = platform_device_register_simple("sbi-cpuidle",
> +                                               -1, NULL, 0);
> +       if (IS_ERR(pdev)) {
> +               platform_driver_unregister(&sbi_cpuidle_driver);
> +               return PTR_ERR(pdev);
> +       }
> +
> +       return 0;
> +}
> +device_initcall(sbi_cpuidle_init);
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v11 0/8] RISC-V CPU Idle Support
  2022-02-10  5:49 [PATCH v11 0/8] RISC-V CPU Idle Support Anup Patel
                   ` (7 preceding siblings ...)
  2022-02-10  5:49 ` [PATCH v11 8/8] RISC-V: Enable RISC-V SBI CPU Idle driver for QEMU virt machine Anup Patel
@ 2022-03-31  0:16 ` Palmer Dabbelt
  2022-04-01 18:13   ` Rob Herring
  8 siblings, 1 reply; 25+ messages in thread
From: Palmer Dabbelt @ 2022-03-31  0:16 UTC (permalink / raw)
  To: apatel
  Cc: Paul Walmsley, aou, daniel.lezcano, ulf.hansson, rjw, pavel,
	robh+dt, milun.tripathy, atishp, Alistair Francis, liush, anup,
	devicetree, linux-riscv, linux-kernel, linux-pm,
	linux-arm-kernel, kvm-riscv

On Wed, 09 Feb 2022 21:49:39 PST (-0800), apatel@ventanamicro.com wrote:
> [...]

Thanks, these are on for-next.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v11 0/8] RISC-V CPU Idle Support
  2022-03-31  0:16 ` [PATCH v11 0/8] RISC-V CPU Idle Support Palmer Dabbelt
@ 2022-04-01 18:13   ` Rob Herring
  2022-04-01 18:38     ` Palmer Dabbelt
  0 siblings, 1 reply; 25+ messages in thread
From: Rob Herring @ 2022-04-01 18:13 UTC (permalink / raw)
  To: Palmer Dabbelt, apatel
  Cc: Paul Walmsley, Albert Ou, Daniel Lezcano, Ulf Hansson,
	Rafael J. Wysocki, Pavel Machek, Sandeep Tripathy, Atish Patra,
	Alistair Francis, Liush, Anup Patel, devicetree, linux-riscv,
	linux-kernel, open list:THERMAL, linux-arm-kernel, kvm-riscv

On Wed, Mar 30, 2022 at 7:16 PM Palmer Dabbelt <palmer@dabbelt.com> wrote:
>
> On Wed, 09 Feb 2022 21:49:39 PST (-0800), apatel@ventanamicro.com wrote:
> > [...]
>
> Thanks, these are on for-next.

For 5.18? You are not supposed to put new material into linux-next
during the merge window.

In any case, this now causes warnings on 'cpu-idle-states':

/builds/robherring/linux-dt/Documentation/devicetree/bindings/cpu/idle-states.example.dtb:
cpu@0: cpu-idle-states:0: [1, 2, 3, 4] is too long
From schema: /builds/robherring/linux-dt/Documentation/devicetree/bindings/arm/cpus.yaml
/builds/robherring/linux-dt/Documentation/devicetree/bindings/cpu/idle-states.example.dtb:
cpu@1: cpu-idle-states:0: [1, 2, 3, 4] is too long
From schema: /builds/robherring/linux-dt/Documentation/devicetree/bindings/arm/cpus.yaml
/builds/robherring/linux-dt/Documentation/devicetree/bindings/cpu/idle-states.example.dtb:
cpu@100: cpu-idle-states:0: [1, 2, 3, 4] is too long
From schema: /builds/robherring/linux-dt/Documentation/devicetree/bindings/arm/cpus.yaml
/builds/robherring/linux-dt/Documentation/devicetree/bindings/cpu/idle-states.example.dtb:
cpu@101: cpu-idle-states:0: [1, 2, 3, 4] is too long
From schema: /builds/robherring/linux-dt/Documentation/devicetree/bindings/arm/cpus.yaml

See commit 39bd2b6a3783 ("dt-bindings: Improve phandle-array schemas")
for how to fix.
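
A minimal sketch of that pattern, assuming the usual single-phandle
case for 'cpu-idle-states' (the exact schema text and node labels here
are illustrative, not the committed wording):

    cpu-idle-states:
      $ref: /schemas/types.yaml#/definitions/phandle-array
      items:
        maxItems: 1

and, in the binding example DTS, listing each phandle as its own entry:

    cpu-idle-states = <&CPU_SLEEP_0>, <&CLUSTER_SLEEP_0>;

rather than one flattened <&CPU_SLEEP_0 &CLUSTER_SLEEP_0> group, so the
tooling can tell where each entry ends.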

Rob

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v11 0/8] RISC-V CPU Idle Support
  2022-04-01 18:13   ` Rob Herring
@ 2022-04-01 18:38     ` Palmer Dabbelt
  0 siblings, 0 replies; 25+ messages in thread
From: Palmer Dabbelt @ 2022-04-01 18:38 UTC (permalink / raw)
  To: robh+dt
  Cc: apatel, Paul Walmsley, aou, daniel.lezcano, ulf.hansson, rjw,
	pavel, milun.tripathy, atishp, Alistair Francis, liush, anup,
	devicetree, linux-riscv, linux-kernel, linux-pm,
	linux-arm-kernel, kvm-riscv

On Fri, 01 Apr 2022 11:13:32 PDT (-0700), robh+dt@kernel.org wrote:
> On Wed, Mar 30, 2022 at 7:16 PM Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>
>> On Wed, 09 Feb 2022 21:49:39 PST (-0800), apatel@ventanamicro.com wrote:
>> > [...]
>>
>> Thanks, these are on for-next.
>
> For 5.18? You are not supposed to put new material into linux-next
> during the merge window.

Ya, I was aiming for these for 5.18 -- I know it's late, but I'd been 
trying to chase folks around for reviews and figured it was good enough.  
I just sent Linus a PR; it's not merged yet, so if this is a problem I 
can re-spin it now.

Sorry!

> [...]

^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2022-04-01 18:39 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-10  5:49 [PATCH v11 0/8] RISC-V CPU Idle Support Anup Patel
2022-02-10  5:49 ` [PATCH v11 1/8] RISC-V: Enable CPU_IDLE drivers Anup Patel
2022-02-12 11:43   ` Pavel Machek
2022-02-12 12:49     ` Anup Patel
2022-03-10 18:43       ` Palmer Dabbelt
2022-02-16  0:50   ` Atish Patra
2022-02-10  5:49 ` [PATCH v11 2/8] RISC-V: Rename relocate() and make it global Anup Patel
2022-02-16  0:57   ` Atish Patra
2022-02-10  5:49 ` [PATCH v11 3/8] RISC-V: Add arch functions for non-retentive suspend entry/exit Anup Patel
2022-02-10  5:49 ` [PATCH v11 4/8] RISC-V: Add SBI HSM suspend related defines Anup Patel
2022-02-16  7:57   ` Atish Patra
2022-02-23  7:02   ` Anup Patel
2022-03-08  6:04     ` Anup Patel
2022-02-10  5:49 ` [PATCH v11 5/8] cpuidle: Factor-out power domain related code from PSCI domain driver Anup Patel
2022-02-10  5:49 ` [PATCH v11 6/8] cpuidle: Add RISC-V SBI CPU idle driver Anup Patel
2022-02-16  8:09   ` Atish Patra
2022-02-16 13:45     ` Jessica Clarke
2022-02-16 21:21       ` Atish Patra
2022-03-10 20:01   ` Palmer Dabbelt
2022-03-12  8:34   ` Anup Patel
2022-02-10  5:49 ` [PATCH v11 7/8] dt-bindings: Add common bindings for ARM and RISC-V idle states Anup Patel
2022-02-10  5:49 ` [PATCH v11 8/8] RISC-V: Enable RISC-V SBI CPU Idle driver for QEMU virt machine Anup Patel
2022-03-31  0:16 ` [PATCH v11 0/8] RISC-V CPU Idle Support Palmer Dabbelt
2022-04-01 18:13   ` Rob Herring
2022-04-01 18:38     ` Palmer Dabbelt

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).