From: Kalesh Singh <kaleshsingh@google.com>
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org,
madvenka@linux.microsoft.com, tabba@google.com
Cc: will@kernel.org, qperret@google.com, kaleshsingh@google.com,
james.morse@arm.com, alexandru.elisei@arm.com,
suzuki.poulose@arm.com, catalin.marinas@arm.com,
andreyknvl@gmail.com, vincenzo.frascino@arm.com,
mhiramat@kernel.org, ast@kernel.org, drjones@redhat.com,
wangkefeng.wang@huawei.com, elver@google.com, keirf@google.com,
yuzenghui@huawei.com, ardb@kernel.org, oupton@google.com,
linux-arm-kernel@lists.infradead.org,
kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
android-mm@google.com, kernel-team@android.com
Subject: [PATCH v5 14/17] KVM: arm64: Implement protected nVHE hyp stack unwinder
Date: Wed, 20 Jul 2022 22:57:25 -0700 [thread overview]
Message-ID: <20220721055728.718573-15-kaleshsingh@google.com> (raw)
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>
Implement the common framework necessary for unwind() to work in
the protected nVHE context:
- on_accessible_stack()
- on_overflow_stack()
- unwind_next()
In protected nVHE, unwind() saves the hyp stack addresses to the
shared stacktrace buffer. The host then reads the entries in this
buffer, symbolizes them, and dumps the stacktrace (in a later
patch in this series).
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
arch/arm64/include/asm/stacktrace/common.h | 2 ++
arch/arm64/include/asm/stacktrace/nvhe.h | 34 ++++++++++++++++++++--
2 files changed, 34 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index be7920ba70b0..73fd9e143c4a 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -34,6 +34,7 @@ enum stack_type {
STACK_TYPE_OVERFLOW,
STACK_TYPE_SDEI_NORMAL,
STACK_TYPE_SDEI_CRITICAL,
+ STACK_TYPE_HYP,
__NR_STACK_TYPES
};
@@ -186,6 +187,7 @@ static inline int unwind_next_common(struct unwind_state *state,
*
* TASK -> IRQ -> OVERFLOW -> SDEI_NORMAL
* TASK -> SDEI_NORMAL -> SDEI_CRITICAL -> OVERFLOW
+ * HYP -> OVERFLOW
*
* ... but the nesting itself is strict. Once we transition from one
* stack to another, it's never valid to unwind back to that first
diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index 8f02803a005f..c3688e717136 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -39,10 +39,19 @@ static inline void kvm_nvhe_unwind_init(struct unwind_state *state,
state->pc = pc;
}
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+ struct stack_info *info);
+
static inline bool on_accessible_stack(const struct task_struct *tsk,
unsigned long sp, unsigned long size,
struct stack_info *info)
{
+ if (on_accessible_stack_common(tsk, sp, size, info))
+ return true;
+
+ if (on_hyp_stack(sp, size, info))
+ return true;
+
return false;
}
@@ -60,12 +69,27 @@ DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
struct stack_info *info)
{
- return false;
+ unsigned long low = (unsigned long)this_cpu_ptr(overflow_stack);
+ unsigned long high = low + OVERFLOW_STACK_SIZE;
+
+ return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info);
+}
+
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+ struct stack_info *info)
+{
+ struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+ unsigned long high = params->stack_hyp_va;
+ unsigned long low = high - PAGE_SIZE;
+
+ return on_stack(sp, size, low, high, STACK_TYPE_HYP, info);
}
static inline int notrace unwind_next(struct unwind_state *state)
{
- return 0;
+ struct stack_info info;
+
+ return unwind_next_common(state, &info, NULL);
}
NOKPROBE_SYMBOL(unwind_next);
#else /* !CONFIG_PROTECTED_NVHE_STACKTRACE */
@@ -75,6 +99,12 @@ static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
return false;
}
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+ struct stack_info *info)
+{
+ return false;
+}
+
static inline int notrace unwind_next(struct unwind_state *state)
{
return 0;
--
2.37.0.170.g444d1eabd0-goog