From: Fuad Tabba <tabba@google.com>
To: Kalesh Singh <kaleshsingh@google.com>
Cc: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org,
	madvenka@linux.microsoft.com, will@kernel.org,
	qperret@google.com, james.morse@arm.com,
	alexandru.elisei@arm.com, suzuki.poulose@arm.com,
	catalin.marinas@arm.com, andreyknvl@gmail.com,
	russell.king@oracle.com, vincenzo.frascino@arm.com,
	mhiramat@kernel.org, ast@kernel.org, drjones@redhat.com,
	wangkefeng.wang@huawei.com, elver@google.com, keirf@google.com,
	yuzenghui@huawei.com, ardb@kernel.org, oupton@google.com,
	linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
	android-mm@google.com, kernel-team@android.com
Subject: Re: [PATCH v4 03/18] arm64: stacktrace: Factor out unwind_next_common()
Date: Fri, 15 Jul 2022 14:58:40 +0100	[thread overview]
Message-ID: <CA+EHjTyDbH=7Bqo61CEadSKRsRHKsSWQcf=kbx5T_Fsj0-bL4g@mail.gmail.com> (raw)
In-Reply-To: <20220715061027.1612149-4-kaleshsingh@google.com>

Hi Kalesh,

On Fri, Jul 15, 2022 at 7:11 AM Kalesh Singh <kaleshsingh@google.com> wrote:
>
> Move common unwind_next logic to stacktrace/common.h. This allows
> reusing the code in the implementation of the nVHE hypervisor stack
> unwinder, later in this series.
>
> Signed-off-by: Kalesh Singh <kaleshsingh@google.com>

Reviewed-by: Fuad Tabba <tabba@google.com>

Thanks,
/fuad


> ---
>  arch/arm64/include/asm/stacktrace/common.h | 50 ++++++++++++++++++++++
>  arch/arm64/kernel/stacktrace.c             | 41 ++----------------
>  2 files changed, 54 insertions(+), 37 deletions(-)
>
> diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
> index f58b786460d3..0c5cbfdb56b5 100644
> --- a/arch/arm64/include/asm/stacktrace/common.h
> +++ b/arch/arm64/include/asm/stacktrace/common.h
> @@ -65,6 +65,10 @@ struct unwind_state {
>  static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
>                                      struct stack_info *info);
>
> +static inline bool on_accessible_stack(const struct task_struct *tsk,
> +                                      unsigned long sp, unsigned long size,
> +                                      struct stack_info *info);
> +
>  static inline bool on_stack(unsigned long sp, unsigned long size,
>                             unsigned long low, unsigned long high,
>                             enum stack_type type, struct stack_info *info)
> @@ -120,4 +124,50 @@ static inline void unwind_init_common(struct unwind_state *state,
>         state->prev_type = STACK_TYPE_UNKNOWN;
>  }
>
> +static inline int unwind_next_common(struct unwind_state *state,
> +                                    struct stack_info *info)
> +{
> +       struct task_struct *tsk = state->task;
> +       unsigned long fp = state->fp;
> +
> +       if (fp & 0x7)
> +               return -EINVAL;
> +
> +       if (!on_accessible_stack(tsk, fp, 16, info))
> +               return -EINVAL;
> +
> +       if (test_bit(info->type, state->stacks_done))
> +               return -EINVAL;
> +
> +       /*
> +        * As stacks grow downward, any valid record on the same stack must be
> +        * at a strictly higher address than the prior record.
> +        *
> +        * Stacks can nest in several valid orders, e.g.
> +        *
> +        * TASK -> IRQ -> OVERFLOW -> SDEI_NORMAL
> +        * TASK -> SDEI_NORMAL -> SDEI_CRITICAL -> OVERFLOW
> +        *
> +        * ... but the nesting itself is strict. Once we transition from one
> +        * stack to another, it's never valid to unwind back to that first
> +        * stack.
> +        */
> +       if (info->type == state->prev_type) {
> +               if (fp <= state->prev_fp)
> +                       return -EINVAL;
> +       } else {
> +               __set_bit(state->prev_type, state->stacks_done);
> +       }
> +
> +       /*
> +        * Record this frame record's values and location. The prev_fp and
> +        * prev_type are only meaningful to the next unwind_next() invocation.
> +        */
> +       state->fp = READ_ONCE(*(unsigned long *)(fp));
> +       state->pc = READ_ONCE(*(unsigned long *)(fp + 8));
> +       state->prev_fp = fp;
> +       state->prev_type = info->type;
> +
> +       return 0;
> +}
>  #endif /* __ASM_STACKTRACE_COMMON_H */
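
As an illustrative aside (not part of the patch): any unit that includes
stacktrace/common.h is expected to provide definitions for the
forward-declared on_accessible_stack() (and on_overflow_stack()) helpers,
after which it can drive unwind_next_common() directly. A minimal sketch of
such a consumer follows; STACK_TYPE_HYP, hyp_stack_low() and hyp_stack_high()
are hypothetical names, not taken from this series:

  /* Sketch only: the includer is assumed to know its own stack bounds. */
  static inline bool on_accessible_stack(const struct task_struct *tsk,
                                         unsigned long sp, unsigned long size,
                                         struct stack_info *info)
  {
          /* hyp_stack_low()/hyp_stack_high() are hypothetical helpers */
          return on_stack(sp, size, hyp_stack_low(), hyp_stack_high(),
                          STACK_TYPE_HYP, info);
  }

  static void hyp_unwind(struct unwind_state *state)
  {
          struct stack_info info;

          /* Walk frame records until unwind_next_common() rejects one */
          while (!unwind_next_common(state, &info)) {
                  /* consume state->pc here, e.g. record or print it */
          }
  }

The loop ends once the frame pointer leaves the accessible stack or breaks
the strictly-increasing rule enforced above, both of which return -EINVAL.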
> diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
> index 94a5dd2ab8fd..834851939364 100644
> --- a/arch/arm64/kernel/stacktrace.c
> +++ b/arch/arm64/kernel/stacktrace.c
> @@ -81,48 +81,15 @@ static int notrace unwind_next(struct unwind_state *state)
>         struct task_struct *tsk = state->task;
>         unsigned long fp = state->fp;
>         struct stack_info info;
> +       int err;
>
>         /* Final frame; nothing to unwind */
>         if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
>                 return -ENOENT;
>
> -       if (fp & 0x7)
> -               return -EINVAL;
> -
> -       if (!on_accessible_stack(tsk, fp, 16, &info))
> -               return -EINVAL;
> -
> -       if (test_bit(info.type, state->stacks_done))
> -               return -EINVAL;
> -
> -       /*
> -        * As stacks grow downward, any valid record on the same stack must be
> -        * at a strictly higher address than the prior record.
> -        *
> -        * Stacks can nest in several valid orders, e.g.
> -        *
> -        * TASK -> IRQ -> OVERFLOW -> SDEI_NORMAL
> -        * TASK -> SDEI_NORMAL -> SDEI_CRITICAL -> OVERFLOW
> -        *
> -        * ... but the nesting itself is strict. Once we transition from one
> -        * stack to another, it's never valid to unwind back to that first
> -        * stack.
> -        */
> -       if (info.type == state->prev_type) {
> -               if (fp <= state->prev_fp)
> -                       return -EINVAL;
> -       } else {
> -               __set_bit(state->prev_type, state->stacks_done);
> -       }
> -
> -       /*
> -        * Record this frame record's values and location. The prev_fp and
> -        * prev_type are only meaningful to the next unwind_next() invocation.
> -        */
> -       state->fp = READ_ONCE(*(unsigned long *)(fp));
> -       state->pc = READ_ONCE(*(unsigned long *)(fp + 8));
> -       state->prev_fp = fp;
> -       state->prev_type = info.type;
> +       err = unwind_next_common(state, &info);
> +       if (err)
> +               return err;
>
>         state->pc = ptrauth_strip_insn_pac(state->pc);
>
> --
> 2.37.0.170.g444d1eabd0-goog
>


Thread overview: 162+ messages
2022-07-15  6:10 [PATCH v4 00/18] KVM nVHE Hypervisor stack unwinder Kalesh Singh
2022-07-15  6:10 ` [PATCH v4 01/18] arm64: stacktrace: Add shared header for common stack unwinding code Kalesh Singh
2022-07-15 12:37   ` Mark Brown
2022-07-15 13:58   ` Fuad Tabba
2022-07-18 12:52   ` Russell King (Oracle)
2022-07-18 15:26     ` Kalesh Singh
2022-07-18 16:00       ` Russell King (Oracle)
2022-07-15  6:10 ` [PATCH v4 02/18] arm64: stacktrace: Factor out on_accessible_stack_common() Kalesh Singh
2022-07-15 13:58   ` Fuad Tabba
2022-07-15 16:28   ` Mark Brown
2022-07-15  6:10 ` [PATCH v4 03/18] arm64: stacktrace: Factor out unwind_next_common() Kalesh Singh
2022-07-15 13:58   ` Fuad Tabba [this message]
2022-07-15 16:29   ` Mark Brown
2022-07-15  6:10 ` [PATCH v4 04/18] arm64: stacktrace: Handle frame pointer from different address spaces Kalesh Singh
2022-07-15 13:56   ` Fuad Tabba
2022-07-18 17:40     ` Kalesh Singh
2022-07-15  6:10 ` [PATCH v4 05/18] arm64: stacktrace: Factor out common unwind() Kalesh Singh
2022-07-15 13:58   ` Fuad Tabba
2022-07-15  6:10 ` [PATCH v4 06/18] arm64: stacktrace: Add description of stacktrace/common.h Kalesh Singh
2022-07-15 13:59   ` Fuad Tabba
2022-07-17  9:57   ` Marc Zyngier
2022-07-18 16:53     ` Kalesh Singh
2022-07-15  6:10 ` [PATCH v4 07/18] KVM: arm64: On stack overflow switch to hyp overflow_stack Kalesh Singh
2022-07-18  9:46   ` Fuad Tabba
2022-07-15  6:10 ` [PATCH v4 08/18] KVM: arm64: Add PROTECTED_NVHE_STACKTRACE Kconfig Kalesh Singh
2022-07-18  6:55   ` Marc Zyngier
2022-07-18 17:03     ` Kalesh Singh
2022-07-19 10:35       ` Marc Zyngier
2022-07-19 18:23         ` Kalesh Singh
2022-07-15  6:10 ` [PATCH v4 09/18] KVM: arm64: Allocate shared pKVM hyp stacktrace buffers Kalesh Singh
2022-07-18  7:13   ` Marc Zyngier
2022-07-18 17:27     ` Kalesh Singh
2022-07-18 10:00   ` Fuad Tabba
2022-07-15  6:10 ` [PATCH v4 10/18] KVM: arm64: Stub implementation of pKVM HYP stack unwinder Kalesh Singh
2022-07-18  7:20   ` Marc Zyngier
2022-07-15  6:10 ` [PATCH v4 11/18] KVM: arm64: Stub implementation of non-protected nVHE " Kalesh Singh
2022-07-18  7:30   ` Marc Zyngier
2022-07-18 16:51     ` Kalesh Singh
2022-07-18 16:57       ` Marc Zyngier
2022-07-15  6:10 ` [PATCH v4 12/18] KVM: arm64: Save protected-nVHE (pKVM) hyp stacktrace Kalesh Singh
2022-07-18  9:36   ` Marc Zyngier
2022-07-18 17:32     ` Kalesh Singh
2022-07-18 10:07   ` Fuad Tabba
2022-07-18 17:36     ` Kalesh Singh
2022-07-15  6:10 ` [PATCH v4 13/18] KVM: arm64: Prepare non-protected nVHE hypervisor stacktrace Kalesh Singh
2022-07-15  6:10 ` [PATCH v4 14/18] KVM: arm64: Implement protected nVHE hyp stack unwinder Kalesh Singh
2022-07-15  6:10 ` [PATCH v4 15/18] KVM: arm64: Implement non-protected " Kalesh Singh
2022-07-15  6:10 ` [PATCH v4 16/18] KVM: arm64: Introduce pkvm_dump_backtrace() Kalesh Singh
2022-07-15  6:10 ` [PATCH v4 17/18] KVM: arm64: Introduce hyp_dump_backtrace() Kalesh Singh
2022-07-15  6:10 ` [PATCH v4 18/18] KVM: arm64: Dump nVHE hypervisor stack on panic Kalesh Singh
2022-07-15 13:55 ` [PATCH v4 00/18] KVM nVHE Hypervisor stack unwinder Fuad Tabba
2022-07-15 18:58   ` Kalesh Singh
2022-07-16  0:04     ` Kalesh Singh
2022-07-19 10:43 ` Marc Zyngier
