From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
To: Mark Rutland <mark.rutland@arm.com>
Cc: broonie@kernel.org, jpoimboe@redhat.com, ardb@kernel.org,
	nobuta.keiya@fujitsu.com, sjitindarsingh@gmail.com,
	catalin.marinas@arm.com, will@kernel.org, jmorris@namei.org,
	linux-arm-kernel@lists.infradead.org,
	live-patching@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v11 1/5] arm64: Call start_backtrace() only from within walk_stackframe()
Date: Thu, 9 Dec 2021 22:13:50 -0600
Message-ID: <ed143443-a406-a83e-bf7c-18083b98e503@linux.microsoft.com>
In-Reply-To: <1dffea0a-fd99-ccd6-625f-c5e573186741@linux.microsoft.com>

Hey Mark,

Do you have any comments on the rest of the series? I am working on the next version of the patchset.
If you have any further comments, I will wait for them before sending it out.
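
One clarification while I work on it: I am reading unwind_init() in your
sketches below as just the renamed start_backtrace(), i.e. a pure state
initializer along these lines (a minimal sketch to make sure we mean the
same thing; the real function also resets the stack-transition bookkeeping
such as prev_fp, prev_type and stacks_done):

	static void unwind_init(struct stackframe *frame, unsigned long fp,
				unsigned long pc)
	{
		/* Seed the unwind state; unwind_frame() does the actual walk. */
		frame->fp = fp;
		frame->pc = pc;
	}

With that, unwind() stays the single unwind loop, and arch_stack_walk()
and arch_stack_walk_reliable() remain its only consumers.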

Thanks.

Madhavan

On 11/30/21 2:29 PM, Madhavan T. Venkataraman wrote:
> 
> 
> On 11/30/21 12:29 PM, Mark Rutland wrote:
>> On Tue, Nov 30, 2021 at 11:13:28AM -0600, Madhavan T. Venkataraman wrote:
>>> On 11/30/21 9:05 AM, Mark Rutland wrote:
>>>> On Tue, Nov 23, 2021 at 01:37:19PM -0600, madvenka@linux.microsoft.com wrote:
>>>>> From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
>>>>>
>>>>> Currently, arch_stack_walk() calls start_backtrace() and walk_stackframe()
>>>>> separately. There is no need to do that. Instead, call start_backtrace()
>>>>> from within walk_stackframe(). In other words, walk_stackframe() is the only
>>>>> unwind function a consumer needs to call.
>>
>>>>> @@ -143,15 +140,19 @@ static int notrace unwind_frame(struct task_struct *tsk,
>>>>>  NOKPROBE_SYMBOL(unwind_frame);
>>>>>  
>>>>>  static void notrace walk_stackframe(struct task_struct *tsk,
>>>>> -				    struct stackframe *frame,
>>>>> +				    unsigned long fp, unsigned long pc,
>>>>>  				    bool (*fn)(void *, unsigned long), void *data)
>>>>>  {
>>>>> +	struct stackframe frame;
>>>>> +
>>>>> +	start_backtrace(&frame, fp, pc);
>>>>> +
>>>>>  	while (1) {
>>>>>  		int ret;
>>>>>  
>>>>> -		if (!fn(data, frame->pc))
>>>>> +		if (!fn(data, frame.pc))
>>>>>  			break;
>>>>> -		ret = unwind_frame(tsk, frame);
>>>>> +		ret = unwind_frame(tsk, &frame);
>>>>>  		if (ret < 0)
>>>>>  			break;
>>>>>  	}
>>>>> @@ -195,17 +196,19 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
>>>>>  			      void *cookie, struct task_struct *task,
>>>>>  			      struct pt_regs *regs)
>>>>>  {
>>>>> -	struct stackframe frame;
>>>>> -
>>>>> -	if (regs)
>>>>> -		start_backtrace(&frame, regs->regs[29], regs->pc);
>>>>> -	else if (task == current)
>>>>> -		start_backtrace(&frame,
>>>>> -				(unsigned long)__builtin_frame_address(1),
>>>>> -				(unsigned long)__builtin_return_address(0));
>>>>> -	else
>>>>> -		start_backtrace(&frame, thread_saved_fp(task),
>>>>> -				thread_saved_pc(task));
>>>>> -
>>>>> -	walk_stackframe(task, &frame, consume_entry, cookie);
>>>>> +	unsigned long fp, pc;
>>>>> +
>>>>> +	if (regs) {
>>>>> +		fp = regs->regs[29];
>>>>> +		pc = regs->pc;
>>>>> +	} else if (task == current) {
>>>>> +		/* Skip arch_stack_walk() in the stack trace. */
>>>>> +		fp = (unsigned long)__builtin_frame_address(1);
>>>>> +		pc = (unsigned long)__builtin_return_address(0);
>>>>> +	} else {
>>>>> +		/* Caller guarantees that the task is not running. */
>>>>> +		fp = thread_saved_fp(task);
>>>>> +		pc = thread_saved_pc(task);
>>>>> +	}
>>>>> +	walk_stackframe(task, fp, pc, consume_entry, cookie);
>>>>
>>>> I'd prefer to leave this as-is. The new and old structures are largely
>>>> equivalent, so we haven't made this any simpler, but we have added more
>>>> arguments to walk_stackframe().
>>>>
>>>
>>> This is just to simplify things when we eventually add arch_stack_walk_reliable().
>>> That is all. All of the unwinding is done by a single unwinding function, and
>>> there are only two consumers of that function: arch_stack_walk() and
>>> arch_stack_walk_reliable().
>>
>> I understand the theory, but I don't think that moving the start_backtrace()
>> call actually simplifies this in a meaningful way, and I think it'll make it
>> harder for us to make more substantive simplifications later on.
>>
>> As of patch 4 of this series, we'll have:
>>
>> | noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
>> | 				      void *cookie, struct task_struct *task,
>> | 				      struct pt_regs *regs)
>> | {
>> | 	unsigned long fp, pc;
>> | 
>> | 	if (regs) {
>> | 		fp = regs->regs[29];
>> | 		pc = regs->pc;
>> | 	} else if (task == current) {
>> | 		/* Skip arch_stack_walk() in the stack trace. */
>> | 		fp = (unsigned long)__builtin_frame_address(1);
>> | 		pc = (unsigned long)__builtin_return_address(0);
>> | 	} else {
>> | 		/* Caller guarantees that the task is not running. */
>> | 		fp = thread_saved_fp(task);
>> | 		pc = thread_saved_pc(task);
>> | 	}
>> | 	walk_stackframe(task, fp, pc, consume_entry, cookie);
>> | }
>> | 
>> | noinline int notrace arch_stack_walk_reliable(stack_trace_consume_fn consume_fn,
>> |                                              void *cookie,
>> |                                              struct task_struct *task)
>> | {
>> | 	unsigned long fp, pc;
>> | 
>> | 	if (task == current) {
>> | 		/* Skip arch_stack_walk_reliable() in the stack trace. */
>> | 		fp = (unsigned long)__builtin_frame_address(1);
>> | 		pc = (unsigned long)__builtin_return_address(0);
>> | 	} else {
>> | 		/* Caller guarantees that the task is not running. */
>> | 		fp = thread_saved_fp(task);
>> | 		pc = thread_saved_pc(task);
>> | 	}
>> | 	if (unwind(task, fp, pc, consume_fn, cookie))
>> | 		return 0;
>> | 	return -EINVAL;
>> | }
>>
>> Which I do not think is substantially simpler than the naive extrapolation from
>> what we currently have, e.g.
>>
>> | noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
>> | 				      void *cookie, struct task_struct *task,
>> | 				      struct pt_regs *regs)
>> | {
>> |	struct stackframe frame;
>> | 
>> | 	if (regs) {
>> |		unwind_init(&frame, regs->regs[29], regs->pc);
>> | 	} else if (task == current) {
>> |		unwind_init(&frame, (unsigned long)__builtin_frame_address(1),
>> |			    (unsigned long)__builtin_return_address(0));
>> | 	} else {
>> |		unwind_init(&frame, thread_saved_fp(task),
>> |			    thread_saved_pc(task));
>> | 	}
>> | 	walk_stackframe(task, &frame, consume_entry, cookie);
>> | }
>> | 
>> | noinline int notrace arch_stack_walk_reliable(stack_trace_consume_fn consume_fn,
>> |                                              void *cookie,
>> |                                              struct task_struct *task)
>> | {
>> |	struct stackframe frame;
>> | 
>> | 	if (task == current) {
>> |		unwind_init(&frame, (unsigned long)__builtin_frame_address(1),
>> |			    (unsigned long)__builtin_return_address(0));
>> | 	} else {
>> |		unwind_init(&frame, thread_saved_fp(task),
>> |			    thread_saved_pc(task));
>> | 	}
>> | 	if (unwind(task, &frame, consume_fn, cookie))
>> | 		return 0;
>> | 	return -EINVAL;
>> | }
>>
>> Further, I think we can factor this in a different way to reduce the
>> duplication:
>>
>> | /*
>> |  * TODO: document requirements here
>> |  */
>> | static inline void unwind_init_from_current_regs(struct stackframe *frame,
>> | 						 struct pt_regs *regs)
>> | {
>> | 	unwind_init(frame, regs->regs[29], regs->pc);
>> | }
>> | 
>> | /*
>> |  * TODO: document requirements here
>> |  */
>> | static inline void unwind_init_from_blocked_task(struct stackframe *frame,
>> | 						 struct task_struct *tsk)
>> | {
>> | 	unwind_init(frame, thread_saved_fp(tsk),
>> | 		    thread_saved_pc(tsk));
>> | }
>> | 
>> | /*
>> |  * TODO: document requirements here
>> |  *
>> |  * Note: this is always inlined, and we expect our caller to be a noinline
>> |  * function, such that this starts from our caller's caller.
>> |  */
>> | static __always_inline void unwind_init_from_caller(struct stackframe *frame)
>> | {
>> | 	unwind_init(frame, (unsigned long)__builtin_frame_address(1),
>> | 		    (unsigned long)__builtin_return_address(0));
>> | }
>> |
>> | noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
>> | 				      void *cookie, struct task_struct *task,
>> | 				      struct pt_regs *regs)
>> | {
>> |	struct stackframe frame;
>> | 
>> | 	if (regs)
>> |		unwind_init_from_current_regs(&frame, regs);
>> |	else if (task == current)
>> |		unwind_init_from_caller(&frame);
>> |	else
>> |		unwind_init_from_blocked_task(&frame, task);
>> |
>> |	unwind(task, &frame, consume_entry, cookie);
>> | }
>> |
>> | noinline int notrace arch_stack_walk_reliable(stack_trace_consume_fn consume_fn,
>> |                                              void *cookie,
>> |                                              struct task_struct *task)
>> | {
>> |	struct stackframe frame;
>> | 
>> | 	if (task == current)
>> |		unwind_init_from_caller(&frame);
>> | 	else
>> |		unwind_init_from_blocked_task(&frame, task);
>> |
>> | 	if (unwind(task, &frame, consume_fn, cookie))
>> | 		return 0;
>> | 	return -EINVAL;
>> | }
>>
>> ... which minimizes the duplication and allows us to add specialized
>> initialization for each case if necessary, which I believe we will need in
>> future to make unwinding across exception boundaries (such as when starting
>> with regs) more useful.
>>
>> Thanks,
>> Mark.
>>
> 
> OK. I don't mind doing it this way.
> 
> Thanks.
> 
> Madhavan
> 
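
P.S. One subtlety I want to make sure I preserve in the next version: the
unwind_init_from_caller() helper above relies on __builtin_frame_address(1)
and __builtin_return_address(0) resolving relative to whichever function the
code is finally emitted in. So the helper must stay __always_inline, and
arch_stack_walk() itself must stay noinline; otherwise the "skip one frame"
logic would silently drop or include the wrong frame. Restated with an
explicit comment (a sketch, as I understand it):

	/*
	 * Always inlined into a noinline function such as arch_stack_walk(),
	 * so the builtins below describe that function's caller, and the
	 * resulting trace skips arch_stack_walk() itself.
	 */
	static __always_inline void unwind_init_from_caller(struct stackframe *frame)
	{
		unwind_init(frame, (unsigned long)__builtin_frame_address(1),
			    (unsigned long)__builtin_return_address(0));
	}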
