linux-kernel.vger.kernel.org archive mirror
From: Dave Hansen <dave.hansen@intel.com>
To: Rik van Riel <riel@surriel.com>, linux-kernel@vger.kernel.org
Cc: x86@vger.kernel.org, kernel-team@fb.com,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Thomas Gleixner <tglx@linutronix.de>, Dave Jones <dsj@fb.com>,
	Andy Lutomirski <luto@kernel.org>
Subject: Re: [PATCH] x86,mm: print likely CPU at segfault time
Date: Wed, 3 Aug 2022 07:49:34 -0700	[thread overview]
Message-ID: <61ce1a14-7bac-fea8-b065-83a1c0704258@intel.com> (raw)
In-Reply-To: <20220802160900.7a68909b@imladris.surriel.com>

On 8/2/22 13:09, Rik van Riel wrote:
> Add a printk to show_signal_msg() to print the CPU, core, and socket

Nit:     ^ printk(), please

> --- a/arch/x86/mm/fault.c
> +++ b/arch/x86/mm/fault.c
> @@ -782,6 +782,12 @@ show_signal_msg(struct pt_regs *regs, unsigned long error_code,
>  
>  	print_vma_addr(KERN_CONT " in ", regs->ip);
>  
> +	printk(KERN_CONT " on CPU %d (core %d, socket %d)",
> +	       raw_smp_processor_id(),
> +	       topology_core_id(raw_smp_processor_id()),
> +	       topology_physical_package_id(raw_smp_processor_id()));

This seems totally sane to me.  I have found myself looking through
customer-provided *oopses* more than once trying to figure out if the
same CPU cores were at fault.  This extends that to userspace crashes
too.  I've also found myself trying to map back from logical CPU numbers
to core and package.

One nit: Preempt is enabled here, right?  I understand that this thing
is fundamentally racy, but if we did:

	int cpu = READ_ONCE(raw_smp_processor_id());

it would make it internally *consistent*.  Without that, we could
theoretically get three different raw_smp_processor_id()'s.  It might
even make the code look a wee bit nicer.

The changelog here is great, but a couple of comments would also be nice:

	/* This is a racy snapshot, but is better than nothing: */
	int cpu = READ_ONCE(raw_smp_processor_id());
...
	/*
	 * Dump the likely CPU where the fatal segfault happened.  This
	 * can help identify buggy pieces of hardware.
	 */
	printk(KERN_CONT " on CPU %d (core %d, socket %d)", cpu,
	       topology_core_id(cpu),
	       topology_physical_package_id(cpu));

If you want to wait a bit and see if you get any other comments, this
seems like something we can suck in after the merge window.

Thread overview: 8+ messages
2022-08-02 20:09 [PATCH] x86,mm: print likely CPU at segfault time Rik van Riel
2022-08-03 14:49 ` Dave Hansen [this message]
  -- strict thread matches above, loose matches on Subject: below --
2021-07-19 19:00 Rik van Riel
2021-07-19 19:20 ` Dave Hansen
2021-07-19 19:34   ` Rik van Riel
2021-07-21 20:38     ` Thomas Gleixner
2021-07-21 20:36 ` Thomas Gleixner
2021-07-24  1:38   ` Rik van Riel
