* Re: [PATCH 2/3] x86/mce: Avoid infinite loop for copy from user recovery
@ 2021-07-22 13:54 Jue Wang
  2021-07-22 15:19 ` Luck, Tony
  0 siblings, 1 reply; 12+ messages in thread
From: Jue Wang @ 2021-07-22 13:54 UTC (permalink / raw)
  To: Luck, Tony
  Cc: Borislav Petkov, dinghui, huangcun, linux-edac, linux-kernel,
	HORIGUCHI NAOYA(堀口 直也),
	Oscar Salvador, x86, Song, Youquan

This patch assumes the UC error consumed in the kernel is always the same UC.

Yet it's possible for two UCs on different pages to be consumed in a row.
The patch below will panic on the 2nd MCE. How can we make the code work
with multiple UC errors?


> +	int count = ++current->mce_count;
> +
> +	/* First call, save all the details */
> +	if (count == 1) {
> +		current->mce_addr = m->addr;
> +		current->mce_kflags = m->kflags;
> +		current->mce_ripv = !!(m->mcgstatus & MCG_STATUS_RIPV);
> +		current->mce_whole_page = whole_page(m);
> +		current->mce_kill_me.func = func;
> +	}
> ......
> +	/* Second or later call, make sure page address matches the one from first call */
> +	if (count > 1 && (current->mce_addr >> PAGE_SHIFT) != (m->addr >> PAGE_SHIFT))
> +		mce_panic("Machine checks to different user pages", m, msg);

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/3] x86/mce: Avoid infinite loop for copy from user recovery
  2021-07-22 13:54 [PATCH 2/3] x86/mce: Avoid infinite loop for copy from user recovery Jue Wang
@ 2021-07-22 15:19 ` Luck, Tony
  2021-07-22 23:30   ` Jue Wang
  0 siblings, 1 reply; 12+ messages in thread
From: Luck, Tony @ 2021-07-22 15:19 UTC (permalink / raw)
  To: Jue Wang
  Cc: Borislav Petkov, dinghui, huangcun, linux-edac, linux-kernel,
	HORIGUCHI NAOYA(堀口 直也),
	Oscar Salvador, x86, Song, Youquan

On Thu, Jul 22, 2021 at 06:54:37AM -0700, Jue Wang wrote:
> This patch assumes the UC error consumed in the kernel is always the same UC.
> 
> Yet it's possible for two UCs on different pages to be consumed in a row.
> The patch below will panic on the 2nd MCE. How can we make the code work
> with multiple UC errors?
> 
> 
> > + int count = ++current->mce_count;
> > +
> > + /* First call, save all the details */
> > + if (count == 1) {
> > + current->mce_addr = m->addr;
> > + current->mce_kflags = m->kflags;
> > + current->mce_ripv = !!(m->mcgstatus & MCG_STATUS_RIPV);
> > + current->mce_whole_page = whole_page(m);
> > + current->mce_kill_me.func = func;
> > + }
> > ......
> > + /* Second or later call, make sure page address matches the one from first call */
> > + if (count > 1 && (current->mce_addr >> PAGE_SHIFT) != (m->addr >> PAGE_SHIFT))
> > + mce_panic("Machine checks to different user pages", m, msg);

The issue is getting the information about the location
of the error from the machine check handler to the "task_work"
function that processes it. Currently there is a single place
to store the address of the error in the task structure:

	current->mce_addr = m->addr;

Plausibly that could be made into an array, indexed by
current->mce_count, to save multiple addresses (perhaps
mce_kflags, mce_ripv, etc. would also need to be arrays).

But I don't want to pre-emptively make such a change without
some data to show that situations with multiple errors
to different addresses:
1) Actually occur
2) Would be recovered if we made the change.

The first would be indicated by seeing the:

	"Machine checks to different user pages"

panic. You'd have to code up the change to have arrays
to confirm that would fix the problem.

-Tony


* Re: [PATCH 2/3] x86/mce: Avoid infinite loop for copy from user recovery
  2021-07-22 15:19 ` Luck, Tony
@ 2021-07-22 23:30   ` Jue Wang
  2021-07-23  0:14     ` Luck, Tony
  0 siblings, 1 reply; 12+ messages in thread
From: Jue Wang @ 2021-07-22 23:30 UTC (permalink / raw)
  To: Luck, Tony
  Cc: Borislav Petkov, dinghui, huangcun, linux-edac, linux-kernel,
	HORIGUCHI NAOYA(堀口 直也),
	Oscar Salvador, x86, Song, Youquan

I think the challenge is that the uncorrectable errors are essentially
random. It's just a matter of time before >1 UC errors show up in
sequential kernel accesses.

It's easy to create such cases with artificial error injections.

I suspect we want to design this part of the kernel to be able to handle generic
cases?

Thanks,
-Jue

On Thu, Jul 22, 2021 at 8:19 AM Luck, Tony <tony.luck@intel.com> wrote:
>
> On Thu, Jul 22, 2021 at 06:54:37AM -0700, Jue Wang wrote:
> > This patch assumes the UC error consumed in the kernel is always the same UC.
> >
> > Yet it's possible for two UCs on different pages to be consumed in a row.
> > The patch below will panic on the 2nd MCE. How can we make the code work
> > with multiple UC errors?
> >
> >
> > > + int count = ++current->mce_count;
> > > +
> > > + /* First call, save all the details */
> > > + if (count == 1) {
> > > + current->mce_addr = m->addr;
> > > + current->mce_kflags = m->kflags;
> > > + current->mce_ripv = !!(m->mcgstatus & MCG_STATUS_RIPV);
> > > + current->mce_whole_page = whole_page(m);
> > > + current->mce_kill_me.func = func;
> > > + }
> > > ......
> > > + /* Second or later call, make sure page address matches the one from first call */
> > > + if (count > 1 && (current->mce_addr >> PAGE_SHIFT) != (m->addr >> PAGE_SHIFT))
> > > + mce_panic("Machine checks to different user pages", m, msg);
>
> The issue is getting the information about the location
> of the error from the machine check handler to the "task_work"
> function that processes it. Currently there is a single place
> to store the address of the error in the task structure:
>
>         current->mce_addr = m->addr;
>
> Plausibly that could be made into an array, indexed by
> current->mce_count, to save multiple addresses (perhaps
> mce_kflags, mce_ripv, etc. would also need to be arrays).
>
> But I don't want to pre-emptively make such a change without
> some data to show that situations with multiple errors
> to different addresses:
> 1) Actually occur
> 2) Would be recovered if we made the change.
>
> The first would be indicated by seeing the:
>
>         "Machine checks to different user pages"
>
> panic. You'd have to code up the change to have arrays
> to confirm that would fix the problem.
>
> -Tony


* Re: [PATCH 2/3] x86/mce: Avoid infinite loop for copy from user recovery
  2021-07-22 23:30   ` Jue Wang
@ 2021-07-23  0:14     ` Luck, Tony
  2021-07-23  3:47       ` Jue Wang
  0 siblings, 1 reply; 12+ messages in thread
From: Luck, Tony @ 2021-07-23  0:14 UTC (permalink / raw)
  To: Jue Wang
  Cc: Borislav Petkov, dinghui, huangcun, linux-edac, linux-kernel,
	HORIGUCHI NAOYA(堀口 直也),
	Oscar Salvador, x86, Song, Youquan

On Thu, Jul 22, 2021 at 04:30:44PM -0700, Jue Wang wrote:
> I think the challenge is that the uncorrectable errors are essentially
> random. It's just a matter of time before >1 UC errors show up in
> sequential kernel accesses.
> 
> It's easy to create such cases with artificial error injections.
> 
> I suspect we want to design this part of the kernel to be able to handle generic
> cases?

Remember that:
1) These errors are all in application memory
2) We reset the count every time we get into the task_work function that
   will return to user

So the multiple error scenario here is one where we hit errors
on different user pages on a single trip into the kernel.

Hitting the same page is easy. The kernel has places where it
can hit poison with page faults disabled, and it then enables
page faults and retries the same access, and hits poison again.

I'm not aware of, nor expecting to find, places where the kernel
tries to access user address A and hits poison, and then tries to
access user address B (without returning to user between access
A and access B).

-Tony


* Re: [PATCH 2/3] x86/mce: Avoid infinite loop for copy from user recovery
  2021-07-23  0:14     ` Luck, Tony
@ 2021-07-23  3:47       ` Jue Wang
  2021-07-23  4:01         ` Luck, Tony
  0 siblings, 1 reply; 12+ messages in thread
From: Jue Wang @ 2021-07-23  3:47 UTC (permalink / raw)
  To: Luck, Tony
  Cc: Borislav Petkov, dinghui, huangcun, linux-edac, linux-kernel,
	HORIGUCHI NAOYA(堀口 直也),
	Oscar Salvador, x86, Song, Youquan

On Thu, Jul 22, 2021 at 5:14 PM Luck, Tony <tony.luck@intel.com> wrote:
>
> I'm not aware of, nor expecting to find, places where the kernel
> tries to access user address A and hits poison, and then tries to
> access user address B (without returning to user between access
> A and access B).
This seems a reasonably easy scenario.

A user space app allocates a buffer of xyz KB/MB/GB.

Unfortunately the DIMMs are bad and multiple cache lines have
uncorrectable errors in them on different pages.

Then the user space app tries to write the content of the buffer into some
file via write(2) from the entire buffer in one go.

We have some test cases like this that repro reliably with an infinite MCE loop.

I believe the key here is that in the real world this will happen;
in particular, the bit flips tend to be clustered physically -
same DIMM row, DIMM column, or same rank, same device, etc.
>
> -Tony


* RE: [PATCH 2/3] x86/mce: Avoid infinite loop for copy from user recovery
  2021-07-23  3:47       ` Jue Wang
@ 2021-07-23  4:01         ` Luck, Tony
  2021-07-23  4:16           ` Jue Wang
  0 siblings, 1 reply; 12+ messages in thread
From: Luck, Tony @ 2021-07-23  4:01 UTC (permalink / raw)
  To: Jue Wang
  Cc: Borislav Petkov, dinghui, huangcun, linux-edac, linux-kernel,
	HORIGUCHI NAOYA(堀口 直也),
	Oscar Salvador, x86, Song, Youquan

>> I'm not aware of, nor expecting to find, places where the kernel
>> tries to access user address A and hits poison, and then tries to
>> access user address B (without returning to user between access
>> A and access B).
>This seems a reasonably easy scenario.
>
> A user space app allocates a buffer of xyz KB/MB/GB.
>
> Unfortunately the dimms are bad and multiple cache lines have
> uncorrectable errors in them on different pages.
>
> Then the user space app tries to write the content of the buffer into some
> file via write(2) from the entire buffer in one go.

Before this patch Linux gets into an infinite loop taking machine
checks on the first of the poison addresses in the buffer.

With this patch (and also patch 3/3 in this series), there are
a few machine checks on the first poison address (I think the number
depends on the alignment of the poison within a page ... but I'm
not sure). My test code shows 4 machine checks at the same
address. Then Linux returns a short byte count to the user
showing how many bytes were actually written to the file.

The fact that there are many more poison lines in the buffer
beyond the place where the write stopped on the first one is
irrelevant.

[Well, if the second poisoned line is immediately after the first
you may hit h/w prefetch issues and h/w may signal a fatal
machine check ... but that's a different problem that s/w could
only solve with painful LFENCE operations between each 64-bytes
of the copy]

-Tony


* Re: [PATCH 2/3] x86/mce: Avoid infinite loop for copy from user recovery
  2021-07-23  4:01         ` Luck, Tony
@ 2021-07-23  4:16           ` Jue Wang
  2021-07-23 14:47             ` Luck, Tony
  0 siblings, 1 reply; 12+ messages in thread
From: Jue Wang @ 2021-07-23  4:16 UTC (permalink / raw)
  To: Luck, Tony
  Cc: Borislav Petkov, dinghui, huangcun, linux-edac, linux-kernel,
	HORIGUCHI NAOYA(堀口 直也),
	Oscar Salvador, x86, Song, Youquan

On Thu, Jul 22, 2021 at 9:01 PM Luck, Tony <tony.luck@intel.com> wrote:
>
> >> I'm not aware of, nor expecting to find, places where the kernel
> >> tries to access user address A and hits poison, and then tries to
> >> access user address B (without returning to user between access
> >> A and access B).
> >This seems a reasonably easy scenario.
> >
> > A user space app allocates a buffer of xyz KB/MB/GB.
> >
> > Unfortunately the dimms are bad and multiple cache lines have
> > uncorrectable errors in them on different pages.
> >
> > Then the user space app tries to write the content of the buffer into some
> > file via write(2) from the entire buffer in one go.
>
> Before this patch Linux gets into an infinite loop taking machine
> checks on the first of the poison addresses in the buffer.
>
> With this patch (and also patch 3/3 in this series), there are
> a few machine checks on the first poison address (I think the number
> depends on the alignment of the poison within a page ... but I'm
> not sure). My test code shows 4 machine checks at the same
> address. Then Linux returns a short byte count to the user
> showing how many bytes were actually written to the file.
>
> The fact that there are many more poison lines in the buffer
> beyond the place where the write stopped on the first one is
> irrelevant.
In our test, the application memory was anon.
With 1 UC error injected, the test always passes with the error
recovered and a SIGBUS delivered to user space.

When there are >1 UC errors in the buffer, we get an indefinite MCE loop.
>
> [Well, if the second poisoned line is immediately after the first
> you may hit h/w prefetch issues and h/w may signal a fatal
> machine check ... but that's a different problem that s/w could
> only solve with painful LFENCE operations between each 64-bytes
> of the copy]
>
> -Tony


* RE: [PATCH 2/3] x86/mce: Avoid infinite loop for copy from user recovery
  2021-07-23  4:16           ` Jue Wang
@ 2021-07-23 14:47             ` Luck, Tony
  0 siblings, 0 replies; 12+ messages in thread
From: Luck, Tony @ 2021-07-23 14:47 UTC (permalink / raw)
  To: Jue Wang
  Cc: Borislav Petkov, dinghui, huangcun, linux-edac, linux-kernel,
	HORIGUCHI NAOYA(堀口 直也),
	Oscar Salvador, x86, Song, Youquan

> In our test, the application memory was anon.
> With 1 UC error injected, the test always passes with the error
> recovered and a SIGBUS delivered to user space.
>
> When there are >1 UC errors in the buffer, we get an indefinite MCE loop.

Do you still see the infinite loop with these three patches on top of
v5.14-rc, rather than a short byte count returned from write(2), or the

	mce_panic("Machine checks to different user pages", m, msg);

panic?

-Tony




* Re: [PATCH 2/3] x86/mce: Avoid infinite loop for copy from user recovery
  2021-07-31 20:43 ` Luck, Tony
@ 2021-08-02 15:29   ` Jue Wang
  0 siblings, 0 replies; 12+ messages in thread
From: Jue Wang @ 2021-08-02 15:29 UTC (permalink / raw)
  To: Luck, Tony
  Cc: Borislav Petkov, dinghui, huangcun, linux-edac, linux-kernel,
	HORIGUCHI NAOYA(堀口 直也),
	Oscar Salvador, x86, Song, Youquan

On Sat, Jul 31, 2021 at 1:43 PM Luck, Tony <tony.luck@intel.com> wrote:
>
> > After cherry picking patch 1 & 2, I saw the following with 2 UC errors injected
> > into the user space buffer passed into write(2), as expected:
> >
> > [  287.994754] Kernel panic - not syncing: Machine checks to different
> > user pages
>
> Interesting.  What are the offsets of the two injected errors in your test (both
> w.r.t. the start of the buffer, and within a page).
They are just random offsets into the first 2 pages of the buffer (4k aligned),
1 error per page. To be precise: 0x440 and 0x1c0 within each page.

>
> > The kernel tested with has its x86/mce and mm/memory-failure aligned with
> > upstream till around 2020/11.
> >
> > Is there any other patch that I have missed to the write syscall etc?
>
> There is a long series of patches from Al Viro to lib/iov_iter.c that are maybe
> also relevant in making the kernel copy from user stop at the first poison
> address in the buffer.
Thanks for the pointer.

Looks like [1],[2] are not yet merged.

Is lib/iov_iter.c the only place the kernel performs a copy from user
and gets multiple
poisons? I suspect not.

For example, lots of kernel accesses to user space memory are from kernel agents
like khugepaged, NUMA auto balancing etc. These paths are not handled by the fix
to lib/iov_iter.c.

I think the fix might have to be made to the #MC handler's behavior wrt
the task work.
Sending #MC signals and performing memory-failure actions from a task work
is fine for #MCs that originate from user space, but not suitable for the
kernel accessing poisoned (user space) memory. For the latter, the #MC
handler must handle recovery in the exception context without resorting to
task work; this may be OK since the recovery action for the latter case is
minimal: mark PG_hwpoison and remove the kernel mapping.

1. https://lore.kernel.org/linux-mm/20210326000235.370514-2-tony.luck@intel.com/
2. https://lore.kernel.org/linux-mm/20210326000235.370514-3-tony.luck@intel.com/

>
> -Tony


* RE: [PATCH 2/3] x86/mce: Avoid infinite loop for copy from user recovery
  2021-07-31  6:30 Jue Wang
@ 2021-07-31 20:43 ` Luck, Tony
  2021-08-02 15:29   ` Jue Wang
  0 siblings, 1 reply; 12+ messages in thread
From: Luck, Tony @ 2021-07-31 20:43 UTC (permalink / raw)
  To: Jue Wang
  Cc: Borislav Petkov, dinghui, huangcun, linux-edac, linux-kernel,
	HORIGUCHI NAOYA(堀口 直也),
	Oscar Salvador, x86, Song, Youquan

> After cherry picking patch 1 & 2, I saw the following with 2 UC errors injected
> into the user space buffer passed into write(2), as expected:
>
> [  287.994754] Kernel panic - not syncing: Machine checks to different
> user pages

Interesting.  What are the offsets of the two injected errors in your test (both
w.r.t. the start of the buffer, and within a page).

> The kernel tested with has its x86/mce and mm/memory-failure aligned with
> upstream till around 2020/11.
>
> Is there any other patch that I have missed to the write syscall etc?

There is a long series of patches from Al Viro to lib/iov_iter.c that are maybe
also relevant in making the kernel copy from user stop at the first poison
address in the buffer.

-Tony


* RE: [PATCH 2/3] x86/mce: Avoid infinite loop for copy from user recovery
@ 2021-07-31  6:30 Jue Wang
  2021-07-31 20:43 ` Luck, Tony
  0 siblings, 1 reply; 12+ messages in thread
From: Jue Wang @ 2021-07-31  6:30 UTC (permalink / raw)
  To: Luck, Tony
  Cc: Borislav Petkov, dinghui, huangcun, Jue Wang, linux-edac,
	linux-kernel, HORIGUCHI NAOYA(堀口 直也),
	Oscar Salvador, x86, Song, Youquan

Been busy with some other work.

After cherry picking patch 1 & 2, I saw the following with 2 UC errors injected
into the user space buffer passed into write(2), as expected:

[  287.994754] Kernel panic - not syncing: Machine checks to different
user pages

The kernel tested with has its x86/mce and mm/memory-failure aligned with
upstream till around 2020/11.

Is there any other patch that I have missed to the write syscall etc?

Thanks,
-Jue


* [PATCH 2/3] x86/mce: Avoid infinite loop for copy from user recovery
  2021-07-06 19:06 [PATCH 0/3] More machine check recovery fixes Tony Luck
@ 2021-07-06 19:06 ` Tony Luck
  0 siblings, 0 replies; 12+ messages in thread
From: Tony Luck @ 2021-07-06 19:06 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Tony Luck, Ding Hui, naoya.horiguchi, osalvador, Youquan Song,
	huangcun, x86, linux-edac, linux-kernel

The recovery action when get_user() triggers a machine check uses the fixup
path to make get_user() return -EFAULT. queue_task_work() also arranges for
kill_me_maybe() to be called on return to user mode to send a SIGBUS to the
current process.

But there are places in the kernel where the code assumes that this
EFAULT return was simply because of a page fault. The code takes some
action to fix that, and then retries the access. This results in a second
machine check.

While processing this second machine check queue_task_work() is called
again. But since this uses the same callback_head structure that
was used in the first call, the net result is an entry on the
current->task_works list that points to itself. When task_work_run()
is called it loops forever in this code:

	do {
		next = work->next;
		work->func(work);
		work = next;
		cond_resched();
	} while (work);

Add a counter (current->mce_count) to keep track of repeated machine checks
before task_work() is called. First machine check saves the address information
and calls task_work_add(). Subsequent machine checks before that task_work
callback is executed check that the address is in the same page as the first
machine check (since the callback will offline exactly one page).

Expected worst case is two machine checks before moving on (e.g. one user
access with page faults disabled, then a repeat to the same address with
page faults enabled). Just in case there is some code that loops forever,
enforce a limit of 10.

Signed-off-by: Tony Luck <tony.luck@intel.com>
---
 arch/x86/kernel/cpu/mce/core.c | 39 ++++++++++++++++++++++++++--------
 include/linux/sched.h          |  1 +
 2 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index dd03971e5ad5..957ec60cd2a8 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1250,6 +1250,9 @@ static void __mc_scan_banks(struct mce *m, struct pt_regs *regs, struct mce *fin
 
 static void kill_me_now(struct callback_head *ch)
 {
+	struct task_struct *p = container_of(ch, struct task_struct, mce_kill_me);
+
+	p->mce_count = 0;
 	force_sig(SIGBUS);
 }
 
@@ -1259,6 +1262,7 @@ static void kill_me_maybe(struct callback_head *cb)
 	int flags = MF_ACTION_REQUIRED;
 	int ret;
 
+	p->mce_count = 0;
 	pr_err("Uncorrected hardware memory error in user-access at %llx", p->mce_addr);
 
 	if (!p->mce_ripv)
@@ -1287,19 +1291,36 @@ static void kill_me_never(struct callback_head *cb)
 {
 	struct task_struct *p = container_of(cb, struct task_struct, mce_kill_me);
 
+	p->mce_count = 0;
 	pr_err("Kernel accessed poison in user space at %llx\n", p->mce_addr);
 	if (!memory_failure(p->mce_addr >> PAGE_SHIFT, 0))
 		set_mce_nospec(p->mce_addr >> PAGE_SHIFT, p->mce_whole_page);
 }
 
-static void queue_task_work(struct mce *m, void (*func)(struct callback_head *))
+static void queue_task_work(struct mce *m, char *msg, void (*func)(struct callback_head *))
 {
-	current->mce_addr = m->addr;
-	current->mce_kflags = m->kflags;
-	current->mce_ripv = !!(m->mcgstatus & MCG_STATUS_RIPV);
-	current->mce_whole_page = whole_page(m);
+	int count = ++current->mce_count;
+
+	/* First call, save all the details */
+	if (count == 1) {
+		current->mce_addr = m->addr;
+		current->mce_kflags = m->kflags;
+		current->mce_ripv = !!(m->mcgstatus & MCG_STATUS_RIPV);
+		current->mce_whole_page = whole_page(m);
+		current->mce_kill_me.func = func;
+	}
 
-	current->mce_kill_me.func = func;
+	/* Ten is likely overkill. Don't expect more than two faults before task_work() */
+	if (count > 10)
+		mce_panic("Too many machine checks while accessing user data", m, msg);
+
+	/* Second or later call, make sure page address matches the one from first call */
+	if (count > 1 && (current->mce_addr >> PAGE_SHIFT) != (m->addr >> PAGE_SHIFT))
+		mce_panic("Machine checks to different user pages", m, msg);
+
+	/* Do not call task_work_add() more than once */
+	if (count > 1)
+		return;
 
 	task_work_add(current, &current->mce_kill_me, TWA_RESUME);
 }
@@ -1438,9 +1459,9 @@ noinstr void do_machine_check(struct pt_regs *regs)
 		BUG_ON(!on_thread_stack() || !user_mode(regs));
 
 		if (kill_current_task)
-			queue_task_work(&m, kill_me_now);
+			queue_task_work(&m, msg, kill_me_now);
 		else
-			queue_task_work(&m, kill_me_maybe);
+			queue_task_work(&m, msg, kill_me_maybe);
 
 	} else {
 		/*
@@ -1458,7 +1479,7 @@ noinstr void do_machine_check(struct pt_regs *regs)
 		}
 
 		if (m.kflags & MCE_IN_KERNEL_COPYIN)
-			queue_task_work(&m, kill_me_never);
+			queue_task_work(&m, msg, kill_me_never);
 	}
 out:
 	mce_wrmsrl(MSR_IA32_MCG_STATUS, 0);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index ec8d07d88641..f6935787e7e8 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1394,6 +1394,7 @@ struct task_struct {
 					mce_whole_page : 1,
 					__mce_reserved : 62;
 	struct callback_head		mce_kill_me;
+	int				mce_count;
 #endif
 
 #ifdef CONFIG_KRETPROBES
-- 
2.29.2


