From: Petr Mladek <pmladek@suse.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: gor@linux.ibm.com, jpoimboe@redhat.com, jikos@kernel.org,
	mbenes@suse.cz, mingo@kernel.org, linux-kernel@vger.kernel.org,
	joe.lawrence@redhat.com, fweisbec@gmail.com, tglx@linutronix.de,
	hca@linux.ibm.com, svens@linux.ibm.com, sumanthk@linux.ibm.com,
	live-patching@vger.kernel.org, paulmck@kernel.org,
	rostedt@goodmis.org, x86@kernel.org
Subject: Re: [RFC][PATCH v2 09/11] context_tracking,livepatch: Dont disturb NOHZ_FULL
Date: Wed, 6 Oct 2021 12:29:32 +0200
Message-ID: <YV16jKrB5Azu/nD+@alley>
In-Reply-To: <YV1mmv5QbB/vf3/O@hirez.programming.kicks-ass.net>

On Wed 2021-10-06 11:04:26, Peter Zijlstra wrote:
> On Wed, Oct 06, 2021 at 10:12:17AM +0200, Petr Mladek wrote:
> > IMHO, the original solution from v1 was better. We only needed to
> 
> It was also terribly broken in other 'fun' ways. See below.
> 
> > be careful when updating task->patch_state and clearing
> > TIF_PATCH_PENDING to avoid the race.
> > 
> > The following might work:
> > 
> > static int klp_check_and_switch_task(struct task_struct *task, void *arg)
> > {
> > 	int ret;
> > 
> > 	/*
> > 	 * Stack is reliable only when the task is not running on any CPU,
> > 	 * except for the task running this code.
> > 	 */
> > 	if (task_curr(task) && task != current) {
> > 		/*
> > 		 * This only succeeds when the task is in NOHZ_FULL user
> > 		 * mode. Such a task might be migrated immediately. We
> > 		 * only need to be careful to set task->patch_state before
> > 		 * clearing TIF_PATCH_PENDING so that the task migrates
> > 		 * itself when entering the kernel in the meantime.
> > 		 */
> > 		if (is_ct_user(task)) {
> > 			klp_update_patch_state(task);
> > 			return 0;
> > 		}
> > 
> > 		return -EBUSY;
> > 	}
> > 
> > 	ret = klp_check_stack(task, arg);
> > 	if (ret)
> > 		return ret;
> > 
> > 	/*
> > 	 * The task is neither running on any CPU nor can it start
> > 	 * running. As a result, the ordering does not matter and
> > 	 * no barrier is needed.
> > 	 */
> > 	task->patch_state = klp_target_state;
> > 	clear_tsk_thread_flag(task, TIF_PATCH_PENDING);
> > 
> > 	return 0;
> > }
> > 
> > , where is_ct_user(task) would return true when the task is running
> > in CONTEXT_USER. If I understand the context_tracking API correctly,
> > it might be implemented the following way:
> 
> That's not sufficient, you need to tag the remote task with a ct_work
> item to also runs klp_update_patch_state(), otherwise the remote CPU can
> enter kernel space between checking is_ct_user() and doing
> klp_update_patch_state():
> 
> 	CPU0				CPU1
> 
> 					<user>
> 
> 	if (is_ct_user()) // true
> 					<kernel-entry>
> 					  // run some kernel code
> 	  klp_update_patch_state()
> 	  *WHOOPSIE*
> 
> 
> So it needs to be something like:
> 
> 
> 	CPU0				CPU1
> 
> 					<user>
> 
> 	if (context_tracking_set_cpu_work(task_cpu(), CT_WORK_KLP))
> 
> 					<kernel-entry>
> 	  klp_update_patch_state	  klp_update_patch_state()
> 
> 
> So that CPU0 and CPU1 race to complete klp_update_patch_state() *before*
> any regular (!noinstr) code gets run.

Grr, you are right. I thought that we already migrated the task when
entering the kernel. But it seems that we do it only when leaving
the kernel in exit_to_user_mode_loop().


> Which then means it needs to look something like:
> 
> noinstr void klp_update_patch_state(struct task_struct *task)
> {
> 	struct thread_info *ti = task_thread_info(task);
> 
> 	preempt_disable_notrace();
> 	if (arch_test_bit(TIF_PATCH_PENDING, (unsigned long *)&ti->flags)) {
> 		/*
> 		 * Order loads of TIF_PATCH_PENDING vs klp_target_state.
> 		 * See klp_init_transition().
> 		 */
> 		smp_rmb();
> 		task->patch_state = __READ_ONCE(klp_target_state);
> 		/*
> 		 * Concurrent against self; must observe updated
> 		 * task->patch_state if !TIF_PATCH_PENDING.
> 		 */
> 		smp_mb__before_atomic();

IMHO, smp_wmb() should be enough. We get here only when this
CPU has set task->patch_state right above, so the CPU running
this code already sees the correct task->patch_state.

The read barrier is needed only when @task is entering the kernel
and does not see TIF_PATCH_PENDING. It is handled by the smp_rmb()
in the "else" branch below.

It is possible that both CPUs see TIF_PATCH_PENDING and both
set task->patch_state. But that should not cause any harm,
because they set the same value. Unless something really
crazy happens with the internal CPU buses and caches.


> 		arch_clear_bit(TIF_PATCH_PENDING, (unsigned long *)&ti->flags);
> 	} else {
> 		/*
> 		 * Concurrent against self, see smp_mb__before_atomic()
> 		 * above.
> 		 */
> 		smp_rmb();

Yeah, this is the counterpart of the above smp_wmb().

> 	}
> 	preempt_enable_notrace();
> }

Now, I am scared to increase my paranoia level and search for even more
possible races. I feel overwhelmed at the moment ;-)

Best Regards,
Petr
