From: Peter Zijlstra <peterz@infradead.org>
To: Leo Yan <leo.yan@linaro.org>
Cc: Will Deacon <will@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Mike Leach <mike.leach@linaro.org>,
	Adrian Hunter <adrian.hunter@intel.com>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H . Peter Anvin" <hpa@zytor.com>,
	x86@kernel.org,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Mathieu Poirier <mathieu.poirier@linaro.org>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Jiri Olsa <jolsa@redhat.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH RFC 1/6] perf/x86: Add perf text poke event
Date: Fri, 1 Nov 2019 11:04:40 +0100	[thread overview]
Message-ID: <20191101100440.GU4131@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <20191031073136.GC21153@leoy-ThinkPad-X240s>

On Thu, Oct 31, 2019 at 03:31:36PM +0800, Leo Yan wrote:

> Before moving forward, I'd like to step back to describe clearly what
> the current problem is on Arm64, and check one question for jump label:
> 
> I checked the kernel code; both kprobe and ftrace use
> stop_machine() to alter instructions,

That's not correct for Aargh64, see aarch64_insn_patch_text_nosync(),
which is used in both ftrace and jump_label.

> since all CPUs run into stop
> machine's synchronization, there is no race condition between the
> instruction transition and CPUs executing the altered instruction; thus
> it's safe for kprobe and ftrace to use the perf event PERF_TEXT_POKE_UPDATE
> to notify of the instruction transition, and it allows us to read out the
> 'correct' instruction for the decoder.

Agreed, IFF patching happens using stop_machine(), things are easy. ARM
is (so far) exclusively using stop_machine() based text_poking, although
the last time I spoke to Will about this, he said the _nosync stuff is
possible on 32bit too, just nobody has bothered implementing it.

> But for jump label, it doesn't use stop_machine(), and the perf event
> PERF_TEXT_POKE_UPDATE will introduce a race condition as below (let's
> look at the example of a transition from nop to branch):
> 
>               CPU0                                      CPU1
>   NOP instruction
>    `-> static_key_enable()
>         `-> aarch64_insn_patch_text_nosync()
>              `-> perf event PERF_TEXT_POKE_UPDATE
>                                                      -> Execute nop
>                                                         instruction
>              `-> aarch64_insn_write()
>              `-> __flush_icache_range()
> 
> Since the x86 platform has INT3 as an intermediate state, it can avoid
> the race condition between CPU0 (which does the transition) and other
> CPUs (which may execute the nop/branch).

Ah, you found the _nosync thing in jump_label, here's the one in ftrace:

arch/arm64/kernel/ftrace.c:     if (aarch64_insn_patch_text_nosync((void *)pc, new))

And yes, this is racy.

> > The thing is, as I argued, the instruction state between PRE and POST is
> > ambiguous. This makes it impossible to decode the branch decision
> > stream.
> > 
> > Suppose CPU0 emits the PRE event at T1 and the POST event at T5, but we
> > have CPU1 covering the instruction at T3.
> > 
> > How do you decide where CPU1 goes and what the next conditional branch
> > is?
> 
> Sorry, my reply was not well thought out.
> 
> I agree that T3 is an uncertain state with below flow:
> 
>       CPU0                                             CPU1
>   perf event PERF_TEXT_POKE_UPDATE_PRE   -> T1
> 
>     Int3 / NOP                                       -> T3
> 
>     Int3 / branch                                    -> T3'
> 
>   perf event PERF_TEXT_POKE_UPDATE_POST  -> T5
> 
> Unless the trace has extra info and can use the old/new instruction
> combination for analysis, the PRE/POST event pair isn't helpful for
> resolving this issue (if the trace decoder can do this, then the change
> in the kernel will be much simpler).
> 
> Below are two potential options we can use on Arm64 platform:
> 
> - Change to use stop_machine() for jump label; this might introduce
>   performance issue if jump label is altered frequently.
> 
>   To mitigate the impact, we can use stop_machine() only when we
>   detect that the perf events are enabled; otherwise we fall back to
>   the old code path.
> 
> - We can use a breakpoint to emulate a flow similar to x86's int3,
>   so we can avoid the race condition between one CPU altering an
>   instruction and other CPUs running into the alternative instruction.
> 
> @Will, @Mark, could you help review this?  Appreciate any comments
> and suggestions.  And please let me know if you want to consolidate
> related work on your side (or if you know of any ongoing discussion
> or someone already working on this).

Given people are building larger Aargh64 machines (I've heard about 100+
CPUs already), I'm thinking the 3rd option is the most performant.

But yes, as you mention earlier, we can make this optional on the
TEXT_POKE_UPDATE event being in use.

I'm thinking something along the lines of:

static uintptr_t nosync_addr;
static u32 nosync_insn;

int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
{
	const u32 brk = // some_breakpoint_insn; ('break' is a C keyword)
	uintptr_t tp = (uintptr_t)addr;
	int ret;

	lockdep_assert_held(&text_mutex);

	/* A64 instructions must be word aligned */
	if (tp & 0x3)
		return -EINVAL;

	if (perf_text_poke_update_enabled()) {

		nosync_insn = insn;
		smp_store_release(&nosync_addr, tp);

		ret = aarch64_insn_write(addr, brk);
		if (ret == 0)
			__flush_icache_range(tp, tp + AARCH64_INSN_SIZE);

		perf_event_text_poke(....);
	}

	ret = aarch64_insn_write(addr, insn);
	if (ret == 0)
		__flush_icache_range(tp, tp + AARCH64_INSN_SIZE);

	return ret;
}

And have the 'break' handler do:

static void aarch64_insn_break_handler(struct pt_regs *regs)
{
	unsigned long addr = smp_load_acquire(&nosync_addr);
	u32 insn = nosync_insn;

	if (regs->pc != addr)
		return;

	// emulate @insn
}

I understood from Will the whole nosync scheme only works for a limited
set of instructions, but you only have to implement emulation for the
actual instructions used of course.

(which is what we do on x86)

Does this sound workable?


Thread overview: 34+ messages
2019-10-25 12:59 [PATCH RFC 0/6] perf/x86: Add perf text poke event Adrian Hunter
2019-10-25 12:59 ` [PATCH RFC 1/6] " Adrian Hunter
2019-10-30 10:47   ` Leo Yan
2019-10-30 12:46     ` Peter Zijlstra
2019-10-30 14:19       ` Leo Yan
2019-10-30 15:00         ` Mike Leach
2019-10-30 16:23         ` Peter Zijlstra
2019-10-31  7:31           ` Leo Yan
2019-11-01 10:04             ` Peter Zijlstra [this message]
2019-11-01 10:09               ` Peter Zijlstra
2019-11-04  2:23               ` Leo Yan
2019-11-08 15:05                 ` Leo Yan
2019-11-11 14:46                   ` Peter Zijlstra
2019-11-11 15:39                     ` Will Deacon
2019-11-11 16:05                       ` Peter Zijlstra
2019-11-11 17:29                         ` Will Deacon
2019-11-11 20:32                           ` Peter Zijlstra
     [not found]             ` <CAJ9a7VgZH7g=rFDpKf=FzEcyBVLS_WjqbrqtRnjOi7WOY4st+w@mail.gmail.com>
2019-11-01 10:06               ` Peter Zijlstra
2019-11-04 10:40   ` Peter Zijlstra
2019-11-04 12:32     ` Adrian Hunter
2019-10-25 12:59 ` [PATCH RFC 2/6] perf dso: Refactor dso_cache__read() Adrian Hunter
2019-10-25 14:54   ` Arnaldo Carvalho de Melo
2019-10-28 15:39   ` Jiri Olsa
2019-10-29  9:19     ` Adrian Hunter
2019-11-12 11:18   ` [tip: perf/core] " tip-bot2 for Adrian Hunter
2019-10-25 12:59 ` [PATCH RFC 3/6] perf dso: Add dso__data_write_cache_addr() Adrian Hunter
2019-10-28 15:45   ` Jiri Olsa
2019-10-29  9:20     ` Adrian Hunter
2019-11-12 11:18   ` [tip: perf/core] " tip-bot2 for Adrian Hunter
2019-10-25 12:59 ` [PATCH RFC 4/6] perf tools: Add support for PERF_RECORD_TEXT_POKE Adrian Hunter
2019-10-25 12:59 ` [PATCH RFC 5/6] perf auxtrace: Add auxtrace_cache__remove() Adrian Hunter
2019-10-25 14:48   ` Arnaldo Carvalho de Melo
2019-11-12 11:18   ` [tip: perf/core] " tip-bot2 for Adrian Hunter
2019-10-25 13:00 ` [PATCH RFC 6/6] perf intel-pt: Add support for text poke events Adrian Hunter
