From: Mark Rutland <mark.rutland@arm.com>
To: Torsten Duwe <duwe@lst.de>
Cc: Steven Rostedt <rostedt@goodmis.org>,
	Will Deacon <will.deacon@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Julien Thierry <julien.thierry@arm.com>,
	Josh Poimboeuf <jpoimboe@redhat.com>,
	Ingo Molnar <mingo@redhat.com>,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>,
	Arnd Bergmann <arnd@arndb.de>,
	AKASHI Takahiro <takahiro.akashi@linaro.org>,
	Amit Daniel Kachhap <amit.kachhap@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, live-patching@vger.kernel.org
Subject: Re: [PATCH v6] arm64: implement ftrace with regs
Date: Mon, 7 Jan 2019 11:19:54 +0000	[thread overview]
Message-ID: <20190107111954.GA11732@lakrids.cambridge.arm.com> (raw)
In-Reply-To: <20190104224145.GA28236@lst.de>

On Fri, Jan 04, 2019 at 11:41:45PM +0100, Torsten Duwe wrote:
> On Fri, Jan 04, 2019 at 01:06:48PM -0500, Steven Rostedt wrote:
> > On Fri, 4 Jan 2019 17:50:18 +0000
> > Mark Rutland <mark.rutland@arm.com> wrote:
> > 
> > > At Linux Plumbers, I had a conversation with Steve Rostedt, and we came
> > > to the conclusion that (without heavyweight synchronization) patching two
> > > NOPs at runtime isn't safe, since a CPU might have executed the first
> > > NOP as a NOP before another CPU patches both instructions. So a CPU
> > > might execute:
> > > 
> > > 	NOP
> > > 	BL	ftrace_regs_caller
> > > 
> > > ... rather than the expected:
> > > 
> > > 	MOV	X9, X30
> > > 	BL	ftrace_regs_caller
> > > 
> > > ... and therefore X9 contains some UNKNOWN value, rather than the
> > > original LR value.
> 
> I'm perfectly aware of that; an earlier version had barriers, attempting
> to avoid just that, which Mark(?) wrote weren't necessary.

The problem was that even with barriers, the only guarantee you get is
that instructions are made visible in order, not what the other CPU has
executed.

For example:

	CPU#1				CPU#2
					NOP#1
	Patches NOP#1 -> INSN#1
	Cache maintenance
	Barrier
	
					// INSN#1 now visible to CPU#2,
					// but NOP#1 was already
					// executed as a NOP.
	
	Patches NOP#2 -> INSN#2
	Cache maintenance
	Barrier
					INSN#2

> But is this a realistic scenario? All function entries are aligned 8 bytes.
> Are there arm64 implementations out there that fetch only 4 bytes and
> give a chance to mess with the 2nd 4 bytes? You at arm.com should know, and
> I won't be surprised if the answer is a weird "yes". Or maybe it's just
> another erratum lurking somewhere...

The alignment of the instructions provides no guarantee here. Regardless
of what contemporary implementations *may* do, the architecture provides
absolutely no guarantee.

For example, even if CPU#2 fetched both NOPs together, the cache
maintenance and barrier may cause it to throw away any speculative work
after executing NOP#1. Upon re-fetching, it could see both new INSNs,
but as it's already executed the first as a NOP, it will not re-execute
it as INSN#1.

Also consider that pre-emption by a hypervisor or firmware may occur
mid-sequence.

> My point is: those 2 insn will _never_ be split by any alignment
> boundary > 8; does that mean anything, have you considered this?

This has no impact whatsoever.

> 
> > > I wonder if we could solve that by patching the kernel at build-time, to
> > > add the MOV X9, X30 in place of the first NOP. If we were to do that, we
> > > could also update the addresses to point at the second NOP, simplifying
> > > the changes to the runtime code.
> > 
> > You can also patch it at boot up when there's only one CPU running, and
> > interrupts are disabled.
> 
> May I remind you about possible performance hits?

Sure; please get some numbers either way.

> Even the NOPs had a tiny impact
> on certain in-order implementations. I'd rather switch between the mov and
> a "b +2".

Be careful; the architecture only permits live patching between certain
instructions. Please see ARM DDI 0487D.a, section B2.2.5, "Concurrent
modification and execution of instructions".

Per that, it's not safe to live-patch MOV->B or B->MOV.

It's *also* not safe to live-patch NOP->MOV, or vice-versa.

So I strongly suspect we must unconditionally patch the MOV in early.

Thanks,
Mark.

Thread overview: 30+ messages
2019-01-04 14:10 [PATCH v6] arm64: implement ftrace with regs Torsten Duwe
2019-01-04 17:50 ` Mark Rutland
2019-01-04 18:06   ` Steven Rostedt
2019-01-04 22:41     ` Torsten Duwe
2019-01-05 11:05       ` Torsten Duwe
2019-01-05 20:00         ` Steven Rostedt
2019-01-07 11:19       ` Mark Rutland [this message]
2019-01-14 12:13   ` Balbir Singh
2019-01-14 12:26     ` Mark Rutland
2019-01-16 15:56       ` Julien Thierry
2019-01-16 18:01         ` Julien Thierry
2019-01-07  4:57 ` Amit Daniel Kachhap
2019-01-16  9:57 ` Julien Thierry
2019-01-16 10:08   ` Julien Thierry
2019-01-17 15:48   ` Torsten Duwe
