From: Nadav Amit <namit@vmware.com>
To: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Ingo Molnar <mingo@redhat.com>, Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	"H . Peter Anvin" <hpa@zytor.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"x86@kernel.org" <x86@kernel.org>, Borislav Petkov <bp@alien8.de>,
	David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [RFC PATCH 0/5] x86: dynamic indirect call promotion
Date: Wed, 28 Nov 2018 19:34:52 +0000
Message-ID: <9EACED43-EC21-41FB-BFAC-4E98C3842FD9@vmware.com>
In-Reply-To: <20181128160849.epmoto4o5jaxxxol@treble>

> On Nov 28, 2018, at 8:08 AM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
> 
> On Wed, Oct 17, 2018 at 05:54:15PM -0700, Nadav Amit wrote:
>> This RFC introduces indirect call promotion at runtime, which for the
>> sake of simplicity (and branding) will be called here "relpolines"
>> (relative call + trampoline). Relpolines are mainly intended as a way
>> of reducing the retpoline overheads caused by the Spectre v2 mitigation.
>> 
>> Unlike indirect call promotion through profile-guided optimization, the
>> proposed approach does not require a profiling stage, works well with
>> modules whose addresses are not known in advance, and can adapt to
>> changing workloads.
>> 
>> The main idea is simple: for every indirect call, we inject a piece of
>> code with fast- and slow-path calls. The fast path is used if the target
>> matches the expected (hot) target. The slow path uses a retpoline.
>> During training, the slow path is set to call a function that saves the
>> call source and target in a hash table and keeps a count of the call
>> frequency. The most common target is then patched into the fast path.
>> 
>> The patching is done on the fly by rewriting the conditional branch
>> (opcode and offset) that is used to compare the target to the hot
>> target. This makes it possible to direct all cores to the fast path
>> while the slow path is being patched, and vice versa. Patching follows
>> two more rules: (1) Only patch a single byte when the code might be
>> executed by any core. (2) When patching more than one byte, ensure that
>> no core runs the to-be-patched code by preventing this code from being
>> preempted, and by using synchronize_sched() after patching the branch
>> that jumps over this code.
>> 
>> Changing all the indirect calls to use relpolines is done using assembly
>> macro magic. There are alternative solutions, but this one is
>> relatively simple and transparent. There is also logic to retrain the
>> software predictor, but the policy it uses may need to be refined.
>> 
>> Overall, the results are not bad (2-vCPU VM, throughput reported):
>> 
>> 		base		relpoline
>> 		----		---------
>> nginx 	22898 		25178 (+10%)
>> redis-ycsb	24523		25486 (+4%)
>> dbench	2144		2103 (+2%)
>> 
>> When retpolines are disabled, and if retraining is off, performance
>> benefits are up to 2% (nginx), but are much less impressive.
> 
> Hi Nadav,
> 
> Peter pointed me to these patches during a discussion about retpoline
> profiling.  Personally, I think this is brilliant.  This could help
> networking and filesystem intensive workloads a lot.

Thanks! I was a bit held back by the relatively limited number of responses.
I finished another version two weeks ago, and every day I think: “should it
be RFCv2 or v1?”, and end up not sending it…

There is one issue that I realized while working on the new version: I’m not
sure it is well-defined what an outline retpoline is allowed to do. The
indirect branch promotion code can change RFLAGS, which might cause
correctness issues. In practice, with gcc, it is not a problem.
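
To make the RFLAGS point concrete, here is a minimal C model of what a
single promoted call site does. In the kernel the site is emitted as a
cmp/jne/call assembly sequence, so take this only as an illustration; the
names below are made up, not the actual macros:

typedef void (*callback_t)(void *);

static callback_t hot_target;	/* learned during training, then patched in */

static void relpoline_call(callback_t target, void *arg)
{
	if (target == hot_target)	/* the patched cmp + jnz pair */
		hot_target(arg);	/* fast path: direct call */
	else
		target(arg);		/* slow path: retpoline, or the
					 * learning handler that counts
					 * (source, target) pairs while
					 * training */
}

The cmp is where RFLAGS gets clobbered at a spot where previously there was
only an indirect call, which is the concern above.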

> Some high-level comments:
> 
> - "Relpoline" looks confusingly a lot like "retpoline".  How about
>  "optpoline"?  To avoid confusing myself I will hereafter refer to it
>  as such :-)

Sure. For the academic paper we submitted, we used a much worse name that my
colleague came up with. I’m OK with anything other than that name (not
mentioned here, to avoid breaking double-blind review). I’ll go with your name.

> - Instead of patching one byte at a time, is there a reason why
>  text_poke_bp() can't be used?  That would greatly simplify the
>  patching process, as everything could be patched in a single step.

I thought about it, and it may somehow be possible, but there are several
problems for which I didn’t find a simple solution:

1. An indirect branch inside the BP handler might be the one we patch

2. An indirect branch inside an interrupt or NMI handler might be the
   one we patch

3. Overall, we need to patch more than a single instruction, and
   text_poke_bp() is not suitable
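
For reference, one direction of the current patching dance (rules (1) and
(2) from the cover letter) amounts to roughly the sketch below. Locking and
error handling are omitted, and the struct and helper names are made up;
text_poke() and synchronize_sched() are the existing kernel primitives:

#include <linux/types.h>	/* u8, size_t */
#include <linux/rcupdate.h>	/* synchronize_sched() */
#include <asm/text-patching.h>	/* text_poke() */

/* Made-up bookkeeping for one call site, for illustration only. */
struct opt_site {
	u8 *branch;		/* the single jnz/jmp opcode byte */
	u8 *fast_path;		/* cmp immediate + direct call to rewrite */
};

static void repatch_fast_path(struct opt_site *site, const u8 *new_fast,
			      size_t len)
{
	const u8 jmp_rel8 = 0xeb;	/* short JMP opcode */
	const u8 jnz_rel8 = 0x75;	/* short JNZ opcode */

	/*
	 * Rule (1): while the code may run on any core, only one byte is
	 * patched -- the conditional branch becomes an unconditional jump,
	 * so every core now takes the always-correct slow path.
	 */
	text_poke(site->branch, &jmp_rel8, 1);

	/*
	 * Rule (2): the injected code runs with preemption disabled, so
	 * after synchronize_sched() no core can still be inside the code
	 * that is about to be rewritten.
	 */
	synchronize_sched();

	/* Now the multi-byte fast path can be rewritten safely. */
	text_poke(site->fast_path, new_fast, len);

	/* A single-byte patch again restores the conditional branch. */
	text_poke(site->branch, &jnz_rel8, 1);
}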

> - In many cases, a single direct call may not be sufficient, as there
>  could be for example multiple tasks using different network protocols
>  which need different callbacks for the same call site.

We need to know at compile time how many targets to reserve space for.
Supporting multiple inlined targets is not super-simple, but it is feasible
if you are willing to annotate the call sites that need multiple targets.
We have a version that uses outlined indirect branch promotion when there
are multiple targets, but it is not ready for prime time, and the extra
code-cache misses can induce some overhead.

> - I'm not sure about the periodic retraining logic, it seems a bit
>  nondeterministic and bursty.

I agree. It can be limited to cases in which modules are loaded/removed,
or when the user explicitly asks for it to take place.

> 
> So I'd propose the following changes:
> 
> - In the optpoline, reserve space for multiple (5 or so) comparisons and
>  direct calls.  Maybe the number of reserved cmp/jne/call slots can be
>  tweaked by the caller somehow.  Or maybe it could grow as needed.
>  Starting out, they would just be NOPs.
> 
> - Instead of the temporary learning mode, add permanent tracking to
>  detect a direct call "miss" -- i.e., when none of the existing direct
>  calls are applicable and the retpoline will be used.
> 
> - In the case of a miss (or N misses), it could trigger a direct call
>  patching operation to be run later (workqueue or syscall exit).  If
>  all the direct call slots are full, it could patch the least recently
>  modified one.  If this causes thrashing (>x changes over y time), it
>  could increase the number of direct call slots using a trampoline.
>  Even if there were several slots, CPU branch prediction would
>  presumably help make it much faster than a basic retpoline.
> 
> Thoughts?

I’m OK with these changes in general, although having multiple inline
targets is not super-simple (a rough sketch of the multi-slot variant
follows the list below). However, there are a few issues:

- There is potentially a negative impact from the code-size increase,
  which I was worried about.

- I see no reason not to use all the available slots immediately when
  we encounter a miss.

- The order of the branches might be “wrong” (unoptimized) if we do not
  do any relearning.

- The main question is what to do if we run out of slots and still get
  (many?) misses. I presume the right thing is to disable the optpoline
  and jump over it to the retpoline.
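
As a strawman for the multi-slot variant, I picture something roughly like
the C sketch below. The slot count, the names and the miss policy are all
placeholders, and the real code would again be an emitted assembly sequence
per call site:

#include <linux/types.h>	/* bool */

#define OPTPOLINE_SLOTS 5	/* arbitrary; could be tweaked per site */

/* Made-up per-call-site state, for illustration only. */
struct optpoline_site {
	void (*targets[OPTPOLINE_SLOTS])(void *); /* patched direct-call slots */
	unsigned int nr_targets;		  /* live slots (rest are NOPs) */
	unsigned long misses;			  /* permanent miss tracking */
	bool disabled;				  /* thrashing: retpoline only */
};

static void optpoline_call(struct optpoline_site *site,
			   void (*target)(void *), void *arg)
{
	unsigned int i;

	if (!site->disabled) {
		/*
		 * Each compare below stands for one patched cmp/jne/call
		 * slot of the inline sequence.
		 */
		for (i = 0; i < site->nr_targets; i++) {
			if (target == site->targets[i]) {
				site->targets[i](arg);	/* direct call */
				return;
			}
		}
		/*
		 * Miss: record it; deferred work (workqueue or syscall
		 * exit) may later patch this target into a free or stale
		 * slot, or set 'disabled' if the site keeps thrashing.
		 */
		site->misses++;
	}

	/* Slow path: the retpoline-protected indirect call. */
	target(arg);
}

The 'disabled' flag is the out-of-slots fallback from the last bullet:
give up on the site and jump straight over it to the retpoline.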

Thanks again for the feedback, and please let me know what you think about
my concerns.
