From: Jan Beulich <jbeulich@suse.com>
To: George Dunlap <George.Dunlap@citrix.com>
Cc: Juergen Gross <jgross@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Henry Wang <Henry.Wang@arm.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>
Subject: Re: Revert of the 4.17 hypercall handler changes Re: [PATCH-for-4.17] xen: fix generated code for calling hypercall handlers
Date: Thu, 10 Nov 2022 09:09:17 +0100
Message-ID: <e9537cbc-7618-32f8-d5a7-990c661c0243@suse.com>
In-Reply-To: <C8C5E837-5A3B-4E79-A18E-41EE4B6A4086@citrix.com>

On 09.11.2022 21:16, George Dunlap wrote:
>> On 4 Nov 2022, at 05:01, Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
>> On 03/11/2022 16:36, Juergen Gross wrote:
>>> The code generated for the call_handlers_*() macros needs to avoid
>>> undefined behavior when multiple handlers share the same priority.
>>> The issue is the hypercall number being unverified fed into the macros
>>> and then used to set a mask via "mask = 1ULL << <hypercall-number>".
>>>
>>> Avoid a shift amount of more than 63 by setting mask to zero in case
>>> the hypercall number is too large.
>>>
>>> Fixes: eca1f00d0227 ("xen: generate hypercall interface related code")
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
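
(For context, a minimal sketch of the guarded mask computation under
discussion; the identifiers below are illustrative, not the actual
generated code:)

    /* Illustrative only: building a mask from an unverified hypercall number. */
    #include <stdint.h>

    #define NR_CALLS 64   /* assumed width of the priority mask */

    static inline uint64_t call_mask(unsigned int nr)
    {
        /*
         * 1ULL << nr is undefined behaviour for nr > 63, so an unverified
         * number has to be range-checked before the shift; out-of-range
         * numbers simply yield an empty mask.
         */
        return nr < NR_CALLS ? 1ULL << nr : 0;
    }
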
>>
>> This is not a suitable fix.  There being a security issue is just the
>> tip of the iceberg.
> 
> At the x86 Maintainer’s meeting on Monday, we (Andrew, Jan, and I) talked about this patch.  Here is my summary of the conversation (with the caveat that I may get some of the details wrong).

Just a couple of remarks, mainly to extend context:

> The proposed benefits of the series are:
> 
> 1. By removing indirect calls, it removes those as a “speculative attack surface”.
> 
> 2. By removing indirect calls, it provides some performance benefit, since indirect calls require an extra memory fetch.
> 
> 3. It avoids casting function pointers to function pointers of a different type.  Our current practice is *technically* UB, and is incompatible with some hardware safety mechanisms which we may want to take advantage of at some point in the future; the series addresses both.
> 
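To illustrate the casting point in item 3 above (a sketch with made-up
names; Xen's real handler types differ):

    /* Sketch only: handler name and types are invented for illustration. */
    typedef long (*hcall_fn_t)(unsigned long, unsigned long, unsigned long,
                               unsigned long, unsigned long);

    long do_example_op(unsigned int cmd, void *arg);  /* fewer, narrower args */

    long call_via_cast(unsigned long a1, unsigned long a2)
    {
        /*
         * The cast itself is valid C, but calling through a function
         * pointer type that is incompatible with the handler's actual
         * definition is undefined behaviour (C11 6.3.2.3p8).  The series
         * avoids this by generating a direct call with the declared
         * prototype for each handler instead.
         */
        hcall_fn_t fn = (hcall_fn_t)do_example_op;

        return fn(a1, a2, 0, 0, 0);
    }
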
> There were two incidental technical problems pointed out:
> 
> 1. A potential shift by 64 or more bits, which is UB; this has been fixed.
> 
> 2. The prototype for the kexec_op call was changed from unsigned long to unsigned int; this is an ABI change which will cause differing behavior.  Jan will be looking at how he can fix this, now that it’s been noted.

Patch was already sent and is now fully acked. Will go in later this morning.
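
To make the ABI point concrete, here is a simplified, hypothetical sketch of
why narrowing the parameter is guest-visible (the command value and function
names are made up; this is not the real kexec_op code):

    /* Simplified sketch; not the real kexec_op implementation. */
    #define EXAMPLE_CMD 0UL   /* illustrative command value */

    /* Old prototype: the full 64-bit guest argument is validated. */
    long op_taking_long(unsigned long op)
    {
        return op == EXAMPLE_CMD ? 0 : -38; /* -ENOSYS */
    }

    /*
     * New prototype: the upper 32 bits of the guest argument are silently
     * truncated, so e.g. 0x100000000UL now looks like command 0 and gets
     * accepted where the old code rejected it (a guest-visible change).
     */
    long op_taking_int(unsigned int op)
    {
        return op == EXAMPLE_CMD ? 0 : -38; /* -ENOSYS */
    }

The sketch only shows why the parameter width is part of the ABI; the actual
fix is the patch referred to above.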

> But the more fundamental costs include:
> 
> 1. The code is now much more difficult to reason about
> 
> 2. The code is much larger
> 
> 3. The long if/else chain could theoretically help hypercalls at the top of the chain, but would definitely begin to hurt hypercalls at the bottom of the chain; and the more hypercalls we add, the more of a theoretical performance penalty this will have

After Andrew's remark on how branch history works I've looked at Intel's
optimization reference manual (ORM), finding that they actually recommend
a hybrid approach: frequently used numbers dealt with separately,
infrequently used ones dealt with by a common indirect call.
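
To make that concrete, a rough sketch of such a hybrid dispatcher (handler
names, numbers, and the table shape are invented; this is not the actual
generated code):

    /* Rough sketch of a hybrid dispatcher; all names and numbers illustrative. */
    typedef long (*hcall_fn_t)(unsigned long, unsigned long, unsigned long,
                               unsigned long, unsigned long);

    extern long do_hot_op_a(unsigned long, unsigned long, unsigned long,
                            unsigned long, unsigned long);
    extern long do_hot_op_b(unsigned long, unsigned long, unsigned long,
                            unsigned long, unsigned long);
    extern const hcall_fn_t hcall_table[64];

    long hybrid_dispatch(unsigned int nr, unsigned long a1, unsigned long a2,
                         unsigned long a3, unsigned long a4, unsigned long a5)
    {
        switch ( nr )
        {
        /* Frequently used numbers: direct, non-indirect calls. */
        case 1: return do_hot_op_a(a1, a2, a3, a4, a5);
        case 2: return do_hot_op_b(a1, a2, a3, a4, a5);
        }

        /* Everything else: one common, bounds-checked indirect call. */
        if ( nr < 64 && hcall_table[nr] )
            return hcall_table[nr](a1, a2, a3, a4, a5);

        return -38; /* -ENOSYS */
    }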

> 4. By using 64-bit masks, the implementation limits the number of hypercalls to 64; a number we are likely to exceed if we implement ABIv2 to be compatible with AMD SEV.

This very much depends on how we encode the new hypercall numbers. In my
proposal a single bit is used, and handlers remain the same. Therefore, in
that model there wouldn't really be an extension of the hypercall number
space to cover here.

> Additionally, there is a question about whether some of the alleged benefits actually help:
> 
> 1. On AMD processors, we enable IBRS, which completely removes indirect calls as a speculative attack surface already.  And on Intel processors, this attack surface has already been significantly reduced.  So removing indirect calls is not as important an issue.
> 
> 2. Normal branches are *also* a surface of speculative attacks; so even apart from the above, all this series does is trade one potential attack surface for another.
> 
> 3. When we analyze theoretical performance with deep CPU pipelines and speculation in mind, the theoretical disadvantage of indirect branches goes away; and depending on the hardware, there is a theoretical performance degradation.

I'm inclined to change this to "may go away". As Andrew said on the call, an
important criterion for the performance of indirect calls is how long it takes
to recognize a misprediction, and hence how much work needs to be thrown away
and re-done. That in turn means the impact here is more significant when the
rate of mispredictions is higher.

> 4. From a practical perspective, the performance tests are very much insufficient to show either that this is an improvement, or that it does not cause a performance regression. To show that there hasn’t been a performance degradation, a battery of tests needs to be done on hardware from a variety of different vendors and CPU generations, since each of them will have different properties after all speculative mitigations have been applied.
> 
> So the argument is as follows:
> 
> There is no speculative benefit for the series; there is insufficient performance evidence, either to justify a performance benefit or to allay doubts about a performance regression; and such benefit as there is does not counterbalance the costs, so the series should be reverted.
> 
> At the end of the discussion, Jan and I agreed that Andrew had made a good case for the series to be removed at some point.  The discussion needs to be concluded on the list, naturally; and if there is a consensus to remove the series, the next question would be whether we should revert it now, before 4.17.0, or wait until after the release and revert it then (perhaps with a backport to 4.17.1).

As per the above, a third option may be worth considering: only partially
going back to the original model of using indirect calls (e.g. for all
hypercalls which aren't explicitly assigned a priority).

It's not just this which leaves me thinking that we shouldn't revert now,
but should instead take our time to decide what is going to be best long
term. If we then still decide to fully revert, doing the revert also for
4.17.x (x > 0) remains an option.

Jan

