Date: Thu, 29 Nov 2018 16:17:45 -0600
From: Josh Poimboeuf
To: Andy Lutomirski
Cc: Linus Torvalds, Steven Rostedt, Peter Zijlstra, X86 ML, LKML,
    Ard Biesheuvel, Ingo Molnar, Thomas Gleixner, Masami Hiramatsu,
    Jason Baron, Jiri Kosina, David Laight, Borislav Petkov,
    julia@ni.com, jeyu@kernel.org, "H. Peter Anvin"
Subject: Re: [PATCH v2 4/4] x86/static_call: Add inline static call implementation for x86-64
Message-ID: <20181129221745.jxxqjsocergfzrb4@treble>
References: <20181129124404.2fe55dd0@gandalf.local.home>
 <20181129125857.75c55b96@gandalf.local.home>
 <20181129134725.6d86ade6@gandalf.local.home>
 <20181129202452.56f4j2wdct6qbaqo@treble>
In-Reply-To: <20181129202452.56f4j2wdct6qbaqo@treble>

On Thu, Nov 29, 2018 at 02:24:52PM -0600, Josh Poimboeuf wrote:
> On Thu, Nov 29, 2018 at 11:27:00AM -0800, Andy Lutomirski wrote:
> > On Thu, Nov 29, 2018 at 11:08 AM Linus Torvalds wrote:
> > >
> > > On Thu, Nov 29, 2018 at 10:58 AM Linus Torvalds wrote:
> > > >
> > > > In contrast, if the call was wrapped in an inline asm, we'd *know* the
> > > > compiler couldn't turn a "call wrapper(%rip)" into anything else.
> > >
> > > Actually, I think I have a better model - if the caller is done with
> > > inline asm.
> > >
> > > What you can do then is basically add a single-byte prefix to the
> > > "call" instruction that does nothing (say, cs override), and then
> > > replace *that* with a 'int3' instruction.
> > >
> > > Boom. Done.
> > >
> > > Now, the "int3" handler can just update the instruction in-place, but
> > > leave the "int3" in place, and then return to the next instruction
> > > byte (which is just the normal branch instruction without the prefix
> > > byte).
> > >
> > > The cross-CPU case continues to work, because the 'int3' remains in
> > > place until after the IPI.
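
Just to check that I'm reading the prefix-byte idea right, here's roughly
how I picture the call site and the int3 path. This is only a sketch:
apart from text_poke(), the helpers below are made up, it ignores all the
locking/ordering details, and it assumes a 5-byte "call rel32" sitting
behind a one-byte cs override (6 bytes total):

#include <linux/types.h>
#include <asm/ptrace.h>            /* struct pt_regs */
#include <asm/text-patching.h>     /* text_poke() */

/*
 * Call site as emitted (6 bytes):
 *
 *      2e e8 <rel32>           cs call <target>
 *
 * Retargeting first turns the do-nothing prefix into a trap:
 *
 *      cc e8 <rel32>           int3; call <target>
 *
 * and a CPU that hits the site before the prefix is restored lands in
 * something like:
 */
static int static_call_int3_handler(struct pt_regs *regs)
{
        unsigned char *site = (unsigned char *)regs->ip - 1;
        s32 rel;

        if (!is_static_call_site(site))         /* made-up helper */
                return 0;

        /*
         * Rewrite the displacement in place; new_target_for() is made
         * up too, and rel32 is relative to the end of the call at
         * site + 6 (the same end address with or without the prefix).
         */
        rel = (long)new_target_for(site) - ((long)site + 6);
        text_poke(site + 2, &rel, 4);

        /*
         * Leave the int3 where the prefix byte was; the patcher only
         * restores the 2e after its sync IPI. regs->ip already points
         * at the 0xe8 opcode, i.e. the same call minus its prefix, so
         * just returning executes the freshly written call.
         */
        return 1;
}
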
> > Hmm, cute. But then the calls are in inline asm, which results in
> > giant turds like we have for the pvop vcalls. And, if they start
> > being used more generally, we potentially have ABI issues where the
> > calling convention isn't quite what the asm expects, and we explode.
> >
> > I propose a different solution:
> >
> > As in this patch set, we have a direct and an indirect version. The
> > indirect version remains exactly the same as in this patch set. The
> > direct version just only does the patching when all seems well: the
> > call instruction needs to be 0xe8, and we only do it when the thing
> > doesn't cross a cache line. Does that work? In the rare case where
> > the compiler generates something other than 0xe8 or crosses a cache
> > line, then the thing just remains as a call to the out of line jmp
> > trampoline. Does that seem reasonable? It's a very minor change to
> > the patch set.
>
> Maybe that would be ok. If my math is right, we would use the
> out-of-line version almost 5% of the time due to cache misalignment of
> the address.

BTW, this means that if any of a trampoline's callers crosses cache
boundaries then we won't be able to poison the trampoline. Which is
kind of sad.

-- 
Josh
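
P.S. For completeness, the arithmetic behind the "almost 5%", assuming
64-byte cache lines and that the write which has to be atomic is the
4-byte displacement of a plain 5-byte e8 call: the displacement starts
at insn + 1 and straddles a line boundary when it begins at offset 61,
62 or 63 within a line, i.e. for 3 of the 64 possible starting offsets
of the call, which is ~4.7%. A sketch of what the check could look like
(the line size and the function name are my own inventions):

#include <linux/types.h>

/* "insn" is the address of the 0xe8 opcode, already verified by the caller. */
static bool static_call_rel32_fits_in_line(unsigned long insn)
{
        unsigned long rel32 = insn + 1;         /* skip the opcode byte */

        /* Inline-patchable only if all four bytes share one 64-byte line. */
        return (rel32 >> 6) == ((rel32 + 3) >> 6);
}

Any call site that fails this check would keep calling the out-of-line
trampoline, which is why a single boundary-crossing caller is enough to
keep the trampoline unpoisonable.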