Date: Thu, 29 Oct 2020 13:21:22 +0000
From: Mark Rutland
To: Ard Biesheuvel
Cc: Peter Zijlstra, Catalin Marinas, Will Deacon, James Morse, Linux ARM
Subject: Re: [PATCH v2] arm64: implement support for static call trampolines
Message-ID: <20201029132122.GB61831@C02TD0UTHF1T.local>
References: <20201028184114.6834-1-ardb@kernel.org>
 <20201029115026.GA61831@C02TD0UTHF1T.local>

On Thu, Oct 29, 2020 at 12:59:43PM +0100, Ard Biesheuvel wrote:
> On Thu, 29 Oct 2020 at 12:50, Mark Rutland wrote:
> > On Wed, Oct 28, 2020 at 07:41:14PM +0100, Ard Biesheuvel wrote:
> > > Implement arm64 support for the 'unoptimized' static call variety,
> > > which routes all calls through a single trampoline that is patched
> > > to perform a tail call to the selected function.
> >
> > Given the complexity and subtlety here, do we actually need this?
>
> The more time I spend on this, the more I lean towards 'no' :-)

That would make this all simpler! :)

[...]

> > > Since static call targets may be located in modules loaded out of
> > > direct branching range, we need to be able to fall back to issuing
> > > a ADRP/ADD pair to load the branch target into R16 and use a BR
> > > instruction. As this involves patching more than a single B or NOP
> > > instruction (for which the architecture makes special provisions
> > > in terms of the synchronization needed), we may need to run the
> > > full blown instruction patching logic that uses stop_machine(). It
> > > also means that once we've patched in a ADRP/ADD pair once, we are
> > > quite restricted in the patching we can code subsequently, and we
> > > may end up using an indirect call after all (note that
> >
> > Noted. I guess we
> >
> > [...]
> >
> > ?

Sorry; I was playing on the commit message ending on "(note that", as I
wasn't sure where that was going.

> > > + *
> > > + * The architecture permits us to patch B instructions into NOPs or vice versa
> > > + * directly, but patching any other instruction sequence requires careful
> > > + * synchronization. Since branch targets may be out of range for ordinary
> > > + * immediate branch instructions, we may have to fall back to ADRP/ADD/BR
> > > + * sequences in some cases, which complicates things considerably; since any
> > > + * sleeping tasks may have been preempted right in the middle of any of these
> > > + * sequences, we have to carefully transform one into the other, and ensure
> > > + * that it is safe to resume execution at any point in the sequence for tasks
> > > + * that have already executed part of it.
> > > + *
> > > + * So the rules are:
> > > + * - we start out with (A) or (B)
> > > + * - a branch within immediate range can always be patched in at offset 0x4;
> > > + * - sequence (A) can be turned into (B) for NULL branch targets;
> > > + * - a branch outside of immediate range can be patched using (C), but only if
> > > + *   . the sequence being updated is (A) or (B), or
> > > + *   . the branch target address modulo 4k results in the same ADD opcode
> > > + *     (which could occur when patching the same far target a second time)
> > > + * - once we have patched in (C) we cannot go back to (A) or (B), so patching
> > > + *   in a NULL target now requires sequence (D);
> > > + * - if we cannot patch a far target using (C), we fall back to sequence (E),
> > > + *   which loads the function pointer from memory.
> >
> > Cases C-E all use an indirect branch, which goes against one of the
> > arguments for having static calls (the assumption that CPUs won't
> > mis-predict direct branches). Similarly case E is a literal pool with
> > more steps.
> >
> > That means that for us, static calls would only be an opportunistic
> > optimization rather than a hardening feature. Do they actually save us
> > much, or could we get by with an inline literal pool in the trampoline?
>
> Another assumption this is based on is that a literal load is more
> costly than a ADRP/ADD.

Agreed. I think in practice it's going to depend on the surrounding
context and microarchitecture. If the result is being fed into a BR, I'd
expect no difference on a big OoO core, and even for a small in-order
core it'll depend on how/when the core can forward the result relative
to predicting the branch.

> > It'd be much easier to reason consistently if the trampoline were
> > always:
> >
> > | BTI C
> > | LDR X16, _literal // pc-relative load
> > | BR X16
> > | _literal:
> > | < patch a 64-bit value here atomically >
> >
> > ... and I would strongly prefer that to having multiple sequences that
> > could all be live -- I'm really not keen on the complexity and subtlety
> > that entails.
>
> I don't see this having any benefit over a ADRP/LDR pair that accesses
> the static call key struct directly.

Even better!

Thanks,
Mark.