From: Viresh Kumar
To: linux-arm-kernel@lists.infradead.org, Julien Thierry
Subject: [PATCH v4.4 42/45] arm/arm64: smccc: Implement SMCCC v1.1 inline primitive
Date: Fri, 14 Jun 2019 08:38:25 +0530
Cc: Mark Rutland, Marc Zyngier, Viresh Kumar, Will Deacon,
	stable@vger.kernel.org, mark.brown@arm.com, Catalin Marinas,
	Russell King

From: Marc Zyngier

commit f2d3b2e8759a5833df6f022e42df2d581e6d843c upstream.

One of the major improvements of SMCCC v1.1 is that it only clobbers
the first 4 registers, both on 32 and 64bit. This means that it
becomes very easy to provide an inline version of the SMC call
primitive, and avoid performing a function call to stash the
registers that would otherwise be clobbered by SMCCC v1.0.

Reviewed-by: Robin Murphy
Tested-by: Ard Biesheuvel
Signed-off-by: Marc Zyngier
Signed-off-by: Catalin Marinas
Signed-off-by: Viresh Kumar
---
 include/linux/arm-smccc.h | 141 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 141 insertions(+)

diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index 4c45fd75db5d..60c2ad6316d8 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -122,5 +122,146 @@ asmlinkage void arm_smccc_hvc(unsigned long a0, unsigned long a1,
 			unsigned long a5, unsigned long a6, unsigned long a7,
 			struct arm_smccc_res *res);
 
+/* SMCCC v1.1 implementation madness follows */
+#ifdef CONFIG_ARM64
+
+#define SMCCC_SMC_INST	"smc #0"
+#define SMCCC_HVC_INST	"hvc #0"
+
+#elif defined(CONFIG_ARM)
+#include <asm/opcodes-sec.h>
+#include <asm/opcodes-virt.h>
+
+#define SMCCC_SMC_INST	__SMC(0)
+#define SMCCC_HVC_INST	__HVC(0)
+
+#endif
+
+#define ___count_args(_0, _1, _2, _3, _4, _5, _6, _7, _8, x, ...) x
+
+#define __count_args(...)						\
+	___count_args(__VA_ARGS__, 7, 6, 5, 4, 3, 2, 1, 0)
+
+#define __constraint_write_0						\
+	"+r" (r0), "=&r" (r1), "=&r" (r2), "=&r" (r3)
+#define __constraint_write_1						\
+	"+r" (r0), "+r" (r1), "=&r" (r2), "=&r" (r3)
+#define __constraint_write_2						\
+	"+r" (r0), "+r" (r1), "+r" (r2), "=&r" (r3)
+#define __constraint_write_3						\
+	"+r" (r0), "+r" (r1), "+r" (r2), "+r" (r3)
+#define __constraint_write_4	__constraint_write_3
+#define __constraint_write_5	__constraint_write_4
+#define __constraint_write_6	__constraint_write_5
+#define __constraint_write_7	__constraint_write_6
+
+#define __constraint_read_0
+#define __constraint_read_1
+#define __constraint_read_2
+#define __constraint_read_3
+#define __constraint_read_4	"r" (r4)
+#define __constraint_read_5	__constraint_read_4, "r" (r5)
+#define __constraint_read_6	__constraint_read_5, "r" (r6)
+#define __constraint_read_7	__constraint_read_6, "r" (r7)
+
+#define __declare_arg_0(a0, res)					\
+	struct arm_smccc_res   *___res = res;				\
+	register u32           r0 asm("r0") = a0;			\
+	register unsigned long r1 asm("r1");				\
+	register unsigned long r2 asm("r2");				\
+	register unsigned long r3 asm("r3")
+
+#define __declare_arg_1(a0, a1, res)					\
+	struct arm_smccc_res   *___res = res;				\
+	register u32           r0 asm("r0") = a0;			\
+	register typeof(a1)    r1 asm("r1") = a1;			\
+	register unsigned long r2 asm("r2");				\
+	register unsigned long r3 asm("r3")
+
+#define __declare_arg_2(a0, a1, a2, res)				\
+	struct arm_smccc_res   *___res = res;				\
+	register u32           r0 asm("r0") = a0;			\
+	register typeof(a1)    r1 asm("r1") = a1;			\
+	register typeof(a2)    r2 asm("r2") = a2;			\
+	register unsigned long r3 asm("r3")
+
+#define __declare_arg_3(a0, a1, a2, a3, res)				\
+	struct arm_smccc_res   *___res = res;				\
+	register u32           r0 asm("r0") = a0;			\
+	register typeof(a1)    r1 asm("r1") = a1;			\
+	register typeof(a2)    r2 asm("r2") = a2;			\
+	register typeof(a3)    r3 asm("r3") = a3
+
+#define __declare_arg_4(a0, a1, a2, a3, a4, res)			\
+	__declare_arg_3(a0, a1, a2, a3, res);				\
+	register typeof(a4) r4 asm("r4") = a4
+
+#define __declare_arg_5(a0, a1, a2, a3, a4, a5, res)			\
+	__declare_arg_4(a0, a1, a2, a3, a4, res);			\
+	register typeof(a5) r5 asm("r5") = a5
+
+#define __declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res)		\
+	__declare_arg_5(a0, a1, a2, a3, a4, a5, res);			\
+	register typeof(a6) r6 asm("r6") = a6
+
+#define __declare_arg_7(a0, a1, a2, a3, a4, a5, a6, a7, res)		\
+	__declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res);		\
+	register typeof(a7) r7 asm("r7") = a7
+
+#define ___declare_args(count, ...)  __declare_arg_ ## count(__VA_ARGS__)
+#define __declare_args(count, ...)   ___declare_args(count, __VA_ARGS__)
+
+#define ___constraints(count)						\
+	: __constraint_write_ ## count					\
+	: __constraint_read_ ## count					\
+	: "memory"
+#define __constraints(count)	___constraints(count)
+
+/*
+ * We have an output list that is not necessarily used, and GCC feels
+ * entitled to optimise the whole sequence away. "volatile" is what
+ * makes it stick.
+ */
+#define __arm_smccc_1_1(inst, ...)					\
+	do {								\
+		__declare_args(__count_args(__VA_ARGS__), __VA_ARGS__);	\
+		asm volatile(inst "\n"					\
+			     __constraints(__count_args(__VA_ARGS__)));	\
+		if (___res)						\
+			*___res = (typeof(*___res)){r0, r1, r2, r3};	\
+	} while (0)
+
+/*
+ * arm_smccc_1_1_smc() - make an SMCCC v1.1 compliant SMC call
+ *
+ * This is a variadic macro taking one to eight source arguments, and
+ * an optional return structure.
+ *
+ * @a0-a7: arguments passed in registers 0 to 7
+ * @res: result values from registers 0 to 3
+ *
+ * This macro is used to make SMC calls following SMC Calling Convention v1.1.
+ * The content of the supplied param are copied to registers 0 to 7 prior
+ * to the SMC instruction. The return values are updated with the content
+ * from register 0 to 3 on return from the SMC instruction if not NULL.
+ */
+#define arm_smccc_1_1_smc(...)	__arm_smccc_1_1(SMCCC_SMC_INST, __VA_ARGS__)
+
+/*
+ * arm_smccc_1_1_hvc() - make an SMCCC v1.1 compliant HVC call
+ *
+ * This is a variadic macro taking one to eight source arguments, and
+ * an optional return structure.
+ *
+ * @a0-a7: arguments passed in registers 0 to 7
+ * @res: result values from registers 0 to 3
+ *
+ * This macro is used to make HVC calls following SMC Calling Convention v1.1.
+ * The content of the supplied param are copied to registers 0 to 7 prior
+ * to the HVC instruction. The return values are updated with the content
+ * from register 0 to 3 on return from the HVC instruction if not NULL.
+ */
+#define arm_smccc_1_1_hvc(...)	__arm_smccc_1_1(SMCCC_HVC_INST, __VA_ARGS__)
+
 #endif /*__ASSEMBLY__*/
 #endif /*__LINUX_ARM_SMCCC_H*/
-- 
2.21.0.rc0.269.g1a574e7a288b
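
For illustration only, not part of the patch: a minimal sketch of how a
caller might probe the firmware with the new inline primitive. Only
arm_smccc_1_1_smc() and struct arm_smccc_res come from this patch;
ARM_SMCCC_ARCH_FEATURES_FUNC_ID and ARM_SMCCC_ARCH_WORKAROUND_1 are
assumed to be defined by other patches in this series.

	struct arm_smccc_res res;

	/* Ask the firmware whether ARCH_WORKAROUND_1 is implemented */
	arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
			  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
	if ((int)res.a0 >= 0) {
		/* Supported: invoke it, discarding the result registers */
		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
	}

The trailing argument always selects the result structure (or NULL):
with two macro arguments the call expands through __declare_arg_0()
(a0 plus res), with three through __declare_arg_1() (a0, a1 plus res),
and so on up to eight source arguments before res.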