From: Christoph Muellner
To: Palmer Dabbelt, Paul Walmsley, Albert Ou, Christoph Muellner,
	Heiko Stuebner, Philipp Tomsich, Aaron Durbin,
	linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH] riscv: Add Zawrs support for spinlocks
Date: Thu, 2 Jun 2022 16:10:32 +0200
Message-Id: <20220602141032.169907-1-christoph.muellner@vrull.io>
X-Mailer: git-send-email 2.35.3

The current RISC-V code uses the generic ticket lock implementation,
which calls the macros smp_cond_load_relaxed() and
smp_cond_load_acquire(). Currently, RISC-V also uses the generic
implementation of these macros. This patch introduces a RISC-V-specific
implementation of these macros that peels off the first loop iteration
and modifies the waiting loop so that the WRS instruction of the Zawrs
ISA extension can be used to stall the CPU.

The resulting implementation of smp_cond_load_*() only works for 32-bit
and 64-bit types on RV64 and for 32-bit types on RV32. This is caused
by the restrictions of the LR instruction (RISC-V only provides LR.W
and LR.D). Compile-time assertions guard this new restriction.

This patch uses the existing RISC-V ISA extension framework to detect
the presence of Zawrs at run time. If it is available, a NOP
instruction is replaced by WRS. A similar patch could add support for
the PAUSE instruction of the Zihintpause ISA extension.

The whole mechanism is gated by a Kconfig setting, which defaults to Y.

The Zawrs specification can be found here:
https://github.com/riscv/riscv-zawrs/blob/main/zawrs.adoc

Note that the Zawrs extension is not frozen or ratified yet. Therefore,
this patch is an RFC and is not intended to be merged.
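To make the new waiting pattern concrete, here is a minimal, hedged
sketch of what smp_cond_load_relaxed() boils down to for a 32-bit value
on RV64 on Zawrs-capable hardware. The names wait_u32_nonzero() and
load_reserved_w() are illustrative only and not part of the patch; the
WRS encoding is the ZAWRS_WRS value defined later in the patch.

#include <stdint.h>

#define ZAWRS_WRS ".long 0x1000073"	/* WRS encoding, as used in the patch */

/* LR.W loads *p and registers a reservation set covering it. */
static inline uint32_t load_reserved_w(uint32_t *p)
{
	uint32_t v;

	__asm__ __volatile__ ("lr.w %0, %1"
			      : "=&r" (v), "+A" (*p));
	return v;
}

/* Spin until *p becomes non-zero, stalling the hart with WRS while waiting. */
static uint32_t wait_u32_nonzero(uint32_t *p)
{
	/* Peeled first iteration: a plain load and condition check. */
	uint32_t v = *(volatile uint32_t *)p;

	if (v == 0) {
		for (;;) {
			/* Re-load under a reservation so WRS has something to wait on. */
			v = load_reserved_w(p);
			if (v != 0)
				break;
			/*
			 * WRS stalls until the reservation set is invalidated
			 * (another hart stores to *p), an interrupt becomes
			 * pending, or an implementation-defined timeout hits.
			 * This assumes the hart implements Zawrs; the patch
			 * instead guards the instruction behind an
			 * alternatives-patched NOP.
			 */
			__asm__ __volatile__ (ZAWRS_WRS);
		}
	}
	return v;
}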
Signed-off-by: Christoph Muellner
---
 arch/riscv/Kconfig                   | 10 +++
 arch/riscv/include/asm/barrier.h     | 97 ++++++++++++++++++++++++++++
 arch/riscv/include/asm/errata_list.h | 12 +++-
 arch/riscv/include/asm/hwcap.h       |  3 +-
 arch/riscv/kernel/cpu.c              |  1 +
 arch/riscv/kernel/cpufeature.c       | 13 ++++
 6 files changed, 133 insertions(+), 3 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 905e550e0fd3..054872317d4a 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -358,6 +358,16 @@ config RISCV_ISA_C

 	  If you don't know what to do here, say Y.

+config RISCV_ISA_ZAWRS
+	bool "Zawrs extension support"
+	select RISCV_ALTERNATIVE
+	default y
+	help
+	  Adds support to dynamically detect the presence of the Zawrs
+	  extension (wait for reservation set) and enable its usage.
+
+	  If you don't know what to do here, say Y.
+
 config RISCV_ISA_SVPBMT
 	bool "SVPBMT extension support"
 	depends on 64BIT && MMU
diff --git a/arch/riscv/include/asm/barrier.h b/arch/riscv/include/asm/barrier.h
index d0e24aaa2aa0..69b8f1f4b80c 100644
--- a/arch/riscv/include/asm/barrier.h
+++ b/arch/riscv/include/asm/barrier.h
@@ -12,6 +12,8 @@

 #ifndef __ASSEMBLY__

+#include <asm/errata_list.h>
+
 #define nop()		__asm__ __volatile__ ("nop")

 #define RISCV_FENCE(p, s) \
@@ -42,6 +44,69 @@ do {							\
 	___p1;						\
 })

+#if __riscv_xlen == 64
+
+#define __riscv_lrsc_word(t)				\
+	(sizeof(t) == sizeof(int) ||			\
+	 sizeof(t) == sizeof(long))
+
+#define __riscv_lr(ptr)					\
+	sizeof(*ptr) == sizeof(int) ? "lr.w" : "lr.d"
+
+#elif __riscv_xlen == 32
+
+#define __riscv_lrsc_word(ptr)				\
+	(sizeof(*ptr) == sizeof(int))
+
+#define __riscv_lr(t)	"lr.w"
+
+#else
+#error "Unexpected __riscv_xlen"
+#endif /* __riscv_xlen */
+
+#define compiletime_assert_atomic_lrsc_type(t)		\
+	compiletime_assert(__riscv_lrsc_word(t),	\
+		"Need type compatible with LR/SC instructions.")
+
+#define ___smp_load_reservedN(pfx, ptr)			\
+({							\
+	typeof(*ptr) ___p1;				\
+	__asm__ __volatile__ ("lr." pfx " %[p], %[c]\n"	\
+		: [p]"=&r" (___p1), [c]"+A"(*ptr));	\
+	___p1;						\
+})
+
+#define ___smp_load_reserved32(ptr)			\
+	___smp_load_reservedN("w", ptr)
+
+#define ___smp_load_reserved64(ptr)			\
+	___smp_load_reservedN("d", ptr)
+
+#define __smp_load_reserved_relaxed(ptr)		\
+({							\
+	typeof(*ptr) ___p1;				\
+	compiletime_assert_atomic_lrsc_type(*ptr);	\
+	if (sizeof(*ptr) == sizeof(int)) {		\
+		___p1 = ___smp_load_reserved32(ptr);	\
+	} else {					\
+		___p1 = ___smp_load_reserved64(ptr);	\
+	}						\
+	___p1;						\
+})
+
+#define __smp_load_reserved_acquire(ptr)		\
+({							\
+	typeof(*ptr) ___p1;				\
+	compiletime_assert_atomic_lrsc_type(*ptr);	\
+	if (sizeof(*ptr) == sizeof(int)) {		\
+		___p1 = ___smp_load_reserved32(ptr);	\
+	} else {					\
+		___p1 = ___smp_load_reserved64(ptr);	\
+	}						\
+	RISCV_FENCE(r,rw);				\
+	___p1;						\
+})
+
 /*
  * This is a very specific barrier: it's currently only used in two places in
  * the kernel, both in the scheduler. See include/linux/spinlock.h for the two
@@ -69,6 +134,38 @@ do {							\
  */
 #define smp_mb__after_spinlock()	RISCV_FENCE(iorw,iorw)

+#define smp_cond_load_relaxed(ptr, cond_expr)		\
+({							\
+	typeof(ptr) __PTR = (ptr);			\
+	__unqual_scalar_typeof(*ptr) VAL;		\
+	VAL = READ_ONCE(*__PTR);			\
+	if (!cond_expr) {				\
+		for (;;) {				\
+			VAL = __smp_load_reserved_relaxed(__PTR); \
+			if (cond_expr)			\
+				break;			\
+			ALT_WRS();			\
+		}					\
+	}						\
+	(typeof(*ptr))VAL;				\
+})
+
+#define smp_cond_load_acquire(ptr, cond_expr)		\
+({							\
+	typeof(ptr) __PTR = (ptr);			\
+	__unqual_scalar_typeof(*ptr) VAL;		\
+	VAL = smp_load_acquire(__PTR);			\
+	if (!cond_expr) {				\
+		for (;;) {				\
+			VAL = __smp_load_reserved_acquire(__PTR); \
+			if (cond_expr)			\
+				break;			\
+			ALT_WRS();			\
+		}					\
+	}						\
+	(typeof(*ptr))VAL;				\
+})
+
 #include <asm-generic/barrier.h>

 #endif /* __ASSEMBLY__ */
diff --git a/arch/riscv/include/asm/errata_list.h b/arch/riscv/include/asm/errata_list.h
index 9e2888dbb5b1..b9aa0b346493 100644
--- a/arch/riscv/include/asm/errata_list.h
+++ b/arch/riscv/include/asm/errata_list.h
@@ -19,8 +19,9 @@
 #define	ERRATA_THEAD_NUMBER 1
 #endif

-#define	CPUFEATURE_SVPBMT 0
-#define	CPUFEATURE_NUMBER 1
+#define	CPUFEATURE_ZAWRS 0
+#define	CPUFEATURE_SVPBMT 1
+#define	CPUFEATURE_NUMBER 2

 #ifdef __ASSEMBLY__

@@ -42,6 +43,13 @@ asm(ALTERNATIVE("sfence.vma %0", "sfence.vma", SIFIVE_VENDOR_ID,	\
 		ERRATA_SIFIVE_CIP_1200, CONFIG_ERRATA_SIFIVE_CIP_1200)	\
 		: : "r" (addr) : "memory")

+#define ZAWRS_WRS	".long 0x1000073"
+#define ALT_WRS()					\
+asm volatile(ALTERNATIVE(				\
+	"nop\n\t",					\
+	ZAWRS_WRS "\n\t",				\
+	0, CPUFEATURE_ZAWRS, CONFIG_RISCV_ISA_ZAWRS))
+
 /*
  * _val is marked as "will be overwritten", so need to set it to 0
  * in the default case.
diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
index 4e2486881840..c7dd8cc38bec 100644
--- a/arch/riscv/include/asm/hwcap.h
+++ b/arch/riscv/include/asm/hwcap.h
@@ -51,7 +51,8 @@ extern unsigned long elf_hwcap;
  * available logical extension id.
  */
 enum riscv_isa_ext_id {
-	RISCV_ISA_EXT_SSCOFPMF = RISCV_ISA_EXT_BASE,
+	RISCV_ISA_EXT_ZAWRS = RISCV_ISA_EXT_BASE,
+	RISCV_ISA_EXT_SSCOFPMF,
 	RISCV_ISA_EXT_SVPBMT,
 	RISCV_ISA_EXT_ID_MAX = RISCV_ISA_EXT_MAX,
 };
diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
index fba9e9f46a8c..6c3a10ff5358 100644
--- a/arch/riscv/kernel/cpu.c
+++ b/arch/riscv/kernel/cpu.c
@@ -87,6 +87,7 @@ int riscv_of_parent_hartid(struct device_node *node)
  * extensions by an underscore.
  */
 static struct riscv_isa_ext_data isa_ext_arr[] = {
+	__RISCV_ISA_EXT_DATA(zawrs, RISCV_ISA_EXT_ZAWRS),
 	__RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF),
 	__RISCV_ISA_EXT_DATA(svpbmt, RISCV_ISA_EXT_SVPBMT),
 	__RISCV_ISA_EXT_DATA("", RISCV_ISA_EXT_MAX),
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index dea3ea19deee..fc2c47a1784b 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -199,6 +199,7 @@ void __init riscv_fill_hwcap(void)
 		} else {
 			SET_ISA_EXT_MAP("sscofpmf", RISCV_ISA_EXT_SSCOFPMF);
 			SET_ISA_EXT_MAP("svpbmt", RISCV_ISA_EXT_SVPBMT);
+			SET_ISA_EXT_MAP("zawrs", RISCV_ISA_EXT_ZAWRS);
 		}
 #undef SET_ISA_EXT_MAP
 	}
@@ -251,6 +252,14 @@ struct cpufeature_info {
 	bool (*check_func)(unsigned int stage);
 };

+static bool __init_or_module cpufeature_zawrs_check_func(unsigned int stage)
+{
+	if (stage == RISCV_ALTERNATIVES_EARLY_BOOT)
+		return false;
+
+	return riscv_isa_extension_available(NULL, ZAWRS);
+}
+
 static bool __init_or_module cpufeature_svpbmt_check_func(unsigned int stage)
 {
 #ifdef CONFIG_RISCV_ISA_SVPBMT
@@ -267,6 +276,10 @@ static bool __init_or_module cpufeature_svpbmt_check_func(unsigned int stage)

 static const struct cpufeature_info __initdata_or_module
 cpufeature_list[CPUFEATURE_NUMBER] = {
+	{
+		.name = "zawrs",
+		.check_func = cpufeature_zawrs_check_func
+	},
 	{
 		.name = "svpbmt",
 		.check_func = cpufeature_svpbmt_check_func
-- 
2.35.3
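For context on how the generic ticket lock reaches the new macros: the
contended acquire path waits via atomic_cond_read_acquire(), which
expands to smp_cond_load_acquire(). The sketch below is a rough
illustration of that call path; ticket_lock_sketch is a hypothetical
name, and the body is paraphrased from the generic ticket lock rather
than copied verbatim from asm-generic/spinlock.h.

#include <linux/atomic.h>
#include <linux/types.h>

/* Simplified illustration of a contended ticket-lock acquire. */
static inline void ticket_lock_sketch(atomic_t *lock)
{
	/* Take a ticket: the next-ticket counter lives in the upper 16 bits. */
	u32 val = atomic_fetch_add(1 << 16, lock);
	u16 ticket = val >> 16;

	/* Uncontended: the now-serving half already matches our ticket. */
	if (ticket == (u16)val)
		return;

	/*
	 * Contended: wait until the now-serving half (lower 16 bits) equals
	 * our ticket. atomic_cond_read_acquire() expands to
	 * smp_cond_load_acquire(), i.e. on Zawrs-capable hardware the LR/WRS
	 * wait loop added by this patch. VAL names the value re-read on each
	 * iteration of that loop. (The in-tree lock additionally issues a
	 * full fence afterwards to strengthen the ordering.)
	 */
	atomic_cond_read_acquire(lock, ticket == (u16)VAL);
}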