From: Arnd Bergmann
Date: Tue, 6 Jun 2017 11:20:50 +0200
Subject: Re: [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
To: Palmer Dabbelt
Cc: Linux Kernel Mailing List, Olof Johansson, albert@sifive.com
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jun 6, 2017 at 6:56 AM, Palmer Dabbelt wrote:
> On Mon, 29 May 2017 04:17:40 PDT (-0700), Arnd Bergmann wrote:
>> On Sat, May 27, 2017 at 2:57 AM, Palmer Dabbelt wrote:
>>> On Tue, 23 May 2017 04:46:22 PDT (-0700), Arnd Bergmann wrote:
>>
>> This raises a much more general question about how you want to deal
>> with SoC implementations in the future. The two most common ways of
>> doing this are:
>>
>> - Every major platform gets a Kconfig option in the architecture menu,
>>   and that selects the essential drivers (irqchip, clocksource, pinctrl,
>>   clk, ...) that you need for that platform, along with architecture
>>   features (ISA level and optional features, ...)
>>
>> - The architecture code knows nothing about the SoC and just keeps
>>   to the basics (CPU architecture level selection, SMP/MMU/etc enabled,
>>   selecting drivers that everyone needs) and leaves the rest up to be
>>   selected in the defconfig file.
>>
>> On ARM, we have a bit of both, which is not as good as being
>> consistent one way or another.
>
> This is actually an open question in RISC-V land right now.
> We should be spinning up a platform specification working group this
> summer to try and work things out. While this will have to be ironed
> out, I believe the plan is to define a small number of base platforms
> (maybe one for embedded systems with no programmable PMAs, and one for
> larger machines with a bit more configurability). I'd anticipate that
> we'll have a platform Kconfig menu entry for every platform that gets
> written down in a specification (just like we have an entry for our
> base ISAs) and then defconfig entries for various implementations that
> select the relevant platform in addition to the drivers actually on
> board.
>
> For now we've got a handful of defconfig entries for the various
> platforms we support (the ISA simulator and our FPGA implementation),
> but there's no silicon so we're not stuck with what's there. I don't
> anticipate we'll add more than a handful of these until the platform
> spec work is underway.

Ok. Another related point, which may or may not be obvious: when you
come up with platform definitions, they should not be mutually
exclusive. For instance, supporting both MMU/NOMMU, big/little-endian
or 32/64-bit kernels will of course require building separate binaries,
but almost every other configuration option should be backward
compatible: an SMP kernel should run on a uniprocessor machine and vice
versa (using only one CPU), and you should be able to run a kernel with
support for multiple instruction set revisions by restricting the build
to the smallest subset of instructions.

On older ARM platforms and most MIPS platforms, we are still restricted
to building a kernel binary that will only run on a particular SoC
family and not even another SoC with the same CPU core. Fixing this for
most ARM platforms required a lot of work that you should avoid by
requiring them all to work with a common kernel from the start.
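For the first approach above (a platform Kconfig entry selecting its essential drivers), such an entry might look roughly like the sketch below. Every symbol name here is hypothetical, standing in for whatever a future platform specification defines; this is not taken from any actual arch/riscv Kconfig:

```kconfig
# Hypothetical platform entry in the arch/riscv Kconfig menu.
# SOC_EXAMPLE and the selected symbols are made-up placeholder names.
config SOC_EXAMPLE
	bool "Example RISC-V base platform"
	select EXAMPLE_INTC       # essential irqchip driver (hypothetical)
	select EXAMPLE_CLKSRC     # essential clocksource driver (hypothetical)
	help
	  Enable support for boards built around the (hypothetical)
	  example base platform. This selects the drivers every such
	  board needs, while board-specific drivers are left to the
	  defconfig.
```

Board-level defconfigs would then enable `SOC_EXAMPLE` and add only the drivers actually present on that board.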
Similarly, ARM has an incompatibility that prevents a single kernel
from running on both older (ARMv4/v5) and newer (ARMv6 or higher)
instruction set versions. Avoiding this on RISC-V may become
challenging, as one of the strengths of the architecture is its
flexibility: someone may come up with their own architecture extensions
that they really want to support in the kernel but can't get to work
without making the kernel binary incompatible with other
implementations. Do you already have a policy for how to deal with
this? Usually by the time someone ships hardware, it's too late, and it
becomes hard to argue for their kernel port not to be merged.

>>>>> +config RV_ATOMIC
>>>>> +	bool "Use atomic memory instructions (RV32A or RV64A)"
>>>>> +	default y
>>>>> +
>>>>> +config RV_SYSRISCV_ATOMIC
>>>>> +	bool "Include support for atomic operation syscalls"
>>>>> +	default n
>>>>> +	help
>>>>> +	  If atomic memory instructions are present, i.e.,
>>>>> +	  CONFIG_RV_ATOMIC, this includes support for the syscall that
>>>>> +	  provides atomic accesses. This is only useful to run
>>>>> +	  binaries that require atomic access but were compiled with
>>>>> +	  -mno-atomic.
>>>>> +
>>>>> +	  If CONFIG_RV_ATOMIC is unset, this option is mandatory.
>>>>
>>>> I wonder what the cost would be of always providing the syscalls
>>>> for compatibility. This is also something worth putting into a VDSO
>>>> instead of exposing the syscall:
>>>>
>>>> That way, user space that is built with -mno-atomic can call into
>>>> the vdso, which depending on the hardware support will perform
>>>> the atomic operation directly or enter the syscall.
>>>
>>> I think the theory is that most Linux-capable systems are going to
>>> have the A extension, and that this is really there for
>>> educational/hobbyist use.
>>>
>>> The actual implementation isn't that expensive: it's no-SMP-only, so
>>> it's just a disable/enable interrupts that wraps a compare+exchange.
>>> I think putting it
I think putting it >>> in the VDSO is actually the right thing to do: that way non-A user code will >>> still have reasonable performance. >>> >>> I'll add it to my TODO list. >> >> This would be another variant of the question above. > > After talking with people a bit about this, I think the sane thing to do here > is to just always provide the atomic system call (and VDSO implementation), and > to default to enabling the atomic extensions. While I expect non-atomic > implementations to be very much the exception, there's a stronger argument for > supporting non-atomic userspace programs with reasonable performance on atomic > systems -- the first round of small Linux-capable embedded systems might not > support atomic (though ours will), but we don't want to wed software to a > syscall in this case. > > Before the VDSO was suggested I was leaning towards disabling the atomic system > call by default, but since the VDSO implementation will provide very good > performance for non-atomic userspace code on atomic systems I think it's best > to just enable it everywhere. It's only a few bytes of binary size. Ok. Arnd