* RISC-V Linux Port v1
@ 2017-05-23  0:41 Palmer Dabbelt
  2017-05-23  0:41 ` [PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64} Palmer Dabbelt
                   ` (11 more replies)
  0 siblings, 12 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-23  0:41 UTC (permalink / raw)
  To: linux-kernel, Arnd Bergmann, olof; +Cc: albert

We'd like to submit for inclusion in Linux a port for the RISC-V architecture.
While it is doubtless incomplete, we think it is far enough along to start
the upstreaming process.  Our binutils and GCC ports have been accepted and
released, and we plan on submitting glibc patches soon.

This port targets Version 1.10 of the RISC-V Privileged ISA, and supports both
the RV32 and RV64 user ISAs.  The RISC-V community and the 60-some member
companies of the RISC-V Foundation are quite eager to have a single, standard
Linux port.  We thank you in advance for your help in this process and for your
feedback on the software contribution itself.

These patches build and boot on top of 4.12-rc2.  I understand that the merge
window is closed, but it was suggested that the best time to submit a new
architecture port is right after an RC2, the earliest point at which the tree
is usually churn-free enough.  While we optimistically hope
that we can get the port in for the 4.13 merge window, we're also eager to
ensure that the user-visible ABI is sane so we can proceed with our glibc port.
We'd like to at least get any user ABI issues shaken out as soon as possible,
even if we don't make it into 4.13.

Albert and I will volunteer to maintain this port if it's OK with everyone.

We'd like to thank the various members of the RISC-V software community who
have helped us with the port.

Thanks!

In addition to the threaded messages, our port can be found on GitHub:

  https://github.com/riscv/riscv-linux/tree/riscv-for-submission-v1
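
For anyone who wants to give the port a spin, here is a rough sketch of one
way to fetch this tree and build the default (riscv64_spike) configuration.
The cross toolchain prefix below is just an assumption; substitute whatever
RISC-V cross compiler you have installed:

  git clone -b riscv-for-submission-v1 https://github.com/riscv/riscv-linux.git
  cd riscv-linux
  make ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- defconfig
  make ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- vmlinux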

[PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64}
[PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
[PATCH 3/7] RISC-V: Device Tree Documentation
[PATCH 4/7] RISC-V: arch/riscv/include
[PATCH 5/7] RISC-V: arch/riscv/lib
[PATCH 6/7] RISC-V: arch/riscv/kernel
[PATCH 7/7] RISC-V: arch/riscv/mm

^ permalink raw reply	[flat|nested] 283+ messages in thread

* [PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64}
  2017-05-23  0:41 RISC-V Linux Port v1 Palmer Dabbelt
@ 2017-05-23  0:41 ` Palmer Dabbelt
  2017-05-23 11:30   ` Arnd Bergmann
  2017-05-23  0:41 ` [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs Palmer Dabbelt
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-23  0:41 UTC (permalink / raw)
  To: linux-kernel, Arnd Bergmann, olof; +Cc: albert, Palmer Dabbelt

RISC-V has both 32-bit and 64-bit base ISAs, but they are very similar.
Like some other platforms, we'd like to share one arch directory between
the two of them.
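
As a sketch (the cross toolchain prefixes here are just assumptions), both
of the following configure a kernel out of the shared arch/riscv directory:

  make ARCH=riscv64 CROSS_COMPILE=riscv64-unknown-linux-gnu- defconfig
  make ARCH=riscv32 CROSS_COMPILE=riscv32-unknown-linux-gnu- defconfig

In both cases the top-level Makefile sets SRCARCH=riscv, so one arch
directory serves both base ISAs.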
---
 Makefile | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/Makefile b/Makefile
index 63e10bd4f14a..0606f28cd4fd 100644
--- a/Makefile
+++ b/Makefile
@@ -269,6 +269,14 @@ ifeq ($(ARCH),x86_64)
         SRCARCH := x86
 endif
 
+# Additional ARCH settings for RISC-V
+ifeq ($(ARCH),riscv32)
+	SRCARCH := riscv
+endif
+ifeq ($(ARCH),riscv64)
+	SRCARCH := riscv
+endif
+
 # Additional ARCH settings for sparc
 ifeq ($(ARCH),sparc32)
        SRCARCH := sparc
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
  2017-05-23  0:41 RISC-V Linux Port v1 Palmer Dabbelt
  2017-05-23  0:41 ` [PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64} Palmer Dabbelt
@ 2017-05-23  0:41 ` Palmer Dabbelt
  2017-05-23  1:27   ` Olof Johansson
                     ` (3 more replies)
  2017-05-23  0:41 ` [PATCH 3/7] RISC-V: Device Tree Documentation Palmer Dabbelt
                   ` (9 subsequent siblings)
  11 siblings, 4 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-23  0:41 UTC (permalink / raw)
  To: linux-kernel, Arnd Bergmann, olof; +Cc: albert, Palmer Dabbelt

---
 arch/riscv/.gitignore                |  35 ++++
 arch/riscv/Kconfig                   | 300 +++++++++++++++++++++++++++++++++++
 arch/riscv/Makefile                  |  64 ++++++++
 arch/riscv/configs/riscv32_spike     |  47 ++++++
 arch/riscv/configs/riscv64_freedom-u |  52 ++++++
 arch/riscv/configs/riscv64_qemu      |  64 ++++++++
 arch/riscv/configs/riscv64_spike     |  45 ++++++
 7 files changed, 607 insertions(+)
 create mode 100644 arch/riscv/.gitignore
 create mode 100644 arch/riscv/Kconfig
 create mode 100644 arch/riscv/Makefile
 create mode 100644 arch/riscv/configs/riscv32_spike
 create mode 100644 arch/riscv/configs/riscv64_freedom-u
 create mode 100644 arch/riscv/configs/riscv64_qemu
 create mode 100644 arch/riscv/configs/riscv64_spike

diff --git a/arch/riscv/.gitignore b/arch/riscv/.gitignore
new file mode 100644
index 000000000000..376d06eb5d52
--- /dev/null
+++ b/arch/riscv/.gitignore
@@ -0,0 +1,35 @@
+# Now un-ignore all files.
+!*
+
+# But then re-ignore the files listed in the Linux .gitignore
+# Normal rules
+#
+.*
+*.o
+*.o.*
+*.a
+*.s
+*.ko
+*.so
+*.so.dbg
+*.mod.c
+*.i
+*.lst
+*.symtypes
+*.order
+modules.builtin
+*.elf
+*.bin
+*.gz
+*.bz2
+*.lzma
+*.xz
+*.lzo
+*.patch
+*.gcno
+
+include/generated
+kernel/vmlinux.lds
+
+# Then reinclude .gitignore.
+!.gitignore
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
new file mode 100644
index 000000000000..510ead1d3343
--- /dev/null
+++ b/arch/riscv/Kconfig
@@ -0,0 +1,300 @@
+#
+# For a description of the syntax of this configuration file,
+# see Documentation/kbuild/kconfig-language.txt.
+#
+
+config RISCV
+	def_bool y
+	select OF
+	select OF_EARLY_FLATTREE
+	select OF_IRQ
+	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
+	select ARCH_WANT_FRAME_POINTERS
+	select CLONE_BACKWARDS
+	select COMMON_CLK
+	select GENERIC_CLOCKEVENTS
+	select GENERIC_CPU_DEVICES
+	select GENERIC_IRQ_SHOW
+	select GENERIC_PCI_IOMAP
+	select GENERIC_STRNCPY_FROM_USER
+	select GENERIC_STRNLEN_USER
+	select GENERIC_SMP_IDLE_THREAD
+	select GENERIC_ATOMIC64 if !64BIT || !RV_ATOMIC
+	select ARCH_WANT_OPTIONAL_GPIOLIB
+	select HAVE_MEMBLOCK
+	select HAVE_DMA_API_DEBUG
+	select HAVE_DMA_CONTIGUOUS
+	select HAVE_GENERIC_DMA_COHERENT
+	select IRQ_DOMAIN
+	select NO_BOOTMEM
+	select RV_ATOMIC if SMP
+	select RV_SYSRISCV_ATOMIC if !RV_ATOMIC
+	select SPARSE_IRQ
+	select SYSCTL_EXCEPTION_TRACE
+	select HAVE_ARCH_TRACEHOOK
+	select MODULES_USE_ELF_RELA if MODULES
+
+config MMU
+	def_bool y
+
+# even on 32-bit, physical (and DMA) addresses are > 32-bits
+config ARCH_PHYS_ADDR_T_64BIT
+	def_bool y
+
+config ARCH_DMA_ADDR_T_64BIT
+	def_bool y
+
+config STACKTRACE_SUPPORT
+	def_bool y
+
+config RWSEM_GENERIC_SPINLOCK
+	def_bool y
+
+config GENERIC_BUG
+	def_bool y
+	depends on BUG
+	select GENERIC_BUG_RELATIVE_POINTERS if 64BIT
+
+config GENERIC_BUG_RELATIVE_POINTERS
+	bool
+
+config GENERIC_CALIBRATE_DELAY
+	def_bool y
+
+config GENERIC_CSUM
+	def_bool y
+
+config GENERIC_HWEIGHT
+	def_bool y
+
+config PGTABLE_LEVELS
+	int
+	default 3 if 64BIT
+	default 2
+
+config HAVE_KPROBES
+	def_bool n
+
+config DMA_NOOP_OPS
+	def_bool y
+
+menu "Platform type"
+
+config SMP
+	bool "Symmetric Multi-Processing"
+	help
+	  This enables support for systems with more than one CPU.  If
+	  you say N here, the kernel will run on single and
+	  multiprocessor machines, but will use only one CPU of a
+	  multiprocessor machine. If you say Y here, the kernel will run
+	  on many, but not all, single processor machines. On a single
+	  processor machine, the kernel will run faster if you say N
+	  here.
+
+	  If you don't know what to do here, say N.
+
+config NR_CPUS
+	int "Maximum number of CPUs (2-32)"
+	range 2 32
+	depends on SMP
+	default "8"
+
+choice
+	prompt "CPU selection"
+	default CPU_RV_GENERIC
+
+config CPU_RV_GENERIC
+	bool "Generic RISC-V"
+	select CPU_SUPPORTS_32BIT_KERNEL
+	select CPU_SUPPORTS_64BIT_KERNEL
+
+endchoice
+
+config PLIC
+	bool "Platform-Level Interrupt Controller"
+	default y
+	help
+	   This enables support for the PLIC chip found in standard RISC-V
+	   systems. The PLIC is the top-most interrupt controller found in
+	   the system, connected directly to the core complex. All other
+	   interrupt sources (MSI, GPIO, etc) are subordinate to the PLIC.
+
+	   If you don't know what to do here, say Y.
+
+config CPU_SUPPORTS_32BIT_KERNEL
+	bool
+config CPU_SUPPORTS_64BIT_KERNEL
+	bool
+
+config SBI_CONSOLE
+	tristate "SBI console support"
+	select TTY
+	default y
+
+config RVC
+	bool "Use compressed instructions (RV32C or RV64C)"
+	default n
+
+config RV_ATOMIC
+	bool "Use atomic memory instructions (RV32A or RV64A)"
+	default y
+
+config RV_SYSRISCV_ATOMIC
+	bool "Include support for atomic operation syscalls"
+	default n
+	help
+	  If atomic memory instructions are present (i.e., CONFIG_RV_ATOMIC
+	  is set), this option includes support for the syscall that
+	  provides atomic accesses.  This is only useful for running
+	  binaries that require atomic accesses but were compiled with
+	  -mno-atomic.
+
+	  If CONFIG_RV_ATOMIC is unset, this option is mandatory.
+
+config RV_PUM
+	def_bool y
+	prompt "Protect User Memory" if EXPERT
+	---help---
+	  Protect User Memory (PUM) prevents the kernel from inadvertently
+	  accessing user-space memory.  There is a small performance cost
+	  and kernel size increase if this is enabled.
+
+	  If unsure, say Y.
+
+endmenu
+
+menu "Kernel type"
+
+choice
+	prompt "Kernel code model"
+	default 64BIT
+
+config 32BIT
+	bool "32-bit kernel"
+	depends on CPU_SUPPORTS_32BIT_KERNEL
+	help
+	  Select this option to build a 32-bit kernel.
+
+config 64BIT
+	bool "64-bit kernel"
+	depends on CPU_SUPPORTS_64BIT_KERNEL
+	help
+	  Select this option to build a 64-bit kernel.
+
+endchoice
+
+source "mm/Kconfig"
+
+source "kernel/Kconfig.preempt"
+
+source "kernel/Kconfig.hz"
+
+endmenu
+
+menu "Bus support"
+
+config PCI
+	bool "PCI support"
+	select PCI_MSI
+	help
+	  This feature enables support for the PCI bus system. If you say Y
+	  here, the kernel will include drivers and infrastructure code
+	  to support PCI bus devices.
+
+config PCI_DOMAINS
+	def_bool PCI
+
+config PCI_DOMAINS_GENERIC
+	def_bool PCI
+
+config PCI_SYSCALL
+	def_bool PCI
+
+source "drivers/pci/Kconfig"
+
+endmenu
+
+source "init/Kconfig"
+
+source "kernel/Kconfig.freezer"
+
+menu "Executable file formats"
+
+source "fs/Kconfig.binfmt"
+
+endmenu
+
+menu "Power management options"
+
+source kernel/power/Kconfig
+
+endmenu
+
+source "net/Kconfig"
+
+source "drivers/Kconfig"
+
+source "fs/Kconfig"
+
+menu "Kernel hacking"
+
+config CMDLINE_BOOL
+	bool "Built-in kernel command line"
+	default n
+	help
+	  For most platforms, it is the firmware or second-stage bootloader
+	  that specifies the kernel command line options by default.
+	  However, it might be necessary or advantageous to either override
+	  the default kernel command line or add a few extra options to it.
+	  For such cases, this option allows hardcoding command line options
+	  directly into the kernel.
+
+	  For that, choose 'Y' here and fill in the extra boot parameters
+	  in CONFIG_CMDLINE.
+
+	  The built-in options will be concatenated to the default command
+	  line if CMDLINE_OVERRIDE is set to 'N'. Otherwise, the default
+	  command line will be ignored and replaced by the built-in string.
+
+config CMDLINE
+	string "Built-in kernel command string"
+	depends on CMDLINE_BOOL
+	default ""
+	help
+	  Supply command-line options at build time by entering them here.
+
+config CMDLINE_OVERRIDE
+	bool "Built-in command line overrides bootloader arguments"
+	default n
+	depends on CMDLINE_BOOL
+	help
+	  Set this option to 'Y' to have the kernel ignore the bootloader
+	  or firmware command line.  Instead, the built-in command line
+	  will be used exclusively.
+
+config EARLY_PRINTK
+	bool "Early printk"
+	default n
+	help
+	  This option enables special console drivers which allow the kernel
+	  to print messages very early in the bootup process.
+
+	  This is useful for kernel debugging when your machine crashes very
+	  early before the console code is initialized. For normal operation
+	  it is not recommended because it looks ugly and doesn't cooperate
+	  with klogd/syslogd or the X server. You should normally say N here,
+	  unless you want to debug such a crash.
+
+
+source "lib/Kconfig.debug"
+
+config CMDLINE_BOOL
+	bool
+endmenu
+
+source "security/Kconfig"
+
+source "crypto/Kconfig"
+
+source "lib/Kconfig"
+
diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
new file mode 100644
index 000000000000..07ef200e0675
--- /dev/null
+++ b/arch/riscv/Makefile
@@ -0,0 +1,64 @@
+# This file is included by the global makefile so that you can add your own
+# architecture-specific flags and dependencies. Remember to have actions
+# for "archclean" and "archdep" for cleaning up and making dependencies for
+# this architecture
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License.  See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+
+LDFLAGS         :=
+OBJCOPYFLAGS    := -O binary
+LDFLAGS_vmlinux :=
+KBUILD_AFLAGS_MODULE += -fPIC
+KBUILD_CFLAGS_MODULE += -fPIC
+
+ifeq ($(ARCH),riscv)
+	KBUILD_DEFCONFIG = riscv64_spike
+else
+	KBUILD_DEFCONFIG = $(ARCH)_spike
+endif
+
+export BITS
+ifeq ($(CONFIG_64BIT),y)
+	BITS := 64
+	UTS_MACHINE := riscv64
+
+	KBUILD_CFLAGS += -mabi=lp64
+	KBUILD_AFLAGS += -mabi=lp64
+	KBUILD_MARCH = rv64im
+	LDFLAGS += -melf64lriscv
+else
+	BITS := 32
+	UTS_MACHINE := riscv32
+
+	KBUILD_CFLAGS += -mabi=ilp32
+	KBUILD_AFLAGS += -mabi=ilp32
+	KBUILD_MARCH = rv32im
+	LDFLAGS += -melf32lriscv
+endif
+
+ifeq ($(CONFIG_RV_ATOMIC),y)
+	KBUILD_RV_ATOMIC = a
+endif
+
+KBUILD_CFLAGS += -Wall
+
+ifeq ($(CONFIG_RVC),y)
+	KBUILD_RVC = c
+endif
+
+KBUILD_AFLAGS += -march=$(KBUILD_MARCH)$(KBUILD_RV_ATOMIC)fd$(KBUILD_RVC)
+
+KBUILD_CFLAGS += -march=$(KBUILD_MARCH)$(KBUILD_RV_ATOMIC)$(KBUILD_RVC)
+KBUILD_CFLAGS += -mno-save-restore
+KBUILD_CFLAGS += -mstrict-align
+
+head-y := arch/riscv/kernel/head.o
+
+core-y += arch/riscv/kernel/ arch/riscv/mm/
+
+libs-y += arch/riscv/lib/
+
+all: vmlinux
diff --git a/arch/riscv/configs/riscv32_spike b/arch/riscv/configs/riscv32_spike
new file mode 100644
index 000000000000..c224f7dcb4da
--- /dev/null
+++ b/arch/riscv/configs/riscv32_spike
@@ -0,0 +1,47 @@
+CONFIG_64BIT=n
+CONFIG_32BIT=y
+CONFIG_PCI=y
+CONFIG_DEFAULT_HOSTNAME="ucbvax"
+# CONFIG_CROSS_MEMORY_ATTACH is not set
+# CONFIG_FHANDLE is not set
+CONFIG_NAMESPACES=y
+CONFIG_EMBEDDED=y
+# CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_EFI_PARTITION is not set
+# CONFIG_IOSCHED_DEADLINE is not set
+CONFIG_NET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_FIRMWARE_IN_KERNEL is not set
+# CONFIG_BLK_DEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_VT is not set
+CONFIG_DEVKMEM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+CONFIG_FB=y
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_EXT2_FS=y
+# CONFIG_FILE_LOCKING is not set
+# CONFIG_DNOTIFY is not set
+# CONFIG_INOTIFY_USER is not set
+# CONFIG_PROC_PAGE_MONITOR is not set
+# CONFIG_SYSFS is not set
+CONFIG_TMPFS=y
+# CONFIG_MISC_FILESYSTEMS is not set
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_SECTION_MISMATCH=y
+# CONFIG_FRAME_POINTER is not set
+# CONFIG_CRYPTO_HW is not set
diff --git a/arch/riscv/configs/riscv64_freedom-u b/arch/riscv/configs/riscv64_freedom-u
new file mode 100644
index 000000000000..519cb8219b40
--- /dev/null
+++ b/arch/riscv/configs/riscv64_freedom-u
@@ -0,0 +1,52 @@
+CONFIG_CROSS_COMPILE="riscv64-unknown-linux-gnu-"
+CONFIG_DEFAULT_HOSTNAME="ucbvax"
+# CONFIG_CROSS_MEMORY_ATTACH is not set
+# CONFIG_FHANDLE is not set
+CONFIG_NAMESPACES=y
+# CONFIG_SGETMASK_SYSCALL is not set
+CONFIG_EMBEDDED=y
+# CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_EFI_PARTITION is not set
+# CONFIG_IOSCHED_DEADLINE is not set
+# CONFIG_COMPACTION is not set
+CONFIG_HZ_100=y
+CONFIG_PCI_MSI=y
+CONFIG_NET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_FIRMWARE_IN_KERNEL is not set
+CONFIG_OF=y
+# CONFIG_BLK_DEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_VT is not set
+CONFIG_DEVKMEM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+CONFIG_FB=y
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_EXT2_FS=y
+# CONFIG_FILE_LOCKING is not set
+# CONFIG_DNOTIFY is not set
+# CONFIG_INOTIFY_USER is not set
+# CONFIG_PROC_PAGE_MONITOR is not set
+# CONFIG_SYSFS is not set
+CONFIG_TMPFS=y
+# CONFIG_MISC_FILESYSTEMS is not set
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_PRINTK_TIME=y
+# CONFIG_UNUSED_SYMBOLS is not set
+CONFIG_DEBUG_SECTION_MISMATCH=y
+# CONFIG_FRAME_POINTER is not set
+# CONFIG_EARLY_PRINTK is not set
+# CONFIG_CRYPTO_HW is not set
diff --git a/arch/riscv/configs/riscv64_qemu b/arch/riscv/configs/riscv64_qemu
new file mode 100644
index 000000000000..4b1190ad2676
--- /dev/null
+++ b/arch/riscv/configs/riscv64_qemu
@@ -0,0 +1,64 @@
+# CONFIG_COMPACTION is not set
+# CONFIG_CROSS_MEMORY_ATTACH is not set
+CONFIG_HZ_100=y
+# CONFIG_CROSS_COMPILE is not set
+CONFIG_DEFAULT_HOSTNAME="ucbvax"
+CONFIG_NAMESPACES=y
+CONFIG_EMBEDDED=y
+# CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_IOSCHED_DEADLINE is not set
+CONFIG_NET=y
+CONFIG_PACKET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_LRO is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_FIRMWARE_IN_KERNEL is not set
+# CONFIG_BLK_DEV is not set
+CONFIG_SCSI=y
+CONFIG_BLK_DEV_SD=y
+CONFIG_SCSI_VIRTIO=y
+CONFIG_NETDEVICES=y
+CONFIG_VIRTIO_NET=y
+# CONFIG_ETHERNET is not set
+# CONFIG_WLAN is not set
+# CONFIG_INPUT_MOUSEDEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_VT is not set
+CONFIG_SERIAL_8250=y
+# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_NR_UARTS=1
+CONFIG_SERIAL_8250_RUNTIME_UARTS=1
+CONFIG_VIRTIO_CONSOLE=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+CONFIG_FB=y
+# CONFIG_USB_SUPPORT is not set
+CONFIG_VIRTIO_MMIO=y
+CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_EXT4_FS=y
+# CONFIG_FILE_LOCKING is not set
+# CONFIG_DNOTIFY is not set
+# CONFIG_INOTIFY_USER is not set
+# CONFIG_PROC_PAGE_MONITOR is not set
+CONFIG_TMPFS=y
+# CONFIG_MISC_FILESYSTEMS is not set
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_CMDLINE_BOOL=y
+CONFIG_CMDLINE="virtio_mmio.device=0x200@0x400:1 virtio_mmio.device=0x200@0x600:2 virtio_mmio.device=0x200@0x800:3 lpj=100000"
+CONFIG_CMDLINE_OVERRIDE=y
+CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_SECTION_MISMATCH=y
+# CONFIG_CRYPTO_ANSI_CPRNG is not set
+# CONFIG_CRYPTO_HW is not set
diff --git a/arch/riscv/configs/riscv64_spike b/arch/riscv/configs/riscv64_spike
new file mode 100644
index 000000000000..a48e92cba88e
--- /dev/null
+++ b/arch/riscv/configs/riscv64_spike
@@ -0,0 +1,45 @@
+CONFIG_PCI=y
+CONFIG_DEFAULT_HOSTNAME="ucbvax"
+# CONFIG_CROSS_MEMORY_ATTACH is not set
+# CONFIG_FHANDLE is not set
+CONFIG_NAMESPACES=y
+CONFIG_EMBEDDED=y
+# CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_EFI_PARTITION is not set
+# CONFIG_IOSCHED_DEADLINE is not set
+CONFIG_NET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_FIRMWARE_IN_KERNEL is not set
+# CONFIG_BLK_DEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_VT is not set
+CONFIG_DEVKMEM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+CONFIG_FB=y
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_EXT2_FS=y
+# CONFIG_FILE_LOCKING is not set
+# CONFIG_DNOTIFY is not set
+# CONFIG_INOTIFY_USER is not set
+# CONFIG_PROC_PAGE_MONITOR is not set
+# CONFIG_SYSFS is not set
+CONFIG_TMPFS=y
+# CONFIG_MISC_FILESYSTEMS is not set
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_SECTION_MISMATCH=y
+# CONFIG_FRAME_POINTER is not set
+# CONFIG_CRYPTO_HW is not set
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 3/7] RISC-V: Device Tree Documentation
  2017-05-23  0:41 RISC-V Linux Port v1 Palmer Dabbelt
  2017-05-23  0:41 ` [PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64} Palmer Dabbelt
  2017-05-23  0:41 ` [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs Palmer Dabbelt
@ 2017-05-23  0:41 ` Palmer Dabbelt
  2017-05-23 12:03   ` Arnd Bergmann
  2017-05-23  0:41 ` [PATCH 4/7] RISC-V: arch/riscv/include Palmer Dabbelt
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-23  0:41 UTC (permalink / raw)
  To: linux-kernel, Arnd Bergmann, olof; +Cc: albert, Palmer Dabbelt

---
 .../interrupt-controller/riscv,cpu-intc.txt        | 46 ++++++++++++++++++++++
 .../bindings/interrupt-controller/riscv,plic0.txt  | 44 +++++++++++++++++++++
 2 files changed, 90 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interrupt-controller/riscv,cpu-intc.txt
 create mode 100644 Documentation/devicetree/bindings/interrupt-controller/riscv,plic0.txt

diff --git a/Documentation/devicetree/bindings/interrupt-controller/riscv,cpu-intc.txt b/Documentation/devicetree/bindings/interrupt-controller/riscv,cpu-intc.txt
new file mode 100644
index 000000000000..62f02e834ff9
--- /dev/null
+++ b/Documentation/devicetree/bindings/interrupt-controller/riscv,cpu-intc.txt
@@ -0,0 +1,46 @@
+RISC-V Hart-Level Interrupt Controller (HLIC)
+---------------------------------------------
+
+RISC-V cores include Control Status Registers (CSRs) which are local to each
+hart and can be read or written by software. Some of these CSRs are used to
+control local interrupts connected to the core.
+
+Typical examples of local interrupts on a RISC-V core include: software IPI
+interrupts, timer interrupts, and a link to the PLIC interrupt controller.
+
+Required properties:
+- compatible : "riscv,cpu-intc"
+- #interrupt-cells : should be <1>
+- interrupt-controller : Identifies the node as an interrupt controller
+
+Furthermore, this interrupt-controller MUST be embedded inside the cpu
+definition of the hart whose CSRs control these local interrupts.
+
+Example:
+
+	cpu1: cpu@1 {
+		clock-frequency = <1600000000>;
+		compatible = "riscv";
+		d-cache-block-size = <64>;
+		d-cache-sets = <64>;
+		d-cache-size = <16384>;
+		d-tlb-sets = <1>;
+		d-tlb-size = <32>;
+		device_type = "cpu";
+		i-cache-block-size = <64>;
+		i-cache-sets = <64>;
+		i-cache-size = <16384>;
+		i-tlb-sets = <1>;
+		i-tlb-size = <32>;
+		mmu-type = "riscv,sv39";
+		next-level-cache = <&L2>;
+		reg = <1>;
+		riscv,isa = "rv64imac";
+		status = "okay";
+		tlb-split;
+		cpu1-intc: interrupt-controller {
+			#interrupt-cells = <1>;
+			compatible = "riscv,cpu-intc";
+			interrupt-controller;
+		};
+	};
diff --git a/Documentation/devicetree/bindings/interrupt-controller/riscv,plic0.txt b/Documentation/devicetree/bindings/interrupt-controller/riscv,plic0.txt
new file mode 100644
index 000000000000..c05b5806f7d2
--- /dev/null
+++ b/Documentation/devicetree/bindings/interrupt-controller/riscv,plic0.txt
@@ -0,0 +1,44 @@
+RISC-V Platform-Level Interrupt Controller (PLIC)
+-------------------------------------------------
+
+RISC-V cores typically include a PLIC, which routes interrupts from multiple
+devices to multiple hart contexts.  The PLIC is connected to the interrupt
+controller embedded in a RISC-V core via the interrupt-related CSRs.
+
+A hart context is a privilege mode in a hardware execution thread.  For
+example, in a 4-core system with 2-way SMT, you have 8 harts and probably
+at least two privilege modes per hart: machine mode and supervisor mode.
+
+Each interrupt can be enabled on a per-context basis. Any context can claim
+a pending enabled interrupt and then release it once it has been handled.
+
+Each interrupt has a configurable priority. Higher-priority interrupts are
+serviced first. Each context can specify a priority threshold. Interrupts
+with priority below this threshold will not cause the PLIC to raise its
+interrupt line leading to the context.
+
+Required properties:
+- compatible : "riscv,plic0"
+- #address-cells : should be <0>
+- #interrupt-cells : should be <1>
+- interrupt-controller : Identifies the node as an interrupt controller
+- reg : Should contain 1 register range (address and length)
+- riscv,ndev : Specifies the number of interrupts attached to the PLIC
+- interrupts-extended : Specifies which contexts are connected to the PLIC
+
+Example:
+
+	plic: interrupt-controller@c000000 {
+		#address-cells = <0>;
+		#interrupt-cells = <1>;
+		compatible = "riscv,plic0";
+		interrupt-controller;
+		interrupts-extended = <
+			&cpu0-intc 11
+			&cpu1-intc 11 &cpu1-intc 9
+			&cpu2-intc 11 &cpu2-intc 9
+			&cpu3-intc 11 &cpu3-intc 9
+			&cpu4-intc 11 &cpu4-intc 9>;
+		reg = <0xc000000 0x4000000>;
+		riscv,ndev = <10>;
+	};
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 4/7] RISC-V: arch/riscv/include
  2017-05-23  0:41 RISC-V Linux Port v1 Palmer Dabbelt
                   ` (2 preceding siblings ...)
  2017-05-23  0:41 ` [PATCH 3/7] RISC-V: Device Tree Documentation Palmer Dabbelt
@ 2017-05-23  0:41 ` Palmer Dabbelt
  2017-05-23 12:55   ` Arnd Bergmann
  2017-05-23  0:41 ` [PATCH 5/7] RISC-V: arch/riscv/lib Palmer Dabbelt
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-23  0:41 UTC (permalink / raw)
  To: linux-kernel, Arnd Bergmann, olof; +Cc: albert, Palmer Dabbelt

---
 arch/riscv/include/asm/Kbuild             |  60 ++++
 arch/riscv/include/asm/asm-offsets.h      |   1 +
 arch/riscv/include/asm/asm.h              |  65 +++++
 arch/riscv/include/asm/atomic.h           | 349 +++++++++++++++++++++++
 arch/riscv/include/asm/atomic64.h         | 355 +++++++++++++++++++++++
 arch/riscv/include/asm/barrier.h          |  33 +++
 arch/riscv/include/asm/bitops.h           | 229 +++++++++++++++
 arch/riscv/include/asm/bug.h              |  81 ++++++
 arch/riscv/include/asm/cache.h            |  23 ++
 arch/riscv/include/asm/cacheflush.h       |  40 +++
 arch/riscv/include/asm/cmpxchg.h          | 125 ++++++++
 arch/riscv/include/asm/csr.h              | 126 +++++++++
 arch/riscv/include/asm/delay.h            |  29 ++
 arch/riscv/include/asm/device.h           |  28 ++
 arch/riscv/include/asm/dma-mapping.h      |  61 ++++
 arch/riscv/include/asm/elf.h              |  83 ++++++
 arch/riscv/include/asm/io.h               |  36 +++
 arch/riscv/include/asm/irq.h              |  32 +++
 arch/riscv/include/asm/irqflags.h         |  64 +++++
 arch/riscv/include/asm/kprobes.h          |  23 ++
 arch/riscv/include/asm/linkage.h          |  21 ++
 arch/riscv/include/asm/mmu.h              |  27 ++
 arch/riscv/include/asm/mmu_context.h      |  70 +++++
 arch/riscv/include/asm/page.h             | 138 +++++++++
 arch/riscv/include/asm/pci.h              |  51 ++++
 arch/riscv/include/asm/pgalloc.h          | 126 +++++++++
 arch/riscv/include/asm/pgtable-32.h       |  26 ++
 arch/riscv/include/asm/pgtable-64.h       |  85 ++++++
 arch/riscv/include/asm/pgtable-bits.h     |  49 ++++
 arch/riscv/include/asm/pgtable.h          | 426 ++++++++++++++++++++++++++++
 arch/riscv/include/asm/processor.h        | 103 +++++++
 arch/riscv/include/asm/ptrace.h           | 117 ++++++++
 arch/riscv/include/asm/sbi.h              | 101 +++++++
 arch/riscv/include/asm/serial.h           |  43 +++
 arch/riscv/include/asm/setup.h            |  20 ++
 arch/riscv/include/asm/smp.h              |  42 +++
 arch/riscv/include/asm/spinlock.h         | 156 ++++++++++
 arch/riscv/include/asm/spinlock_types.h   |  34 +++
 arch/riscv/include/asm/string.h           |  31 ++
 arch/riscv/include/asm/switch_to.h        |  71 +++++
 arch/riscv/include/asm/syscall.h          |  91 ++++++
 arch/riscv/include/asm/syscalls.h         |  26 ++
 arch/riscv/include/asm/thread_info.h      | 103 +++++++
 arch/riscv/include/asm/timex.h            |  55 ++++
 arch/riscv/include/asm/tlb.h              |  25 ++
 arch/riscv/include/asm/tlbflush.h         |  95 +++++++
 arch/riscv/include/asm/uaccess.h          | 455 ++++++++++++++++++++++++++++++
 arch/riscv/include/asm/unistd.h           |  17 ++
 arch/riscv/include/asm/vdso.h             |  32 +++
 arch/riscv/include/asm/word-at-a-time.h   |  56 ++++
 arch/riscv/include/uapi/asm/Kbuild        |  10 +
 arch/riscv/include/uapi/asm/auxvec.h      |  24 ++
 arch/riscv/include/uapi/asm/bitsperlong.h |  25 ++
 arch/riscv/include/uapi/asm/byteorder.h   |  23 ++
 arch/riscv/include/uapi/asm/elf.h         |  83 ++++++
 arch/riscv/include/uapi/asm/ptrace.h      |  69 +++++
 arch/riscv/include/uapi/asm/sigcontext.h  |  30 ++
 arch/riscv/include/uapi/asm/siginfo.h     |  24 ++
 arch/riscv/include/uapi/asm/unistd.h      |  23 ++
 59 files changed, 4846 insertions(+)
 create mode 100644 arch/riscv/include/asm/Kbuild
 create mode 100644 arch/riscv/include/asm/asm-offsets.h
 create mode 100644 arch/riscv/include/asm/asm.h
 create mode 100644 arch/riscv/include/asm/atomic.h
 create mode 100644 arch/riscv/include/asm/atomic64.h
 create mode 100644 arch/riscv/include/asm/barrier.h
 create mode 100644 arch/riscv/include/asm/bitops.h
 create mode 100644 arch/riscv/include/asm/bug.h
 create mode 100644 arch/riscv/include/asm/cache.h
 create mode 100644 arch/riscv/include/asm/cacheflush.h
 create mode 100644 arch/riscv/include/asm/cmpxchg.h
 create mode 100644 arch/riscv/include/asm/csr.h
 create mode 100644 arch/riscv/include/asm/delay.h
 create mode 100644 arch/riscv/include/asm/device.h
 create mode 100644 arch/riscv/include/asm/dma-mapping.h
 create mode 100644 arch/riscv/include/asm/elf.h
 create mode 100644 arch/riscv/include/asm/io.h
 create mode 100644 arch/riscv/include/asm/irq.h
 create mode 100644 arch/riscv/include/asm/irqflags.h
 create mode 100644 arch/riscv/include/asm/kprobes.h
 create mode 100644 arch/riscv/include/asm/linkage.h
 create mode 100644 arch/riscv/include/asm/mmu.h
 create mode 100644 arch/riscv/include/asm/mmu_context.h
 create mode 100644 arch/riscv/include/asm/page.h
 create mode 100644 arch/riscv/include/asm/pci.h
 create mode 100644 arch/riscv/include/asm/pgalloc.h
 create mode 100644 arch/riscv/include/asm/pgtable-32.h
 create mode 100644 arch/riscv/include/asm/pgtable-64.h
 create mode 100644 arch/riscv/include/asm/pgtable-bits.h
 create mode 100644 arch/riscv/include/asm/pgtable.h
 create mode 100644 arch/riscv/include/asm/processor.h
 create mode 100644 arch/riscv/include/asm/ptrace.h
 create mode 100644 arch/riscv/include/asm/sbi.h
 create mode 100644 arch/riscv/include/asm/serial.h
 create mode 100644 arch/riscv/include/asm/setup.h
 create mode 100644 arch/riscv/include/asm/smp.h
 create mode 100644 arch/riscv/include/asm/spinlock.h
 create mode 100644 arch/riscv/include/asm/spinlock_types.h
 create mode 100644 arch/riscv/include/asm/string.h
 create mode 100644 arch/riscv/include/asm/switch_to.h
 create mode 100644 arch/riscv/include/asm/syscall.h
 create mode 100644 arch/riscv/include/asm/syscalls.h
 create mode 100644 arch/riscv/include/asm/thread_info.h
 create mode 100644 arch/riscv/include/asm/timex.h
 create mode 100644 arch/riscv/include/asm/tlb.h
 create mode 100644 arch/riscv/include/asm/tlbflush.h
 create mode 100644 arch/riscv/include/asm/uaccess.h
 create mode 100644 arch/riscv/include/asm/unistd.h
 create mode 100644 arch/riscv/include/asm/vdso.h
 create mode 100644 arch/riscv/include/asm/word-at-a-time.h
 create mode 100644 arch/riscv/include/uapi/asm/Kbuild
 create mode 100644 arch/riscv/include/uapi/asm/auxvec.h
 create mode 100644 arch/riscv/include/uapi/asm/bitsperlong.h
 create mode 100644 arch/riscv/include/uapi/asm/byteorder.h
 create mode 100644 arch/riscv/include/uapi/asm/elf.h
 create mode 100644 arch/riscv/include/uapi/asm/ptrace.h
 create mode 100644 arch/riscv/include/uapi/asm/sigcontext.h
 create mode 100644 arch/riscv/include/uapi/asm/siginfo.h
 create mode 100644 arch/riscv/include/uapi/asm/unistd.h

diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
new file mode 100644
index 000000000000..d861450998bb
--- /dev/null
+++ b/arch/riscv/include/asm/Kbuild
@@ -0,0 +1,60 @@
+generic-y += bugs.h
+generic-y += cacheflush.h
+generic-y += checksum.h
+generic-y += clkdev.h
+generic-y += cputime.h
+generic-y += current.h
+generic-y += div64.h
+generic-y += dma.h
+generic-y += emergency-restart.h
+generic-y += errno.h
+generic-y += exec.h
+generic-y += fb.h
+generic-y += fcntl.h
+generic-y += ftrace.h
+generic-y += futex.h
+generic-y += hardirq.h
+generic-y += hash.h
+generic-y += hw_irq.h
+generic-y += ioctl.h
+generic-y += ioctls.h
+generic-y += ipcbuf.h
+generic-y += irq_regs.h
+generic-y += irq_work.h
+generic-y += kdebug.h
+generic-y += kmap_types.h
+generic-y += kvm_para.h
+generic-y += local.h
+generic-y += mm-arch-hooks.h
+generic-y += mman.h
+generic-y += module.h
+generic-y += msgbuf.h
+generic-y += mutex.h
+generic-y += param.h
+generic-y += percpu.h
+generic-y += poll.h
+generic-y += posix_types.h
+generic-y += preempt.h
+generic-y += resource.h
+generic-y += scatterlist.h
+generic-y += sections.h
+generic-y += sembuf.h
+generic-y += shmbuf.h
+generic-y += shmparam.h
+generic-y += signal.h
+generic-y += socket.h
+generic-y += sockios.h
+generic-y += stat.h
+generic-y += statfs.h
+generic-y += swab.h
+generic-y += termbits.h
+generic-y += termios.h
+generic-y += topology.h
+generic-y += trace_clock.h
+generic-y += types.h
+generic-y += ucontext.h
+generic-y += unaligned.h
+generic-y += user.h
+generic-y += vga.h
+generic-y += vmlinux.lds.h
+generic-y += xor.h
diff --git a/arch/riscv/include/asm/asm-offsets.h b/arch/riscv/include/asm/asm-offsets.h
new file mode 100644
index 000000000000..d370ee36a182
--- /dev/null
+++ b/arch/riscv/include/asm/asm-offsets.h
@@ -0,0 +1 @@
+#include <generated/asm-offsets.h>
diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
new file mode 100644
index 000000000000..87f27603286c
--- /dev/null
+++ b/arch/riscv/include/asm/asm.h
@@ -0,0 +1,65 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ * 
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_ASM_H
+#define _ASM_RISCV_ASM_H
+
+#ifdef __ASSEMBLY__
+#define __ASM_STR(x)	x
+#else
+#define __ASM_STR(x)	#x
+#endif
+
+#if __riscv_xlen == 64
+#define __REG_SEL(a, b)	__ASM_STR(a)
+#elif __riscv_xlen == 32
+#define __REG_SEL(a, b)	__ASM_STR(b)
+#else
+#error "Unexpected __riscv_xlen"
+#endif
+
+#define REG_L		__REG_SEL(ld, lw)
+#define REG_S		__REG_SEL(sd, sw)
+#define SZREG		__REG_SEL(8, 4)
+#define LGREG		__REG_SEL(3, 2)
+
+#if __SIZEOF_POINTER__ == 8
+#define __PTR_SEL(a, b)	__ASM_STR(a)
+#elif __SIZEOF_POINTER__ == 4
+#define __PTR_SEL(a, b)	__ASM_STR(b)
+#else
+#error "Unexpected __SIZEOF_POINTER__"
+#endif
+
+#define PTR		__PTR_SEL(.dword, .word)
+#define SZPTR		__PTR_SEL(8, 4)
+#define LGPTR		__PTR_SEL(3, 2)
+
+#if (__SIZEOF_INT__ == 4)
+#define INT		__ASM_STR(.word)
+#define SZINT		__ASM_STR(4)
+#define LGINT		__ASM_STR(2)
+#else
+#error "Unexpected __SIZEOF_INT__"
+#endif
+
+#if (__SIZEOF_SHORT__ == 2)
+#define SHORT		__ASM_STR(.half)
+#define SZSHORT		__ASM_STR(2)
+#define LGSHORT		__ASM_STR(1)
+#else
+#error "Unexpected __SIZEOF_SHORT__"
+#endif
+
+#endif /* _ASM_RISCV_ASM_H */
diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
new file mode 100644
index 000000000000..2f6f78c5ddd8
--- /dev/null
+++ b/arch/riscv/include/asm/atomic.h
@@ -0,0 +1,349 @@
+/*
+ * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+
+#ifndef _ASM_RISCV_ATOMIC_H
+#define _ASM_RISCV_ATOMIC_H
+
+#ifdef CONFIG_RV_ATOMIC
+
+#include <asm/cmpxchg.h>
+#include <asm/barrier.h>
+
+#define ATOMIC_INIT(i)	{ (i) }
+
+/**
+ * atomic_read - read atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically reads the value of @v.
+ */
+static inline int atomic_read(const atomic_t *v)
+{
+	return *((volatile int *)(&(v->counter)));
+}
+
+/**
+ * atomic_set - set atomic variable
+ * @v: pointer of type atomic_t
+ * @i: required value
+ *
+ * Atomically sets the value of @v to @i.
+ */
+static inline void atomic_set(atomic_t *v, int i)
+{
+	v->counter = i;
+}
+
+/**
+ * atomic_add - add integer to atomic variable
+ * @i: integer value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically adds @i to @v.
+ */
+static inline void atomic_add(int i, atomic_t *v)
+{
+	__asm__ __volatile__ (
+		"amoadd.w zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (i));
+}
+
+#define atomic_fetch_add atomic_fetch_add
+static inline int atomic_fetch_add(unsigned int mask, atomic_t *v)
+{
+	int out;
+
+	__asm__ __volatile__ (
+		"amoadd.w %2, %1, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/**
+ * atomic_sub - subtract integer from atomic variable
+ * @i: integer value to subtract
+ * @v: pointer of type atomic_t
+ *
+ * Atomically subtracts @i from @v.
+ */
+static inline void atomic_sub(int i, atomic_t *v)
+{
+	atomic_add(-i, v);
+}
+
+#define atomic_fetch_sub atomic_fetch_sub
+static inline int atomic_fetch_sub(unsigned int mask, atomic_t *v)
+{
+	int out;
+
+	__asm__ __volatile__ (
+		"amosub.w %2, %1, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/**
+ * atomic_add_return - add integer to atomic variable
+ * @i: integer value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically adds @i to @v and returns the result
+ */
+static inline int atomic_add_return(int i, atomic_t *v)
+{
+	register int c;
+
+	__asm__ __volatile__ (
+		"amoadd.w %0, %2, %1"
+		: "=r" (c), "+A" (v->counter)
+		: "r" (i));
+	return (c + i);
+}
+
+/**
+ * atomic_sub_return - subtract integer from atomic variable
+ * @i: integer value to subtract
+ * @v: pointer of type atomic_t
+ *
+ * Atomically subtracts @i from @v and returns the result
+ */
+static inline int atomic_sub_return(int i, atomic_t *v)
+{
+	return atomic_add_return(-i, v);
+}
+
+/**
+ * atomic_inc - increment atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1.
+ */
+static inline void atomic_inc(atomic_t *v)
+{
+	atomic_add(1, v);
+}
+
+/**
+ * atomic_dec - decrement atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1.
+ */
+static inline void atomic_dec(atomic_t *v)
+{
+	atomic_add(-1, v);
+}
+
+static inline int atomic_inc_return(atomic_t *v)
+{
+	return atomic_add_return(1, v);
+}
+
+static inline int atomic_dec_return(atomic_t *v)
+{
+	return atomic_sub_return(1, v);
+}
+
+/**
+ * atomic_sub_and_test - subtract value from variable and test result
+ * @i: integer value to subtract
+ * @v: pointer of type atomic_t
+ *
+ * Atomically subtracts @i from @v and returns
+ * true if the result is zero, or false for all
+ * other cases.
+ */
+static inline int atomic_sub_and_test(int i, atomic_t *v)
+{
+	return (atomic_sub_return(i, v) == 0);
+}
+
+/**
+ * atomic_inc_and_test - increment and test
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1
+ * and returns true if the result is zero, or false for all
+ * other cases.
+ */
+static inline int atomic_inc_and_test(atomic_t *v)
+{
+	return (atomic_inc_return(v) == 0);
+}
+
+/**
+ * atomic_dec_and_test - decrement and test
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1 and
+ * returns true if the result is 0, or false for all other
+ * cases.
+ */
+static inline int atomic_dec_and_test(atomic_t *v)
+{
+	return (atomic_dec_return(v) == 0);
+}
+
+/**
+ * atomic_add_negative - add and test if negative
+ * @i: integer value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically adds @i to @v and returns true
+ * if the result is negative, or false when
+ * result is greater than or equal to zero.
+ */
+static inline int atomic_add_negative(int i, atomic_t *v)
+{
+	return (atomic_add_return(i, v) < 0);
+}
+
+
+static inline int atomic_xchg(atomic_t *v, int n)
+{
+	register int c;
+
+	__asm__ __volatile__ (
+		"amoswap.w %0, %2, %1"
+		: "=r" (c), "+A" (v->counter)
+		: "r" (n));
+	return c;
+}
+
+static inline int atomic_cmpxchg(atomic_t *v, int o, int n)
+{
+	return cmpxchg(&(v->counter), o, n);
+}
+
+/**
+ * __atomic_add_unless - add unless the number is already a given value
+ * @v: pointer of type atomic_t
+ * @a: the amount to add to v...
+ * @u: ...unless v is equal to u.
+ *
+ * Atomically adds @a to @v, so long as @v was not already @u.
+ * Returns the old value of @v.
+ */
+static inline int __atomic_add_unless(atomic_t *v, int a, int u)
+{
+	register int prev, rc;
+
+	__asm__ __volatile__ (
+	"0:"
+		"lr.w %0, %2\n"
+		"beq  %0, %4, 1f\n"
+		"add  %1, %0, %3\n"
+		"sc.w %1, %1, %2\n"
+		"bnez %1, 0b\n"
+	"1:"
+		: "=&r" (prev), "=&r" (rc), "+A" (v->counter)
+		: "r" (a), "r" (u));
+	return prev;
+}
+
+/**
+ * atomic_and - Atomically clear bits in atomic variable
+ * @mask: Mask of the bits to be retained
+ * @v: pointer of type atomic_t
+ *
+ * Atomically retains the bits set in @mask from @v
+ */
+static inline void atomic_and(unsigned int mask, atomic_t *v)
+{
+	__asm__ __volatile__ (
+		"amoand.w zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (mask));
+}
+
+#define atomic_fetch_and atomic_fetch_and
+static inline int atomic_fetch_and(unsigned int mask, atomic_t *v)
+{
+	int out;
+
+	__asm__ __volatile__ (
+		"amoand.w %2, %1, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/**
+ * atomic_or - Atomically set bits in atomic variable
+ * @mask: Mask of the bits to be set
+ * @v: pointer of type atomic_t
+ *
+ * Atomically sets the bits set in @mask in @v
+ */
+static inline void atomic_or(unsigned int mask, atomic_t *v)
+{
+	__asm__ __volatile__ (
+		"amoor.w zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (mask));
+}
+
+#define atomic_fetch_or atomic_fetch_or
+static inline int atomic_fetch_or(unsigned int mask, atomic_t *v)
+{
+	int out;
+
+	__asm__ __volatile__ (
+		"amoor.w %2, %1, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/**
+ * atomic_xor - Atomically flips bits in atomic variable
+ * @mask: Mask of the bits to be flipped
+ * @v: pointer of type atomic_t
+ *
+ * Atomically flips the bits set in @mask in @v
+ */
+static inline void atomic_xor(unsigned int mask, atomic_t *v)
+{
+	__asm__ __volatile__ (
+		"amoxor.w zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (mask));
+}
+
+#define atomic_fetch_xor atomic_fetch_xor
+static inline int atomic_fetch_xor(unsigned int mask, atomic_t *v)
+{
+	int out;
+
+	__asm__ __volatile__ (
+		"amoxor.w %2, %1, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/* Assume that atomic operations are already serializing */
+#define smp_mb__before_atomic_dec()	barrier()
+#define smp_mb__after_atomic_dec()	barrier()
+#define smp_mb__before_atomic_inc()	barrier()
+#define smp_mb__after_atomic_inc()	barrier()
+
+#else /* !CONFIG_RV_ATOMIC */
+
+#include <asm-generic/atomic.h>
+
+#endif /* CONFIG_RV_ATOMIC */
+
+#include <asm/atomic64.h>
+
+#endif /* _ASM_RISCV_ATOMIC_H */
diff --git a/arch/riscv/include/asm/atomic64.h b/arch/riscv/include/asm/atomic64.h
new file mode 100644
index 000000000000..5ffcc32e2d1e
--- /dev/null
+++ b/arch/riscv/include/asm/atomic64.h
@@ -0,0 +1,355 @@
+/*
+ * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+
+#ifndef _ASM_RISCV_ATOMIC64_H
+#define _ASM_RISCV_ATOMIC64_H
+
+#ifdef CONFIG_GENERIC_ATOMIC64
+#include <asm-generic/atomic64.h>
+#else /* !CONFIG_GENERIC_ATOMIC64 */
+
+#include <linux/types.h>
+
+#define ATOMIC64_INIT(i)	{ (i) }
+
+/**
+ * atomic64_read - read atomic64 variable
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically reads the value of @v.
+ */
+static inline s64 atomic64_read(const atomic64_t *v)
+{
+	return *((volatile long *)(&(v->counter)));
+}
+
+/**
+ * atomic64_set - set atomic64 variable
+ * @v: pointer to type atomic64_t
+ * @i: required value
+ *
+ * Atomically sets the value of @v to @i.
+ */
+static inline void atomic64_set(atomic64_t *v, s64 i)
+{
+	v->counter = i;
+}
+
+/**
+ * atomic64_add - add integer to atomic64 variable
+ * @i: integer value to add
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically adds @i to @v.
+ */
+static inline void atomic64_add(s64 a, atomic64_t *v)
+{
+	__asm__ __volatile__ (
+		"amoadd.d zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (a));
+}
+
+static inline long atomic64_fetch_add(unsigned long mask, atomic64_t *v)
+{
+        long out;
+
+        __asm__ __volatile__ (
+                "amoadd.d %2, %1, %0"
+                : "+A" (v->counter), "=r" (out)
+                : "r" (mask));
+        return out;
+}
+
+/**
+ * atomic64_sub - subtract the atomic64 variable
+ * @i: integer value to subtract
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically subtracts @i from @v.
+ */
+static inline void atomic64_sub(s64 a, atomic64_t *v)
+{
+	atomic64_add(-a, v);
+}
+
+static inline long atomic64_fetch_sub(unsigned long mask, atomic64_t *v)
+{
+        long out;
+
+        __asm__ __volatile__ (
+                "amosub.d %2, %1, %0"
+                : "+A" (v->counter), "=r" (out)
+                : "r" (mask));
+        return out;
+}
+
+/**
+ * atomic64_add_return - add and return
+ * @i: integer value to add
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically adds @i to @v and returns @i + @v
+ */
+static inline s64 atomic64_add_return(s64 a, atomic64_t *v)
+{
+	register s64 c;
+
+	__asm__ __volatile__ (
+		"amoadd.d %0, %2, %1"
+		: "=r" (c), "+A" (v->counter)
+		: "r" (a));
+	return (c + a);
+}
+
+static inline s64 atomic64_sub_return(s64 a, atomic64_t *v)
+{
+	return atomic64_add_return(-a, v);
+}
+
+/**
+ * atomic64_inc - increment atomic64 variable
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically increments @v by 1.
+ */
+static inline void atomic64_inc(atomic64_t *v)
+{
+	atomic64_add(1L, v);
+}
+
+/**
+ * atomic64_dec - decrement atomic64 variable
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically decrements @v by 1.
+ */
+static inline void atomic64_dec(atomic64_t *v)
+{
+	atomic64_add(-1L, v);
+}
+
+static inline s64 atomic64_inc_return(atomic64_t *v)
+{
+	return atomic64_add_return(1L, v);
+}
+
+static inline s64 atomic64_dec_return(atomic64_t *v)
+{
+	return atomic64_add_return(-1L, v);
+}
+
+/**
+ * atomic64_inc_and_test - increment and test
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically increments @v by 1
+ * and returns true if the result is zero, or false for all
+ * other cases.
+ */
+static inline int atomic64_inc_and_test(atomic64_t *v)
+{
+	return (atomic64_inc_return(v) == 0);
+}
+
+/**
+ * atomic64_dec_and_test - decrement and test
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically decrements @v by 1 and
+ * returns true if the result is 0, or false for all other
+ * cases.
+ */
+static inline int atomic64_dec_and_test(atomic64_t *v)
+{
+	return (atomic64_dec_return(v) == 0);
+}
+
+/**
+ * atomic64_sub_and_test - subtract value from variable and test result
+ * @a: integer value to subtract
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically subtracts @a from @v and returns
+ * true if the result is zero, or false for all
+ * other cases.
+ */
+static inline int atomic64_sub_and_test(s64 a, atomic64_t *v)
+{
+	return (atomic64_sub_return(a, v) == 0);
+}
+
+/**
+ * atomic64_add_negative - add and test if negative
+ * @a: integer value to add
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically adds @a to @v and returns true
+ * if the result is negative, or false when
+ * result is greater than or equal to zero.
+ */
+static inline int atomic64_add_negative(s64 a, atomic64_t *v)
+{
+	return (atomic64_add_return(a, v) < 0);
+}
+
+
+static inline s64 atomic64_xchg(atomic64_t *v, s64 n)
+{
+	register s64 c;
+
+	__asm__ __volatile__ (
+		"amoswap.d %0, %2, %1"
+		: "=r" (c), "+A" (v->counter)
+		: "r" (n));
+	return c;
+}
+
+static inline s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
+{
+	return cmpxchg(&(v->counter), o, n);
+}
+
+/*
+ * atomic64_dec_if_positive - decrement by 1 if old value positive
+ * @v: pointer of type atomic_t
+ *
+ * The function returns the old value of *v minus 1, even if
+ * the atomic variable, v, was not decremented.
+ */
+static inline s64 atomic64_dec_if_positive(atomic64_t *v)
+{
+	register s64 prev, rc;
+
+	__asm__ __volatile__ (
+	"0:"
+		"lr.d %0, %2\n"
+		"add  %0, %0, -1\n"
+		"bltz %0, 1f\n"
+		"sc.d %1, %0, %2\n"
+		"bnez %1, 0b\n"
+	"1:"
+		: "=&r" (prev), "=r" (rc), "+A" (v->counter));
+	return prev;
+}
+
+/**
+ * atomic64_add_unless - add unless the number is a given value
+ * @v: pointer of type atomic64_t
+ * @a: the amount to add to v...
+ * @u: ...unless v is equal to u.
+ *
+ * Atomically adds @a to @v, so long as it was not @u.
+ * Returns true if the addition occurred and false otherwise.
+ */
+static inline int atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+	register s64 tmp;
+	register int rc = 1;
+
+	__asm__ __volatile__ (
+	"0:"
+		"lr.d %0, %2\n"
+		"beq  %0, %z4, 1f\n"
+		"add  %0, %0, %3\n"
+		"sc.d %1, %0, %2\n"
+		"bnez %1, 0b\n"
+	"1:"
+		: "=&r" (tmp), "=&r" (rc), "+A" (v->counter)
+		: "rI" (a), "rJ" (u));
+	return !rc;
+}
+
+static inline int atomic64_inc_not_zero(atomic64_t *v)
+{
+	return atomic64_add_unless(v, 1, 0);
+}
+
+/**
+ * atomic64_and - Atomically clear bits in atomic variable
+ * @mask: Mask of the bits to be retained
+ * @v: pointer of type atomic_t
+ *
+ * Atomically retains the bits set in @mask from @v
+ */
+static inline void atomic64_and(s64 mask, atomic64_t *v)
+{
+	__asm__ __volatile__ (
+		"amoand.d zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (mask));
+}
+
+static inline long atomic64_fetch_and(unsigned long mask, atomic64_t *v)
+{
+        long out;
+
+        __asm__ __volatile__ (
+                "amoand.d %2, %1, %0"
+                : "+A" (v->counter), "=r" (out)
+                : "r" (mask));
+        return out;
+}
+
+/**
+ * atomic64_or - Atomically set bits in atomic variable
+ * @mask: Mask of the bits to be set
+ * @v: pointer of type atomic_t
+ *
+ * Atomically sets the bits set in @mask in @v
+ */
+static inline void atomic64_or(s64 mask, atomic64_t *v)
+{
+	__asm__ __volatile__ (
+		"amoor.d zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (mask));
+}
+
+static inline long atomic64_fetch_or(unsigned long mask, atomic64_t *v)
+{
+        long out;
+
+        __asm__ __volatile__ (
+                "amoor.d %2, %1, %0"
+                : "+A" (v->counter), "=r" (out)
+                : "r" (mask));
+        return out;
+}
+
+/**
+ * atomic64_xor - Atomically flips bits in atomic variable
+ * @mask: Mask of the bits to be flipped
+ * @v: pointer of type atomic_t
+ *
+ * Atomically flips the bits set in @mask in @v
+ */
+static inline void atomic64_xor(s64 mask, atomic64_t *v)
+{
+	__asm__ __volatile__ (
+		"amoxor.d zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (mask));
+}
+
+static inline long atomic64_fetch_xor(unsigned long mask, atomic64_t *v)
+{
+        long out;
+
+        __asm__ __volatile__ (
+                "amoxor.d %2, %1, %0"
+                : "+A" (v->counter), "=r" (out)
+                : "r" (mask));
+        return out;
+}
+
+#endif /* CONFIG_GENERIC_ATOMIC64 */
+
+#endif /* _ASM_RISCV_ATOMIC64_H */
diff --git a/arch/riscv/include/asm/barrier.h b/arch/riscv/include/asm/barrier.h
new file mode 100644
index 000000000000..e340a80135ae
--- /dev/null
+++ b/arch/riscv/include/asm/barrier.h
@@ -0,0 +1,33 @@
+/*
+ * Based on arch/arm/include/asm/barrier.h
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _ASM_RISCV_BARRIER_H
+#define _ASM_RISCV_BARRIER_H
+
+#ifndef __ASSEMBLY__
+
+#define nop()	__asm__ __volatile__ ("nop")
+
+#define mb()	__asm__ __volatile__ ("fence" : : : "memory")
+
+#include <asm-generic/barrier.h>
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_BARRIER_H */
diff --git a/arch/riscv/include/asm/bitops.h b/arch/riscv/include/asm/bitops.h
new file mode 100644
index 000000000000..c470f02f6f06
--- /dev/null
+++ b/arch/riscv/include/asm/bitops.h
@@ -0,0 +1,229 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_BITOPS_H
+#define _ASM_RISCV_BITOPS_H
+
+#ifndef _LINUX_BITOPS_H
+#error "Only <linux/bitops.h> can be included directly"
+#endif /* _LINUX_BITOPS_H */
+
+#ifdef __KERNEL__
+
+#include <linux/compiler.h>
+#include <linux/irqflags.h>
+#include <asm/barrier.h>
+#include <asm/bitsperlong.h>
+
+#ifdef CONFIG_RV_ATOMIC
+
+#ifndef smp_mb__before_clear_bit
+#define smp_mb__before_clear_bit()  smp_mb()
+#define smp_mb__after_clear_bit()   smp_mb()
+#endif /* smp_mb__before_clear_bit */
+
+#include <asm-generic/bitops/__ffs.h>
+#include <asm-generic/bitops/ffz.h>
+#include <asm-generic/bitops/fls.h>
+#include <asm-generic/bitops/__fls.h>
+#include <asm-generic/bitops/fls64.h>
+#include <asm-generic/bitops/find.h>
+#include <asm-generic/bitops/sched.h>
+#include <asm-generic/bitops/ffs.h>
+
+#include <asm-generic/bitops/hweight.h>
+
+#if (BITS_PER_LONG == 64)
+#define __AMO(op)	"amo" #op ".d"
+#elif (BITS_PER_LONG == 32)
+#define __AMO(op)	"amo" #op ".w"
+#else
+#error "Unexpected BITS_PER_LONG"
+#endif
+
+#define __test_and_op_bit(op, mod, nr, addr)			\
+({								\
+	unsigned long __res, __mask;				\
+	__mask = BIT_MASK(nr);					\
+	__asm__ __volatile__ (					\
+		__AMO(op) " %0, %2, %1"				\
+		: "=r" (__res), "+A" (addr[BIT_WORD(nr)])	\
+		: "r" (mod(__mask)));				\
+	((__res & __mask) != 0);				\
+})
+
+#define __op_bit(op, mod, nr, addr)				\
+	__asm__ __volatile__ (					\
+		__AMO(op) " zero, %1, %0"			\
+		: "+A" (addr[BIT_WORD(nr)])			\
+		: "r" (mod(BIT_MASK(nr))))
+
+/* Bitmask modifiers */
+#define __NOP(x)	(x)
+#define __NOT(x)	(~(x))
+
+/**
+ * test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It may be reordered on architectures other than x86.
+ * It also implies a memory barrier.
+ */
+static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
+{
+	return __test_and_op_bit(or, __NOP, nr, addr);
+}
+
+/**
+ * test_and_clear_bit - Clear a bit and return its old value
+ * @nr: Bit to clear
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It can be reordered on architectures other than x86.
+ * It also implies a memory barrier.
+ */
+static inline int test_and_clear_bit(int nr, volatile unsigned long *addr)
+{
+	return __test_and_op_bit(and, __NOT, nr, addr);
+}
+
+/**
+ * test_and_change_bit - Change a bit and return its old value
+ * @nr: Bit to change
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It also implies a memory barrier.
+ */
+static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
+{
+	return __test_and_op_bit(xor, __NOP, nr, addr);
+}
+
+/**
+ * set_bit - Atomically set a bit in memory
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This function is atomic and may not be reordered.  See __set_bit()
+ * if you do not require the atomic guarantees.
+ *
+ * Note: there are no guarantees that this function will not be reordered
+ * on non-x86 architectures, so if you are writing portable code,
+ * make sure not to rely on its reordering guarantees.
+ *
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ */
+static inline void set_bit(int nr, volatile unsigned long *addr)
+{
+	__op_bit(or, __NOP, nr, addr);
+}
+
+/**
+ * clear_bit - Clears a bit in memory
+ * @nr: Bit to clear
+ * @addr: Address to start counting from
+ *
+ * clear_bit() is atomic and may not be reordered.  However, it does
+ * not contain a memory barrier, so if it is used for locking purposes,
+ * you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
+ * in order to ensure changes are visible on other processors.
+ */
+static inline void clear_bit(int nr, volatile unsigned long *addr)
+{
+	__op_bit(and, __NOT, nr, addr);
+}
+
+/**
+ * change_bit - Toggle a bit in memory
+ * @nr: Bit to change
+ * @addr: Address to start counting from
+ *
+ * change_bit() is atomic and may not be reordered. It may be
+ * reordered on architectures other than x86.
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ */
+static inline void change_bit(int nr, volatile unsigned long *addr)
+{
+	__op_bit(xor, __NOP, nr, addr);
+}
+
+/**
+ * test_and_set_bit_lock - Set a bit and return its old value, for lock
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is atomic and provides acquire barrier semantics.
+ * It can be used to implement bit locks.
+ */
+static inline int test_and_set_bit_lock(
+	unsigned long nr, volatile unsigned long *addr)
+{
+	return test_and_set_bit(nr, addr);
+}
+
+/**
+ * clear_bit_unlock - Clear a bit in memory, for unlock
+ * @nr: the bit to clear
+ * @addr: the address to start counting from
+ *
+ * This operation is atomic and provides release barrier semantics.
+ */
+static inline void clear_bit_unlock(
+	unsigned long nr, volatile unsigned long *addr)
+{
+	clear_bit(nr, addr);
+}
+
+/**
+ * __clear_bit_unlock - Clear a bit in memory, for unlock
+ * @nr: the bit to clear
+ * @addr: the address to start counting from
+ *
+ * This operation is like clear_bit_unlock, however it is not atomic.
+ * It does provide release barrier semantics so it can be used to unlock
+ * a bit lock, however it would only be used if no other CPU can modify
+ * any bits in the memory until the lock is released (a good example is
+ * if the bit lock itself protects access to the other bits in the word).
+ */
+static inline void __clear_bit_unlock(
+	unsigned long nr, volatile unsigned long *addr)
+{
+	clear_bit(nr, addr);
+}
+
+#undef __test_and_op_bit
+#undef __op_bit
+#undef __NOP
+#undef __NOT
+#undef __AMO
+
+#include <asm-generic/bitops/non-atomic.h>
+#include <asm-generic/bitops/le.h>
+#include <asm-generic/bitops/ext2-atomic.h>
+
+#else /* !CONFIG_RV_ATOMIC */
+
+#include <asm-generic/bitops.h>
+
+#endif /* CONFIG_RV_ATOMIC */
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_BITOPS_H */
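
The word/mask arithmetic behind the macros above is the usual one; for
illustration only, a user-space model of test_and_set_bit() (the kernel
version is a single atomic amoor.w/amoor.d):

  #include <stdio.h>

  #define MODEL_BITS_PER_LONG (8 * sizeof(unsigned long))

  /* Model only: not atomic, just shows BIT_WORD/BIT_MASK and the result. */
  static int model_test_and_set_bit(unsigned int nr, unsigned long *addr)
  {
          unsigned long mask = 1UL << (nr % MODEL_BITS_PER_LONG); /* BIT_MASK(nr)       */
          unsigned long *p   = addr + nr / MODEL_BITS_PER_LONG;   /* addr[BIT_WORD(nr)] */
          unsigned long old  = *p;       /* the AMO returns the old word value */

          *p = old | mask;
          return (old & mask) != 0;
  }

  int main(void)
  {
          unsigned long bitmap[4] = { 0 };

          printf("%d\n", model_test_and_set_bit(65, bitmap)); /* 0: was clear   */
          printf("%d\n", model_test_and_set_bit(65, bitmap)); /* 1: already set */
          return 0;
  }
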
diff --git a/arch/riscv/include/asm/bug.h b/arch/riscv/include/asm/bug.h
new file mode 100644
index 000000000000..10d894ac3137
--- /dev/null
+++ b/arch/riscv/include/asm/bug.h
@@ -0,0 +1,81 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_BUG_H
+#define _ASM_RISCV_BUG_H
+
+#include <linux/compiler.h>
+#include <linux/const.h>
+#include <linux/types.h>
+
+#include <asm/asm.h>
+
+#ifdef CONFIG_GENERIC_BUG
+#define __BUG_INSN	_AC(0x00100073, UL) /* sbreak */
+
+#ifndef __ASSEMBLY__
+typedef u32 bug_insn_t;
+
+#ifdef CONFIG_GENERIC_BUG_RELATIVE_POINTERS
+#define __BUG_ENTRY_ADDR	INT " 1b - 2b"
+#define __BUG_ENTRY_FILE	INT " %0 - 2b"
+#else
+#define __BUG_ENTRY_ADDR	PTR " 1b"
+#define __BUG_ENTRY_FILE	PTR " %0"
+#endif
+
+#ifdef CONFIG_DEBUG_BUGVERBOSE
+#define __BUG_ENTRY			\
+	__BUG_ENTRY_ADDR "\n\t"		\
+	__BUG_ENTRY_FILE "\n\t"		\
+	SHORT " %1"
+#else
+#define __BUG_ENTRY			\
+	__BUG_ENTRY_ADDR
+#endif
+
+#define BUG()							\
+do {								\
+	__asm__ __volatile__ (					\
+		"1:\n\t"					\
+			"sbreak\n"				\
+			".pushsection __bug_table,\"a\"\n\t"	\
+		"2:\n\t"					\
+			__BUG_ENTRY "\n\t"			\
+			".org 2b + %2\n\t"			\
+			".popsection"				\
+		:						\
+		: "i" (__FILE__), "i" (__LINE__),		\
+		  "i" (sizeof(struct bug_entry)));		\
+	unreachable();						\
+} while (0)
+
+#define HAVE_ARCH_BUG
+#endif /* !__ASSEMBLY__ */
+#endif /* CONFIG_GENERIC_BUG */
+
+#include <asm-generic/bug.h>
+
+#ifndef __ASSEMBLY__
+
+struct pt_regs;
+struct task_struct;
+
+extern void die(struct pt_regs *regs, const char *str);
+extern void do_trap(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, struct task_struct *tsk);
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_BUG_H */
diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h
new file mode 100644
index 000000000000..02082e118178
--- /dev/null
+++ b/arch/riscv/include/asm/cache.h
@@ -0,0 +1,23 @@
+/*
+ * Copyright (C) 2017 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_CACHE_H
+#define _ASM_RISCV_CACHE_H
+
+#define L1_CACHE_SHIFT		6
+
+#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)
+
+#endif /* _ASM_RISCV_CACHE_H */
diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
new file mode 100644
index 000000000000..0546ae75d368
--- /dev/null
+++ b/arch/riscv/include/asm/cacheflush.h
@@ -0,0 +1,40 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_CACHEFLUSH_H
+#define _ASM_RISCV_CACHEFLUSH_H
+
+#include <asm-generic/cacheflush.h>
+
+#undef flush_icache_range
+#undef flush_icache_user_range
+
+static inline void local_flush_icache_all(void)
+{
+	asm volatile ("fence.i" ::: "memory");
+}
+
+#ifndef CONFIG_SMP
+
+#define flush_icache_range(start, end) local_flush_icache_all()
+#define flush_icache_user_range(vma, pg, addr, len) local_flush_icache_all()
+
+#else /* CONFIG_SMP */
+
+#define flush_icache_range(start, end) sbi_remote_fence_i(0)
+#define flush_icache_user_range(vma, pg, addr, len) sbi_remote_fence_i(0)
+
+#endif /* CONFIG_SMP */
+
+#endif /* _ASM_RISCV_CACHEFLUSH_H */
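
fence.i is needed because RISC-V does not keep instruction fetch coherent
with stores: anything that writes instructions must flush before executing
them.  A hedged caller sketch (install_trampoline() is made up purely for
illustration):

  /* Hypothetical caller: copy executable code somewhere, then make it
   * visible to instruction fetch before jumping to it. */
  static void *install_trampoline(void *dst, const void *src, size_t len)
  {
          memcpy(dst, src, len);                        /* stores are D-side only */
          flush_icache_range((unsigned long)dst,
                             (unsigned long)dst + len); /* fence.i (or SBI remote fence) */
          return dst;
  }
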
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
new file mode 100644
index 000000000000..c875e6279902
--- /dev/null
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -0,0 +1,125 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_CMPXCHG_H
+#define _ASM_RISCV_CMPXCHG_H
+
+#include <linux/bug.h>
+
+#ifdef CONFIG_RV_ATOMIC
+
+#include <asm/barrier.h>
+
+#define __xchg(new, ptr, size)					\
+({								\
+	__typeof__(ptr) __ptr = (ptr);				\
+	__typeof__(new) __new = (new);				\
+	__typeof__(*(ptr)) __ret;				\
+	switch (size) {						\
+	case 4:							\
+		__asm__ __volatile__ (				\
+			"amoswap.w %0, %2, %1"			\
+			: "=r" (__ret), "+A" (*__ptr)		\
+			: "r" (__new));				\
+		break;						\
+	case 8:							\
+		__asm__ __volatile__ (				\
+			"amoswap.d %0, %2, %1"			\
+			: "=r" (__ret), "+A" (*__ptr)		\
+			: "r" (__new));				\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+	}							\
+	__ret;							\
+})
+
+#define xchg(ptr, x)    (__xchg((x), (ptr), sizeof(*(ptr))))
+
+
+/*
+ * Atomic compare and exchange.  Compare OLD with MEM, if identical,
+ * store NEW in MEM.  Return the initial value in MEM.  Success is
+ * indicated by comparing RETURN with OLD.
+ */
+#define __cmpxchg(ptr, old, new, size)					\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(old) __old = (old);					\
+	__typeof__(new) __new = (new);					\
+	__typeof__(*(ptr)) __ret;					\
+	register unsigned int __rc;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+		"0:"							\
+			"lr.w %0, %2\n"					\
+			"bne  %0, %z3, 1f\n"				\
+			"sc.w %1, %z4, %2\n"				\
+			"bnez %1, 0b\n"					\
+		"1:"							\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new));			\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+		"0:"							\
+			"lr.d %0, %2\n"					\
+			"bne  %0, %z3, 1f\n"				\
+			"sc.d %1, %z4, %2\n"				\
+			"bnez %1, 0b\n"					\
+		"1:"							\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new));			\
+		break;							\
+	default:							\
+		BUILD_BUG();						\
+	}								\
+	__ret;								\
+})
+
+#define __cmpxchg_mb(ptr, old, new, size)			\
+({								\
+	__typeof__(*(ptr)) __ret;				\
+	smp_mb();						\
+	__ret = __cmpxchg((ptr), (old), (new), (size));		\
+	smp_mb();						\
+	__ret;							\
+})
+
+#define cmpxchg(ptr, o, n) \
+	(__cmpxchg_mb((ptr), (o), (n), sizeof(*(ptr))))
+
+#define cmpxchg_local(ptr, o, n) \
+	(__cmpxchg((ptr), (o), (n), sizeof(*(ptr))))
+
+#define cmpxchg64(ptr, o, n)			\
+({						\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);	\
+	cmpxchg((ptr), (o), (n));		\
+})
+
+#define cmpxchg64_local(ptr, o, n)		\
+({						\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);	\
+	cmpxchg_local((ptr), (o), (n));		\
+})
+
+#else /* !CONFIG_RV_ATOMIC */
+
+#include <asm-generic/cmpxchg.h>
+
+#endif /* CONFIG_RV_ATOMIC */
+
+#endif /* _ASM_RISCV_CMPXCHG_H */
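
The LR/SC sequences above implement the usual compare-and-swap contract: the
old value is returned, and the store only happens if it still equals the
expected value.  The typical caller pattern is a retry loop, sketched here
for illustration only:

  /* Sketch of a lock-free update built on cmpxchg(). */
  static void saturating_inc(unsigned long *p, unsigned long limit)
  {
          unsigned long old, new;

          do {
                  old = *p;
                  if (old >= limit)
                          return;                 /* already saturated */
                  new = old + 1;
          } while (cmpxchg(p, old, new) != old);  /* retry if we raced */
  }
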
diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
new file mode 100644
index 000000000000..9df94cb0041a
--- /dev/null
+++ b/arch/riscv/include/asm/csr.h
@@ -0,0 +1,126 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_CSR_H
+#define _ASM_RISCV_CSR_H
+
+#include <linux/const.h>
+
+/* Status register flags */
+#define SR_IE   _AC(0x00000002, UL) /* Interrupt Enable */
+#define SR_PIE  _AC(0x00000020, UL) /* Previous IE */
+#define SR_PS   _AC(0x00000100, UL) /* Previously Supervisor */
+#define SR_SUM  _AC(0x00040000, UL) /* Supervisor may access User Memory */
+
+#define SR_FS           _AC(0x00006000, UL) /* Floating-point Status */
+#define SR_FS_OFF       _AC(0x00000000, UL)
+#define SR_FS_INITIAL   _AC(0x00002000, UL)
+#define SR_FS_CLEAN     _AC(0x00004000, UL)
+#define SR_FS_DIRTY     _AC(0x00006000, UL)
+
+#define SR_XS           _AC(0x00018000, UL) /* Extension Status */
+#define SR_XS_OFF       _AC(0x00000000, UL)
+#define SR_XS_INITIAL   _AC(0x00008000, UL)
+#define SR_XS_CLEAN     _AC(0x00010000, UL)
+#define SR_XS_DIRTY     _AC(0x00018000, UL)
+
+#ifndef CONFIG_64BIT
+#define SR_SD   _AC(0x80000000, UL) /* FS/XS dirty */
+#else
+#define SR_SD   _AC(0x8000000000000000, UL) /* FS/XS dirty */
+#endif
+
+/* SPTBR flags */
+#if __riscv_xlen == 32
+#define SPTBR_PPN     _AC(0x003FFFFF, UL)
+#define SPTBR_MODE_32 _AC(0x80000000, UL)
+#define SPTBR_MODE    SPTBR_MODE_32
+#else
+#define SPTBR_PPN     _AC(0x00000FFFFFFFFFFF, UL)
+#define SPTBR_MODE_39 _AC(0x8000000000000000, UL)
+#define SPTBR_MODE    SPTBR_MODE_39
+#endif
+
+/* Interrupt Enable and Interrupt Pending flags */
+#define SIE_SSIE _AC(0x00000002, UL) /* Software Interrupt Enable */
+#define SIE_STIE _AC(0x00000020, UL) /* Timer Interrupt Enable */
+
+#define EXC_INST_MISALIGNED     0
+#define EXC_INST_ACCESS         1
+#define EXC_BREAKPOINT          3
+#define EXC_LOAD_ACCESS         5
+#define EXC_STORE_ACCESS        7
+#define EXC_SYSCALL             8
+#define EXC_INST_PAGE_FAULT     12
+#define EXC_LOAD_PAGE_FAULT     13
+#define EXC_STORE_PAGE_FAULT    15
+
+#ifndef __ASSEMBLY__
+
+#define csr_swap(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrw %0, " #csr ", %1"		\
+			      : "=r" (__v) : "rK" (__v));	\
+	__v;							\
+})
+
+#define csr_read(csr)						\
+({								\
+	register unsigned long __v;				\
+	__asm__ __volatile__ ("csrr %0, " #csr			\
+			      : "=r" (__v));			\
+	__v;							\
+})
+
+#define csr_write(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrw " #csr ", %0"		\
+			      : : "rK" (__v));			\
+})
+
+#define csr_read_set(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrs %0, " #csr ", %1"		\
+			      : "=r" (__v) : "rK" (__v));	\
+	__v;							\
+})
+
+#define csr_set(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrs " #csr ", %0"		\
+			      : : "rK" (__v));			\
+})
+
+#define csr_read_clear(csr, val)				\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrc %0, " #csr ", %1"		\
+			      : "=r" (__v) : "rK" (__v));	\
+	__v;							\
+})
+
+#define csr_clear(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrc " #csr ", %0"		\
+			      : : "rK" (__v));			\
+})
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_CSR_H */
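
These macros stringify the CSR name straight into the instruction, so for
example csr_read_clear(sstatus, SR_IE) expands to roughly the following,
which is how arch_local_irq_save() is built in irqflags.h:

  unsigned long __v = SR_IE;

  __asm__ __volatile__ ("csrrc %0, sstatus, %1"
                        : "=r" (__v) : "rK" (__v));
  /* __v now holds the previous sstatus value; SR_IE is cleared in the CSR. */
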
diff --git a/arch/riscv/include/asm/delay.h b/arch/riscv/include/asm/delay.h
new file mode 100644
index 000000000000..18d399d3e689
--- /dev/null
+++ b/arch/riscv/include/asm/delay.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2016 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_DELAY_H
+#define _ASM_RISCV_DELAY_H
+
+extern unsigned long timebase;
+
+#define udelay udelay
+extern void udelay(unsigned long usecs);
+
+#define ndelay ndelay
+extern void ndelay(unsigned long nsecs);
+
+extern void __delay(unsigned long cycles);
+
+#endif /* _ASM_RISCV_DELAY_H */
diff --git a/arch/riscv/include/asm/device.h b/arch/riscv/include/asm/device.h
new file mode 100644
index 000000000000..89e8ce022e37
--- /dev/null
+++ b/arch/riscv/include/asm/device.h
@@ -0,0 +1,28 @@
+/*
+ * Copyright (C) 2016 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+
+#ifndef _ASM_RISCV_DEVICE_H
+#define _ASM_RISCV_DEVICE_H
+
+#include <linux/sysfs.h>
+
+struct dev_archdata {
+	struct dma_map_ops *dma_ops;
+};
+
+struct pdev_archdata {
+};
+
+#endif /* _ASM_RISCV_DEVICE_H */
diff --git a/arch/riscv/include/asm/dma-mapping.h b/arch/riscv/include/asm/dma-mapping.h
new file mode 100644
index 000000000000..f4d485780db3
--- /dev/null
+++ b/arch/riscv/include/asm/dma-mapping.h
@@ -0,0 +1,61 @@
+/*
+ * Copyright (C) 2003-2004 Hewlett-Packard Co
+ *	David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2016 SiFive, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_RISCV_DMA_MAPPING_H
+#define __ASM_RISCV_DMA_MAPPING_H
+
+#ifdef __KERNEL__
+
+/* Use ops->dma_mapping_error (if it exists) or assume success */
+// #undef DMA_ERROR_CODE
+
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
+{
+	return &dma_noop_ops;
+}
+
+static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
+{
+	if (!dev->dma_mask)
+		return false;
+
+	return addr + size - 1 <= *dev->dma_mask;
+}
+
+static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
+{
+	return (dma_addr_t)paddr;
+}
+
+static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t dev_addr)
+{
+	return (phys_addr_t)dev_addr;
+}
+
+static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size, enum dma_data_direction dir)
+{
+	/*
+	 * RISC-V is cache-coherent, so this is mostly a no-op.
+	 * However, we do need to ensure that dma_cache_sync()
+	 * enforces order, hence the mb().
+	 */
+	mb();
+}
+
+#endif	/* __KERNEL__ */
+#endif	/* __ASM_RISCV_DMA_MAPPING_H */
diff --git a/arch/riscv/include/asm/elf.h b/arch/riscv/include/asm/elf.h
new file mode 100644
index 000000000000..5ded3f6f83ea
--- /dev/null
+++ b/arch/riscv/include/asm/elf.h
@@ -0,0 +1,83 @@
+/*
+ * Copyright (C) 2003 Matjaz Breskvar <phoenix@bsemi.com>
+ * Copyright (C) 2010-2011 Jonas Bonn <jonas@southpole.se>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef _ASM_RISCV_ELF_H
+#define _ASM_RISCV_ELF_H
+
+#include <uapi/asm/elf.h>
+#include <asm/auxvec.h>
+#include <asm/byteorder.h>
+
+/* TODO: Move definition into include/uapi/linux/elf-em.h */
+#define EM_RISCV	0xF3
+
+/*
+ * These are used to set parameters in the core dumps.
+ */
+#define ELF_ARCH	EM_RISCV
+
+#ifdef CONFIG_64BIT
+#define ELF_CLASS	ELFCLASS64
+#else
+#define ELF_CLASS	ELFCLASS32
+#endif
+
+#if defined(__LITTLE_ENDIAN)
+#define ELF_DATA	ELFDATA2LSB
+#elif defined(__BIG_ENDIAN)
+#define ELF_DATA	ELFDATA2MSB
+#else
+#error "Unknown endianness"
+#endif
+
+/*
+ * This is used to ensure we don't load something for the wrong architecture.
+ */
+#define elf_check_arch(x) ((x)->e_machine == EM_RISCV)
+
+#define CORE_DUMP_USE_REGSET
+#define ELF_EXEC_PAGESIZE	(PAGE_SIZE)
+
+/*
+ * This is the location that an ET_DYN program is loaded if exec'ed.  Typical
+ * use of this is to invoke "./ld.so someprog" to test out a new version of
+ * the loader.  We need to make sure that it is out of the way of the program
+ * that it will "exec", and that there is sufficient room for the brk.
+ */
+#define ELF_ET_DYN_BASE		((TASK_SIZE / 3) * 2)
+
+/*
+ * This yields a mask that user programs can use to figure out what
+ * instruction set this CPU supports.  This could be done in user space,
+ * but it's not easy, and we've already done it here.
+ */
+#define ELF_HWCAP	(0)
+
+/*
+ * This yields a string that ld.so will use to load implementation
+ * specific libraries for optimization.  This is more specific in
+ * intent than poking at uname or /proc/cpuinfo.
+ */
+#define ELF_PLATFORM	(NULL)
+
+#define ARCH_DLINFO						\
+do {								\
+	NEW_AUX_ENT(AT_SYSINFO_EHDR,				\
+		(elf_addr_t)current->mm->context.vdso);		\
+} while (0)
+
+
+#define ARCH_HAS_SETUP_ADDITIONAL_PAGES
+struct linux_binprm;
+extern int arch_setup_additional_pages(struct linux_binprm *bprm,
+	int uses_interp);
+
+#endif /* _ASM_RISCV_ELF_H */
diff --git a/arch/riscv/include/asm/io.h b/arch/riscv/include/asm/io.h
new file mode 100644
index 000000000000..d942555a7a08
--- /dev/null
+++ b/arch/riscv/include/asm/io.h
@@ -0,0 +1,36 @@
+/*
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_IO_H
+#define _ASM_RISCV_IO_H
+
+#include <asm-generic/io.h>
+
+#ifdef __KERNEL__
+
+#ifdef CONFIG_MMU
+
+extern void __iomem *ioremap(phys_addr_t offset, unsigned long size);
+
+#define ioremap_nocache(addr, size) ioremap((addr), (size))
+#define ioremap_wc(addr, size) ioremap((addr), (size))
+#define ioremap_wt(addr, size) ioremap((addr), (size))
+
+extern void iounmap(void __iomem *addr);
+
+#endif /* CONFIG_MMU */
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_IO_H */
diff --git a/arch/riscv/include/asm/irq.h b/arch/riscv/include/asm/irq.h
new file mode 100644
index 000000000000..ce024e60f585
--- /dev/null
+++ b/arch/riscv/include/asm/irq.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_IRQ_H
+#define _ASM_RISCV_IRQ_H
+
+#define NR_IRQS         0
+
+#define INTERRUPT_CAUSE_SOFTWARE    1
+#define INTERRUPT_CAUSE_TIMER       5
+#define INTERRUPT_CAUSE_EXTERNAL    9
+
+void riscv_timer_interrupt(void);
+
+#include <asm-generic/irq.h>
+
+/* The value of csr sie before init_traps runs (core is up) */
+DECLARE_PER_CPU(atomic_long_t, riscv_early_sie);
+
+#endif /* _ASM_RISCV_IRQ_H */
diff --git a/arch/riscv/include/asm/irqflags.h b/arch/riscv/include/asm/irqflags.h
new file mode 100644
index 000000000000..f86d8173a2de
--- /dev/null
+++ b/arch/riscv/include/asm/irqflags.h
@@ -0,0 +1,64 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+
+#ifndef _ASM_RISCV_IRQFLAGS_H
+#define _ASM_RISCV_IRQFLAGS_H
+
+#include <asm/processor.h>
+#include <asm/csr.h>
+
+/* read interrupt enabled status */
+static inline unsigned long arch_local_save_flags(void)
+{
+	return csr_read(sstatus);
+}
+
+/* unconditionally enable interrupts */
+static inline void arch_local_irq_enable(void)
+{
+	csr_set(sstatus, SR_IE);
+}
+
+/* unconditionally disable interrupts */
+static inline void arch_local_irq_disable(void)
+{
+	csr_clear(sstatus, SR_IE);
+}
+
+/* get status and disable interrupts */
+static inline unsigned long arch_local_irq_save(void)
+{
+	return csr_read_clear(sstatus, SR_IE);
+}
+
+/* test flags */
+static inline int arch_irqs_disabled_flags(unsigned long flags)
+{
+	return !(flags & SR_IE);
+}
+
+/* test hardware interrupt enable bit */
+static inline int arch_irqs_disabled(void)
+{
+	return arch_irqs_disabled_flags(arch_local_save_flags());
+}
+
+/* set interrupt enabled status */
+static inline void arch_local_irq_restore(unsigned long flags)
+{
+	csr_set(sstatus, flags & SR_IE);
+}
+
+#endif /* _ASM_RISCV_IRQFLAGS_H */
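
For illustration, callers see this through the generic <linux/irqflags.h>
wrappers; nothing here is RISC-V specific beyond which CSR bits move:

  unsigned long flags;

  local_irq_save(flags);     /* csrrc sstatus: read old value, clear SR_IE   */
  /* ... critical section with interrupts off on this hart ...              */
  local_irq_restore(flags);  /* csrs sstatus: re-set SR_IE only if it was on */
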
diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
new file mode 100644
index 000000000000..61a03e7c1975
--- /dev/null
+++ b/arch/riscv/include/asm/kprobes.h
@@ -0,0 +1,23 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+
+#ifndef ASM_RISCV_KPROBES_H
+#define ASM_RISCV_KPROBES_H
+
+#ifdef CONFIG_KPROBES
+#error "RISC-V doesn't skpport CONFIG_KPROBES"
+#endif
+
+#endif
diff --git a/arch/riscv/include/asm/linkage.h b/arch/riscv/include/asm/linkage.h
new file mode 100644
index 000000000000..6d4d4f6b6951
--- /dev/null
+++ b/arch/riscv/include/asm/linkage.h
@@ -0,0 +1,21 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_LINKAGE_H
+#define _ASM_RISCV_LINKAGE_H
+
+#define __ALIGN		.balign 4
+#define __ALIGN_STR	".balign 4"
+
+#endif /* _ASM_RISCV_LINKAGE_H */
diff --git a/arch/riscv/include/asm/mmu.h b/arch/riscv/include/asm/mmu.h
new file mode 100644
index 000000000000..9eeee484fd81
--- /dev/null
+++ b/arch/riscv/include/asm/mmu.h
@@ -0,0 +1,27 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+
+#ifndef _ASM_RISCV_MMU_H
+#define _ASM_RISCV_MMU_H
+
+#ifndef __ASSEMBLY__
+
+typedef struct {
+	void *vdso;
+} mm_context_t;
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_MMU_H */
diff --git a/arch/riscv/include/asm/mmu_context.h b/arch/riscv/include/asm/mmu_context.h
new file mode 100644
index 000000000000..44053fefa533
--- /dev/null
+++ b/arch/riscv/include/asm/mmu_context.h
@@ -0,0 +1,70 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_MMU_CONTEXT_H
+#define _ASM_RISCV_MMU_CONTEXT_H
+
+#include <asm-generic/mm_hooks.h>
+
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <asm/tlbflush.h>
+
+static inline void enter_lazy_tlb(struct mm_struct *mm,
+	struct task_struct *task)
+{
+}
+
+/* Initialize context-related info for a new mm_struct */
+static inline int init_new_context(struct task_struct *task,
+	struct mm_struct *mm)
+{
+	return 0;
+}
+
+static inline void destroy_context(struct mm_struct *mm)
+{
+}
+
+static inline pgd_t *current_pgdir(void)
+{
+	return pfn_to_virt(csr_read(sptbr) & SPTBR_PPN);
+}
+
+static inline void set_pgdir(pgd_t *pgd)
+{
+	csr_write(sptbr, virt_to_pfn(pgd) | SPTBR_MODE);
+}
+
+static inline void switch_mm(struct mm_struct *prev,
+	struct mm_struct *next, struct task_struct *task)
+{
+	if (likely(prev != next)) {
+		set_pgdir(next->pgd);
+		local_flush_tlb_all();
+	}
+}
+
+static inline void activate_mm(struct mm_struct *prev,
+			       struct mm_struct *next)
+{
+	switch_mm(prev, next, NULL);
+}
+
+static inline void deactivate_mm(struct task_struct *task,
+	struct mm_struct *mm)
+{
+}
+
+#endif /* _ASM_RISCV_MMU_CONTEXT_H */
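
set_pgdir() just packs the page-frame number of the new pgd together with
the paging-mode bit; a worked example with a hypothetical pgd address
(RV64/Sv39, illustration only):

  pgd_t *pgd          = (pgd_t *)0xffffffff81000000UL;      /* kernel VA of pgd */
  unsigned long pfn   = ((unsigned long)pgd - va_pa_offset) /* __pa(pgd)        */
                        >> PAGE_SHIFT;                      /* virt_to_pfn(pgd) */
  unsigned long value = pfn | SPTBR_MODE_39;                /* PPN + mode bit   */

  csr_write(sptbr, value);                         /* what set_pgdir() boils down to */
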
diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
new file mode 100644
index 000000000000..1cde0b295d25
--- /dev/null
+++ b/arch/riscv/include/asm/page.h
@@ -0,0 +1,138 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ * Copyright (C) 2017 XiaojingZhu <zhuxiaoj@ict.ac.cn>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_PAGE_H
+#define _ASM_RISCV_PAGE_H
+
+#include <linux/pfn.h>
+#include <linux/const.h>
+
+#define PAGE_SHIFT	(12)
+#define PAGE_SIZE	(_AC(1, UL) << PAGE_SHIFT)
+#define PAGE_MASK	(~(PAGE_SIZE - 1))
+
+#ifdef __KERNEL__
+
+/*
+ * PAGE_OFFSET -- the first address of the first page of memory.
+ * When not using MMU this corresponds to the first free page in
+ * physical memory (aligned on a page boundary).
+ */
+#ifdef CONFIG_64BIT
+#define PAGE_OFFSET		_AC(0xffffffff80000000, UL)
+#else
+#define PAGE_OFFSET		_AC(0xc0000000, UL)
+#endif
+
+#define KERN_VIRT_SIZE (-PAGE_OFFSET)
+
+#ifndef __ASSEMBLY__
+
+#define PAGE_UP(addr)	(((addr)+((PAGE_SIZE)-1))&(~((PAGE_SIZE)-1)))
+#define PAGE_DOWN(addr)	((addr)&(~((PAGE_SIZE)-1)))
+
+/* align addr on a size boundary - adjust address up/down if needed */
+#define _ALIGN_UP(addr, size)	(((addr)+((size)-1))&(~((size)-1)))
+#define _ALIGN_DOWN(addr, size)	((addr)&(~((size)-1)))
+
+/* align addr on a size boundary - adjust address up if needed */
+#define _ALIGN(addr, size)	_ALIGN_UP(addr, size)
+
+#define clear_page(pgaddr)			memset((pgaddr), 0, PAGE_SIZE)
+#define copy_page(to, from)			memcpy((to), (from), PAGE_SIZE)
+
+#define clear_user_page(pgaddr, vaddr, page)	memset((pgaddr), 0, PAGE_SIZE)
+#define copy_user_page(vto, vfrom, vaddr, topg) \
+			memcpy((vto), (vfrom), PAGE_SIZE)
+
+/*
+ * Use struct definitions to apply C type checking
+ */
+
+/* Page Global Directory entry */
+typedef struct {
+	unsigned long pgd;
+} pgd_t;
+
+/* Page Table entry */
+typedef struct {
+	unsigned long pte;
+} pte_t;
+
+typedef struct {
+	unsigned long pgprot;
+} pgprot_t;
+
+typedef struct page *pgtable_t;
+
+#define pte_val(x)	((x).pte)
+#define pgd_val(x)	((x).pgd)
+#define pgprot_val(x)	((x).pgprot)
+
+#define __pte(x)	((pte_t) { (x) })
+#define __pgd(x)	((pgd_t) { (x) })
+#define __pgprot(x)	((pgprot_t) { (x) })
+
+#ifdef CONFIG_64BIT
+#define PTE_FMT "%016lx"
+#else
+#define PTE_FMT "%08lx"
+#endif
+
+extern unsigned long va_pa_offset;
+extern unsigned long pfn_base;
+
+extern unsigned long max_low_pfn;
+extern unsigned long min_low_pfn;
+
+#define __pa(x)		((unsigned long)(x) - va_pa_offset)
+#define __va(x)		((void *)((unsigned long) (x) + va_pa_offset))
+
+#define phys_to_pfn(phys)	(PFN_DOWN(phys))
+#define pfn_to_phys(pfn)	(PFN_PHYS(pfn))
+
+#define virt_to_pfn(vaddr)	(phys_to_pfn(__pa(vaddr)))
+#define pfn_to_virt(pfn)	(__va(pfn_to_phys(pfn)))
+
+#define virt_to_page(vaddr)	(pfn_to_page(virt_to_pfn(vaddr)))
+#define page_to_virt(page)	(pfn_to_virt(page_to_pfn(page)))
+
+#define page_to_phys(page)	(pfn_to_phys(page_to_pfn(page)))
+#define page_to_bus(page)	(page_to_phys(page))
+#define phys_to_page(paddr)	(pfn_to_page(phys_to_pfn(paddr)))
+
+#define pfn_valid(pfn)		(((pfn) >= pfn_base) && (((pfn)-pfn_base) < max_mapnr))
+
+#define ARCH_PFN_OFFSET		(pfn_base)
+
+#endif /* __ASSEMBLY__ */
+
+#define virt_addr_valid(vaddr)	(pfn_valid(virt_to_pfn(vaddr)))
+
+#endif /* __KERNEL__ */
+
+#define VM_DATA_DEFAULT_FLAGS	(VM_READ | VM_WRITE | \
+				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
+
+#include <asm-generic/memory_model.h>
+#include <asm-generic/getorder.h>
+
+/* vDSO support */
+/* We do define AT_SYSINFO_EHDR but don't use the gate mechanism */
+#define __HAVE_ARCH_GATE_AREA
+
+#endif /* _ASM_RISCV_PAGE_H */
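
The linear-map helpers above are all a single offset; a worked round trip
with hypothetical RV64 numbers:

  /* Assume RAM starts at 0x80000000, so va_pa_offset ends up as
   * PAGE_OFFSET - 0x80000000 = 0xffffffff00000000 (hypothetical). */
  unsigned long va  = 0xffffffff80200000UL;         /* some kernel virtual address    */
  unsigned long pa  = va - va_pa_offset;            /* __pa(va)        == 0x80200000  */
  unsigned long pfn = pa >> PAGE_SHIFT;             /* virt_to_pfn(va) == 0x80200     */
  void *back        = (void *)(pa + va_pa_offset);  /* __va(pa) gives the original va */
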
diff --git a/arch/riscv/include/asm/pci.h b/arch/riscv/include/asm/pci.h
new file mode 100644
index 000000000000..ad46530f5faa
--- /dev/null
+++ b/arch/riscv/include/asm/pci.h
@@ -0,0 +1,51 @@
+/*
+ * Copyright (C) 2016 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef __ASM_RISCV_PCI_H
+#define __ASM_RISCV_PCI_H
+#ifdef __KERNEL__
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/io.h>
+
+#define PCIBIOS_MIN_IO		0x1000
+#define PCIBIOS_MIN_MEM		0
+
+/* RISC-V shim does not initialize PCI bus */
+#define pcibios_assign_all_busses() 1
+
+/* RISC-V TileLink and PCIe share the same address space */
+#define PCI_DMA_BUS_IS_PHYS 1
+
+extern int isa_dma_bridge_buggy;
+
+#ifdef CONFIG_PCI
+static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
+{
+	/* no legacy IRQ on risc-v */
+	return -ENODEV;
+}
+
+static inline int pci_proc_domain(struct pci_bus *bus)
+{
+	/* always show the domain in /proc */
+	return 1;
+}
+#endif  /* CONFIG_PCI */
+
+#endif  /* __KERNEL__ */
+#endif  /* __ASM_RISCV_PCI_H */
diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
new file mode 100644
index 000000000000..fde046080e5d
--- /dev/null
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -0,0 +1,126 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_PGALLOC_H
+#define _ASM_RISCV_PGALLOC_H
+
+#include <linux/mm.h>
+#include <asm/tlb.h>
+
+static inline void pmd_populate_kernel(struct mm_struct *mm,
+	pmd_t *pmd, pte_t *pte)
+{
+	unsigned long pfn = virt_to_pfn(pte);
+
+	set_pmd(pmd, __pmd((pfn << _PAGE_PFN_SHIFT) | _PAGE_TABLE));
+}
+
+static inline void pmd_populate(struct mm_struct *mm,
+	pmd_t *pmd, pgtable_t pte)
+{
+	unsigned long pfn = virt_to_pfn(page_address(pte));
+
+	set_pmd(pmd, __pmd((pfn << _PAGE_PFN_SHIFT) | _PAGE_TABLE));
+}
+
+#ifndef __PAGETABLE_PMD_FOLDED
+static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
+{
+	unsigned long pfn = virt_to_pfn(pmd);
+
+	set_pud(pud, __pud((pfn << _PAGE_PFN_SHIFT) | _PAGE_TABLE));
+}
+#endif /* __PAGETABLE_PMD_FOLDED */
+
+#define pmd_pgtable(pmd)	pmd_page(pmd)
+
+static inline pgd_t *pgd_alloc(struct mm_struct *mm)
+{
+	pgd_t *pgd;
+
+	pgd = (pgd_t *)__get_free_page(GFP_KERNEL);
+	if (likely(pgd != NULL)) {
+		memset(pgd, 0, USER_PTRS_PER_PGD * sizeof(pgd_t));
+		/* Copy kernel mappings */
+		memcpy(pgd + USER_PTRS_PER_PGD,
+			init_mm.pgd + USER_PTRS_PER_PGD,
+			(PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
+	}
+	return pgd;
+}
+
+static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
+{
+	free_page((unsigned long)pgd);
+}
+
+#ifndef __PAGETABLE_PMD_FOLDED
+
+static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
+{
+	return (pmd_t *)__get_free_page(
+		GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+}
+
+static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
+{
+	free_page((unsigned long)pmd);
+}
+
+#define __pmd_free_tlb(tlb, pmd, addr)  pmd_free((tlb)->mm, pmd)
+
+#endif /* __PAGETABLE_PMD_FOLDED */
+
+static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
+	unsigned long address)
+{
+	return (pte_t *)__get_free_page(
+		GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+}
+
+static inline struct page *pte_alloc_one(struct mm_struct *mm,
+	unsigned long address)
+{
+	struct page *pte;
+
+	pte = alloc_page(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+	if (likely(pte != NULL)) {
+		pgtable_page_ctor(pte);
+	}
+	return pte;
+}
+
+static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
+{
+	free_page((unsigned long)pte);
+}
+
+static inline void pte_free(struct mm_struct *mm, pgtable_t pte)
+{
+	pgtable_page_dtor(pte);
+	__free_page(pte);
+}
+
+#define __pte_free_tlb(tlb, pte, buf)   \
+do {                                    \
+	pgtable_page_dtor(pte);         \
+	tlb_remove_page((tlb), pte);    \
+} while (0)
+
+static inline void check_pgt_cache(void)
+{
+}
+
+#endif /* _ASM_RISCV_PGALLOC_H */
diff --git a/arch/riscv/include/asm/pgtable-32.h b/arch/riscv/include/asm/pgtable-32.h
new file mode 100644
index 000000000000..ae6ad80f80a5
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable-32.h
@@ -0,0 +1,26 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_32_H
+#define _ASM_RISCV_PGTABLE_32_H
+
+#include <asm-generic/pgtable-nopmd.h>
+#include <linux/const.h>
+
+/* Size of region mapped by a page global directory */
+#define PGDIR_SHIFT     22
+#define PGDIR_SIZE      (_AC(1, UL) << PGDIR_SHIFT)
+#define PGDIR_MASK      (~(PGDIR_SIZE - 1))
+
+#endif /* _ASM_RISCV_PGTABLE_32_H */
diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
new file mode 100644
index 000000000000..f4f6dd1690f1
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -0,0 +1,85 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_64_H
+#define _ASM_RISCV_PGTABLE_64_H
+
+#include <linux/const.h>
+
+#define PGDIR_SHIFT     30
+/* Size of region mapped by a page global directory */
+#define PGDIR_SIZE      (_AC(1, UL) << PGDIR_SHIFT)
+#define PGDIR_MASK      (~(PGDIR_SIZE - 1))
+
+#define PMD_SHIFT       21
+/* Size of region mapped by a page middle directory */
+#define PMD_SIZE        (_AC(1, UL) << PMD_SHIFT)
+#define PMD_MASK        (~(PMD_SIZE - 1))
+
+/* Page Middle Directory entry */
+typedef struct {
+	unsigned long pmd;
+} pmd_t;
+
+#define pmd_val(x)      ((x).pmd)
+#define __pmd(x)        ((pmd_t) { (x) })
+
+#define PTRS_PER_PMD    (PAGE_SIZE / sizeof(pmd_t))
+
+static inline int pud_present(pud_t pud)
+{
+	return (pud_val(pud) & _PAGE_PRESENT);
+}
+
+static inline int pud_none(pud_t pud)
+{
+	return (pud_val(pud) == 0);
+}
+
+static inline int pud_bad(pud_t pud)
+{
+	return !pud_present(pud);
+}
+
+static inline void set_pud(pud_t *pudp, pud_t pud)
+{
+	*pudp = pud;
+}
+
+static inline void pud_clear(pud_t *pudp)
+{
+	set_pud(pudp, __pud(0));
+}
+
+static inline unsigned long pud_page_vaddr(pud_t pud)
+{
+	return (unsigned long)pfn_to_virt(pud_val(pud) >> _PAGE_PFN_SHIFT);
+}
+
+#define pmd_index(addr) (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
+
+static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
+{
+	return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(addr);
+}
+
+static inline pmd_t pfn_pmd(unsigned long pfn, pgprot_t prot)
+{
+	return __pmd((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
+}
+
+#define pmd_ERROR(e) \
+	pr_err("%s:%d: bad pmd %016lx.\n", __FILE__, __LINE__, pmd_val(e))
+
+#endif /* _ASM_RISCV_PGTABLE_64_H */
diff --git a/arch/riscv/include/asm/pgtable-bits.h b/arch/riscv/include/asm/pgtable-bits.h
new file mode 100644
index 000000000000..79eb880a1749
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable-bits.h
@@ -0,0 +1,49 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_BITS_H
+#define _ASM_RISCV_PGTABLE_BITS_H
+
+/*
+ * PTE format:
+ * | XLEN-1  10 | 9             8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
+ *       PFN      reserved for SW   D   A   G   U   X   W   R   V
+ */
+
+#define _PAGE_ACCESSED_OFFSET 6
+
+#define _PAGE_PRESENT   (1 << 0)
+#define _PAGE_READ      (1 << 1)    /* Readable */
+#define _PAGE_WRITE     (1 << 2)    /* Writable */
+#define _PAGE_EXEC      (1 << 3)    /* Executable */
+#define _PAGE_USER      (1 << 4)    /* User */
+#define _PAGE_GLOBAL    (1 << 5)    /* Global */
+#define _PAGE_ACCESSED  (1 << _PAGE_ACCESSED_OFFSET)  /* Set by hardware on any access */
+#define _PAGE_DIRTY     (1 << 7)    /* Set by hardware on any write */
+#define _PAGE_SOFT      (1 << 8)    /* Reserved for software */
+
+#define _PAGE_SPECIAL   _PAGE_SOFT
+#define _PAGE_TABLE     _PAGE_PRESENT
+
+#define _PAGE_PFN_SHIFT 10
+
+/* Set of bits to preserve across pte_modify() */
+#define _PAGE_CHG_MASK  (~(unsigned long)(_PAGE_PRESENT | _PAGE_READ |	\
+					  _PAGE_WRITE | _PAGE_EXEC |	\
+					  _PAGE_USER | _PAGE_GLOBAL))
+
+/* Advertise support for _PAGE_SPECIAL */
+#define __HAVE_ARCH_PTE_SPECIAL
+
+#endif /* _ASM_RISCV_PGTABLE_BITS_H */
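
Putting the fields together, a leaf PTE is the PFN shifted up by
_PAGE_PFN_SHIFT with the permission bits below it; a worked example for a
kernel read/write mapping (PFN value hypothetical):

  unsigned long pfn = 0x80200;                     /* physical page 0x80200000 */
  unsigned long pte = (pfn << _PAGE_PFN_SHIFT)     /* bits 10 and up           */
                      | _PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE
                      | _PAGE_ACCESSED | _PAGE_DIRTY;
  /* pte == 0x200800c7; the flag set matches _PAGE_KERNEL in pgtable.h, with
   * A and D preset so no extra fault or hardware update is needed for them. */
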
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
new file mode 100644
index 000000000000..8d88c80b0c5e
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable.h
@@ -0,0 +1,426 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_H
+#define _ASM_RISCV_PGTABLE_H
+
+#include <linux/mmzone.h>
+
+#include <asm/pgtable-bits.h>
+
+#ifndef __ASSEMBLY__
+
+#ifdef CONFIG_MMU
+
+/* Page Upper Directory not used in RISC-V */
+#include <asm-generic/pgtable-nopud.h>
+#include <asm/page.h>
+#include <asm/tlbflush.h>
+#include <linux/mm_types.h>
+
+#ifdef CONFIG_64BIT
+#include <asm/pgtable-64.h>
+#else
+#include <asm/pgtable-32.h>
+#endif /* CONFIG_64BIT */
+
+/* Number of entries in the page global directory */
+#define PTRS_PER_PGD    (PAGE_SIZE / sizeof(pgd_t))
+/* Number of entries in the page table */
+#define PTRS_PER_PTE    (PAGE_SIZE / sizeof(pte_t))
+
+/* Number of PGD entries that a user-mode program can use */
+#define USER_PTRS_PER_PGD   (TASK_SIZE / PGDIR_SIZE)
+#define FIRST_USER_ADDRESS  0
+
+/* Page protection bits */
+#define _PAGE_BASE	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_USER)
+
+#define PAGE_NONE		__pgprot(0)
+#define PAGE_READ		__pgprot(_PAGE_BASE | _PAGE_READ)
+#define PAGE_WRITE		__pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_WRITE)
+#define PAGE_EXEC		__pgprot(_PAGE_BASE | _PAGE_EXEC)
+#define PAGE_READ_EXEC		__pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_EXEC)
+#define PAGE_WRITE_EXEC		__pgprot(_PAGE_BASE | _PAGE_READ |	\
+					 _PAGE_EXEC | _PAGE_WRITE)
+
+#define PAGE_COPY		PAGE_READ
+#define PAGE_COPY_EXEC		PAGE_EXEC
+#define PAGE_COPY_READ_EXEC	PAGE_READ_EXEC
+#define PAGE_SHARED		PAGE_WRITE
+#define PAGE_SHARED_EXEC	PAGE_WRITE_EXEC
+
+#define _PAGE_KERNEL		_PAGE_READ \
+				| _PAGE_WRITE \
+				| _PAGE_PRESENT \
+				| _PAGE_ACCESSED \
+				| _PAGE_DIRTY
+
+#define PAGE_KERNEL		__pgprot(_PAGE_KERNEL)
+#define PAGE_KERNEL_EXEC	__pgprot(_PAGE_KERNEL | _PAGE_EXEC)
+
+extern pgd_t swapper_pg_dir[];
+
+/* MAP_PRIVATE permissions: xwr (copy-on-write) */
+#define __P000	PAGE_NONE
+#define __P001	PAGE_READ
+#define __P010	PAGE_COPY
+#define __P011	PAGE_COPY
+#define __P100	PAGE_EXEC
+#define __P101	PAGE_READ_EXEC
+#define __P110	PAGE_COPY_EXEC
+#define __P111	PAGE_COPY_READ_EXEC
+
+/* MAP_SHARED permissions: xwr */
+#define __S000	PAGE_NONE
+#define __S001	PAGE_READ
+#define __S010	PAGE_SHARED
+#define __S011	PAGE_SHARED
+#define __S100	PAGE_EXEC
+#define __S101	PAGE_READ_EXEC
+#define __S110	PAGE_SHARED_EXEC
+#define __S111	PAGE_SHARED_EXEC
+
+/*
+ * ZERO_PAGE is a global shared page that is always zero,
+ * used for zero-mapped memory areas, etc.
+ */
+extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
+#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
+
+static inline int pmd_present(pmd_t pmd)
+{
+	return (pmd_val(pmd) & _PAGE_PRESENT);
+}
+
+static inline int pmd_none(pmd_t pmd)
+{
+	return (pmd_val(pmd) == 0);
+}
+
+static inline int pmd_bad(pmd_t pmd)
+{
+	return !pmd_present(pmd);
+}
+
+static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
+{
+	*pmdp = pmd;
+}
+
+static inline void pmd_clear(pmd_t *pmdp)
+{
+	set_pmd(pmdp, __pmd(0));
+}
+
+
+static inline pgd_t pfn_pgd(unsigned long pfn, pgprot_t prot)
+{
+	return __pgd((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
+}
+
+#define pgd_index(addr) (((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
+
+/* Locate an entry in the page global directory */
+static inline pgd_t *pgd_offset(const struct mm_struct *mm, unsigned long addr)
+{
+	return mm->pgd + pgd_index(addr);
+}
+/* Locate an entry in the kernel page global directory */
+#define pgd_offset_k(addr)      pgd_offset(&init_mm, (addr))
+
+static inline struct page *pmd_page(pmd_t pmd)
+{
+	return pfn_to_page(pmd_val(pmd) >> _PAGE_PFN_SHIFT);
+}
+
+static inline unsigned long pmd_page_vaddr(pmd_t pmd)
+{
+	return (unsigned long)pfn_to_virt(pmd_val(pmd) >> _PAGE_PFN_SHIFT);
+}
+
+/* Yields the page frame number (PFN) of a page table entry */
+static inline unsigned long pte_pfn(pte_t pte)
+{
+	return (pte_val(pte) >> _PAGE_PFN_SHIFT);
+}
+
+#define pte_page(x)     pfn_to_page(pte_pfn(x))
+
+/* Constructs a page table entry */
+static inline pte_t pfn_pte(unsigned long pfn, pgprot_t prot)
+{
+	return __pte((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
+}
+
+static inline pte_t mk_pte(struct page *page, pgprot_t prot)
+{
+	return pfn_pte(page_to_pfn(page), prot);
+}
+
+#define pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+
+static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long addr)
+{
+	return (pte_t *)pmd_page_vaddr(*pmd) + pte_index(addr);
+}
+
+#define pte_offset_map(dir, addr)	pte_offset_kernel((dir), (addr))
+#define pte_unmap(pte)			((void)(pte))
+
+/*
+ * Certain architectures need to do special things when PTEs within
+ * a page table are directly modified.  Thus, the following hook is
+ * made available.
+ */
+static inline void set_pte(pte_t *ptep, pte_t pteval)
+{
+	*ptep = pteval;
+}
+
+static inline void set_pte_at(struct mm_struct *mm,
+	unsigned long addr, pte_t *ptep, pte_t pteval)
+{
+	set_pte(ptep, pteval);
+}
+
+static inline void pte_clear(struct mm_struct *mm,
+	unsigned long addr, pte_t *ptep)
+{
+	set_pte_at(mm, addr, ptep, __pte(0));
+}
+
+static inline int pte_present(pte_t pte)
+{
+	return (pte_val(pte) & _PAGE_PRESENT);
+}
+
+static inline int pte_none(pte_t pte)
+{
+	return (pte_val(pte) == 0);
+}
+
+/* static inline int pte_read(pte_t pte) */
+
+static inline int pte_write(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_WRITE;
+}
+
+static inline int pte_huge(pte_t pte)
+{
+	return pte_present(pte)
+		&& (pte_val(pte) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC));
+}
+
+/* static inline int pte_exec(pte_t pte) */
+
+static inline int pte_dirty(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_DIRTY;
+}
+
+static inline int pte_young(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_ACCESSED;
+}
+
+static inline int pte_special(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_SPECIAL;
+}
+
+/* static inline pte_t pte_rdprotect(pte_t pte) */
+
+static inline pte_t pte_wrprotect(pte_t pte)
+{
+	return __pte(pte_val(pte) & ~(_PAGE_WRITE));
+}
+
+/* static inline pte_t pte_mkread(pte_t pte) */
+
+static inline pte_t pte_mkwrite(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_WRITE);
+}
+
+/* static inline pte_t pte_mkexec(pte_t pte) */
+
+static inline pte_t pte_mkdirty(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_DIRTY);
+}
+
+static inline pte_t pte_mkclean(pte_t pte)
+{
+	return __pte(pte_val(pte) & ~(_PAGE_DIRTY));
+}
+
+static inline pte_t pte_mkyoung(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_ACCESSED);
+}
+
+static inline pte_t pte_mkold(pte_t pte)
+{
+	return __pte(pte_val(pte) & ~(_PAGE_ACCESSED));
+}
+
+static inline pte_t pte_mkspecial(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_SPECIAL);
+}
+
+/* Modify page protection bits */
+static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+{
+	return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
+}
+
+#define pgd_ERROR(e) \
+	pr_err("%s:%d: bad pgd " PTE_FMT ".\n", __FILE__, __LINE__, pgd_val(e))
+
+
+/* Commit new configuration to MMU hardware */
+static inline void update_mmu_cache(struct vm_area_struct *vma,
+	unsigned long address, pte_t *ptep)
+{
+	/* The kernel assumes that TLBs don't cache invalid entries, but
+	 * in RISC-V, SFENCE.VMA specifies an ordering constraint, not a
+	 * cache flush; it is necessary even after writing invalid entries.
+	 * Relying on flush_tlb_fix_spurious_fault would suffice, but
+	 * the extra traps reduce performance.  So, eagerly SFENCE.VMA. */
+	local_flush_tlb_page(address);
+}
+
+#define __HAVE_ARCH_PTE_SAME
+static inline int pte_same(pte_t pte_a, pte_t pte_b)
+{
+	return pte_val(pte_a) == pte_val(pte_b);
+}
+
+#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
+static inline int ptep_set_access_flags(struct vm_area_struct *vma,
+					unsigned long address, pte_t *ptep,
+					pte_t entry, int dirty)
+{
+	if (!pte_same(*ptep, entry))
+		set_pte_at(vma->vm_mm, address, ptep, entry);
+	/* update_mmu_cache will unconditionally execute, handling both
+	 * the case that the PTE changed and the spurious fault case. */
+	return true;
+}
+
+#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
+static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
+				       unsigned long address, pte_t *ptep)
+{
+	return __pte(atomic_long_xchg((atomic_long_t *)ptep, 0));
+}
+
+#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
+static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
+					    unsigned long address,
+					    pte_t *ptep)
+{
+	if (!pte_young(*ptep))
+		return 0;
+	return test_and_clear_bit(_PAGE_ACCESSED_OFFSET, &pte_val(*ptep));
+}
+
+#define __HAVE_ARCH_PTEP_SET_WRPROTECT
+static inline void ptep_set_wrprotect(struct mm_struct *mm,
+				      unsigned long address, pte_t *ptep)
+{
+	atomic_long_and(~(unsigned long)_PAGE_WRITE, (atomic_long_t *)ptep);
+}
+
+#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
+static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
+					 unsigned long address, pte_t *ptep)
+{
+	/*
+	 * This comment is borrowed from x86, but applies equally to RISC-V:
+	 *
+	 * Clearing the accessed bit without a TLB flush
+	 * doesn't cause data corruption. [ It could cause incorrect
+	 * page aging and the (mistaken) reclaim of hot pages, but the
+	 * chance of that should be relatively low. ]
+	 *
+	 * So as a performance optimization don't flush the TLB when
+	 * clearing the accessed bit, it will eventually be flushed by
+	 * a context switch or a VM operation anyway. [ In the rare
+	 * event of it not getting flushed for a long time the delay
+	 * shouldn't really matter because there's no real memory
+	 * pressure for swapout to react to. ]
+	 */
+	return ptep_test_and_clear_young(vma, address, ptep);
+}
+
+/*
+ * Encode and decode a swap entry
+ *
+ * Format of swap PTE:
+ *	bit            0:	_PAGE_PRESENT (zero)
+ *	bit            1:	reserved for future use (zero)
+ *	bits      2 to 6:	swap type
+ *	bits 7 to XLEN-1:	swap offset
+ */
+#define __SWP_TYPE_SHIFT	2
+#define __SWP_TYPE_BITS		5
+#define __SWP_TYPE_MASK		((1UL << __SWP_TYPE_BITS) - 1)
+#define __SWP_OFFSET_SHIFT	(__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)
+
+#define MAX_SWAPFILES_CHECK()	\
+	BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > __SWP_TYPE_BITS)
+
+#define __swp_type(x)		(((x).val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK)
+#define __swp_offset(x)		((x).val >> __SWP_OFFSET_SHIFT)
+#define __swp_entry(type, offset) ((swp_entry_t) \
+	{ ((type) << __SWP_TYPE_SHIFT) | ((offset) << __SWP_OFFSET_SHIFT) })
+
+#define __pte_to_swp_entry(pte)	((swp_entry_t) { pte_val(pte) })
+#define __swp_entry_to_pte(x)	((pte_t) { (x).val })
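+
+/*
+ * For example (illustrative): __swp_entry(3, 0x10) encodes to
+ * (3 << 2) | (0x10 << 7) == 0x80c; __swp_type() then recovers 3 and
+ * __swp_offset() recovers 0x10, while bit 0 (_PAGE_PRESENT) stays
+ * clear so the entry can never be mistaken for a mapped PTE.
+ */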
+
+#ifdef CONFIG_FLATMEM
+#define kern_addr_valid(addr)   (1) /* FIXME */
+#endif
+
+extern void paging_init(void);
+
+static inline void pgtable_cache_init(void)
+{
+	/* No page table caches to initialize */
+}
+
+#endif /* CONFIG_MMU */
+
+#define VMALLOC_SIZE     (KERN_VIRT_SIZE >> 1)
+#define VMALLOC_END      (PAGE_OFFSET - 1)
+#define VMALLOC_START    (PAGE_OFFSET - VMALLOC_SIZE)
+
+/* Task size is 0x40000000000 for RV64 or 0xb800000 for RV32.
+ * Note that PGDIR_SIZE must evenly divide TASK_SIZE.
+ */
+#ifdef CONFIG_64BIT
+#define TASK_SIZE (PGDIR_SIZE * PTRS_PER_PGD / 2)
+#else
+#define TASK_SIZE VMALLOC_START
+#endif
+
+#include <asm-generic/pgtable.h>
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PGTABLE_H */
diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
new file mode 100644
index 000000000000..4f749e8b936b
--- /dev/null
+++ b/arch/riscv/include/asm/processor.h
@@ -0,0 +1,103 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_PROCESSOR_H
+#define _ASM_RISCV_PROCESSOR_H
+
+#include <linux/const.h>
+
+#include <asm/ptrace.h>
+
+/*
+ * This decides where the kernel will search for a free chunk of vm
+ * space during mmap's.
+ */
+#define TASK_UNMAPPED_BASE	PAGE_ALIGN(TASK_SIZE >> 1)
+
+#ifdef __KERNEL__
+#define STACK_TOP		TASK_SIZE
+#define STACK_TOP_MAX		STACK_TOP
+#define STACK_ALIGN		16
+#endif /* __KERNEL__ */
+
+#ifndef __ASSEMBLY__
+
+struct task_struct;
+struct pt_regs;
+
+/*
+ * Default implementation of macro that returns current
+ * instruction pointer ("program counter").
+ */
+#define current_text_addr()	({ __label__ _l; _l: &&_l;})
+
+/* CPU-specific state of a task */
+struct thread_struct {
+	/* Callee-saved registers */
+	unsigned long ra;
+	unsigned long sp;	/* Kernel mode stack */
+	unsigned long s[12];	/* s[0]: frame pointer */
+	struct user_fpregs_struct fstate;
+};
+
+#define INIT_THREAD {					\
+	.sp = sizeof(init_stack) + (long)&init_stack,	\
+}
+
+/* Return saved (kernel) PC of a blocked thread. */
+#define thread_saved_pc(t)	((t)->thread.ra)
+#define thread_saved_sp(t)	((t)->thread.sp)
+#define thread_saved_fp(t)	((t)->thread.s[0])
+
+#define task_pt_regs(tsk)						\
+	((struct pt_regs *)(task_stack_page(tsk) + THREAD_SIZE		\
+			    - ALIGN(sizeof(struct pt_regs), STACK_ALIGN)))
+
+#define KSTK_EIP(tsk)		(task_pt_regs(tsk)->sepc)
+#define KSTK_ESP(tsk)		(task_pt_regs(tsk)->sp)
+
+
+/* Do necessary setup to start up a newly executed thread. */
+extern void start_thread(struct pt_regs *regs,
+			unsigned long pc, unsigned long sp);
+
+/* Free all resources held by a thread. */
+static inline void release_thread(struct task_struct *dead_task)
+{
+}
+
+extern unsigned long get_wchan(struct task_struct *p);
+
+
+static inline void cpu_relax(void)
+{
+#ifdef __riscv_muldiv
+	int dummy;
+	/* In lieu of a halt instruction, induce a long-latency stall. */
+	__asm__ __volatile__ ("div %0, %0, zero" : "=r" (dummy));
+#endif
+	barrier();
+}
+
+static inline void wait_for_interrupt(void)
+{
+	__asm__ __volatile__ ("wfi");
+}
+
+struct device_node;
+extern int riscv_of_processor_hart(struct device_node *node);
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PROCESSOR_H */
diff --git a/arch/riscv/include/asm/ptrace.h b/arch/riscv/include/asm/ptrace.h
new file mode 100644
index 000000000000..9fa00d56a1f9
--- /dev/null
+++ b/arch/riscv/include/asm/ptrace.h
@@ -0,0 +1,117 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_PTRACE_H
+#define _ASM_RISCV_PTRACE_H
+
+#include <uapi/asm/ptrace.h>
+#include <asm/csr.h>
+
+#ifndef __ASSEMBLY__
+
+struct pt_regs {
+	unsigned long sepc;
+	unsigned long ra;
+	unsigned long sp;
+	unsigned long gp;
+	unsigned long tp;
+	unsigned long t0;
+	unsigned long t1;
+	unsigned long t2;
+	unsigned long s0;
+	unsigned long s1;
+	unsigned long a0;
+	unsigned long a1;
+	unsigned long a2;
+	unsigned long a3;
+	unsigned long a4;
+	unsigned long a5;
+	unsigned long a6;
+	unsigned long a7;
+	unsigned long s2;
+	unsigned long s3;
+	unsigned long s4;
+	unsigned long s5;
+	unsigned long s6;
+	unsigned long s7;
+	unsigned long s8;
+	unsigned long s9;
+	unsigned long s10;
+	unsigned long s11;
+	unsigned long t3;
+	unsigned long t4;
+	unsigned long t5;
+	unsigned long t6;
+	/* Supervisor CSRs */
+	unsigned long sstatus;
+	unsigned long sbadaddr;
+	unsigned long scause;
+};
+
+#ifdef CONFIG_64BIT
+#define REG_FMT "%016lx"
+#else
+#define REG_FMT "%08lx"
+#endif
+
+#define user_mode(regs) (((regs)->sstatus & SR_PS) == 0)
+
+
+/* Helpers for working with the instruction pointer */
+#define GET_IP(regs) ((regs)->sepc)
+#define SET_IP(regs, val) (GET_IP(regs) = (val))
+
+static inline unsigned long instruction_pointer(struct pt_regs *regs)
+{
+	return GET_IP(regs);
+}
+static inline void instruction_pointer_set(struct pt_regs *regs,
+                                           unsigned long val)
+{
+	SET_IP(regs, val);
+}
+
+#define profile_pc(regs) instruction_pointer(regs)
+
+/* Helpers for working with the user stack pointer */
+#define GET_USP(regs) ((regs)->sp)
+#define SET_USP(regs, val) (GET_USP(regs) = (val))
+
+static inline unsigned long user_stack_pointer(struct pt_regs *regs)
+{
+	return GET_USP(regs);
+}
+static inline void user_stack_pointer_set(struct pt_regs *regs,
+                                          unsigned long val)
+{
+	SET_USP(regs, val);
+}
+
+/* Helpers for working with the frame pointer */
+#define GET_FP(regs) ((regs)->s0)
+#define SET_FP(regs, val) (GET_FP(regs) = (val))
+
+static inline unsigned long frame_pointer(struct pt_regs *regs)
+{
+	return GET_FP(regs);
+}
+static inline void frame_pointer_set(struct pt_regs *regs,
+                                     unsigned long val)
+{
+	SET_FP(regs, val);
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PTRACE_H */
diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
new file mode 100644
index 000000000000..a86ecf9cc1dd
--- /dev/null
+++ b/arch/riscv/include/asm/sbi.h
@@ -0,0 +1,101 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_SBI_H
+#define _ASM_RISCV_SBI_H
+
+#include <linux/types.h>
+
+#define SBI_SET_TIMER 0
+#define SBI_CONSOLE_PUTCHAR 1
+#define SBI_CONSOLE_GETCHAR 2
+#define SBI_CLEAR_IPI 3
+#define SBI_SEND_IPI 4
+#define SBI_REMOTE_FENCE_I 5
+#define SBI_REMOTE_SFENCE_VMA 6
+#define SBI_REMOTE_SFENCE_VMA_ASID 7
+#define SBI_SHUTDOWN 8
+
+#define SBI_CALL(which, arg0, arg1, arg2) ({			\
+	register uintptr_t a0 asm ("a0") = (uintptr_t)(arg0);	\
+	register uintptr_t a1 asm ("a1") = (uintptr_t)(arg1);	\
+	register uintptr_t a2 asm ("a2") = (uintptr_t)(arg2);	\
+	register uintptr_t a7 asm ("a7") = (uintptr_t)(which);	\
+	asm volatile ("ecall"					\
+		      : "+r" (a0)				\
+		      : "r" (a1), "r" (a2), "r" (a7)		\
+		      : "memory");				\
+	a0;							\
+})
+
+/* Lazy implementations until SBI is finalized */
+#define SBI_CALL_0(which) SBI_CALL(which, 0, 0, 0)
+#define SBI_CALL_1(which, arg0) SBI_CALL(which, arg0, 0, 0)
+#define SBI_CALL_2(which, arg0, arg1) SBI_CALL(which, arg0, arg1, 0)
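+
+/*
+ * For example, SBI_CALL_1(SBI_CONSOLE_PUTCHAR, ch) issues an "ecall"
+ * with a7 = SBI_CONSOLE_PUTCHAR and a0 = ch; any result comes back in
+ * a0.
+ */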
+
+static inline void sbi_console_putchar(int ch)
+{
+	SBI_CALL_1(SBI_CONSOLE_PUTCHAR, ch);
+}
+
+static inline int sbi_console_getchar(void)
+{
+	return SBI_CALL_0(SBI_CONSOLE_GETCHAR);
+}
+
+static inline void sbi_set_timer(uint64_t stime_value)
+{
+#if __riscv_xlen == 32
+	SBI_CALL_2(SBI_SET_TIMER, stime_value, stime_value >> 32);
+#else
+	SBI_CALL_1(SBI_SET_TIMER, stime_value);
+#endif
+}
+
+static inline void sbi_shutdown(void)
+{
+	SBI_CALL_0(SBI_SHUTDOWN);
+}
+
+static inline void sbi_clear_ipi(void)
+{
+	SBI_CALL_0(SBI_CLEAR_IPI);
+}
+
+static inline void sbi_send_ipi(const unsigned long *hart_mask)
+{
+	SBI_CALL_1(SBI_SEND_IPI, hart_mask);
+}
+
+static inline void sbi_remote_fence_i(const unsigned long *hart_mask)
+{
+	SBI_CALL_1(SBI_REMOTE_FENCE_I, hart_mask);
+}
+
+static inline void sbi_remote_sfence_vma(const unsigned long *hart_mask,
+					 unsigned long start,
+					 unsigned long size)
+{
+	SBI_CALL_1(SBI_REMOTE_SFENCE_VMA, hart_mask);
+}
+
+static inline void sbi_remote_sfence_vma_asid(const unsigned long *hart_mask,
+					      unsigned long start,
+					      unsigned long size,
+					      unsigned long asid)
+{
+	SBI_CALL_1(SBI_REMOTE_SFENCE_VMA_ASID, hart_mask);
+}
+
+#endif
diff --git a/arch/riscv/include/asm/serial.h b/arch/riscv/include/asm/serial.h
new file mode 100644
index 000000000000..d783dbe80a4b
--- /dev/null
+++ b/arch/riscv/include/asm/serial.h
@@ -0,0 +1,43 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_SERIAL_H
+#define _ASM_RISCV_SERIAL_H
+
+/*
+ * FIXME: interim serial support for riscv-qemu
+ *
+ * Currently requires that the emulator itself create a hole at addresses
+ * 0x3f8 - 0x3ff without looking through page tables.
+ *
+ * This assumes you have a 1.8432 MHz clock for your UART.
+ */
+#define BASE_BAUD (1843200 / 16)
+
+/* Standard COM flags */
+#ifdef CONFIG_SERIAL_DETECT_IRQ
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST | ASYNC_AUTO_IRQ)
+#else
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST)
+#endif
+
+#define SERIAL_PORT_DFNS			\
+	{	/* ttyS0 */			\
+		.baud_base = BASE_BAUD,		\
+		.port      = 0x3F8,		\
+		.irq       = 4,			\
+		.flags     = STD_COM_FLAGS,	\
+	},
+
+#endif /* _ASM_RISCV_SERIAL_H */
diff --git a/arch/riscv/include/asm/setup.h b/arch/riscv/include/asm/setup.h
new file mode 100644
index 000000000000..e457854e9988
--- /dev/null
+++ b/arch/riscv/include/asm/setup.h
@@ -0,0 +1,20 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_SETUP_H
+#define _ASM_RISCV_SETUP_H
+
+#include <asm-generic/setup.h>
+
+#endif /* _ASM_RISCV_SETUP_H */
diff --git a/arch/riscv/include/asm/smp.h b/arch/riscv/include/asm/smp.h
new file mode 100644
index 000000000000..1fd60220ef29
--- /dev/null
+++ b/arch/riscv/include/asm/smp.h
@@ -0,0 +1,42 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_SMP_H
+#define _ASM_RISCV_SMP_H
+
+#include <linux/cpumask.h>
+#include <linux/irqreturn.h>
+
+#ifdef CONFIG_SMP
+
+/* SMP initialization hook for setup_arch */
+void __init init_clockevent(void);
+
+/* SMP initialization hook for setup_arch */
+void __init setup_smp(void);
+
+/* Hook for the generic smp_call_function_many() routine. */
+void arch_send_call_function_ipi_mask(struct cpumask *mask);
+
+/* Hook for the generic smp_call_function_single() routine. */
+void arch_send_call_function_single_ipi(int cpu);
+
+#define raw_smp_processor_id() (current_thread_info()->cpu)
+
+/* Interprocessor interrupt handler */
+irqreturn_t handle_ipi(void);
+
+#endif /* CONFIG_SMP */
+
+#endif /* _ASM_RISCV_SMP_H */
diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
new file mode 100644
index 000000000000..baf0f441f0e3
--- /dev/null
+++ b/arch/riscv/include/asm/spinlock.h
@@ -0,0 +1,156 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_SPINLOCK_H
+#define _ASM_RISCV_SPINLOCK_H
+
+#include <linux/kernel.h>
+#include <asm/current.h>
+
+/*
+ * Simple spin lock operations.  These provide no fairness guarantees.
+ */
+
+#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
+#define arch_spin_is_locked(x)	((x)->lock != 0)
+#define arch_spin_unlock_wait(x) \
+		do { cpu_relax(); } while ((x)->lock)
+
+static inline void arch_spin_unlock(arch_spinlock_t *lock)
+{
+	__asm__ __volatile__ (
+		"amoswap.w.rl x0, x0, %0"
+		: "=A" (lock->lock)
+		:: "memory");
+}
+
+static inline int arch_spin_trylock(arch_spinlock_t *lock)
+{
+	int tmp = 1, busy;
+
+	__asm__ __volatile__ (
+		"amoswap.w.aq %0, %2, %1"
+		: "=r" (busy), "+A" (lock->lock)
+		: "r" (tmp)
+		: "memory");
+
+	return !busy;
+}
+
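+/*
+ * Test-and-test-and-set: spin on an ordinary load until the lock looks
+ * free, then race for it with the atomic swap in arch_spin_trylock().
+ */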
+static inline void arch_spin_lock(arch_spinlock_t *lock)
+{
+	while (1) {
+		if (arch_spin_is_locked(lock))
+			continue;
+
+		if (arch_spin_trylock(lock))
+			break;
+	}
+}
+
+/***********************************************************/
+
+static inline int arch_read_can_lock(arch_rwlock_t *lock)
+{
+	return lock->lock >= 0;
+}
+
+static inline int arch_write_can_lock(arch_rwlock_t *lock)
+{
+	return lock->lock == 0;
+}
+
+static inline void arch_read_lock(arch_rwlock_t *lock)
+{
+	int tmp;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bltz	%1, 1b\n"
+		"	addi	%1, %1, 1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		: "+A" (lock->lock), "=&r" (tmp)
+		:: "memory");
+}
+
+static inline void arch_write_lock(arch_rwlock_t *lock)
+{
+	int tmp;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bnez	%1, 1b\n"
+		"	li	%1, -1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		: "+A" (lock->lock), "=&r" (tmp)
+		:: "memory");
+}
+
+static inline int arch_read_trylock(arch_rwlock_t *lock)
+{
+	int busy;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bltz	%1, 1f\n"
+		"	addi	%1, %1, 1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		"1:\n"
+		: "+A" (lock->lock), "=&r" (busy)
+		:: "memory");
+
+	return !busy;
+}
+
+static inline int arch_write_trylock(arch_rwlock_t *lock)
+{
+	int busy;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bnez	%1, 1f\n"
+		"	li	%1, -1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		"1:\n"
+		: "+A" (lock->lock), "=&r" (busy)
+		:: "memory");
+
+	return !busy;
+}
+
+static inline void arch_read_unlock(arch_rwlock_t *lock)
+{
+	__asm__ __volatile__(
+		"amoadd.w.rl x0, %1, %0"
+		: "+A" (lock->lock)
+		: "r" (-1)
+		: "memory");
+}
+
+static inline void arch_write_unlock(arch_rwlock_t *lock)
+{
+	__asm__ __volatile__ (
+		"amoswap.w.rl x0, x0, %0"
+		: "=A" (lock->lock)
+		:: "memory");
+}
+
+#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
+#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
+
+#endif /* _ASM_RISCV_SPINLOCK_H */
diff --git a/arch/riscv/include/asm/spinlock_types.h b/arch/riscv/include/asm/spinlock_types.h
new file mode 100644
index 000000000000..0a4ff7086e8a
--- /dev/null
+++ b/arch/riscv/include/asm/spinlock_types.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_SPINLOCK_TYPES_H
+#define _ASM_RISCV_SPINLOCK_TYPES_H
+
+#ifndef __LINUX_SPINLOCK_TYPES_H
+# error "please don't include this file directly"
+#endif
+
+typedef struct {
+	volatile unsigned int lock;
+} arch_spinlock_t;
+
+#define __ARCH_SPIN_LOCK_UNLOCKED	{ 0 }
+
+typedef struct {
+	volatile unsigned int lock;
+} arch_rwlock_t;
+
+#define __ARCH_RW_LOCK_UNLOCKED		{ 0 }
+
+#endif
diff --git a/arch/riscv/include/asm/string.h b/arch/riscv/include/asm/string.h
new file mode 100644
index 000000000000..b394c02a537a
--- /dev/null
+++ b/arch/riscv/include/asm/string.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_STRING_H
+#define _ASM_RISCV_STRING_H
+
+#ifdef __KERNEL__
+
+#include <linux/types.h>
+#include <linux/linkage.h>
+
+#define __HAVE_ARCH_MEMSET
+extern asmlinkage void *memset(void *, int, size_t);
+
+#define __HAVE_ARCH_MEMCPY
+extern asmlinkage void *memcpy(void *, const void *, size_t);
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_STRING_H */
diff --git a/arch/riscv/include/asm/switch_to.h b/arch/riscv/include/asm/switch_to.h
new file mode 100644
index 000000000000..d49c225baff9
--- /dev/null
+++ b/arch/riscv/include/asm/switch_to.h
@@ -0,0 +1,71 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_SWITCH_TO_H
+#define _ASM_RISCV_SWITCH_TO_H
+
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/csr.h>
+
+extern void __fstate_save(struct task_struct *);
+extern void __fstate_restore(struct task_struct *);
+
+static inline void __fstate_clean(struct pt_regs *regs)
+{
+	regs->sstatus = (regs->sstatus & ~SR_FS) | SR_FS_CLEAN;
+}
+
+static inline void fstate_save(struct task_struct *task,
+                               struct pt_regs *regs)
+{
+	if ((regs->sstatus & SR_FS) == SR_FS_DIRTY) {
+		__fstate_save(task);
+		__fstate_clean(regs);
+	}
+}
+
+static inline void fstate_restore(struct task_struct *task,
+                                  struct pt_regs *regs)
+{
+	if ((regs->sstatus & SR_FS) != SR_FS_OFF) {
+		__fstate_restore(task);
+		__fstate_clean(regs);
+	}
+}
+
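+/*
+ * Lazily save the outgoing task's FP state only when sstatus.SD shows
+ * it is dirty, then restore (and mark clean) the incoming task's
+ * state.
+ */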
+static inline void __switch_to_aux(struct task_struct *prev,
+                                   struct task_struct *next)
+{
+	struct pt_regs *regs;
+
+	regs = task_pt_regs(prev);
+	if (unlikely(regs->sstatus & SR_SD)) {
+		fstate_save(prev, regs);
+	}
+	fstate_restore(next, task_pt_regs(next));
+}
+
+extern struct task_struct *__switch_to(struct task_struct *,
+                                       struct task_struct *);
+
+#define switch_to(prev, next, last)			\
+do {							\
+	struct task_struct *__prev = (prev);		\
+	struct task_struct *__next = (next);		\
+	__switch_to_aux(__prev, __next);		\
+	((last) = __switch_to(__prev, __next));		\
+} while (0)
+
+#endif /* _ASM_RISCV_SWITCH_TO_H */
diff --git a/arch/riscv/include/asm/syscall.h b/arch/riscv/include/asm/syscall.h
new file mode 100644
index 000000000000..070609831f85
--- /dev/null
+++ b/arch/riscv/include/asm/syscall.h
@@ -0,0 +1,91 @@
+/*
+ * Copyright (C) 2008-2009 Red Hat, Inc.  All rights reserved.
+ * Copyright 2010 Tilera Corporation. All Rights Reserved.
+ * Copyright 2015 Regents of the University of California, Berkeley
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ *
+ * See asm-generic/syscall.h for descriptions of what we must do here.
+ */
+
+#ifndef _ASM_RISCV_SYSCALL_H
+#define _ASM_RISCV_SYSCALL_H
+
+#include <linux/sched.h>
+#include <linux/err.h>
+
+/* The array of function pointers for syscalls. */
+extern void *sys_call_table[];
+
+/*
+ * The syscall number is passed in a7.  Only its low 32 bits are
+ * meaningful, so we return int.  This importantly ignores the high
+ * bits on 64-bit, so comparisons sign-extend the low 32 bits.
+ */
+static inline int syscall_get_nr(struct task_struct *task,
+				 struct pt_regs *regs)
+{
+	return regs->a7;
+}
+
+static inline void syscall_set_nr(struct task_struct *task,
+				  struct pt_regs *regs,
+				  int sysno)
+{
+	regs->a7 = sysno;
+}
+
+static inline void syscall_rollback(struct task_struct *task,
+				    struct pt_regs *regs)
+{
+	/* FIXME: We can't do this... */
+}
+
+static inline long syscall_get_error(struct task_struct *task,
+				     struct pt_regs *regs)
+{
+	unsigned long error = regs->a0;
+
+	return IS_ERR_VALUE(error) ? error : 0;
+}
+
+static inline long syscall_get_return_value(struct task_struct *task,
+					    struct pt_regs *regs)
+{
+	return regs->a0;
+}
+
+static inline void syscall_set_return_value(struct task_struct *task,
+					    struct pt_regs *regs,
+					    int error, long val)
+{
+	regs->a0 = (long) error ?: val;
+}
+
+static inline void syscall_get_arguments(struct task_struct *task,
+					 struct pt_regs *regs,
+					 unsigned int i, unsigned int n,
+					 unsigned long *args)
+{
+	BUG_ON(i + n > 6);
+	memcpy(args, &regs->a0 + i, n * sizeof(args[0]));
+}
+
+static inline void syscall_set_arguments(struct task_struct *task,
+					 struct pt_regs *regs,
+					 unsigned int i, unsigned int n,
+					 const unsigned long *args)
+{
+	BUG_ON(i + n > 6);
+	memcpy(&regs->a0 + i, args, n * sizeof(regs->a0));
+}
+
+#endif	/* _ASM_RISCV_SYSCALL_H */
diff --git a/arch/riscv/include/asm/syscalls.h b/arch/riscv/include/asm/syscalls.h
new file mode 100644
index 000000000000..1dd23596b0d3
--- /dev/null
+++ b/arch/riscv/include/asm/syscalls.h
@@ -0,0 +1,26 @@
+/*
+ * Copyright (C) 2014 Darius Rad <darius@bluespec.com>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_SYSCALLS_H
+#define _ASM_RISCV_SYSCALLS_H
+
+#include <linux/linkage.h>
+
+#include <asm-generic/syscalls.h>
+
+/* kernel/sys_riscv.c */
+asmlinkage long sys_sysriscv(unsigned long, unsigned long,
+	unsigned long, unsigned long);
+
+#endif /* _ASM_RISCV_SYSCALLS_H */
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
new file mode 100644
index 000000000000..7446f960b71a
--- /dev/null
+++ b/arch/riscv/include/asm/thread_info.h
@@ -0,0 +1,103 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_THREAD_INFO_H
+#define _ASM_RISCV_THREAD_INFO_H
+
+#ifdef __KERNEL__
+
+#include <asm/page.h>
+#include <linux/const.h>
+
+/* thread information allocation */
+#define THREAD_SIZE_ORDER	(1)
+#define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)
+
+#ifndef __ASSEMBLY__
+
+#include <asm/processor.h>
+#include <asm/csr.h>
+
+typedef unsigned long mm_segment_t;
+
+/*
+ * low level task data that entry.S needs immediate access to
+ * - this struct should fit entirely inside of one cache line
+ * - this struct resides at the bottom of the supervisor stack
+ * - if the members of this struct change, the assembly constants
+ *   in asm-offsets.c must be updated accordingly
+ */
+struct thread_info {
+	struct task_struct	*task;		/* main task structure */
+	unsigned long		flags;		/* low level flags */
+	__u32			cpu;		/* current CPU */
+	int                     preempt_count;  /* 0 => preemptable, <0 => BUG */
+	mm_segment_t		addr_limit;
+};
+
+/*
+ * macros/functions for gaining access to the thread information structure
+ *
+ * preempt_count needs to be 1 initially, until the scheduler is functional.
+ */
+#define INIT_THREAD_INFO(tsk)			\
+{						\
+	.task		= &tsk,			\
+	.flags		= 0,			\
+	.cpu		= 0,			\
+	.preempt_count	= INIT_PREEMPT_COUNT,	\
+	.addr_limit	= KERNEL_DS,		\
+}
+
+#define init_thread_info	(init_thread_union.thread_info)
+#define init_stack		(init_thread_union.stack)
+
+/*
+ * Pointer to the thread_info struct of the current process
+ */
+static inline struct thread_info *current_thread_info(void)
+{
+	register struct thread_info *tp __asm__ ("tp");
+	return tp;
+}
+
+#endif /* !__ASSEMBLY__ */
+
+/*
+ * thread information flags
+ * - these are process state flags that various assembly files may need to
+ *   access
+ * - pending work-to-be-done flags are in lowest half-word
+ * - other flags in upper half-word(s)
+ */
+#define TIF_SYSCALL_TRACE	0	/* syscall trace active */
+#define TIF_NOTIFY_RESUME	1	/* callback before returning to user */
+#define TIF_SIGPENDING		2	/* signal pending */
+#define TIF_NEED_RESCHED	3	/* rescheduling necessary */
+#define TIF_RESTORE_SIGMASK	4	/* restore signal mask in do_signal() */
+#define TIF_MEMDIE		5	/* is terminating due to OOM killer */
+#define TIF_SYSCALL_TRACEPOINT  6       /* syscall tracepoint instrumentation */
+
+#define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
+#define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
+#define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
+#define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
+
+#define _TIF_WORK_MASK \
+	(_TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NEED_RESCHED)
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_THREAD_INFO_H */
diff --git a/arch/riscv/include/asm/timex.h b/arch/riscv/include/asm/timex.h
new file mode 100644
index 000000000000..0a236ab7d178
--- /dev/null
+++ b/arch/riscv/include/asm/timex.h
@@ -0,0 +1,55 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_TIMEX_H
+#define _ASM_RISCV_TIMEX_H
+
+#include <asm/param.h>
+
+#define CLOCK_TICK_RATE (HZ * 100UL)
+
+typedef unsigned long cycles_t;
+
+static inline cycles_t get_cycles(void)
+{
+#if __riscv_xlen >= 64
+	cycles_t n;
+
+	__asm__ __volatile__ (
+		"rdtime %0"
+		: "=r" (n));
+	return n;
+#else
+	u32 lo, hi, tmp;
+
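+	/*
+	 * On RV32 the 64-bit timestamp takes two CSR reads; re-read the
+	 * high half and retry if it rolled over between the reads.
+	 */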
+	__asm__ __volatile__ (
+		"1:\n"
+		"rdtimeh %0\n"
+		"rdtime %1\n"
+		"rdtimeh %2\n"
+		"bne %0, %2, 1b"
+		: "=&r" (hi), "=&r" (lo), "=&r" (tmp));
+	return ((u64)hi << 32) | lo;
+#endif
+}
+
+#define ARCH_HAS_READ_CURRENT_TIMER
+
+static inline int read_current_timer(unsigned long *timer_val)
+{
+	*timer_val = get_cycles();
+	return 0;
+}
+
+#endif /* _ASM_RISCV_TIMEX_H */
diff --git a/arch/riscv/include/asm/tlb.h b/arch/riscv/include/asm/tlb.h
new file mode 100644
index 000000000000..6195b0ea718b
--- /dev/null
+++ b/arch/riscv/include/asm/tlb.h
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_TLB_H
+#define _ASM_RISCV_TLB_H
+
+#include <asm-generic/tlb.h>
+
+static inline void tlb_flush(struct mmu_gather *tlb)
+{
+	flush_tlb_mm(tlb->mm);
+}
+
+#endif /* _ASM_RISCV_TLB_H */
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
new file mode 100644
index 000000000000..292879e9cc04
--- /dev/null
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -0,0 +1,95 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _ASM_RISCV_TLBFLUSH_H
+#define _ASM_RISCV_TLBFLUSH_H
+
+#ifdef CONFIG_MMU
+
+/* Flush entire local TLB */
+static inline void local_flush_tlb_all(void)
+{
+	__asm__ __volatile__ ("sfence.vma" : : : "memory");
+}
+
+/* Flush one page from local TLB */
+static inline void local_flush_tlb_page(unsigned long addr)
+{
+	__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
+}
+
+#ifndef CONFIG_SMP
+
+#define flush_tlb_all() local_flush_tlb_all()
+#define flush_tlb_page(vma, addr) local_flush_tlb_page(addr)
+#define flush_tlb_range(vma, start, end) local_flush_tlb_all()
+
+#else /* CONFIG_SMP */
+
+#include <asm/sbi.h>
+
+#define flush_tlb_all() sbi_remote_sfence_vma(0, 0, -1)
+#define flush_tlb_page(vma, addr) flush_tlb_range(vma, addr, 0)
+#define flush_tlb_range(vma, start, end) \
+	sbi_remote_sfence_vma(0, start, (end) - (start))
+
+#endif /* CONFIG_SMP */
+
+/* Flush the TLB entries of the specified mm context */
+static inline void flush_tlb_mm(struct mm_struct *mm)
+{
+	flush_tlb_all();
+}
+
+/* Flush a range of kernel pages */
+static inline void flush_tlb_kernel_range(unsigned long start,
+	unsigned long end)
+{
+	flush_tlb_all();
+}
+
+#else /* !CONFIG_MMU */
+
+static inline void flush_tlb_all(void)
+{
+	BUG();
+}
+
+static inline void flush_tlb_mm(struct mm_struct *mm)
+{
+	BUG();
+}
+
+static inline void flush_tlb_page(struct vm_area_struct *vma,
+	unsigned long addr)
+{
+	BUG();
+}
+
+static inline void flush_tlb_range(struct vm_area_struct *vma,
+	unsigned long start, unsigned long end)
+{
+	BUG();
+}
+
+static inline void flush_tlb_kernel_range(unsigned long start,
+	unsigned long end)
+{
+	BUG();
+}
+
+#endif /* CONFIG_MMU */
+
+#endif /* _ASM_RISCV_TLBFLUSH_H */
diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
new file mode 100644
index 000000000000..3add03baca0f
--- /dev/null
+++ b/arch/riscv/include/asm/uaccess.h
@@ -0,0 +1,455 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ *
+ * This file was copied from include/asm-generic/uaccess.h
+ */
+
+#ifndef _ASM_RISCV_UACCESS_H
+#define _ASM_RISCV_UACCESS_H
+
+/*
+ * User space memory access functions
+ */
+#include <linux/errno.h>
+#include <linux/compiler.h>
+#include <linux/thread_info.h>
+#include <asm/byteorder.h>
+#include <asm/asm.h>
+
+#ifdef CONFIG_RV_PUM
+#define __enable_user_access()						\
+	__asm__ __volatile__ ("csrs sstatus, %0" : : "r" (SR_SUM))
+#define __disable_user_access()						\
+	__asm__ __volatile__ ("csrc sstatus, %0" : : "r" (SR_SUM))
+#else
+#define __enable_user_access()
+#define __disable_user_access()
+#endif
+
+/*
+ * The fs value determines whether argument validity checking should be
+ * performed or not.  If get_fs() == USER_DS, checking is performed;
+ * with get_fs() == KERNEL_DS, checking is bypassed.
+ *
+ * For historical reasons, these macros are grossly misnamed.
+ */
+
+#define KERNEL_DS	(~0UL)
+#define USER_DS		(TASK_SIZE)
+
+#define get_ds()	(KERNEL_DS)
+#define get_fs()	(current_thread_info()->addr_limit)
+
+static inline void set_fs(mm_segment_t fs)
+{
+	current_thread_info()->addr_limit = fs;
+}
+
+#define segment_eq(a, b) ((a) == (b))
+
+#define user_addr_max()	(get_fs())
+
+
+#define VERIFY_READ	0
+#define VERIFY_WRITE	1
+
+/**
+ * access_ok: - Checks if a user space pointer is valid
+ * @type: Type of access: %VERIFY_READ or %VERIFY_WRITE.  Note that
+ *        %VERIFY_WRITE is a superset of %VERIFY_READ - if it is safe
+ *        to write to a block, it is always safe to read from it.
+ * @addr: User space pointer to start of block to check
+ * @size: Size of block to check
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * Checks if a pointer to a block of memory in user space is valid.
+ *
+ * Returns true (nonzero) if the memory block may be valid, false (zero)
+ * if it is definitely invalid.
+ *
+ * Note that, depending on architecture, this function probably just
+ * checks that the pointer is in the user space range - after calling
+ * this function, memory access functions may still return -EFAULT.
+ */
+#define access_ok(type, addr, size) ({					\
+	__chk_user_ptr(addr);						\
+	likely(__access_ok((unsigned long __force)(addr), (size)));	\
+})
+
+/* Ensure that the range [addr, addr+size) is within the process's
+ * address space
+ */
+static inline int __access_ok(unsigned long addr, unsigned long size)
+{
+	const mm_segment_t fs = get_fs();
+	return (size <= fs) && (addr <= (fs - size));
+}
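+
+/*
+ * For example (illustrative), with fs == 0x4000000000, addr ==
+ * 0x3ffffffff8 and size == 0x10: size <= fs holds, but addr exceeds
+ * fs - size == 0x3ffffffff0, so the access is rejected rather than
+ * overrunning the addr_limit.
+ */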
+
+/*
+ * The exception table consists of pairs of addresses: the first is the
+ * address of an instruction that is allowed to fault, and the second is
+ * the address at which the program should continue.  No registers are
+ * modified, so it is entirely up to the continuation code to figure out
+ * what to do.
+ *
+ * All the routines below use bits of fixup code that are out of line
+ * with the main instruction path.  This means when everything is well,
+ * we don't even have to jump over them.  Further, they do not intrude
+ * on our cache or tlb entries.
+ */
+
+struct exception_table_entry {
+	unsigned long insn, fixup;
+};
+
+extern int fixup_exception(struct pt_regs *);
+
+#if defined(__LITTLE_ENDIAN)
+#define __MSW	1
+#define __LSW	0
+#elif defined(__BIG_ENDIAN)
+#define __MSW	0
+#define	__LSW	1
+#else
+#error "Unknown endianness"
+#endif
+
+/*
+ * The "__xxx" versions of the user access functions do not verify the address
+ * space - it must have been done previously with a separate "access_ok()"
+ * call.
+ */
+
+#ifdef CONFIG_MMU
+#define __get_user_asm(insn, x, ptr, err)			\
+do {								\
+	uintptr_t __tmp;					\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	" insn " %1, %3\n"			\
+		"2:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"3:\n"						\
+		"	li %0, %4\n"				\
+		"	li %1, 0\n"				\
+		"	jump 2b, %2\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " SZPTR "\n"			\
+		"	" PTR " 1b, 3b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=&r" (x), "=r" (__tmp)		\
+		: "m" (*(ptr)), "i" (-EFAULT));			\
+	__disable_user_access();				\
+} while (0)
+#else /* !CONFIG_MMU */
+#define __get_user_asm(insn, x, ptr, err)			\
+	__asm__ __volatile__ (					\
+		insn " %0, %1"					\
+		: "=r" (x)					\
+		: "m" (*(ptr)))
+#endif /* CONFIG_MMU */
+
+
+#ifdef CONFIG_64BIT
+#define __get_user_8(x, ptr, err) \
+	__get_user_asm("ld", x, ptr, err)
+#else /* !CONFIG_64BIT */
+#ifdef CONFIG_MMU
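+/*
+ * On RV32 a 64-bit __get_user needs two 32-bit loads; either one may
+ * fault, so both get exception-table entries sharing a fixup that
+ * zeroes the result and reports -EFAULT.
+ */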
+#define __get_user_8(x, ptr, err)				\
+do {								\
+	u32 __user *__ptr = (u32 __user *)(ptr);		\
+	u32 __lo, __hi;						\
+	uintptr_t __tmp;					\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	lw %1, %4\n"				\
+		"2:\n"						\
+		"	lw %2, %5\n"				\
+		"3:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"4:\n"						\
+		"	li %0, %6\n"				\
+		"	li %1, 0\n"				\
+		"	li %2, 0\n"				\
+		"	jump 3b, %3\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " SZPTR "\n"			\
+		"	" PTR " 1b, 4b\n"			\
+		"	" PTR " 2b, 4b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=&r" (__lo), "=r" (__hi),	\
+			"=r" (__tmp)				\
+		: "m" (__ptr[__LSW]), "m" (__ptr[__MSW]),	\
+			"i" (-EFAULT));				\
+	__disable_user_access();				\
+	(x) = (__typeof__(x))((__typeof__((x)-(x)))(		\
+		(((u64)__hi << 32) | __lo)));			\
+} while (0)
+#else /* !CONFIG_MMU */
+#define __get_user_8(x, ptr, err) \
+	(x) = (__typeof__(x))(*((u64 __user *)(ptr)))
+#endif /* CONFIG_MMU */
+#endif /* CONFIG_64BIT */
+
+
+/**
+ * __get_user: - Get a simple variable from user space, with less checking.
+ * @x:   Variable to store result.
+ * @ptr: Source address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple variable from user space to kernel
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and the result of
+ * dereferencing @ptr must be assignable to @x without a cast.
+ *
+ * Caller must check the pointer with access_ok() before calling this
+ * function.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ * On error, the variable @x is set to zero.
+ */
+#define __get_user(x, ptr)					\
+({								\
+	register int __gu_err = 0;				\
+	const __typeof__(*(ptr)) __user *__gu_ptr = (ptr);	\
+	__chk_user_ptr(__gu_ptr);				\
+	switch (sizeof(*__gu_ptr)) {				\
+	case 1:							\
+		__get_user_asm("lb", (x), __gu_ptr, __gu_err);	\
+		break;						\
+	case 2:							\
+		__get_user_asm("lh", (x), __gu_ptr, __gu_err);	\
+		break;						\
+	case 4:							\
+		__get_user_asm("lw", (x), __gu_ptr, __gu_err);	\
+		break;						\
+	case 8:							\
+		__get_user_8((x), __gu_ptr, __gu_err);	\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+	}							\
+	__gu_err;						\
+})
+
+/**
+ * get_user: - Get a simple variable from user space.
+ * @x:   Variable to store result.
+ * @ptr: Source address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple variable from user space to kernel
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and the result of
+ * dereferencing @ptr must be assignable to @x without a cast.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ * On error, the variable @x is set to zero.
+ */
+#define get_user(x, ptr)					\
+({								\
+	const __typeof__(*(ptr)) __user *__p = (ptr);		\
+	might_fault();						\
+	access_ok(VERIFY_READ, __p, sizeof(*__p)) ?		\
+		__get_user((x), __p) :				\
+		((x) = 0, -EFAULT);				\
+})
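+
+/*
+ * Example (illustrative): fetch one int from a user pointer.
+ *
+ *	int val;
+ *
+ *	if (get_user(val, (int __user *)uptr))
+ *		return -EFAULT;
+ */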
+
+
+#ifdef CONFIG_MMU
+#define __put_user_asm(insn, x, ptr, err)			\
+do {								\
+	uintptr_t __tmp;					\
+	__typeof__(*(ptr)) __x = x;				\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	" insn " %z3, %2\n"			\
+		"2:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"3:\n"						\
+		"	li %0, %4\n"				\
+		"	jump 2b, %1\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " SZPTR "\n"			\
+		"	" PTR " 1b, 3b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=r" (__tmp), "=m" (*(ptr))	\
+		: "rJ" (__x), "i" (-EFAULT));			\
+	__disable_user_access();				\
+} while (0)
+#else /* !CONFIG_MMU */
+#define __put_user_asm(insn, x, ptr, err)			\
+	__asm__ __volatile__ (					\
+		insn " %z1, %0"					\
+		: "=m" (*(ptr))					\
+		: "rJ" ((__typeof__(*(ptr))) x))
+#endif /* CONFIG_MMU */
+
+
+#ifdef CONFIG_64BIT
+#define __put_user_8(x, ptr, err) \
+	__put_user_asm("sd", x, ptr, err)
+#else /* !CONFIG_64BIT */
+#ifdef CONFIG_MMU
+#define __put_user_8(x, ptr, err)				\
+do {								\
+	u32 __user *__ptr = (u32 __user *)(ptr);		\
+	u64 __x = (__typeof__((x)-(x)))(x);			\
+	uintptr_t __tmp;					\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	sw %z4, %2\n"				\
+		"2:\n"						\
+		"	sw %z5, %3\n"				\
+		"3:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"4:\n"						\
+		"	li %0, %6\n"				\
+		"	jump 3b, %1\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " SZPTR "\n"			\
+		"	" PTR " 1b, 4b\n"			\
+		"	" PTR " 2b, 4b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=r" (__tmp),			\
+			"=m" (__ptr[__LSW]),			\
+			"=m" (__ptr[__MSW])			\
+		: "rJ" (__x), "rJ" (__x >> 32), "i" (-EFAULT));	\
+	__disable_user_access();				\
+} while (0)
+#else /* !CONFIG_MMU */
+#define __put_user_8(x, ptr, err)				\
+	*((u64 __user *)(ptr)) = (u64)(x)
+#endif /* CONFIG_MMU */
+#endif /* CONFIG_64BIT */
+
+
+/**
+ * __put_user: - Write a simple value into user space, with less checking.
+ * @x:   Value to copy to user space.
+ * @ptr: Destination address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple value from kernel space to user
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and @x must be assignable
+ * to the result of dereferencing @ptr.
+ *
+ * Caller must check the pointer with access_ok() before calling this
+ * function.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ */
+#define __put_user(x, ptr)					\
+({								\
+	register int __pu_err = 0;				\
+	__typeof__(*(ptr)) __user *__gu_ptr = (ptr);		\
+	__chk_user_ptr(__gu_ptr);				\
+	switch (sizeof(*__gu_ptr)) {				\
+	case 1:							\
+		__put_user_asm("sb", (x), __gu_ptr, __pu_err);	\
+		break;						\
+	case 2:							\
+		__put_user_asm("sh", (x), __gu_ptr, __pu_err);	\
+		break;						\
+	case 4:							\
+		__put_user_asm("sw", (x), __gu_ptr, __pu_err);	\
+		break;						\
+	case 8:							\
+		__put_user_8((x), __gu_ptr, __pu_err);	\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+	}							\
+	__pu_err;						\
+})
+
+/**
+ * put_user: - Write a simple value into user space.
+ * @x:   Value to copy to user space.
+ * @ptr: Destination address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple value from kernel space to user
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and @x must be assignable
+ * to the result of dereferencing @ptr.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ */
+#define put_user(x, ptr)					\
+({								\
+	__typeof__(*(ptr)) __user *__p = (ptr);			\
+	might_fault();						\
+	access_ok(VERIFY_WRITE, __p, sizeof(*__p)) ?		\
+		__put_user((x), __p) :				\
+		-EFAULT;					\
+})
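+
+/*
+ * Example (illustrative): store one int to a user pointer.
+ *
+ *	if (put_user(val, (int __user *)uptr))
+ *		return -EFAULT;
+ */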
+
+
+extern unsigned long __must_check __copy_user(void __user *to,
+	const void __user *from, unsigned long n);
+
+static inline unsigned long
+raw_copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+	return __copy_user(to, from, n);
+}
+
+static inline unsigned long
+raw_copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+	return __copy_user(to, from, n);
+}
+
+extern long strncpy_from_user(char *dest, const char __user *src, long count);
+
+extern long __must_check strlen_user(const char __user *str);
+extern long __must_check strnlen_user(const char __user *str, long n);
+
+extern unsigned long __must_check __clear_user(void __user *addr, unsigned long n);
+
+static inline unsigned long __must_check clear_user(void __user *to, unsigned long n)
+{
+	might_fault();
+	return access_ok(VERIFY_WRITE, to, n) ?
+		__clear_user(to, n) : n;
+}
+
+#endif /* _ASM_RISCV_UACCESS_H */
diff --git a/arch/riscv/include/asm/unistd.h b/arch/riscv/include/asm/unistd.h
new file mode 100644
index 000000000000..3f5820f50912
--- /dev/null
+++ b/arch/riscv/include/asm/unistd.h
@@ -0,0 +1,17 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#define __ARCH_HAVE_MMU
+#define __ARCH_WANT_SYS_CLONE
+#include <uapi/asm/unistd.h>
diff --git a/arch/riscv/include/asm/vdso.h b/arch/riscv/include/asm/vdso.h
new file mode 100644
index 000000000000..95768a3810a7
--- /dev/null
+++ b/arch/riscv/include/asm/vdso.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright (C) 2012 ARM Limited
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _ASM_RISCV_VDSO_H
+#define _ASM_RISCV_VDSO_H
+
+#include <linux/types.h>
+
+struct vdso_data {
+};
+
+#define VDSO_SYMBOL(base, name)					\
+({								\
+	extern const char __vdso_##name[];			\
+	(void __user *)((unsigned long)(base) + __vdso_##name);	\
+})
+
+#endif /* _ASM_RISCV_VDSO_H */
diff --git a/arch/riscv/include/asm/word-at-a-time.h b/arch/riscv/include/asm/word-at-a-time.h
new file mode 100644
index 000000000000..57fcb40f2616
--- /dev/null
+++ b/arch/riscv/include/asm/word-at-a-time.h
@@ -0,0 +1,56 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ * Derived from arch/x86/include/asm/word-at-a-time.h
+ */
+
+#ifndef _ASM_RISCV_WORD_AT_A_TIME_H
+#define _ASM_RISCV_WORD_AT_A_TIME_H
+
+
+#include <linux/kernel.h>
+
+struct word_at_a_time {
+	const unsigned long one_bits, high_bits;
+};
+
+#define WORD_AT_A_TIME_CONSTANTS { REPEAT_BYTE(0x01), REPEAT_BYTE(0x80) }
+
+static inline unsigned long has_zero(unsigned long val,
+	unsigned long *bits, const struct word_at_a_time *c)
+{
+	unsigned long mask = ((val - c->one_bits) & ~val) & c->high_bits;
+	*bits = mask;
+	return mask;
+}
+
+static inline unsigned long prep_zero_mask(unsigned long val,
+	unsigned long bits, const struct word_at_a_time *c)
+{
+	return bits;
+}
+
+static inline unsigned long create_zero_mask(unsigned long bits)
+{
+	bits = (bits - 1) & ~bits;
+	return bits >> 7;
+}
+
+static inline unsigned long find_zero(unsigned long mask)
+{
+	return fls64(mask) >> 3;
+}
+
+/* The mask we created is directly usable as a bytemask */
+#define zero_bytemask(mask) (mask)
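+
+/*
+ * Worked example (illustrative, little-endian RV64): for the word
+ * 0x0000000000000041 ("A" followed by NUL bytes), has_zero() yields
+ * the mask 0x8080808080808000, create_zero_mask() reduces it to 0xff,
+ * and find_zero() returns 1 -- the index of the first zero byte.
+ */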
+
+#endif /* _ASM_RISCV_WORD_AT_A_TIME_H */
diff --git a/arch/riscv/include/uapi/asm/Kbuild b/arch/riscv/include/uapi/asm/Kbuild
new file mode 100644
index 000000000000..276b6dae745c
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/Kbuild
@@ -0,0 +1,10 @@
+# UAPI Header export list
+include include/uapi/asm-generic/Kbuild.asm
+
+header-y += auxvec.h
+header-y += bitsperlong.h
+header-y += byteorder.h
+header-y += ptrace.h
+header-y += sigcontext.h
+header-y += siginfo.h
+header-y += unistd.h
diff --git a/arch/riscv/include/uapi/asm/auxvec.h b/arch/riscv/include/uapi/asm/auxvec.h
new file mode 100644
index 000000000000..1376515547cd
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/auxvec.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _UAPI_ASM_RISCV_AUXVEC_H
+#define _UAPI_ASM_RISCV_AUXVEC_H
+
+/* vDSO location */
+#define AT_SYSINFO_EHDR 33
+
+#endif /* _UAPI_ASM_RISCV_AUXVEC_H */
diff --git a/arch/riscv/include/uapi/asm/bitsperlong.h b/arch/riscv/include/uapi/asm/bitsperlong.h
new file mode 100644
index 000000000000..0b3cb52fd29d
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/bitsperlong.h
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _UAPI_ASM_RISCV_BITSPERLONG_H
+#define _UAPI_ASM_RISCV_BITSPERLONG_H
+
+#define __BITS_PER_LONG (__SIZEOF_POINTER__ * 8)
+
+#include <asm-generic/bitsperlong.h>
+
+#endif /* _UAPI_ASM_RISCV_BITSPERLONG_H */
diff --git a/arch/riscv/include/uapi/asm/byteorder.h b/arch/riscv/include/uapi/asm/byteorder.h
new file mode 100644
index 000000000000..4ca38af2cd32
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/byteorder.h
@@ -0,0 +1,23 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _UAPI_ASM_RISCV_BYTEORDER_H
+#define _UAPI_ASM_RISCV_BYTEORDER_H
+
+#include <linux/byteorder/little_endian.h>
+
+#endif /* _UAPI_ASM_RISCV_BYTEORDER_H */
diff --git a/arch/riscv/include/uapi/asm/elf.h b/arch/riscv/include/uapi/asm/elf.h
new file mode 100644
index 000000000000..e438edd97589
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/elf.h
@@ -0,0 +1,83 @@
+/*
+ * Copyright (C) 2003 Matjaz Breskvar <phoenix@bsemi.com>
+ * Copyright (C) 2010-2011 Jonas Bonn <jonas@southpole.se>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef _UAPI_ASM_ELF_H
+#define _UAPI_ASM_ELF_H
+
+#include <asm/ptrace.h>
+
+/* ELF register definitions */
+typedef unsigned long elf_greg_t;
+typedef struct user_regs_struct elf_gregset_t;
+#define ELF_NGREG (sizeof(elf_gregset_t) / sizeof(elf_greg_t))
+
+typedef struct user_fpregs_struct elf_fpregset_t;
+
+#define ELF_RISCV_R_SYM(r_info) ((r_info) >> 32)
+#define ELF_RISCV_R_TYPE(r_info) ((r_info) & 0xffffffff)
+
+/*
+ * RISC-V relocation types
+ */
+
+/* Relocation types used by the dynamic linker */
+#define R_RISCV_NONE		0
+#define R_RISCV_32		1
+#define R_RISCV_64		2
+#define R_RISCV_RELATIVE	3
+#define R_RISCV_COPY		4
+#define R_RISCV_JUMP_SLOT	5
+#define R_RISCV_TLS_DTPMOD32	6
+#define R_RISCV_TLS_DTPMOD64	7
+#define R_RISCV_TLS_DTPREL32	8
+#define R_RISCV_TLS_DTPREL64	9
+#define R_RISCV_TLS_TPREL32	10
+#define R_RISCV_TLS_TPREL64	11
+
+/* Relocation types not used by the dynamic linker */
+#define R_RISCV_BRANCH		16
+#define R_RISCV_JAL		17
+#define R_RISCV_CALL		18
+#define R_RISCV_CALL_PLT	19
+#define R_RISCV_GOT_HI20	20
+#define R_RISCV_TLS_GOT_HI20	21
+#define R_RISCV_TLS_GD_HI20	22
+#define R_RISCV_PCREL_HI20	23
+#define R_RISCV_PCREL_LO12_I	24
+#define R_RISCV_PCREL_LO12_S	25
+#define R_RISCV_HI20		26
+#define R_RISCV_LO12_I		27
+#define R_RISCV_LO12_S		28
+#define R_RISCV_TPREL_HI20	29
+#define R_RISCV_TPREL_LO12_I	30
+#define R_RISCV_TPREL_LO12_S	31
+#define R_RISCV_TPREL_ADD	32
+#define R_RISCV_ADD8		33
+#define R_RISCV_ADD16		34
+#define R_RISCV_ADD32		35
+#define R_RISCV_ADD64		36
+#define R_RISCV_SUB8		37
+#define R_RISCV_SUB16		38
+#define R_RISCV_SUB32		39
+#define R_RISCV_SUB64		40
+#define R_RISCV_GNU_VTINHERIT	41
+#define R_RISCV_GNU_VTENTRY	42
+#define R_RISCV_ALIGN		43
+#define R_RISCV_RVC_BRANCH	44
+#define R_RISCV_RVC_JUMP	45
+#define R_RISCV_LUI			46
+#define R_RISCV_GPREL_I		47
+#define R_RISCV_GPREL_S		48
+#define R_RISCV_TPREL_I		49
+#define R_RISCV_TPREL_S		50
+#define R_RISCV_RELAX		51
+
+#endif /* _UAPI_ASM_ELF_H */
diff --git a/arch/riscv/include/uapi/asm/ptrace.h b/arch/riscv/include/uapi/asm/ptrace.h
new file mode 100644
index 000000000000..c5b93028697c
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/ptrace.h
@@ -0,0 +1,69 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _UAPI_ASM_RISCV_PTRACE_H
+#define _UAPI_ASM_RISCV_PTRACE_H
+
+#ifndef __ASSEMBLY__
+
+#include <linux/types.h>
+
+/* User-mode register state for core dumps, ptrace, sigcontext
+ *
+ * This decouples struct pt_regs from the userspace ABI.
+ * struct user_regs_struct must form a prefix of struct pt_regs.
+ */
+struct user_regs_struct {
+	unsigned long pc;
+	unsigned long ra;
+	unsigned long sp;
+	unsigned long gp;
+	unsigned long tp;
+	unsigned long t0;
+	unsigned long t1;
+	unsigned long t2;
+	unsigned long s0;
+	unsigned long s1;
+	unsigned long a0;
+	unsigned long a1;
+	unsigned long a2;
+	unsigned long a3;
+	unsigned long a4;
+	unsigned long a5;
+	unsigned long a6;
+	unsigned long a7;
+	unsigned long s2;
+	unsigned long s3;
+	unsigned long s4;
+	unsigned long s5;
+	unsigned long s6;
+	unsigned long s7;
+	unsigned long s8;
+	unsigned long s9;
+	unsigned long s10;
+	unsigned long s11;
+	unsigned long t3;
+	unsigned long t4;
+	unsigned long t5;
+	unsigned long t6;
+};
+
+struct user_fpregs_struct {
+	__u64 f[32];
+	__u32 fcsr;
+};
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _UAPI_ASM_RISCV_PTRACE_H */
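
Since user_regs_struct fixes the register layout seen by debuggers and core
dumps, here is a minimal tracer-side sketch using the standard
PTRACE_GETREGSET/NT_PRSTATUS interface (assuming the port wires that up in
ptrace.c; error handling omitted):

  #include <elf.h>
  #include <stdio.h>
  #include <sys/ptrace.h>
  #include <sys/types.h>
  #include <sys/uio.h>
  #include <asm/ptrace.h>		/* struct user_regs_struct */

  static void dump_pc(pid_t pid)
  {
  	struct user_regs_struct regs;
  	struct iovec iov = { .iov_base = &regs, .iov_len = sizeof(regs) };

  	/* NT_PRSTATUS selects the general-purpose register set */
  	ptrace(PTRACE_GETREGSET, pid, (void *)NT_PRSTATUS, &iov);
  	printf("pc = 0x%lx, sp = 0x%lx\n", regs.pc, regs.sp);
  }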
diff --git a/arch/riscv/include/uapi/asm/sigcontext.h b/arch/riscv/include/uapi/asm/sigcontext.h
new file mode 100644
index 000000000000..04967aade3a6
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/sigcontext.h
@@ -0,0 +1,30 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef _UAPI_ASM_RISCV_SIGCONTEXT_H
+#define _UAPI_ASM_RISCV_SIGCONTEXT_H
+
+#include <asm/ptrace.h>
+
+/* Signal context structure
+ *
+ * This contains the context saved before a signal handler is invoked;
+ * it is restored by sys_sigreturn / sys_rt_sigreturn.
+ */
+struct sigcontext {
+	struct user_regs_struct sc_regs;
+	struct user_fpregs_struct sc_fpregs;
+};
+
+#endif /* _UAPI_ASM_RISCV_SIGCONTEXT_H */
diff --git a/arch/riscv/include/uapi/asm/siginfo.h b/arch/riscv/include/uapi/asm/siginfo.h
new file mode 100644
index 000000000000..f96849aac662
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/siginfo.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2016 SiFive, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_SIGINFO_H
+#define __ASM_SIGINFO_H
+
+#define __ARCH_SI_PREAMBLE_SIZE	(__SIZEOF_POINTER__ == 4 ? 12 : 16)
+
+#include <asm-generic/siginfo.h>
+
+#endif
diff --git a/arch/riscv/include/uapi/asm/unistd.h b/arch/riscv/include/uapi/asm/unistd.h
new file mode 100644
index 000000000000..124810f71633
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/unistd.h
@@ -0,0 +1,23 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <asm-generic/unistd.h>
+
+#define __NR_sysriscv  __NR_arch_specific_syscall
+#ifndef __riscv_atomic
+__SYSCALL(__NR_sysriscv, sys_sysriscv)
+#endif
+
+#define RISCV_ATOMIC_CMPXCHG    1
+#define RISCV_ATOMIC_CMPXCHG64  2
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 5/7] RISC-V: arch/riscv/lib
  2017-05-23  0:41 RISC-V Linux Port v1 Palmer Dabbelt
                   ` (3 preceding siblings ...)
  2017-05-23  0:41 ` [PATCH 4/7] RISC-V: arch/riscv/include Palmer Dabbelt
@ 2017-05-23  0:41 ` Palmer Dabbelt
  2017-05-23 10:47   ` Geert Uytterhoeven
  2017-05-23 11:19   ` Arnd Bergmann
  2017-05-23  0:41 ` [PATCH 6/7] RISC-V: arch/riscv/kernel Palmer Dabbelt
                   ` (6 subsequent siblings)
  11 siblings, 2 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-23  0:41 UTC (permalink / raw)
  To: linux-kernel, Arnd Bergmann, olof; +Cc: albert, Palmer Dabbelt

---
 arch/riscv/lib/Makefile  |   5 ++
 arch/riscv/lib/ashldi3.c |  42 ++++++++++++++++
 arch/riscv/lib/ashrdi3.c |  44 +++++++++++++++++
 arch/riscv/lib/delay.c   |  42 ++++++++++++++++
 arch/riscv/lib/libgcc.h  |  46 +++++++++++++++++
 arch/riscv/lib/lshrdi3.c |  42 ++++++++++++++++
 arch/riscv/lib/memcpy.S  |  99 +++++++++++++++++++++++++++++++++++++
 arch/riscv/lib/memset.S  | 119 ++++++++++++++++++++++++++++++++++++++++++++
 arch/riscv/lib/uaccess.S | 125 +++++++++++++++++++++++++++++++++++++++++++++++
 9 files changed, 564 insertions(+)
 create mode 100644 arch/riscv/lib/Makefile
 create mode 100644 arch/riscv/lib/ashldi3.c
 create mode 100644 arch/riscv/lib/ashrdi3.c
 create mode 100644 arch/riscv/lib/delay.c
 create mode 100644 arch/riscv/lib/libgcc.h
 create mode 100644 arch/riscv/lib/lshrdi3.c
 create mode 100644 arch/riscv/lib/memcpy.S
 create mode 100644 arch/riscv/lib/memset.S
 create mode 100644 arch/riscv/lib/uaccess.S

diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
new file mode 100644
index 000000000000..f644e582f4b8
--- /dev/null
+++ b/arch/riscv/lib/Makefile
@@ -0,0 +1,5 @@
+lib-y	:= delay.o memcpy.o memset.o uaccess.o
+
+ifeq ($(CONFIG_64BIT),)
+lib-y += ashldi3.o ashrdi3.o lshrdi3.o
+endif
diff --git a/arch/riscv/lib/ashldi3.c b/arch/riscv/lib/ashldi3.c
new file mode 100644
index 000000000000..9fb71e82ff16
--- /dev/null
+++ b/arch/riscv/lib/ashldi3.c
@@ -0,0 +1,42 @@
+/*
+ * Copyright (C) 2014 Darius Rad <darius@bluespec.com>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/export.h>
+
+#include "libgcc.h"
+
+long long __ashldi3(long long u, word_type b)
+{
+	DWunion uu, w;
+	word_type bm;
+
+	if (b == 0)
+		return u;
+
+	uu.ll = u;
+	bm = 32 - b;
+
+	if (bm <= 0) {
+		w.s.low = 0;
+		w.s.high = (unsigned int) uu.s.low << -bm;
+	} else {
+		const unsigned int carries = (unsigned int) uu.s.low >> bm;
+
+		w.s.low = (unsigned int) uu.s.low << b;
+		w.s.high = ((unsigned int) uu.s.high << b) | carries;
+	}
+
+	return w.ll;
+}
+EXPORT_SYMBOL(__ashldi3);
diff --git a/arch/riscv/lib/ashrdi3.c b/arch/riscv/lib/ashrdi3.c
new file mode 100644
index 000000000000..8a92e7e8de33
--- /dev/null
+++ b/arch/riscv/lib/ashrdi3.c
@@ -0,0 +1,44 @@
+/*
+ * Copyright (C) 2014 Darius Rad <darius@bluespec.com>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/export.h>
+
+#include "libgcc.h"
+
+long long __ashrdi3(long long u, word_type b)
+{
+	DWunion uu, w;
+	word_type bm;
+
+	if (b == 0)
+		return u;
+
+	uu.ll = u;
+	bm = 32 - b;
+
+	if (bm <= 0) {
+		/* w.s.high = 1..1 or 0..0 */
+		w.s.high =
+		    uu.s.high >> 31;
+		w.s.low = uu.s.high >> -bm;
+	} else {
+		const unsigned int carries = (unsigned int) uu.s.high << bm;
+
+		w.s.high = uu.s.high >> b;
+		w.s.low = ((unsigned int) uu.s.low >> b) | carries;
+	}
+
+	return w.ll;
+}
+EXPORT_SYMBOL(__ashrdi3);
diff --git a/arch/riscv/lib/delay.c b/arch/riscv/lib/delay.c
new file mode 100644
index 000000000000..3c7bf85a0b04
--- /dev/null
+++ b/arch/riscv/lib/delay.c
@@ -0,0 +1,42 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/delay.h>
+#include <linux/param.h>
+#include <linux/timex.h>
+#include <linux/export.h>
+
+void __delay(unsigned long cycles)
+{
+	u64 t0 = get_cycles();
+
+	while ((unsigned long)(get_cycles() - t0) < cycles)
+		cpu_relax();
+}
+
+void udelay(unsigned long usecs)
+{
+	u64 ucycles = (u64)usecs * timebase;
+	do_div(ucycles, 1000000U);
+	__delay((unsigned long)ucycles);
+}
+EXPORT_SYMBOL(udelay);
+
+void ndelay(unsigned long nsecs)
+{
+	u64 ncycles = (u64)nsecs * timebase;
+	do_div(ncycles, 1000000000U);
+	__delay((unsigned long)ncycles);
+}
+EXPORT_SYMBOL(ndelay);
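
To make the cycle conversion concrete (with a purely hypothetical timebase of
10 MHz, i.e. timebase = 10000000): udelay(50) busy-waits for
50 * 10000000 / 1000000 = 500 cycles, and ndelay(200) waits for
200 * 10000000 / 1000000000 = 2 cycles.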
diff --git a/arch/riscv/lib/libgcc.h b/arch/riscv/lib/libgcc.h
new file mode 100644
index 000000000000..2d3fa9d87922
--- /dev/null
+++ b/arch/riscv/lib/libgcc.h
@@ -0,0 +1,46 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#ifndef __ASM_LIBGCC_H
+#define __ASM_LIBGCC_H
+
+#include <asm/byteorder.h>
+
+typedef int word_type __attribute__ ((mode (__word__)));
+
+#ifdef __BIG_ENDIAN
+struct DWstruct {
+	int high, low;
+};
+#elif defined(__LITTLE_ENDIAN)
+struct DWstruct {
+	int low, high;
+};
+#else
+#error I feel sick.
+#endif
+
+typedef union {
+	struct DWstruct s;
+	long long ll;
+} DWunion;
+
+extern long long __ashldi3(long long u, word_type b);
+extern long long __ashrdi3(long long u, word_type b);
+extern word_type __cmpdi2(long long a, long long b);
+extern long long __lshrdi3(long long u, word_type b);
+extern long long __muldi3(long long u, long long v);
+extern word_type __ucmpdi2(unsigned long long a, unsigned long long b);
+
+#endif /* __ASM_LIBGCC_H */
diff --git a/arch/riscv/lib/lshrdi3.c b/arch/riscv/lib/lshrdi3.c
new file mode 100644
index 000000000000..ad4e132959f9
--- /dev/null
+++ b/arch/riscv/lib/lshrdi3.c
@@ -0,0 +1,42 @@
+/*
+ * Copyright (C) 2014 Darius Rad <darius@bluespec.com>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/export.h>
+
+#include "libgcc.h"
+
+long long __lshrdi3(long long u, word_type b)
+{
+	DWunion uu, w;
+	word_type bm;
+
+	if (b == 0)
+		return u;
+
+	uu.ll = u;
+	bm = 32 - b;
+
+	if (bm <= 0) {
+		w.s.high = 0;
+		w.s.low = (unsigned int) uu.s.high >> -bm;
+	} else {
+		const unsigned int carries = (unsigned int) uu.s.high << bm;
+
+		w.s.high = (unsigned int) uu.s.high >> b;
+		w.s.low = ((unsigned int) uu.s.low >> b) | carries;
+	}
+
+	return w.ll;
+}
+EXPORT_SYMBOL(__lshrdi3);
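
These helpers are only linked for riscv32; as a sketch of when they are
needed (not part of the patch), a variable 64-bit shift in C is typically
lowered by gcc to a libgcc call on a 32-bit target:

  /* Illustrative only: on riscv32 gcc generally emits a call to __lshrdi3()
   * here rather than inline code, which is why the kernel must provide it. */
  static inline unsigned long long shr64(unsigned long long v, unsigned int n)
  {
  	return v >> n;
  }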
diff --git a/arch/riscv/lib/memcpy.S b/arch/riscv/lib/memcpy.S
new file mode 100644
index 000000000000..1d789ff57d7d
--- /dev/null
+++ b/arch/riscv/lib/memcpy.S
@@ -0,0 +1,99 @@
+/*
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/linkage.h>
+#include <asm/asm.h>
+
+/* void *memcpy(void *, const void *, size_t) */
+ENTRY(memcpy)
+	move t6, a0  /* Preserve return value */
+
+	/* Defer to byte-oriented copy for small sizes */
+	sltiu a3, a2, 128
+	bnez a3, 4f
+	/* Use word-oriented copy only if low-order bits match */
+	andi a3, t6, SZREG-1
+	andi a4, a1, SZREG-1
+	bne a3, a4, 4f
+
+	beqz a3, 2f  /* Skip if already aligned */
+	/* Round to nearest XLEN-aligned address
+	   greater than or equal to start address */
+	andi a3, a1, ~(SZREG-1)
+	addi a3, a3, SZREG
+	/* Handle initial misalignment */
+	sub a4, a3, a1
+1:
+	lb a5, 0(a1)
+	addi a1, a1, 1
+	sb a5, 0(t6)
+	addi t6, t6, 1
+	bltu a1, a3, 1b
+	sub a2, a2, a4  /* Update count */
+
+2:
+	andi a4, a2, ~((16*SZREG)-1)
+	beqz a4, 4f
+	add a3, a1, a4
+3:
+	REG_L a4,       0(a1)
+	REG_L a5,   SZREG(a1)
+	REG_L a6, 2*SZREG(a1)
+	REG_L a7, 3*SZREG(a1)
+	REG_L t0, 4*SZREG(a1)
+	REG_L t1, 5*SZREG(a1)
+	REG_L t2, 6*SZREG(a1)
+	REG_L t3, 7*SZREG(a1)
+	REG_L t4, 8*SZREG(a1)
+	REG_L t5, 9*SZREG(a1)
+	REG_S a4,       0(t6)
+	REG_S a5,   SZREG(t6)
+	REG_S a6, 2*SZREG(t6)
+	REG_S a7, 3*SZREG(t6)
+	REG_S t0, 4*SZREG(t6)
+	REG_S t1, 5*SZREG(t6)
+	REG_S t2, 6*SZREG(t6)
+	REG_S t3, 7*SZREG(t6)
+	REG_S t4, 8*SZREG(t6)
+	REG_S t5, 9*SZREG(t6)
+	REG_L a4, 10*SZREG(a1)
+	REG_L a5, 11*SZREG(a1)
+	REG_L a6, 12*SZREG(a1)
+	REG_L a7, 13*SZREG(a1)
+	REG_L t0, 14*SZREG(a1)
+	REG_L t1, 15*SZREG(a1)
+	addi a1, a1, 16*SZREG
+	REG_S a4, 10*SZREG(t6)
+	REG_S a5, 11*SZREG(t6)
+	REG_S a6, 12*SZREG(t6)
+	REG_S a7, 13*SZREG(t6)
+	REG_S t0, 14*SZREG(t6)
+	REG_S t1, 15*SZREG(t6)
+	addi t6, t6, 16*SZREG
+	bltu a1, a3, 3b
+	andi a2, a2, (16*SZREG)-1  /* Update count */
+
+4:
+	/* Handle trailing misalignment */
+	beqz a2, 6f
+	add a3, a1, a2
+5:
+	lb a4, 0(a1)
+	addi a1, a1, 1
+	sb a4, 0(t6)
+	addi t6, t6, 1
+	bltu a1, a3, 5b
+6:
+	ret
+END(memcpy)
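
For readers following the assembly, a rough C-level outline of the same
strategy (an illustrative sketch, not the implementation): fall back to byte
copies for short or mutually misaligned buffers, otherwise align the source,
move sixteen registers per iteration, and finish the tail a byte at a time.

  #include <stddef.h>
  #include <stdint.h>

  void *memcpy_sketch(void *dst, const void *src, size_t n)
  {
  	unsigned char *d = dst;
  	const unsigned char *s = src;

  	/* Word-at-a-time only pays off for >= 128 bytes and when both
  	 * pointers share the same misalignment. */
  	if (n >= 128 &&
  	    ((uintptr_t)d & (sizeof(long) - 1)) == ((uintptr_t)s & (sizeof(long) - 1))) {
  		while ((uintptr_t)s & (sizeof(long) - 1)) {	/* align head */
  			*d++ = *s++;
  			n--;
  		}
  		while (n >= 16 * sizeof(long)) {	/* unrolled body */
  			for (int i = 0; i < 16; i++)
  				((long *)d)[i] = ((const long *)s)[i];
  			d += 16 * sizeof(long);
  			s += 16 * sizeof(long);
  			n -= 16 * sizeof(long);
  		}
  	}
  	while (n--)				/* byte tail */
  		*d++ = *s++;
  	return dst;
  }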
diff --git a/arch/riscv/lib/memset.S b/arch/riscv/lib/memset.S
new file mode 100644
index 000000000000..9d5156899232
--- /dev/null
+++ b/arch/riscv/lib/memset.S
@@ -0,0 +1,119 @@
+/*
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+
+#include <linux/linkage.h>
+#include <asm/asm.h>
+
+/* void *memset(void *, int, size_t) */
+ENTRY(memset)
+	move t0, a0  /* Preserve return value */
+
+	/* Defer to byte-oriented fill for small sizes */
+	sltiu a3, a2, 16
+	bnez a3, 4f
+
+	/* Round to nearest XLEN-aligned address
+	   greater than or equal to start address */
+	addi a3, t0, SZREG-1
+	andi a3, a3, ~(SZREG-1)
+	beq a3, t0, 2f  /* Skip if already aligned */
+	/* Handle initial misalignment */
+	sub a4, a3, t0
+1:
+	sb a1, 0(t0)
+	addi t0, t0, 1
+	bltu t0, a3, 1b
+	sub a2, a2, a4  /* Update count */
+
+2: /* Duff's device with 32 XLEN stores per iteration */
+	/* Broadcast value into all bytes */
+	andi a1, a1, 0xff
+	slli a3, a1, 8
+	or a1, a3, a1
+	slli a3, a1, 16
+	or a1, a3, a1
+#ifdef CONFIG_64BIT
+	slli a3, a1, 32
+	or a1, a3, a1
+#endif
+
+	/* Calculate end address */
+	andi a4, a2, ~(SZREG-1)
+	add a3, t0, a4
+
+	andi a4, a4, 31*SZREG  /* Calculate remainder */
+	beqz a4, 3f            /* Shortcut if no remainder */
+	neg a4, a4
+	addi a4, a4, 32*SZREG  /* Calculate initial offset */
+
+	/* Adjust start address with offset */
+	sub t0, t0, a4
+
+	/* Jump into loop body */
+	/* Assumes 32-bit instruction lengths */
+	la a5, 3f
+#ifdef CONFIG_64BIT
+	srli a4, a4, 1
+#endif
+	add a5, a5, a4
+	jr a5
+3:
+	REG_S a1,        0(t0)
+	REG_S a1,    SZREG(t0)
+	REG_S a1,  2*SZREG(t0)
+	REG_S a1,  3*SZREG(t0)
+	REG_S a1,  4*SZREG(t0)
+	REG_S a1,  5*SZREG(t0)
+	REG_S a1,  6*SZREG(t0)
+	REG_S a1,  7*SZREG(t0)
+	REG_S a1,  8*SZREG(t0)
+	REG_S a1,  9*SZREG(t0)
+	REG_S a1, 10*SZREG(t0)
+	REG_S a1, 11*SZREG(t0)
+	REG_S a1, 12*SZREG(t0)
+	REG_S a1, 13*SZREG(t0)
+	REG_S a1, 14*SZREG(t0)
+	REG_S a1, 15*SZREG(t0)
+	REG_S a1, 16*SZREG(t0)
+	REG_S a1, 17*SZREG(t0)
+	REG_S a1, 18*SZREG(t0)
+	REG_S a1, 19*SZREG(t0)
+	REG_S a1, 20*SZREG(t0)
+	REG_S a1, 21*SZREG(t0)
+	REG_S a1, 22*SZREG(t0)
+	REG_S a1, 23*SZREG(t0)
+	REG_S a1, 24*SZREG(t0)
+	REG_S a1, 25*SZREG(t0)
+	REG_S a1, 26*SZREG(t0)
+	REG_S a1, 27*SZREG(t0)
+	REG_S a1, 28*SZREG(t0)
+	REG_S a1, 29*SZREG(t0)
+	REG_S a1, 30*SZREG(t0)
+	REG_S a1, 31*SZREG(t0)
+	addi t0, t0, 32*SZREG
+	bltu t0, a3, 3b
+	andi a2, a2, SZREG-1  /* Update count */
+
+4:
+	/* Handle trailing misalignment */
+	beqz a2, 6f
+	add a3, t0, a2
+5:
+	sb a1, 0(t0)
+	addi t0, t0, 1
+	bltu t0, a3, 5b
+6:
+	ret
+END(memset)
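
The computed jump above is a classic Duff's device; a worked example for RV64
(SZREG = 8, numbers illustrative): if the aligned length works out to 72
bytes (9 stores needed), the remainder modulo 32*SZREG is 72, so a4 becomes
32*8 - 72 = 184, the start pointer is wound back by 184 bytes, and the jump
offset after the srli is 184/2 = 92 bytes = 23 instructions, landing on the
24th REG_S so that exactly the last 9 stores of the block execute on the
first pass.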
diff --git a/arch/riscv/lib/uaccess.S b/arch/riscv/lib/uaccess.S
new file mode 100644
index 000000000000..971ae7d86f97
--- /dev/null
+++ b/arch/riscv/lib/uaccess.S
@@ -0,0 +1,125 @@
+#include <linux/linkage.h>
+#include <asm/asm.h>
+#include <asm/csr.h>
+
+	.altmacro
+	.macro fixup op reg addr lbl
+	LOCAL _epc
+_epc:
+	\op \reg, \addr
+	.section __ex_table,"a"
+	.balign SZPTR
+	PTR _epc, \lbl
+	.previous
+	.endm
+
+ENTRY(__copy_user)
+
+#ifdef CONFIG_RV_PUM
+	/* Enable access to user memory */
+	li t6, SR_SUM
+	csrs sstatus, t6
+#endif
+
+	add a3, a1, a2
+	/* Use word-oriented copy only if low-order bits match */
+	andi t0, a0, SZREG-1
+	andi t1, a1, SZREG-1
+	bne t0, t1, 2f
+
+	addi t0, a1, SZREG-1
+	andi t1, a3, ~(SZREG-1)
+	andi t0, t0, ~(SZREG-1)
+	/* a3: terminal address of source region
+	 * t0: lowest XLEN-aligned address in source
+	 * t1: highest XLEN-aligned address in source
+	 */
+	bgeu t0, t1, 2f
+	bltu a1, t0, 4f
+1:
+	fixup REG_L, t2, (a1), 10f
+	fixup REG_S, t2, (a0), 10f
+	addi a1, a1, SZREG
+	addi a0, a0, SZREG
+	bltu a1, t1, 1b
+2:
+	bltu a1, a3, 5f
+
+3:
+#ifdef CONFIG_RV_PUM
+	/* Disable access to user memory */
+	csrc sstatus, t6
+#endif
+	li a0, 0
+	ret
+4: /* Edge case: unalignment */
+	fixup lbu, t2, (a1), 10f
+	fixup sb, t2, (a0), 10f
+	addi a1, a1, 1
+	addi a0, a0, 1
+	bltu a1, t0, 4b
+	j 1b
+5: /* Edge case: remainder */
+	fixup lbu, t2, (a1), 10f
+	fixup sb, t2, (a0), 10f
+	addi a1, a1, 1
+	addi a0, a0, 1
+	bltu a1, a3, 5b
+	j 3b
+ENDPROC(__copy_user)
+
+
+ENTRY(__clear_user)
+
+#ifdef CONFIG_RV_PUM
+	/* Enable access to user memory */
+	li t6, SR_SUM
+	csrs sstatus, t6
+#endif
+
+	add a3, a0, a1
+	addi t0, a0, SZREG-1
+	andi t1, a3, ~(SZREG-1)
+	andi t0, t0, ~(SZREG-1)
+	/* a3: terminal address of target region
+	 * t0: lowest XLEN-aligned address in target region
+	 * t1: highest XLEN-aligned address in target region
+	 */
+	bgeu t0, t1, 2f
+	bltu a0, t0, 4f
+1:
+	fixup REG_S, zero, (a0), 10f
+	addi a0, a0, SZREG
+	bltu a0, t1, 1b
+2:
+	bltu a0, a3, 5f
+
+3:
+#ifdef CONFIG_RV_PUM
+	/* Disable access to user memory */
+	csrc sstatus, t6
+#endif
+	li a0, 0
+	ret
+4: /* Edge case: unalignment */
+	fixup sb, zero, (a0), 10f
+	addi a0, a0, 1
+	bltu a0, t0, 4b
+	j 1b
+5: /* Edge case: remainder */
+	fixup sb, zero, (a0), 10f
+	addi a0, a0, 1
+	bltu a0, a3, 5b
+	j 3b
+ENDPROC(__clear_user)
+
+	.section .fixup,"ax"
+	.balign 4
+10:
+#ifdef CONFIG_RV_PUM
+	/* Disable access to user memory */
+	csrc sstatus, t6
+#endif
+	sub a0, a3, a0
+	ret
+	.previous
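
__copy_user backs the generic copy_{to,from}_user() helpers in this port; a
minimal caller-side sketch, assuming the usual contract that the return value
is the number of bytes left uncopied:

  #include <linux/errno.h>
  #include <linux/uaccess.h>

  static long example_read_from_user(const char __user *ubuf, size_t len)
  {
  	char kbuf[64];

  	if (len > sizeof(kbuf))
  		return -EINVAL;
  	/* A non-zero return means some bytes faulted and were not copied. */
  	if (copy_from_user(kbuf, ubuf, len))
  		return -EFAULT;
  	return len;
  }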
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 6/7] RISC-V: arch/riscv/kernel
  2017-05-23  0:41 RISC-V Linux Port v1 Palmer Dabbelt
                   ` (4 preceding siblings ...)
  2017-05-23  0:41 ` [PATCH 5/7] RISC-V: arch/riscv/lib Palmer Dabbelt
@ 2017-05-23  0:41 ` Palmer Dabbelt
  2017-05-23  2:11   ` Olof Johansson
                     ` (2 more replies)
  2017-05-23  0:41 ` [PATCH 7/7] RISC-V: arch/riscv/mm Palmer Dabbelt
                   ` (5 subsequent siblings)
  11 siblings, 3 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-23  0:41 UTC (permalink / raw)
  To: linux-kernel, Arnd Bergmann, olof; +Cc: albert, Palmer Dabbelt

---
 arch/riscv/kernel/Makefile         |  19 ++
 arch/riscv/kernel/asm-offsets.c    | 113 ++++++++++
 arch/riscv/kernel/cacheinfo.c      |  82 ++++++++
 arch/riscv/kernel/cpu.c            |  81 ++++++++
 arch/riscv/kernel/entry.S          | 414 +++++++++++++++++++++++++++++++++++++
 arch/riscv/kernel/head.S           | 139 +++++++++++++
 arch/riscv/kernel/irq.c            | 205 ++++++++++++++++++
 arch/riscv/kernel/module.c         | 185 +++++++++++++++++
 arch/riscv/kernel/pci.c            |  36 ++++
 arch/riscv/kernel/plic.c           | 208 +++++++++++++++++++
 arch/riscv/kernel/process.c        | 130 ++++++++++++
 arch/riscv/kernel/ptrace.c         | 148 +++++++++++++
 arch/riscv/kernel/reset.c          |  33 +++
 arch/riscv/kernel/riscv_ksyms.c    |  16 ++
 arch/riscv/kernel/sbi-con.c        | 214 +++++++++++++++++++
 arch/riscv/kernel/setup.c          | 234 +++++++++++++++++++++
 arch/riscv/kernel/signal.c         | 258 +++++++++++++++++++++++
 arch/riscv/kernel/smp.c            | 107 ++++++++++
 arch/riscv/kernel/smpboot.c        | 105 ++++++++++
 arch/riscv/kernel/stacktrace.c     | 183 ++++++++++++++++
 arch/riscv/kernel/sys_riscv.c      |  85 ++++++++
 arch/riscv/kernel/syscall_table.c  |  26 +++
 arch/riscv/kernel/time.c           | 116 +++++++++++
 arch/riscv/kernel/traps.c          | 167 +++++++++++++++
 arch/riscv/kernel/vdso.c           | 125 +++++++++++
 arch/riscv/kernel/vdso/.gitignore  |   1 +
 arch/riscv/kernel/vdso/Makefile    |  61 ++++++
 arch/riscv/kernel/vdso/sigreturn.S |  25 +++
 arch/riscv/kernel/vdso/vdso.S      |  28 +++
 arch/riscv/kernel/vdso/vdso.lds.S  |  77 +++++++
 arch/riscv/kernel/vmlinux.lds.S    |  93 +++++++++
 31 files changed, 3714 insertions(+)
 create mode 100644 arch/riscv/kernel/Makefile
 create mode 100644 arch/riscv/kernel/asm-offsets.c
 create mode 100644 arch/riscv/kernel/cacheinfo.c
 create mode 100644 arch/riscv/kernel/cpu.c
 create mode 100644 arch/riscv/kernel/entry.S
 create mode 100644 arch/riscv/kernel/head.S
 create mode 100644 arch/riscv/kernel/irq.c
 create mode 100644 arch/riscv/kernel/module.c
 create mode 100644 arch/riscv/kernel/pci.c
 create mode 100644 arch/riscv/kernel/plic.c
 create mode 100644 arch/riscv/kernel/process.c
 create mode 100644 arch/riscv/kernel/ptrace.c
 create mode 100644 arch/riscv/kernel/reset.c
 create mode 100644 arch/riscv/kernel/riscv_ksyms.c
 create mode 100644 arch/riscv/kernel/sbi-con.c
 create mode 100644 arch/riscv/kernel/setup.c
 create mode 100644 arch/riscv/kernel/signal.c
 create mode 100644 arch/riscv/kernel/smp.c
 create mode 100644 arch/riscv/kernel/smpboot.c
 create mode 100644 arch/riscv/kernel/stacktrace.c
 create mode 100644 arch/riscv/kernel/sys_riscv.c
 create mode 100644 arch/riscv/kernel/syscall_table.c
 create mode 100644 arch/riscv/kernel/time.c
 create mode 100644 arch/riscv/kernel/traps.c
 create mode 100644 arch/riscv/kernel/vdso.c
 create mode 100644 arch/riscv/kernel/vdso/.gitignore
 create mode 100644 arch/riscv/kernel/vdso/Makefile
 create mode 100644 arch/riscv/kernel/vdso/sigreturn.S
 create mode 100644 arch/riscv/kernel/vdso/vdso.S
 create mode 100644 arch/riscv/kernel/vdso/vdso.lds.S
 create mode 100644 arch/riscv/kernel/vmlinux.lds.S

diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
new file mode 100644
index 000000000000..94ac2931c56a
--- /dev/null
+++ b/arch/riscv/kernel/Makefile
@@ -0,0 +1,19 @@
+#
+# Makefile for the RISC-V Linux kernel
+#
+
+extra-y := head.o vmlinux.lds
+
+obj-y	:= cpu.o entry.o irq.o process.o ptrace.o reset.o setup.o \
+	   signal.o syscall_table.o sys_riscv.o time.o traps.o \
+	   riscv_ksyms.o stacktrace.o vdso.o cacheinfo.o vdso/
+
+CFLAGS_setup.o := -mcmodel=medany
+
+obj-$(CONFIG_SMP)		+= smpboot.o smp.o
+obj-$(CONFIG_SBI_CONSOLE)	+= sbi-con.o
+obj-$(CONFIG_PCI)		+= pci.o
+obj-$(CONFIG_MODULES)		+= module.o
+obj-$(CONFIG_PLIC)		+= plic.o
+
+clean:
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
new file mode 100644
index 000000000000..ac2e0cfaf8a3
--- /dev/null
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -0,0 +1,113 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/kbuild.h>
+#include <linux/sched.h>
+#include <asm/thread_info.h>
+#include <asm/ptrace.h>
+
+void asm_offsets(void)
+{
+	OFFSET(TASK_THREAD_INFO, task_struct, stack);
+	OFFSET(THREAD_RA, task_struct, thread.ra);
+	OFFSET(THREAD_SP, task_struct, thread.sp);
+	OFFSET(THREAD_S0, task_struct, thread.s[0]);
+	OFFSET(THREAD_S1, task_struct, thread.s[1]);
+	OFFSET(THREAD_S2, task_struct, thread.s[2]);
+	OFFSET(THREAD_S3, task_struct, thread.s[3]);
+	OFFSET(THREAD_S4, task_struct, thread.s[4]);
+	OFFSET(THREAD_S5, task_struct, thread.s[5]);
+	OFFSET(THREAD_S6, task_struct, thread.s[6]);
+	OFFSET(THREAD_S7, task_struct, thread.s[7]);
+	OFFSET(THREAD_S8, task_struct, thread.s[8]);
+	OFFSET(THREAD_S9, task_struct, thread.s[9]);
+	OFFSET(THREAD_S10, task_struct, thread.s[10]);
+	OFFSET(THREAD_S11, task_struct, thread.s[11]);
+	OFFSET(THREAD_SP, task_struct, thread.sp);
+	OFFSET(TI_TASK, thread_info, task);
+	OFFSET(TI_FLAGS, thread_info, flags);
+	OFFSET(TI_CPU, thread_info, cpu);
+
+	OFFSET(THREAD_F0,  task_struct, thread.fstate.f[0]);
+	OFFSET(THREAD_F1,  task_struct, thread.fstate.f[1]);
+	OFFSET(THREAD_F2,  task_struct, thread.fstate.f[2]);
+	OFFSET(THREAD_F3,  task_struct, thread.fstate.f[3]);
+	OFFSET(THREAD_F4,  task_struct, thread.fstate.f[4]);
+	OFFSET(THREAD_F5,  task_struct, thread.fstate.f[5]);
+	OFFSET(THREAD_F6,  task_struct, thread.fstate.f[6]);
+	OFFSET(THREAD_F7,  task_struct, thread.fstate.f[7]);
+	OFFSET(THREAD_F8,  task_struct, thread.fstate.f[8]);
+	OFFSET(THREAD_F9,  task_struct, thread.fstate.f[9]);
+	OFFSET(THREAD_F10, task_struct, thread.fstate.f[10]);
+	OFFSET(THREAD_F11, task_struct, thread.fstate.f[11]);
+	OFFSET(THREAD_F12, task_struct, thread.fstate.f[12]);
+	OFFSET(THREAD_F13, task_struct, thread.fstate.f[13]);
+	OFFSET(THREAD_F14, task_struct, thread.fstate.f[14]);
+	OFFSET(THREAD_F15, task_struct, thread.fstate.f[15]);
+	OFFSET(THREAD_F16, task_struct, thread.fstate.f[16]);
+	OFFSET(THREAD_F17, task_struct, thread.fstate.f[17]);
+	OFFSET(THREAD_F18, task_struct, thread.fstate.f[18]);
+	OFFSET(THREAD_F19, task_struct, thread.fstate.f[19]);
+	OFFSET(THREAD_F20, task_struct, thread.fstate.f[20]);
+	OFFSET(THREAD_F21, task_struct, thread.fstate.f[21]);
+	OFFSET(THREAD_F22, task_struct, thread.fstate.f[22]);
+	OFFSET(THREAD_F23, task_struct, thread.fstate.f[23]);
+	OFFSET(THREAD_F24, task_struct, thread.fstate.f[24]);
+	OFFSET(THREAD_F25, task_struct, thread.fstate.f[25]);
+	OFFSET(THREAD_F26, task_struct, thread.fstate.f[26]);
+	OFFSET(THREAD_F27, task_struct, thread.fstate.f[27]);
+	OFFSET(THREAD_F28, task_struct, thread.fstate.f[28]);
+	OFFSET(THREAD_F29, task_struct, thread.fstate.f[29]);
+	OFFSET(THREAD_F30, task_struct, thread.fstate.f[30]);
+	OFFSET(THREAD_F31, task_struct, thread.fstate.f[31]);
+	OFFSET(THREAD_FCSR, task_struct, thread.fstate.fcsr);
+
+	DEFINE(PT_SIZE, sizeof(struct pt_regs));
+	OFFSET(PT_SEPC, pt_regs, sepc);
+	OFFSET(PT_RA, pt_regs, ra);
+	OFFSET(PT_FP, pt_regs, s0);
+	OFFSET(PT_S0, pt_regs, s0);
+	OFFSET(PT_S1, pt_regs, s1);
+	OFFSET(PT_S2, pt_regs, s2);
+	OFFSET(PT_S3, pt_regs, s3);
+	OFFSET(PT_S4, pt_regs, s4);
+	OFFSET(PT_S5, pt_regs, s5);
+	OFFSET(PT_S6, pt_regs, s6);
+	OFFSET(PT_S7, pt_regs, s7);
+	OFFSET(PT_S8, pt_regs, s8);
+	OFFSET(PT_S9, pt_regs, s9);
+	OFFSET(PT_S10, pt_regs, s10);
+	OFFSET(PT_S11, pt_regs, s11);
+	OFFSET(PT_SP, pt_regs, sp);
+	OFFSET(PT_TP, pt_regs, tp);
+	OFFSET(PT_A0, pt_regs, a0);
+	OFFSET(PT_A1, pt_regs, a1);
+	OFFSET(PT_A2, pt_regs, a2);
+	OFFSET(PT_A3, pt_regs, a3);
+	OFFSET(PT_A4, pt_regs, a4);
+	OFFSET(PT_A5, pt_regs, a5);
+	OFFSET(PT_A6, pt_regs, a6);
+	OFFSET(PT_A7, pt_regs, a7);
+	OFFSET(PT_T0, pt_regs, t0);
+	OFFSET(PT_T1, pt_regs, t1);
+	OFFSET(PT_T2, pt_regs, t2);
+	OFFSET(PT_T3, pt_regs, t3);
+	OFFSET(PT_T4, pt_regs, t4);
+	OFFSET(PT_T5, pt_regs, t5);
+	OFFSET(PT_T6, pt_regs, t6);
+	OFFSET(PT_GP, pt_regs, gp);
+	OFFSET(PT_SSTATUS, pt_regs, sstatus);
+	OFFSET(PT_SBADADDR, pt_regs, sbadaddr);
+	OFFSET(PT_SCAUSE, pt_regs, scause);
+}
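
For context, kbuild compiles this file to assembly and post-processes the
OFFSET()/DEFINE() invocations into include/generated/asm-offsets.h; the
numeric values below are made up, but the generated entries look roughly
like:

  /* include/generated/asm-offsets.h (illustrative values only) */
  #define TASK_THREAD_INFO 8	/* offsetof(struct task_struct, stack) */
  #define THREAD_RA 864		/* offsetof(struct task_struct, thread.ra) */
  #define PT_SIZE 288		/* sizeof(struct pt_regs) */

which is what lets entry.S and head.S address structure fields, e.g.
"REG_S ra, THREAD_RA(a0)".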
diff --git a/arch/riscv/kernel/cacheinfo.c b/arch/riscv/kernel/cacheinfo.c
new file mode 100644
index 000000000000..a22ea8abbf3c
--- /dev/null
+++ b/arch/riscv/kernel/cacheinfo.c
@@ -0,0 +1,82 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/cacheinfo.h>
+#include <linux/cpu.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+
+static void ci_leaf_init(struct cacheinfo *this_leaf,
+                         struct device_node *node,
+                         enum cache_type type, unsigned int level)
+{
+	this_leaf->of_node = node;
+	this_leaf->level = level;
+	this_leaf->type = type;
+	this_leaf->physical_line_partition = 1; /* not a sector cache */
+	this_leaf->attributes = CACHE_WRITE_BACK | CACHE_READ_ALLOCATE | CACHE_WRITE_ALLOCATE; /* TODO: add to DTS */
+}
+
+static int __init_cache_level(unsigned int cpu)
+{
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+	struct device_node *np = of_cpu_device_node_get(cpu);
+	int levels = 0, leaves = 0, level;
+
+	if (of_property_read_bool(np, "cache-size")) ++leaves;
+	if (of_property_read_bool(np, "i-cache-size")) ++leaves;
+	if (of_property_read_bool(np, "d-cache-size")) ++leaves;
+	if (leaves > 0) levels = 1;
+
+	while ((np = of_find_next_cache_node(np))) {
+		if (!of_device_is_compatible(np, "cache")) break;
+		if (of_property_read_u32(np, "cache-level", &level)) break;
+		if (level <= levels) break;
+		if (of_property_read_bool(np, "cache-size")) ++leaves;
+		if (of_property_read_bool(np, "i-cache-size")) ++leaves;
+		if (of_property_read_bool(np, "d-cache-size")) ++leaves;
+		levels = level;
+	}
+
+	this_cpu_ci->num_levels = levels;
+	this_cpu_ci->num_leaves = leaves;
+	return 0;
+}
+
+static int __populate_cache_leaves(unsigned int cpu)
+{
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+	struct cacheinfo *this_leaf = this_cpu_ci->info_list;
+	struct device_node *np = of_cpu_device_node_get(cpu);
+	int levels = 1, level = 1;
+
+	if (of_property_read_bool(np, "cache-size"))   ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
+	if (of_property_read_bool(np, "i-cache-size")) ci_leaf_init(this_leaf++, np, CACHE_TYPE_INST, level);
+	if (of_property_read_bool(np, "d-cache-size")) ci_leaf_init(this_leaf++, np, CACHE_TYPE_DATA, level);
+
+	while ((np = of_find_next_cache_node(np))) {
+		if (!of_device_is_compatible(np, "cache")) break;
+		if (of_property_read_u32(np, "cache-level", &level)) break;
+		if (level <= levels) break;
+		if (of_property_read_bool(np, "cache-size"))   ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
+		if (of_property_read_bool(np, "i-cache-size")) ci_leaf_init(this_leaf++, np, CACHE_TYPE_INST, level);
+		if (of_property_read_bool(np, "d-cache-size")) ci_leaf_init(this_leaf++, np, CACHE_TYPE_DATA, level);
+		levels = level;
+	}
+
+	return 0;
+}
+
+DEFINE_SMP_CALL_CACHE_FUNCTION(init_cache_level)
+DEFINE_SMP_CALL_CACHE_FUNCTION(populate_cache_leaves)
diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
new file mode 100644
index 000000000000..9cbf53eb58be
--- /dev/null
+++ b/arch/riscv/kernel/cpu.c
@@ -0,0 +1,81 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/init.h>
+#include <linux/seq_file.h>
+#include <linux/of.h>
+
+/* Return -1 if not a valid hart */
+int riscv_of_processor_hart(struct device_node *node)
+{
+	const char *isa, *status;
+	u32 hart;
+
+	if (!of_device_is_compatible(node, "riscv")) return -1;
+	if (of_property_read_u32(node, "reg", &hart) || hart >= NR_CPUS) return -1;
+	if (of_property_read_string(node, "status", &status) || strcmp(status, "okay")) return -1;
+	if (of_property_read_string(node, "riscv,isa", &isa) || isa[0] != 'r' || isa[1] != 'v') return -1;
+
+	return hart;
+}
+
+#ifdef CONFIG_PROC_FS
+
+static void *c_start(struct seq_file *m, loff_t *pos)
+{
+	*pos = cpumask_next(*pos - 1, cpu_online_mask);
+	if ((*pos) < nr_cpu_ids)
+		return (void *)(uintptr_t)(1 + *pos);
+	return NULL;
+}
+
+static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+{
+	(*pos)++;
+	return c_start(m, pos);
+}
+
+static void c_stop(struct seq_file *m, void *v)
+{
+}
+
+static int c_show(struct seq_file *m, void *v)
+{
+	unsigned long hart_id = (unsigned long)v - 1;
+	struct device_node *node = of_get_cpu_node(hart_id, NULL);
+	const char *compat, *isa, *mmu;
+
+	seq_printf(m, "hart\t: %lu\n", hart_id);
+	if (!of_property_read_string(node, "riscv,isa", &isa) && isa[0] == 'r' && isa[1] == 'v') {
+		seq_printf(m, "isa\t: %s\n", isa);
+	}
+	if (!of_property_read_string(node, "mmu-type", &mmu) && !strncmp(mmu, "riscv,", 6)) {
+		seq_printf(m, "mmu\t: %s\n", mmu+6);
+	}
+	if (!of_property_read_string(node, "compatible", &compat) && strcmp(compat, "riscv")) {
+		seq_printf(m, "uarch\t: %s\n", compat);
+	}
+	seq_printf(m, "\n");
+
+	return 0;
+}
+
+const struct seq_operations cpuinfo_op = {
+	.start	= c_start,
+	.next	= c_next,
+	.stop	= c_stop,
+	.show	= c_show
+};
+
+#endif /* CONFIG_PROC_FS */
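
The seq_file hooks above emit one block per online hart in /proc/cpuinfo;
with made-up device-tree values the output would read something like:

  hart	: 0
  isa	: rv64imafdc
  mmu	: sv39
  uarch	: sifive,rocket0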
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
new file mode 100644
index 000000000000..d70a1be1e3a6
--- /dev/null
+++ b/arch/riscv/kernel/entry.S
@@ -0,0 +1,414 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+
+#include <asm/asm.h>
+#include <asm/csr.h>
+#include <asm/unistd.h>
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+
+	.text
+	.altmacro
+	.macro SAVE_ALL
+	LOCAL _restore_kernel_sp
+	LOCAL _save_context
+
+	/* If coming from userspace, preserve the user stack pointer and load
+	   the kernel stack pointer.  If we came from the kernel, sscratch
+	   will contain 0, and we should continue on the current stack. */
+	csrrw sp, sscratch, sp
+	bnez sp, _save_context
+
+_restore_kernel_sp:
+	csrr sp, sscratch
+_save_context:
+	addi sp, sp, -(PT_SIZE)
+	REG_S x1,  PT_RA(sp)
+	REG_S x3,  PT_GP(sp)
+	REG_S x4,  PT_TP(sp)
+	REG_S x5,  PT_T0(sp)
+	REG_S x6,  PT_T1(sp)
+	REG_S x7,  PT_T2(sp)
+	REG_S x8,  PT_S0(sp)
+	REG_S x9,  PT_S1(sp)
+	REG_S x10, PT_A0(sp)
+	REG_S x11, PT_A1(sp)
+	REG_S x12, PT_A2(sp)
+	REG_S x13, PT_A3(sp)
+	REG_S x14, PT_A4(sp)
+	REG_S x15, PT_A5(sp)
+	REG_S x16, PT_A6(sp)
+	REG_S x17, PT_A7(sp)
+	REG_S x18, PT_S2(sp)
+	REG_S x19, PT_S3(sp)
+	REG_S x20, PT_S4(sp)
+	REG_S x21, PT_S5(sp)
+	REG_S x22, PT_S6(sp)
+	REG_S x23, PT_S7(sp)
+	REG_S x24, PT_S8(sp)
+	REG_S x25, PT_S9(sp)
+	REG_S x26, PT_S10(sp)
+	REG_S x27, PT_S11(sp)
+	REG_S x28, PT_T3(sp)
+	REG_S x29, PT_T4(sp)
+	REG_S x30, PT_T5(sp)
+	REG_S x31, PT_T6(sp)
+
+	/* Disable FPU to detect illegal usage of
+	   floating point in kernel space */
+	li t0, SR_FS
+
+	csrr s0, sscratch
+	csrrc s1, sstatus, t0
+	csrr s2, sepc
+	csrr s3, sbadaddr
+	csrr s4, scause
+	REG_S s0, PT_SP(sp)
+	REG_S s1, PT_SSTATUS(sp)
+	REG_S s2, PT_SEPC(sp)
+	REG_S s3, PT_SBADADDR(sp)
+	REG_S s4, PT_SCAUSE(sp)
+	.endm
+
+	.macro RESTORE_ALL
+	REG_L a0, PT_SSTATUS(sp)
+	REG_L a2, PT_SEPC(sp)
+	csrw sstatus, a0
+	csrw sepc, a2
+
+	REG_L x1,  PT_RA(sp)
+	REG_L x3,  PT_GP(sp)
+	REG_L x4,  PT_TP(sp)
+	REG_L x5,  PT_T0(sp)
+	REG_L x6,  PT_T1(sp)
+	REG_L x7,  PT_T2(sp)
+	REG_L x8,  PT_S0(sp)
+	REG_L x9,  PT_S1(sp)
+	REG_L x10, PT_A0(sp)
+	REG_L x11, PT_A1(sp)
+	REG_L x12, PT_A2(sp)
+	REG_L x13, PT_A3(sp)
+	REG_L x14, PT_A4(sp)
+	REG_L x15, PT_A5(sp)
+	REG_L x16, PT_A6(sp)
+	REG_L x17, PT_A7(sp)
+	REG_L x18, PT_S2(sp)
+	REG_L x19, PT_S3(sp)
+	REG_L x20, PT_S4(sp)
+	REG_L x21, PT_S5(sp)
+	REG_L x22, PT_S6(sp)
+	REG_L x23, PT_S7(sp)
+	REG_L x24, PT_S8(sp)
+	REG_L x25, PT_S9(sp)
+	REG_L x26, PT_S10(sp)
+	REG_L x27, PT_S11(sp)
+	REG_L x28, PT_T3(sp)
+	REG_L x29, PT_T4(sp)
+	REG_L x30, PT_T5(sp)
+	REG_L x31, PT_T6(sp)
+
+	REG_L x2,  PT_SP(sp)
+	.endm
+
+ENTRY(handle_exception)
+	SAVE_ALL
+
+	/* Set sscratch register to 0, so that if a recursive exception
+	   occurs, the exception vector knows it came from the kernel */
+	csrw sscratch, x0
+
+	/* Compute address of current thread_info */
+	li tp, ~(THREAD_SIZE-1)
+	and tp, tp, sp
+
+	la gp, __global_pointer$
+
+	la ra, ret_from_exception
+	/* MSB of cause differentiates between
+	   interrupts and exceptions */
+	bge s4, zero, 1f
+
+	/* Handle interrupts */
+	slli a0, s4, 1
+	srli a0, a0, 1
+	move a1, sp /* pt_regs */
+	tail do_IRQ
+1:
+	/* Handle syscalls */
+	li t0, EXC_SYSCALL
+	beq s4, t0, handle_syscall
+
+	/* Handle other exceptions */
+	slli t0, s4, LGPTR
+	la t1, excp_vect_table
+	la t2, excp_vect_table_end
+	move a0, sp /* pt_regs */
+	add t0, t1, t0
+	/* Check if exception code lies within bounds */
+	bgeu t0, t2, 1f
+	REG_L t0, 0(t0)
+	jr t0
+1:
+	tail do_trap_unknown
+
+handle_syscall:
+	/* Advance SEPC to avoid executing the original
+	   scall instruction on sret */
+	addi s2, s2, 0x4
+	REG_S s2, PT_SEPC(sp)
+	/* System calls run with interrupts enabled */
+	csrs sstatus, SR_IE
+	/* Trace syscalls, but only if requested by the user. */
+	REG_L t0, TI_FLAGS(tp)
+	andi t0, t0, _TIF_SYSCALL_TRACE
+	bnez t0, handle_syscall_trace_enter
+check_syscall_nr:
+	/* Check to make sure we don't jump to a bogus syscall number. */
+	li t0, __NR_syscalls
+	la s0, sys_ni_syscall
+	/* Syscall number held in a7 */
+	bgeu a7, t0, 1f
+	la s0, sys_call_table
+	slli t0, a7, LGPTR
+	add s0, s0, t0
+	REG_L s0, 0(s0)
+1:
+	jalr s0
+
+ret_from_syscall:
+	/* Set user a0 to kernel a0 */
+	REG_S a0, PT_A0(sp)
+	/* Trace syscalls, but only if requested by the user. */
+	REG_L t0, TI_FLAGS(tp)
+	andi t0, t0, _TIF_SYSCALL_TRACE
+	bnez t0, handle_syscall_trace_exit
+
+ret_from_exception:
+	REG_L s0, PT_SSTATUS(sp)
+	csrc sstatus, SR_IE
+	andi s0, s0, SR_PS
+	bnez s0, restore_all
+
+resume_userspace:
+	/* Interrupts must be disabled here so flags are checked atomically */
+	REG_L s0, TI_FLAGS(tp) /* current_thread_info->flags */
+	andi s1, s0, _TIF_WORK_MASK
+	bnez s1, work_pending
+
+	/* Save unwound kernel stack pointer in sscratch */
+	addi s0, sp, PT_SIZE
+	csrw sscratch, s0
+restore_all:
+	RESTORE_ALL
+	sret
+
+work_pending:
+	/* Enter slow path for supplementary processing */
+	la ra, ret_from_exception
+	andi s1, s0, _TIF_NEED_RESCHED
+	bnez s1, work_resched
+work_notifysig:
+	/* Handle pending signals and notify-resume requests */
+	csrs sstatus, SR_IE /* Enable interrupts for do_notify_resume() */
+	move a0, sp /* pt_regs */
+	move a1, s0 /* current_thread_info->flags */
+	tail do_notify_resume
+work_resched:
+	tail schedule
+
+/* Slow paths for ptrace. */
+handle_syscall_trace_enter:
+	move a0, sp
+	call do_syscall_trace_enter
+	REG_L a0, PT_A0(sp)
+	REG_L a1, PT_A1(sp)
+	REG_L a2, PT_A2(sp)
+	REG_L a3, PT_A3(sp)
+	REG_L a4, PT_A4(sp)
+	REG_L a5, PT_A5(sp)
+	REG_L a6, PT_A6(sp)
+	REG_L a7, PT_A7(sp)
+	j check_syscall_nr
+handle_syscall_trace_exit:
+	move a0, sp
+	call do_syscall_trace_exit
+	j ret_from_exception
+
+END(handle_exception)
+
+ENTRY(ret_from_fork)
+	la ra, ret_from_exception
+	tail schedule_tail
+ENDPROC(ret_from_fork)
+
+ENTRY(ret_from_kernel_thread)
+	call schedule_tail
+	/* Call fn(arg) */
+	la ra, ret_from_exception
+	move a0, s1
+	jr s0
+ENDPROC(ret_from_kernel_thread)
+
+
+/*
+ * Integer register context switch
+ * The callee-saved registers must be saved and restored.
+ * 
+ *   a0: previous task_struct (must be preserved across the switch)
+ *   a1: next task_struct
+ */
+ENTRY(__switch_to)
+	/* Save context into prev->thread */
+	REG_S ra,  THREAD_RA(a0)
+	REG_S sp,  THREAD_SP(a0)
+	REG_S s0,  THREAD_S0(a0)
+	REG_S s1,  THREAD_S1(a0)
+	REG_S s2,  THREAD_S2(a0)
+	REG_S s3,  THREAD_S3(a0)
+	REG_S s4,  THREAD_S4(a0)
+	REG_S s5,  THREAD_S5(a0)
+	REG_S s6,  THREAD_S6(a0)
+	REG_S s7,  THREAD_S7(a0)
+	REG_S s8,  THREAD_S8(a0)
+	REG_S s9,  THREAD_S9(a0)
+	REG_S s10, THREAD_S10(a0)
+	REG_S s11, THREAD_S11(a0)
+	/* Restore context from next->thread */
+	REG_L ra,  THREAD_RA(a1)
+	REG_L sp,  THREAD_SP(a1)
+	REG_L s0,  THREAD_S0(a1)
+	REG_L s1,  THREAD_S1(a1)
+	REG_L s2,  THREAD_S2(a1)
+	REG_L s3,  THREAD_S3(a1)
+	REG_L s4,  THREAD_S4(a1)
+	REG_L s5,  THREAD_S5(a1)
+	REG_L s6,  THREAD_S6(a1)
+	REG_L s7,  THREAD_S7(a1)
+	REG_L s8,  THREAD_S8(a1)
+	REG_L s9,  THREAD_S9(a1)
+	REG_L s10, THREAD_S10(a1)
+	REG_L s11, THREAD_S11(a1)
+	REG_L tp, TASK_THREAD_INFO(a1)
+	ret
+ENDPROC(__switch_to)
+
+ENTRY(__fstate_save)
+	li t1, SR_FS
+	csrs sstatus, t1
+	frcsr t0
+	fsd f0,  THREAD_F0(a0)
+	fsd f1,  THREAD_F1(a0)
+	fsd f2,  THREAD_F2(a0)
+	fsd f3,  THREAD_F3(a0)
+	fsd f4,  THREAD_F4(a0)
+	fsd f5,  THREAD_F5(a0)
+	fsd f6,  THREAD_F6(a0)
+	fsd f7,  THREAD_F7(a0)
+	fsd f8,  THREAD_F8(a0)
+	fsd f9,  THREAD_F9(a0)
+	fsd f10, THREAD_F10(a0)
+	fsd f11, THREAD_F11(a0)
+	fsd f12, THREAD_F12(a0)
+	fsd f13, THREAD_F13(a0)
+	fsd f14, THREAD_F14(a0)
+	fsd f15, THREAD_F15(a0)
+	fsd f16, THREAD_F16(a0)
+	fsd f17, THREAD_F17(a0)
+	fsd f18, THREAD_F18(a0)
+	fsd f19, THREAD_F19(a0)
+	fsd f20, THREAD_F20(a0)
+	fsd f21, THREAD_F21(a0)
+	fsd f22, THREAD_F22(a0)
+	fsd f23, THREAD_F23(a0)
+	fsd f24, THREAD_F24(a0)
+	fsd f25, THREAD_F25(a0)
+	fsd f26, THREAD_F26(a0)
+	fsd f27, THREAD_F27(a0)
+	fsd f28, THREAD_F28(a0)
+	fsd f29, THREAD_F29(a0)
+	fsd f30, THREAD_F30(a0)
+	fsd f31, THREAD_F31(a0)
+	sw t0, THREAD_FCSR(a0)
+	csrc sstatus, t1
+	ret
+ENDPROC(__fstate_save)
+
+ENTRY(__fstate_restore)
+	li t1, SR_FS
+	lw t0, THREAD_FCSR(a0)
+	csrs sstatus, t1
+	fld f0,  THREAD_F0(a0)
+	fld f1,  THREAD_F1(a0)
+	fld f2,  THREAD_F2(a0)
+	fld f3,  THREAD_F3(a0)
+	fld f4,  THREAD_F4(a0)
+	fld f5,  THREAD_F5(a0)
+	fld f6,  THREAD_F6(a0)
+	fld f7,  THREAD_F7(a0)
+	fld f8,  THREAD_F8(a0)
+	fld f9,  THREAD_F9(a0)
+	fld f10, THREAD_F10(a0)
+	fld f11, THREAD_F11(a0)
+	fld f12, THREAD_F12(a0)
+	fld f13, THREAD_F13(a0)
+	fld f14, THREAD_F14(a0)
+	fld f15, THREAD_F15(a0)
+	fld f16, THREAD_F16(a0)
+	fld f17, THREAD_F17(a0)
+	fld f18, THREAD_F18(a0)
+	fld f19, THREAD_F19(a0)
+	fld f20, THREAD_F20(a0)
+	fld f21, THREAD_F21(a0)
+	fld f22, THREAD_F22(a0)
+	fld f23, THREAD_F23(a0)
+	fld f24, THREAD_F24(a0)
+	fld f25, THREAD_F25(a0)
+	fld f26, THREAD_F26(a0)
+	fld f27, THREAD_F27(a0)
+	fld f28, THREAD_F28(a0)
+	fld f29, THREAD_F29(a0)
+	fld f30, THREAD_F30(a0)
+	fld f31, THREAD_F31(a0)
+	fscsr t0
+	csrc sstatus, t1
+	ret
+ENDPROC(__fstate_restore)
+
+
+	.section ".rodata"
+	/* Exception vector table */
+ENTRY(excp_vect_table)
+	PTR do_trap_insn_misaligned
+	PTR do_trap_unknown /* instruction access exception */
+	PTR do_trap_insn_illegal
+	PTR do_trap_break
+	PTR do_trap_unknown
+	PTR do_trap_unknown /* load access exception */
+	PTR do_trap_amo_misaligned
+	PTR do_trap_unknown /* store access exception */
+	PTR do_trap_unknown /* handle_syscall */
+	PTR do_trap_unknown
+	PTR do_trap_unknown
+	PTR do_trap_unknown
+	PTR do_page_fault   /* instruction page fault */
+	PTR do_page_fault   /* load page fault */
+	PTR do_trap_unknown
+	PTR do_page_fault   /* store page fault */
+excp_vect_table_end:
+END(excp_vect_table)
+
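
The syscall path above follows the RISC-V Linux calling convention: the
syscall number arrives in a7, arguments in a0-a5, and the result is returned
in a0.  As a user-side sketch (roughly what a libc syscall wrapper boils down
to):

  static inline long riscv_syscall1(long nr, long arg0)
  {
  	register long a7 asm("a7") = nr;
  	register long a0 asm("a0") = arg0;

  	/* ecall traps into handle_exception/handle_syscall above */
  	asm volatile ("ecall" : "+r" (a0) : "r" (a7) : "memory");
  	return a0;
  }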
diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
new file mode 100644
index 000000000000..52d574206d76
--- /dev/null
+++ b/arch/riscv/kernel/head.S
@@ -0,0 +1,139 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+#include <asm/asm.h>
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <asm/thread_info.h>
+#include <asm/page.h>
+#include <asm/csr.h>
+
+__INIT
+ENTRY(_start)
+	/* Mask all interrupts */
+	csrw sie, zero
+
+	/* Disable FPU to detect illegal usage of
+	   floating point in kernel space */
+	li t0, SR_FS
+	csrc sstatus, t0
+
+#ifndef CONFIG_RV_PUM
+	/* Allow access to user memory */
+	li t0, SR_SUM
+	csrs sstatus, t0
+#endif
+
+	/* Pick one hart to run the main boot sequence */
+	la a3, hart_lottery
+	li a2, 1
+	amoadd.w a3, a2, (a3)
+	bnez a3, .Lsecondary_start
+
+	/* Save hart ID and DTB physical address */
+	mv s0, a0
+	mv s1, a1
+
+	/* Initialize page tables and relocate to virtual addresses */
+	la sp, init_thread_union + THREAD_SIZE
+	call setup_vm
+	call relocate
+
+	/* Restore C environment */
+	la tp, init_thread_union
+	li sp, THREAD_SIZE
+	add sp, sp, tp
+
+	/* Start the kernel */
+	mv a0, s0
+	mv a1, s1
+	call sbi_save
+	tail start_kernel
+
+relocate:
+	/* Relocate return address */
+	li a1, PAGE_OFFSET
+	la a0, _start
+	sub a1, a1, a0
+	add ra, ra, a1
+
+	/* Point stvec to virtual address of instruction after sptbr write */
+	la a0, 1f
+	add a0, a0, a1
+	csrw stvec, a0
+
+	/* Compute sptbr for kernel page tables, but don't load it yet */
+	la a2, swapper_pg_dir
+	srl a2, a2, PAGE_SHIFT
+	li a1, SPTBR_MODE
+	or a2, a2, a1
+
+	/* Load trampoline page directory, which will cause us to trap to
+	   stvec if VA != PA, or simply fall through if VA == PA */
+	la a0, trampoline_pg_dir
+	srl a0, a0, PAGE_SHIFT
+	or a0, a0, a1
+	sfence.vma
+	csrw sptbr, a0
+1:
+	/* Set trap vector to spin forever to help debug */
+	la a0, .Lsecondary_park
+	csrw stvec, a0
+
+	/* Load the global pointer */
+	la gp, __global_pointer$
+
+	/* Switch to kernel page tables */
+	csrw sptbr, a2
+
+	ret
+
+.Lsecondary_start:
+#ifdef CONFIG_SMP
+	li a1, CONFIG_NR_CPUS
+	bgeu a0, a1, .Lsecondary_park
+
+	la a1, __cpu_up_stack_pointer
+	slli a0, a0, LGREG
+	add a0, a0, a1
+
+.Lwait_for_cpu_up:
+	REG_L sp, (a0)
+	beqz sp, .Lwait_for_cpu_up
+	fence
+
+	/* Enable virtual memory and relocate to virtual address */
+	call relocate
+
+	/* Initialize task_struct pointer */
+	li tp, -THREAD_SIZE
+	add tp, tp, sp
+
+	tail smp_callin
+#endif
+
+.Lsecondary_park:
+	/* We lack SMP support or have too many harts, so park this hart */
+	wfi
+	j .Lsecondary_park
+END(_start)
+
+__PAGE_ALIGNED_BSS
+	/* Empty zero page */
+	.balign PAGE_SIZE
+ENTRY(empty_zero_page)
+	.fill (empty_zero_page + PAGE_SIZE) - ., 1, 0x00
+END(empty_zero_page)
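
The "hart lottery" at the top of _start is easier to read in C; a sketch
under the assumption of a single shared counter (primary_boot() and
secondary_start() are placeholders, not symbols from this patch):

  #include <linux/atomic.h>
  #include <linux/init.h>

  extern void primary_boot(void);	/* placeholder: main boot path */
  extern void secondary_start(void);	/* placeholder: .Lsecondary_start */

  static atomic_t hart_lottery;

  static void __init boot_entry_sketch(void)
  {
  	/* Every hart increments the counter; only the hart that observed
  	 * zero proceeds with the main boot sequence, the rest park or wait
  	 * for the boot hart to hand them a stack. */
  	if (atomic_fetch_add(1, &hart_lottery) == 0)
  		primary_boot();
  	else
  		secondary_start();
  }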
diff --git a/arch/riscv/kernel/irq.c b/arch/riscv/kernel/irq.c
new file mode 100644
index 000000000000..b772bb9539cf
--- /dev/null
+++ b/arch/riscv/kernel/irq.c
@@ -0,0 +1,205 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/irq.h>
+#include <linux/irqchip.h>
+#include <linux/irqdomain.h>
+#include <linux/interrupt.h>
+#include <linux/ftrace.h>
+#include <linux/of.h>
+#include <linux/seq_file.h>
+
+#include <asm/ptrace.h>
+#include <asm/sbi.h>
+#include <asm/smp.h>
+
+struct riscv_irq_data {
+	struct irq_chip		chip;
+	struct irq_domain	*domain;
+	int			hart;
+	char			name[20];
+};
+DEFINE_PER_CPU(struct riscv_irq_data, riscv_irq_data);
+DEFINE_PER_CPU(atomic_long_t, riscv_early_sie);
+
+static void riscv_software_interrupt(void)
+{
+#ifdef CONFIG_SMP
+	irqreturn_t ret;
+
+	ret = handle_ipi();
+	if (ret != IRQ_NONE)
+		return;
+#endif
+
+	BUG();
+}
+
+asmlinkage void __irq_entry do_IRQ(unsigned int cause, struct pt_regs *regs)
+{
+	struct pt_regs *old_regs = set_irq_regs(regs);
+	irq_enter();
+
+	/* There are three classes of interrupt: timer, software, and
+	   external devices.  We dispatch between them here.  External
+	   device interrupts use the generic IRQ mechanisms. */
+	switch (cause) {
+	case INTERRUPT_CAUSE_TIMER:
+		riscv_timer_interrupt();
+		break;
+	case INTERRUPT_CAUSE_SOFTWARE:
+		riscv_software_interrupt();
+		break;
+	default: {
+		struct irq_domain *domain = per_cpu(riscv_irq_data, smp_processor_id()).domain;
+		generic_handle_irq(irq_find_mapping(domain, cause));
+		break;
+	}
+	}
+
+	irq_exit();
+	set_irq_regs(old_regs);
+}
+
+static int riscv_irqdomain_map(struct irq_domain *d, unsigned int irq, irq_hw_number_t hwirq)
+{
+	struct riscv_irq_data *data = d->host_data;
+
+	irq_set_chip_and_handler(irq, &data->chip, handle_simple_irq);
+	irq_set_chip_data(irq, data);
+	irq_set_noprobe(irq);
+
+	return 0;
+}
+
+static const struct irq_domain_ops riscv_irqdomain_ops = {
+	.map	= riscv_irqdomain_map,
+	.xlate	= irq_domain_xlate_onecell,
+};
+
+static void riscv_irq_mask(struct irq_data *d)
+{
+	struct riscv_irq_data *data = irq_data_get_irq_chip_data(d);
+	BUG_ON(smp_processor_id() != data->hart);
+	csr_clear(sie, 1 << (long)d->hwirq);
+}
+
+static void riscv_irq_unmask(struct irq_data *d)
+{
+	struct riscv_irq_data *data = irq_data_get_irq_chip_data(d);
+	BUG_ON(smp_processor_id() != data->hart);
+	csr_set(sie, 1 << (long)d->hwirq);
+}
+
+static void riscv_irq_enable_helper(void *d)
+{
+	riscv_irq_unmask(d);
+}
+
+static void riscv_irq_enable(struct irq_data *d)
+{
+	struct riscv_irq_data *data = irq_data_get_irq_chip_data(d);
+	atomic_long_or((1 << (long)d->hwirq), &per_cpu(riscv_early_sie, data->hart));
+	if (data->hart == smp_processor_id()) {
+		riscv_irq_unmask(d);
+	} else if (cpu_online(data->hart)) {
+		smp_call_function_single(data->hart, riscv_irq_enable_helper, d, true);
+	}
+}
+
+static void riscv_irq_disable_helper(void *d)
+{
+	riscv_irq_mask(d);
+}
+
+static void riscv_irq_disable(struct irq_data *d)
+{
+	struct riscv_irq_data *data = irq_data_get_irq_chip_data(d);
+	atomic_long_and(~(1 << (long)d->hwirq), &per_cpu(riscv_early_sie, data->hart));
+	if (data->hart == smp_processor_id()) {
+		riscv_irq_mask(d);
+	} else if (cpu_online(data->hart)) {
+		smp_call_function_single(data->hart, riscv_irq_disable_helper, d, true);
+	}
+}
+
+static void riscv_irq_mask_noop(struct irq_data *d) { }
+
+static void riscv_irq_unmask_noop(struct irq_data *d) { }
+
+static void riscv_irq_enable_noop(struct irq_data *d)
+{
+	struct device_node *data = irq_data_get_irq_chip_data(d);
+	u32 hart;
+
+	if (!of_property_read_u32(data, "reg", &hart)) {
+		printk("WARNING: enabled interrupt %d for missing hart %d (this interrupt has no handler)\n", (int)d->hwirq, hart);
+	}
+}
+
+static struct irq_chip riscv_noop_chip = {
+	.name = "riscv,cpu-intc,noop",
+	.irq_mask = riscv_irq_mask_noop,
+	.irq_unmask = riscv_irq_unmask_noop,
+	.irq_enable = riscv_irq_enable_noop,
+};
+
+static int riscv_irqdomain_map_noop(struct irq_domain *d, unsigned int irq, irq_hw_number_t hwirq)
+{
+	struct device_node *data = d->host_data;
+	irq_set_chip_and_handler(irq, &riscv_noop_chip, handle_simple_irq);
+	irq_set_chip_data(irq, data);
+	return 0;
+}
+
+static const struct irq_domain_ops riscv_irqdomain_ops_noop = {
+	.map    = riscv_irqdomain_map_noop,
+	.xlate  = irq_domain_xlate_onecell,
+};
+
+static int riscv_intc_init(struct device_node *node, struct device_node *parent)
+{
+	int hart;
+
+	if (parent) return 0; // should have no interrupt parent
+
+	if ((hart = riscv_of_processor_hart(node->parent)) >= 0) {
+		struct riscv_irq_data *data = &per_cpu(riscv_irq_data, hart);
+		snprintf(data->name, sizeof(data->name), "riscv,cpu_intc,%d", hart);
+		data->hart = hart;
+		data->chip.name = data->name;
+		data->chip.irq_mask = riscv_irq_mask;
+		data->chip.irq_unmask = riscv_irq_unmask;
+		data->chip.irq_enable = riscv_irq_enable;
+		data->chip.irq_disable = riscv_irq_disable;
+		data->domain = irq_domain_add_linear(node, 8*sizeof(uintptr_t), &riscv_irqdomain_ops, data);
+		WARN_ON(!data->domain);
+		printk("%s: %d local interrupts mapped\n", data->name, 8*(int)sizeof(uintptr_t));
+	} else {
+		/* If a hart is disabled, create a no-op irq domain.
+		 * Devices may still have interrupts connected to those harts.
+		 * This is not wrong... unless they actually load a driver that needs it!
+		 */
+		irq_domain_add_linear(node, 8*sizeof(uintptr_t), &riscv_irqdomain_ops_noop, node->parent);
+	}
+	return 0;
+}
+
+IRQCHIP_DECLARE(riscv, "riscv,cpu-intc", riscv_intc_init);
+
+void __init init_IRQ(void)
+{
+	irqchip_init();
+}
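
The csr_set()/csr_clear() helpers used by riscv_irq_mask()/riscv_irq_unmask() come
from asm/csr.h in patch 4/7; they boil down to single csrs/csrc instructions on the
sie CSR.  Roughly, such a helper looks like the sketch below (an approximation for
readers following along, not necessarily the exact definition in that patch):

  /* Approximate shape of a CSR bit-set helper; see asm/csr.h in patch 4/7 for the
   * real definition.  csr is a bare CSR name such as sie. */
  #define csr_set(csr, val)					\
  ({								\
  	unsigned long __v = (unsigned long)(val);		\
  	__asm__ __volatile__ ("csrs " #csr ", %0"		\
  			      : : "rK" (__v)			\
  			      : "memory");			\
  })
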
diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
new file mode 100644
index 000000000000..c58d3847c05a
--- /dev/null
+++ b/arch/riscv/kernel/module.c
@@ -0,0 +1,185 @@
+/*
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License
+ *  along with this program; if not, write to the Free Software
+ *  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ *
+ *  Copyright (C) 2017 Zihao Yu
+ */
+
+#include <linux/elf.h>
+#include <linux/err.h>
+#include <linux/errno.h>
+#include <linux/moduleloader.h>
+
+static int apply_r_riscv_64_rela(struct module *me, u32 *location, Elf_Addr v) {
+	*(u64 *)location = v;
+	return 0;
+}
+
+static int apply_r_riscv_branch_rela(struct module *me, u32 *location, Elf_Addr v) {
+	s64 offset = (void *)v - (void *)location;
+	u32 imm12 = (offset & 0x1000) << (31 - 12);
+	u32 imm11 = (offset & 0x800) >> (11 - 7);
+	u32 imm10_5 = (offset & 0x7e0) << (30 - 10);
+	u32 imm4_1 = (offset & 0x1e) << (11 - 4);
+
+	*location = (*location & 0x1fff07f) | imm12 | imm11 | imm10_5 | imm4_1;
+	return 0;
+}
+
+static int apply_r_riscv_jal_rela(struct module *me, u32 *location, Elf_Addr v) {
+	s64 offset = (void *)v - (void *)location;
+	u32 imm20 = (offset & 0x100000) << (31 - 20);
+	u32 imm19_12 = (offset & 0xff000);
+	u32 imm11 = (offset & 0x800) << (20 - 11);
+	u32 imm10_1 = (offset & 0x7fe) << (30 - 10);
+
+	*location = (*location & 0xfff) | imm20 | imm19_12 | imm11 | imm10_1;
+	return 0;
+}
+
+static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location, Elf_Addr v) {
+	s64 offset = (void *)v - (void *)location;
+	s32 hi20;
+
+	if (offset != (s32)offset) {
+		pr_err("%s: target %016llx cannot be addressed by the 32-bit offset from PC = %p\n", me->name, v, location);
+		return -EINVAL;
+	}
+
+	hi20 = (offset + 0x800) & 0xfffff000;
+	*location = (*location & 0xfff) | hi20;
+	return 0;
+}
+
+static int apply_r_riscv_pcrel_lo12_i_rela(struct module *me, u32 *location, Elf_Addr v) {
+	/* v is the lo12 value to fill. It is calculated before calling this handler. */
+	*location = (*location & 0xfffff) | ((v & 0xfff) << 20);
+	return 0;
+}
+
+static int apply_r_riscv_pcrel_lo12_s_rela(struct module *me, u32 *location, Elf_Addr v) {
+	/* v is the lo12 value to fill. It is calculated before calling this handler. */
+	u32 imm11_5 = (v & 0xfe0) << (31 - 11);
+	u32 imm4_0 = (v & 0x1f) << (11 - 4);
+
+	*location = (*location & 0x1fff07f) | imm11_5 | imm4_0;
+	return 0;
+}
+
+static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location, Elf_Addr v) {
+	s64 offset = (void *)v - (void *)location;
+	s32 fill_v = offset;
+	u32 hi20, lo12;
+	if (offset != fill_v) {
+		pr_err("%s: target %016llx cannot be addressed by the 32-bit offset from PC = %p\n", me->name, v, location);
+		return -EINVAL;
+	}
+
+	hi20 = (offset + 0x800) & 0xfffff000;
+	lo12 = (offset - hi20) & 0xfff;
+	*location = (*location & 0xfff) | hi20;
+	*(location + 1) = (*(location + 1) & 0xfffff) | (lo12 << 20);
+	return 0;
+}
+
+static int apply_r_riscv_relax_rela(struct module *me, u32 *location, Elf_Addr v) {
+	return 0;
+}
+
+static int (*reloc_handlers_rela[]) (struct module *me, u32 *location,
+				Elf_Addr v) = {
+	[R_RISCV_64]			= apply_r_riscv_64_rela,
+	[R_RISCV_BRANCH]		= apply_r_riscv_branch_rela,
+	[R_RISCV_JAL]			= apply_r_riscv_jal_rela,
+	[R_RISCV_PCREL_HI20]		= apply_r_riscv_pcrel_hi20_rela,
+	[R_RISCV_PCREL_LO12_I]		= apply_r_riscv_pcrel_lo12_i_rela,
+	[R_RISCV_PCREL_LO12_S]		= apply_r_riscv_pcrel_lo12_s_rela,
+	[R_RISCV_CALL_PLT]		= apply_r_riscv_call_plt_rela,
+	[R_RISCV_RELAX]			= apply_r_riscv_relax_rela,
+};
+
+int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
+		       unsigned int symindex, unsigned int relsec,
+		       struct module *me) {
+	Elf_Rela *rel = (void *) sechdrs[relsec].sh_addr;
+	int (*handler)(struct module *me, u32 *location, Elf_Addr v);
+	Elf_Sym *sym;
+	u32 *location;
+	unsigned int i, type;
+	Elf_Addr v;
+	int res;
+
+	pr_debug("Applying relocate section %u to %u\n", relsec,
+	       sechdrs[relsec].sh_info);
+
+	for (i = 0; i < sechdrs[relsec].sh_size / sizeof(*rel); i++) {
+		/* This is where to make the change */
+		location = (void *)sechdrs[sechdrs[relsec].sh_info].sh_addr
+			+ rel[i].r_offset;
+		/* This is the symbol it is referring to */
+		sym = (Elf_Sym *)sechdrs[symindex].sh_addr
+			+ ELF_RISCV_R_SYM(rel[i].r_info);
+		if (IS_ERR_VALUE(sym->st_value)) {
+			/* Ignore unresolved weak symbol */
+			if (ELF_ST_BIND(sym->st_info) == STB_WEAK)
+				continue;
+			printk(KERN_WARNING "%s: Unknown symbol %s\n",
+			       me->name, strtab + sym->st_name);
+			return -ENOENT;
+		}
+
+		type = ELF_RISCV_R_TYPE(rel[i].r_info);
+
+		if (type < ARRAY_SIZE(reloc_handlers_rela))
+			handler = reloc_handlers_rela[type];
+		else
+			handler = NULL;
+
+		if (!handler) {
+			pr_err("%s: Unknown relocation type %u\n",
+			       me->name, type);
+			return -EINVAL;
+		}
+
+		v = sym->st_value + rel[i].r_addend;
+
+		if (type == R_RISCV_PCREL_LO12_I || type == R_RISCV_PCREL_LO12_S) {
+			unsigned j;
+			for (j = 0; j < sechdrs[relsec].sh_size / sizeof(*rel); j++) {
+				u64 hi20_loc = sechdrs[sechdrs[relsec].sh_info].sh_addr + rel[j].r_offset;
+				/* Find the corresponding HI20 PC-relative relocation entry */
+				if (hi20_loc == sym->st_value) {
+					Elf_Sym *hi20_sym = (Elf_Sym *)sechdrs[symindex].sh_addr + ELF_RISCV_R_SYM(rel[j].r_info);
+					u64 hi20_sym_val = hi20_sym->st_value + rel[j].r_addend;
+					/* Calculate lo12 */
+					s64 offset = hi20_sym_val - hi20_loc;
+					s32 hi20 = (offset + 0x800) & 0xfffff000;
+					s32 lo12 = offset - hi20;
+					v = lo12;
+					break;
+				}
+			}
+			if (j == sechdrs[relsec].sh_size / sizeof(*rel)) {
+				pr_err("%s: cannot find HI20 PC-relative relocation information\n", me->name);
+				return -EINVAL;
+			}
+		}
+
+		res = handler(me, location, v);
+		if (res)
+			return res;
+	}
+
+	return 0;
+}
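
To make the HI20/LO12 pairing in the handlers above concrete: auipc materializes the
upper 20 bits of a PC-relative offset, rounded so that the remaining low 12 bits fit
in a signed immediate, and the paired instruction supplies that remainder.  A small
stand-alone sketch of the same split done by apply_r_riscv_pcrel_hi20_rela() and the
lo12 search in apply_relocate_add() (illustrative only, not part of the patch):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
  	int32_t offset = 0x12345fff;			/* example PC-relative offset */
  	int32_t hi20 = (offset + 0x800) & 0xfffff000;	/* same rounding as the handler */
  	int32_t lo12 = offset - hi20;			/* always within [-2048, 2047] */

  	/* auipc adds hi20 to pc; the paired addi/load/store adds the signed lo12 */
  	printf("hi20=0x%x lo12=%d sum=0x%x\n",
  	       (unsigned)hi20, lo12, (unsigned)(hi20 + lo12));
  	return 0;
  }
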
diff --git a/arch/riscv/kernel/pci.c b/arch/riscv/kernel/pci.c
new file mode 100644
index 000000000000..4191a5ffdd67
--- /dev/null
+++ b/arch/riscv/kernel/pci.c
@@ -0,0 +1,36 @@
+/*
+ * Code borrowed from arch/arm64/kernel/pci.c
+ *
+ * Copyright (C) 2003 Anton Blanchard <anton@au.ibm.com>, IBM
+ * Copyright (C) 2014 ARM Ltd.
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/init.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/pci.h>
+
+/*
+ * Called after each bus is probed, but before its children are examined
+ */
+void pcibios_fixup_bus(struct pci_bus *bus)
+{
+	/* nothing to do, expected to be removed in the future */
+}
+
+/*
+ * We don't have to worry about legacy ISA devices, so nothing to do here
+ */
+resource_size_t pcibios_align_resource(void *data, const struct resource *res,
+				resource_size_t size, resource_size_t align)
+{
+	return res->start;
+}
diff --git a/arch/riscv/kernel/plic.c b/arch/riscv/kernel/plic.c
new file mode 100644
index 000000000000..5b3d4241f4e2
--- /dev/null
+++ b/arch/riscv/kernel/plic.c
@@ -0,0 +1,208 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/irq.h>
+#include <linux/irqchip.h>
+#include <linux/irqchip/chained_irq.h>
+#include <linux/irqdomain.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+#include <linux/platform_device.h>
+
+#define MAX_DEVICES	1024 // 0 is reserved
+#define MAX_CONTEXTS	15872
+
+#define PRIORITY_BASE	0
+#define ENABLE_BASE	0x2000
+#define ENABLE_SIZE	0x80
+#define HART_BASE	0x200000
+#define HART_SIZE	0x1000
+
+#define PLIC_HART_CONTEXT(data, i)	(struct plic_hart_context *)((char *)(data)->reg + HART_BASE + HART_SIZE * (i))
+#define PLIC_ENABLE_CONTEXT(data, i)	(struct plic_enable_context *)((char *)(data)->reg + ENABLE_BASE + ENABLE_SIZE * (i))
+#define PLIC_PRIORITY(data)		(struct plic_priority *)((char *)(data)->reg + PRIORITY_BASE)
+
+struct plic_hart_context {
+	volatile u32 threshold;
+	volatile u32 claim;
+};
+
+struct plic_enable_context {
+	atomic_t mask[32]; // 32-bit * 32-entry
+};
+
+struct plic_priority {
+	volatile u32 prio[MAX_DEVICES];
+};
+
+struct plic_data {
+	struct irq_chip		chip;
+	struct irq_domain	*domain;
+	u32			ndev;
+	void __iomem		*reg;
+	int			handlers;
+	struct plic_handler	*handler;
+	char			name[30];
+};
+
+struct plic_handler {
+	struct plic_hart_context	*context;
+	struct plic_data		*data;
+};
+
+static void plic_disable(struct plic_data *data, int i, int hwirq)
+{
+	struct plic_enable_context *enable = PLIC_ENABLE_CONTEXT(data, i);
+	atomic_and(~(1 << (hwirq % 32)), &enable->mask[hwirq / 32]);
+}
+
+static void plic_enable(struct plic_data *data, int i, int hwirq)
+{
+	struct plic_enable_context *enable = PLIC_ENABLE_CONTEXT(data, i);
+	atomic_or((1 << (hwirq % 32)), &enable->mask[hwirq / 32]);
+}
+
+// There is no need to mask/unmask PLIC interrupts
+// They are "masked" by reading claim and "unmasked" when writing it back.
+static void plic_irq_mask(struct irq_data *d) { }
+static void plic_irq_unmask(struct irq_data *d) { }
+
+static void plic_irq_enable(struct irq_data *d)
+{
+	struct plic_data *data = irq_data_get_irq_chip_data(d);
+	struct plic_priority *priority = PLIC_PRIORITY(data);
+	int i;
+	iowrite32(1, &priority->prio[d->hwirq]);
+	for (i = 0; i < data->handlers; ++i)
+		if (data->handler[i].context)
+			plic_enable(data, i, d->hwirq);
+}
+
+static void plic_irq_disable(struct irq_data *d)
+{
+	struct plic_data *data = irq_data_get_irq_chip_data(d);
+	struct plic_priority *priority = PLIC_PRIORITY(data);
+	int i;
+	iowrite32(0, &priority->prio[d->hwirq]);
+	for (i = 0; i < data->handlers; ++i)
+		if (data->handler[i].context)
+			plic_disable(data, i, d->hwirq);
+}
+
+static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq, irq_hw_number_t hwirq)
+{
+	struct plic_data *data = d->host_data;
+
+	irq_set_chip_and_handler(irq, &data->chip, handle_simple_irq);
+	irq_set_chip_data(irq, data);
+	irq_set_noprobe(irq);
+
+	return 0;
+}
+
+static const struct irq_domain_ops plic_irqdomain_ops = {
+	.map	= plic_irqdomain_map,
+	.xlate	= irq_domain_xlate_onecell,
+};
+
+static void plic_chained_handle_irq(struct irq_desc *desc)
+{
+	struct plic_handler *handler = irq_desc_get_handler_data(desc);
+	struct irq_chip *chip = irq_desc_get_chip(desc);
+	struct irq_domain *domain = handler->data->domain;
+	u32 what;
+
+	chained_irq_enter(chip, desc);
+
+	while ((what = ioread32(&handler->context->claim))) {
+		int irq = irq_find_mapping(domain, what);
+		if (irq > 0) {
+			generic_handle_irq(irq);
+		} else {
+			handle_bad_irq(desc);
+		}
+		iowrite32(what, &handler->context->claim);
+	}
+
+	chained_irq_exit(chip, desc);
+}
+
+// TODO: add a /sys interface to set priority + per-hart enables for steering
+
+static int plic_init(struct device_node *node, struct device_node *parent)
+{
+	struct plic_data *data;
+	struct resource resource;
+	int i, ok = 0;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (WARN_ON(!data)) return -ENOMEM;
+
+	data->reg = of_iomap(node, 0);
+	if (WARN_ON(!data->reg)) return -EIO;
+
+	of_property_read_u32(node, "riscv,ndev", &data->ndev);
+	if (WARN_ON(!data->ndev)) return -EINVAL;
+
+	data->handlers = of_irq_count(node);
+	if (WARN_ON(!data->handlers)) return -EINVAL;
+
+	data->handler = kzalloc(sizeof(*data->handler)*data->handlers, GFP_KERNEL);
+	if (WARN_ON(!data->handler)) return -ENOMEM;
+
+	data->domain = irq_domain_add_linear(node, data->ndev+1, &plic_irqdomain_ops, data);
+	if (WARN_ON(!data->domain)) return -ENOMEM;
+
+	of_address_to_resource(node, 0, &resource);
+	snprintf(data->name, sizeof(data->name), "riscv,plic0,%llx", resource.start);
+	data->chip.name = data->name;
+	data->chip.irq_mask = plic_irq_mask;
+	data->chip.irq_unmask = plic_irq_unmask;
+	data->chip.irq_enable = plic_irq_enable;
+	data->chip.irq_disable = plic_irq_disable;
+
+	for (i = 0; i < data->handlers; ++i) {
+		struct plic_handler *handler = &data->handler[i];
+		struct of_phandle_args parent;
+		int parent_irq, hwirq;
+
+		if (of_irq_parse_one(node, i, &parent)) continue;
+		if (parent.args[0] == -1) continue; // skip context holes
+
+		// skip any contexts that lead to inactive harts
+		if (of_device_is_compatible(parent.np, "riscv,cpu-intc") &&
+		    parent.np->parent &&
+		    riscv_of_processor_hart(parent.np->parent) < 0) continue;
+
+		parent_irq = irq_create_of_mapping(&parent);
+		if (!parent_irq) continue;
+
+		handler->context = PLIC_HART_CONTEXT(data, i);
+		handler->data = data;
+		iowrite32(0, &handler->context->threshold); // hwirq prio must be > this to trigger an interrupt
+		for (hwirq = 1; hwirq <= data->ndev; ++hwirq) plic_disable(data, i, hwirq);
+		irq_set_chained_handler_and_data(parent_irq, plic_chained_handle_irq, handler);
+		++ok;
+	}
+
+	printk("%s: mapped %d interrupts to %d/%d handlers\n", data->name, data->ndev, ok, data->handlers);
+	WARN_ON(!ok);
+	return 0;
+}
+
+IRQCHIP_DECLARE(plic0, "riscv,plic0", plic_init);
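
For reference, the PLIC_* macros above describe the register layout the driver
expects: one 32-bit priority word per source starting at offset 0, per-context
enable bitmaps at 0x2000 in 0x80-byte strides, and per-context threshold/claim
pairs at 0x200000 in 0x1000-byte strides.  A quick stand-alone sketch of the
offsets for one example source and context (example numbers only):

  #include <stdio.h>

  int main(void)
  {
  	unsigned context = 2, hwirq = 37;	/* example context and interrupt source */

  	printf("priority word:   0x%x\n", 4 * hwirq);
  	printf("enable word/bit: 0x%x, bit %u\n",
  	       0x2000 + 0x80 * context + 4 * (hwirq / 32), hwirq % 32);
  	printf("claim/complete:  0x%x\n", 0x200000 + 0x1000 * context + 4);
  	return 0;
  }
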
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
new file mode 100644
index 000000000000..0552017c49ce
--- /dev/null
+++ b/arch/riscv/kernel/process.c
@@ -0,0 +1,130 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/sched/task_stack.h>
+#include <linux/tick.h>
+#include <linux/ptrace.h>
+
+#include <asm/unistd.h>
+#include <asm/uaccess.h>
+#include <asm/processor.h>
+#include <asm/csr.h>
+#include <asm/string.h>
+#include <asm/switch_to.h>
+
+extern asmlinkage void ret_from_fork(void);
+extern asmlinkage void ret_from_kernel_thread(void);
+
+void arch_cpu_idle(void)
+{
+	wait_for_interrupt();
+	local_irq_enable();
+}
+
+void show_regs(struct pt_regs *regs)
+{
+	show_regs_print_info(KERN_DEFAULT);
+
+	printk("sepc: " REG_FMT " ra : " REG_FMT " sp : " REG_FMT "\n",
+		regs->sepc, regs->ra, regs->sp);
+	printk(" gp : " REG_FMT " tp : " REG_FMT " t0 : " REG_FMT "\n",
+		regs->gp, regs->tp, regs->t0);
+	printk(" t1 : " REG_FMT " t2 : " REG_FMT " s0 : " REG_FMT "\n",
+		regs->t1, regs->t2, regs->s0);
+	printk(" s1 : " REG_FMT " a0 : " REG_FMT " a1 : " REG_FMT "\n",
+		regs->s1, regs->a0, regs->a1);
+	printk(" a2 : " REG_FMT " a3 : " REG_FMT " a4 : " REG_FMT "\n",
+		regs->a2, regs->a3, regs->a4);
+	printk(" a5 : " REG_FMT " a6 : " REG_FMT " a7 : " REG_FMT "\n",
+		regs->a5, regs->a6, regs->a7);
+	printk(" s2 : " REG_FMT " s3 : " REG_FMT " s4 : " REG_FMT "\n",
+		regs->s2, regs->s3, regs->s4);
+	printk(" s5 : " REG_FMT " s6 : " REG_FMT " s7 : " REG_FMT "\n",
+		regs->s5, regs->s6, regs->s7);
+	printk(" s8 : " REG_FMT " s9 : " REG_FMT " s10: " REG_FMT "\n",
+		regs->s8, regs->s9, regs->s10);
+	printk(" s11: " REG_FMT " t3 : " REG_FMT " t4 : " REG_FMT "\n",
+		regs->s11, regs->t3, regs->t4);
+	printk(" t5 : " REG_FMT " t6 : " REG_FMT "\n",
+		regs->t5, regs->t6);
+
+	printk("sstatus: " REG_FMT " sbadaddr: " REG_FMT " scause: " REG_FMT "\n",
+		regs->sstatus, regs->sbadaddr, regs->scause);
+}
+
+void start_thread(struct pt_regs *regs, unsigned long pc, 
+	unsigned long sp)
+{
+	regs->sstatus = SR_PIE /* User mode, irqs on */ | SR_FS_INITIAL;
+	regs->sepc = pc;
+	regs->sp = sp;
+	set_fs(USER_DS);
+}
+
+void flush_thread(void)
+{
+	/* Reset FPU context
+	 *	frm: round to nearest, ties to even (IEEE default)
+	 *	fflags: accrued exceptions cleared
+	 */
+	memset(&current->thread.fstate, 0,
+		sizeof(struct user_fpregs_struct));
+}
+
+int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+{
+	fstate_save(src, task_pt_regs(src));
+	*dst = *src;
+	return 0;
+}
+
+int copy_thread(unsigned long clone_flags, unsigned long usp,
+	unsigned long arg, struct task_struct *p)
+{
+	struct pt_regs *childregs = task_pt_regs(p);
+
+	/* p->thread holds context to be restored by __switch_to() */
+	if (unlikely(p->flags & PF_KTHREAD)) {
+		/* Kernel thread */
+		const register unsigned long gp __asm__ ("gp");
+		memset(childregs, 0, sizeof(struct pt_regs));
+		childregs->gp = gp;
+		childregs->sstatus = SR_PS | SR_PIE; /* Supervisor, irqs on */
+
+		p->thread.ra = (unsigned long)ret_from_kernel_thread;
+		p->thread.s[0] = usp; /* fn */
+		p->thread.s[1] = arg;
+	} else {
+		*childregs = *(current_pt_regs());
+		if (usp) /* User fork */
+			childregs->sp = usp;
+		if (clone_flags & CLONE_SETTLS)
+			childregs->tp = childregs->a5;
+		childregs->a0 = 0; /* Return value of fork() */
+		p->thread.ra = (unsigned long)ret_from_fork;
+	}
+	p->thread.sp = (unsigned long)childregs; /* kernel sp */
+	return 0;
+}
diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
new file mode 100644
index 000000000000..35d3d85817e6
--- /dev/null
+++ b/arch/riscv/kernel/ptrace.c
@@ -0,0 +1,148 @@
+/*
+ * Copyright 2010 Tilera Corporation. All Rights Reserved.
+ * Copyright 2015 Regents of the University of California
+ * Copyright 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ *
+ * Copied from arch/tile/kernel/ptrace.c
+ */
+
+#include <asm/ptrace.h>
+#include <asm/syscall.h>
+#include <asm/thread_info.h>
+#include <linux/ptrace.h>
+#include <linux/elf.h>
+#include <linux/regset.h>
+#include <linux/sched.h>
+#include <linux/sched/task_stack.h>
+#include <linux/tracehook.h>
+#include <trace/events/syscalls.h>
+
+enum riscv_regset {
+	REGSET_X,
+};
+
+/*
+ * Get registers from task and ready the result for userspace.
+ */
+static char *getregs(struct task_struct *child, struct pt_regs *uregs)
+{
+	*uregs = *task_pt_regs(child);
+	return (char *)uregs;
+}
+
+/* Put registers back to task. */
+static void putregs(struct task_struct *child, struct pt_regs *uregs)
+{
+	struct pt_regs *regs = task_pt_regs(child);
+	*regs = *uregs;
+}
+
+static int riscv_gpr_get(struct task_struct *target,
+                         const struct user_regset *regset,
+                         unsigned int pos, unsigned int count,
+                         void *kbuf, void __user *ubuf)
+{
+	struct pt_regs regs;
+
+	getregs(target, &regs);
+
+	return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &regs, 0,
+				   sizeof(regs));
+}
+
+static int riscv_gpr_set(struct task_struct *target,
+                         const struct user_regset *regset,
+                         unsigned int pos, unsigned int count,
+                         const void *kbuf, const void __user *ubuf)
+{
+	int ret;
+	struct pt_regs regs;
+
+	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &regs, 0,
+				 sizeof(regs));
+	if (ret)
+		return ret;
+
+	putregs(target, &regs);
+
+	return 0;
+}
+
+
+static const struct user_regset riscv_user_regset[] = {
+	[REGSET_X] = {
+		.core_note_type = NT_PRSTATUS,
+		.n = ELF_NGREG,
+		.size = sizeof(elf_greg_t),
+		.align = sizeof(elf_greg_t),
+		.get = &riscv_gpr_get,
+		.set = &riscv_gpr_set,
+	},
+};
+
+static const struct user_regset_view riscv_user_native_view = {
+	.name = "riscv",
+	.e_machine = EM_RISCV,
+	.regsets = riscv_user_regset,
+	.n = ARRAY_SIZE(riscv_user_regset),
+};
+
+const struct user_regset_view *task_user_regset_view(struct task_struct *task)
+{
+	return &riscv_user_native_view;
+}
+
+void ptrace_disable(struct task_struct *child)
+{
+	clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+}
+
+long arch_ptrace(struct task_struct *child, long request,
+                 unsigned long addr, unsigned long data)
+{
+	long ret = -EIO;
+
+	switch (request) {
+	default:
+		ret = ptrace_request(child, request, addr, data);
+		break;
+	}
+
+	return ret;
+}
+
+/* Allows PTRACE_SYSCALL to work.  These are called from entry.S in
+ * {handle,ret_from}_syscall. */
+void do_syscall_trace_enter(struct pt_regs *regs)
+{
+	if (test_thread_flag(TIF_SYSCALL_TRACE)) {
+		if (tracehook_report_syscall_entry(regs))
+			syscall_set_nr(current, regs, -1);
+	}
+
+#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
+	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
+		trace_sys_enter(regs, syscall_get_nr(current, regs));
+#endif
+}
+
+void do_syscall_trace_exit(struct pt_regs *regs)
+{
+	if (test_thread_flag(TIF_SYSCALL_TRACE))
+		tracehook_report_syscall_exit(regs, 0);
+
+#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
+	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
+		trace_sys_exit(regs, regs->regs[0]);
+#endif
+}
diff --git a/arch/riscv/kernel/reset.c b/arch/riscv/kernel/reset.c
new file mode 100644
index 000000000000..58bad9598e21
--- /dev/null
+++ b/arch/riscv/kernel/reset.c
@@ -0,0 +1,33 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/reboot.h>
+#include <linux/export.h>
+#include <asm/sbi.h>
+
+void (*pm_power_off)(void) = machine_power_off;
+EXPORT_SYMBOL(pm_power_off);
+
+void machine_restart(char *cmd)
+{
+}
+
+void machine_halt(void)
+{
+}
+
+void machine_power_off(void)
+{
+	sbi_shutdown();
+}
diff --git a/arch/riscv/kernel/riscv_ksyms.c b/arch/riscv/kernel/riscv_ksyms.c
new file mode 100644
index 000000000000..ab0db6d48101
--- /dev/null
+++ b/arch/riscv/kernel/riscv_ksyms.c
@@ -0,0 +1,16 @@
+/*
+ * Copyright (C) 2017 Zihao Yu
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/export.h>
+#include <linux/uaccess.h>
+
+/*
+ * Assembly functions that may be used (directly or indirectly) by modules
+ */
+EXPORT_SYMBOL(__copy_user);
+
diff --git a/arch/riscv/kernel/sbi-con.c b/arch/riscv/kernel/sbi-con.c
new file mode 100644
index 000000000000..86baeb5ef0cd
--- /dev/null
+++ b/arch/riscv/kernel/sbi-con.c
@@ -0,0 +1,214 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/init.h>
+#include <linux/console.h>
+#include <linux/timer.h>
+#include <linux/tty.h>
+#include <linux/tty_flip.h>
+#include <linux/tty_driver.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+
+#include <asm/sbi.h>
+
+#define SBI_POLL_PERIOD 1
+#define SBI_MAX_GETCHARS 10
+
+static struct tty_driver *sbi_tty_driver;
+
+struct sbi_console_private {
+	struct tty_port port;
+	struct timer_list timer;
+
+} sbi_console_singleton;
+
+static void sbi_console_getchars(uintptr_t data)
+{
+	struct sbi_console_private *priv = (struct sbi_console_private *)data;
+	struct tty_port *port = &priv->port;
+	unsigned long flags;
+	int ch;
+
+	spin_lock_irqsave(&port->lock, flags);
+
+	if ((ch = sbi_console_getchar()) >= 0) {
+		tty_insert_flip_char(port, ch, TTY_NORMAL);
+		tty_flip_buffer_push(port);
+	}
+
+	mod_timer(&priv->timer, jiffies + SBI_POLL_PERIOD);
+
+	spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static int sbi_tty_write(struct tty_struct *tty,
+	const unsigned char *buf, int count)
+{
+	const unsigned char *end;
+
+	for (end = buf + count; buf < end; buf++) {
+		sbi_console_putchar(*buf);
+	}
+	return count;
+}
+
+static int sbi_tty_write_room(struct tty_struct *tty)
+{
+	return 1024; /* arbitrary */
+}
+
+static int sbi_tty_open(struct tty_struct *tty, struct file *filp)
+{
+	struct sbi_console_private *priv = &sbi_console_singleton;
+	struct tty_port *port = &priv->port;
+	unsigned long flags;
+
+	spin_lock_irqsave(&port->lock, flags);
+
+	if (!port->tty) {
+		tty->driver_data = priv;
+		tty->port = port;
+		port->tty = tty;
+		mod_timer(&priv->timer, jiffies);
+	}
+
+	spin_unlock_irqrestore(&port->lock, flags);
+
+	return 0;
+}
+
+static void sbi_tty_close(struct tty_struct *tty, struct file *filp)
+{
+	struct sbi_console_private *priv = tty->driver_data;
+	struct tty_port *port = &priv->port;
+	unsigned long flags;
+
+	spin_lock_irqsave(&port->lock, flags);
+
+	if (tty->count == 1) {
+		port->tty = NULL;
+		del_timer(&priv->timer);
+	}
+
+	spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static const struct tty_operations sbi_tty_ops = {
+	.open		= sbi_tty_open,
+	.close		= sbi_tty_close,
+	.write		= sbi_tty_write,
+	.write_room	= sbi_tty_write_room,
+};
+
+
+static void sbi_console_write(struct console *co, const char *buf, unsigned n)
+{
+	for ( ; n > 0; n--, buf++) {
+		if (*buf == '\n')
+			sbi_console_putchar('\r');
+		sbi_console_putchar(*buf);
+	}
+}
+
+static struct tty_driver *sbi_console_device(struct console *co, int *index)
+{
+	*index = co->index;
+	return sbi_tty_driver;
+}
+
+static int sbi_console_setup(struct console *co, char *options)
+{
+	return co->index != 0 ? -ENODEV : 0;
+}
+
+static struct console sbi_console = {
+	.name	= "sbi_console",
+	.write	= sbi_console_write,
+	.device	= sbi_console_device,
+	.setup	= sbi_console_setup,
+	.flags	= CON_PRINTBUFFER,
+	.index	= -1
+};
+
+static int __init sbi_console_init(void)
+{
+	int ret;
+	static struct tty_driver *drv;
+
+	setup_timer(&sbi_console_singleton.timer, sbi_console_getchars,
+		    (uintptr_t)&sbi_console_singleton);
+	register_console(&sbi_console);
+
+	drv = tty_alloc_driver(1, TTY_DRIVER_REAL_RAW | TTY_DRIVER_DYNAMIC_DEV);
+	if (unlikely(IS_ERR(drv)))
+		return PTR_ERR(drv);
+
+	drv->driver_name = "sbi";
+	drv->name = "ttySBI";
+	drv->major = TTY_MAJOR;
+	drv->minor_start = 0;
+	drv->type = TTY_DRIVER_TYPE_SERIAL;
+	drv->subtype = SERIAL_TYPE_NORMAL;
+	drv->init_termios = tty_std_termios;
+	tty_set_operations(drv, &sbi_tty_ops);
+
+	tty_port_init(&sbi_console_singleton.port);
+	tty_port_link_device(&sbi_console_singleton.port, drv, 0);
+
+	ret = tty_register_driver(drv);
+	if (unlikely(ret))
+		goto out_tty_put;
+
+	sbi_tty_driver = drv;
+	return 0;
+
+out_tty_put:
+	put_tty_driver(drv);
+	return ret;
+}
+
+static void __exit sbi_console_exit(void)
+{
+	tty_unregister_driver(sbi_tty_driver);
+	put_tty_driver(sbi_tty_driver);
+}
+
+module_init(sbi_console_init);
+module_exit(sbi_console_exit);
+
+MODULE_DESCRIPTION("RISC-V SBI console driver");
+MODULE_LICENSE("GPL");
+
+#ifdef CONFIG_EARLY_PRINTK
+
+static struct console early_console_dev __initdata = {
+	.name	= "early",
+	.write	= sbi_console_write,
+	.flags	= CON_PRINTBUFFER | CON_BOOT,
+	.index	= -1
+};
+
+static int __init setup_early_printk(char *str)
+{
+	if (early_console == NULL) {
+		early_console = &early_console_dev;
+		register_console(early_console);
+	}
+	return 0;
+}
+
+early_param("earlyprintk", setup_early_printk);
+
+#endif
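
Usage note: because of the early_param() hook above, adding the following to the
kernel command line registers the SBI boot console before the regular tty driver
comes up, which is handy for debugging early boot:

  earlyprintk
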
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
new file mode 100644
index 000000000000..5100eab75c14
--- /dev/null
+++ b/arch/riscv/kernel/setup.c
@@ -0,0 +1,234 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+ */
+
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/memblock.h>
+#include <linux/sched.h>
+#include <linux/initrd.h>
+#include <linux/console.h>
+#include <linux/screen_info.h>
+#include <linux/of_fdt.h>
+#include <linux/of_platform.h>
+#include <linux/sched/task.h>
+
+#include <asm/setup.h>
+#include <asm/sections.h>
+#include <asm/pgtable.h>
+#include <asm/smp.h>
+#include <asm/sbi.h>
+#include <asm/tlbflush.h>
+#include <asm/thread_info.h>
+
+#ifdef CONFIG_DUMMY_CONSOLE
+struct screen_info screen_info = {
+	.orig_video_lines	= 30,
+	.orig_video_cols	= 80,
+	.orig_video_mode	= 0,
+	.orig_video_ega_bx	= 0,
+	.orig_video_isVGA	= 1,
+	.orig_video_points	= 8
+};
+#endif
+
+#ifdef CONFIG_CMDLINE_BOOL
+static char __initdata builtin_cmdline[COMMAND_LINE_SIZE] = CONFIG_CMDLINE;
+#endif /* CONFIG_CMDLINE_BOOL */
+
+unsigned long va_pa_offset;
+unsigned long pfn_base;
+
+/* The lucky hart to first increment this variable will boot the other cores */
+atomic_t hart_lottery;
+
+#ifdef CONFIG_BLK_DEV_INITRD
+static void __init setup_initrd(void)
+{
+	extern char __initramfs_start[];
+	extern unsigned long __initramfs_size;
+	unsigned long size;
+
+	if (__initramfs_size > 0) {
+		initrd_start = (unsigned long)(&__initramfs_start);
+		initrd_end = initrd_start + __initramfs_size;
+	}
+
+	if (initrd_start >= initrd_end) {
+		printk(KERN_INFO "initrd not found or empty");
+		goto disable;
+	}
+	if (__pa(initrd_end) > PFN_PHYS(max_low_pfn)) {
+		printk(KERN_ERR "initrd extends beyond end of memory");
+		goto disable;
+	}
+
+	size =  initrd_end - initrd_start;
+	memblock_reserve(__pa(initrd_start), size);
+	initrd_below_start_ok = 1;
+
+	printk(KERN_INFO "Initial ramdisk at: 0x%p (%lu bytes)\n",
+		(void *)(initrd_start), size);
+	return;
+disable:
+	printk(KERN_CONT " - disabling initrd\n");
+	initrd_start = 0;
+	initrd_end = 0;
+}
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+pgd_t trampoline_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
+
+#ifndef __PAGETABLE_PMD_FOLDED
+#define NUM_SWAPPER_PMDS ((uintptr_t)-PAGE_OFFSET >> PGDIR_SHIFT)
+pmd_t swapper_pmd[PTRS_PER_PMD*((-PAGE_OFFSET)/PGDIR_SIZE)] __page_aligned_bss;
+pmd_t trampoline_pmd[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
+#endif
+
+asmlinkage void __init setup_vm(void)
+{
+	extern char _start;
+	uintptr_t i;
+	uintptr_t pa = (uintptr_t) &_start;
+	pgprot_t prot = __pgprot(pgprot_val(PAGE_KERNEL) | _PAGE_EXEC);
+
+	va_pa_offset = PAGE_OFFSET - pa;
+	pfn_base = PFN_DOWN(pa);
+
+	/* Sanity check alignment and size */
+	BUG_ON((PAGE_OFFSET % PGDIR_SIZE) != 0);
+	BUG_ON((pa % (PAGE_SIZE * PTRS_PER_PTE)) != 0);
+
+#ifndef __PAGETABLE_PMD_FOLDED
+	trampoline_pg_dir[(PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD] =
+		pfn_pgd(PFN_DOWN((uintptr_t)trampoline_pmd),
+			__pgprot(_PAGE_TABLE));
+	trampoline_pmd[0] = pfn_pmd(PFN_DOWN(pa), prot);
+
+	for (i = 0; i < (-PAGE_OFFSET)/PGDIR_SIZE; ++i)
+		swapper_pg_dir[(PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD + i] =
+			pfn_pgd(PFN_DOWN((uintptr_t)swapper_pmd) + i,
+				__pgprot(_PAGE_TABLE));
+	for (i = 0; i < sizeof(swapper_pmd)/sizeof(swapper_pmd[0]); i++)
+		swapper_pmd[i] = pfn_pmd(PFN_DOWN(pa + i * PMD_SIZE), prot);
+#else
+	trampoline_pg_dir[(PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD] =
+		pfn_pgd(PFN_DOWN(pa), prot);
+
+	for (i = 0; i < (-PAGE_OFFSET)/PGDIR_SIZE; ++i)
+		swapper_pg_dir[(PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD + i] =
+			pfn_pgd(PFN_DOWN(pa + i * PGDIR_SIZE), prot);
+#endif
+}
+
+void __init sbi_save(unsigned hartid, void *dtb)
+{
+	init_thread_info.cpu = hartid;
+	early_init_dt_scan(__va(dtb));
+}
+
+/* Allow the user to manually add a memory region (in case DTS is broken); "mem_end=nn[KkMmGg]" */
+static int __init mem_end_override(char *p)
+{
+	resource_size_t base, end;
+	if (!p)
+		return -EINVAL;
+	base = (uintptr_t) __pa(PAGE_OFFSET);
+	end = memparse(p, &p) & PMD_MASK;
+	if (end == 0)
+		return -EINVAL;
+	memblock_add(base, end - base);
+	return 0;
+}
+early_param("mem_end", mem_end_override);
+
+static void __init setup_bootmem(void)
+{
+	struct memblock_region *reg;
+	phys_addr_t mem_size = 0;
+
+	/* Find the memory region containing the kernel */
+	for_each_memblock(memory, reg) {
+		phys_addr_t vmlinux_end = __pa(_end);
+		phys_addr_t end = reg->base + reg->size;
+		if (reg->base <= vmlinux_end && vmlinux_end <= end) {
+			/* Reserve from the start of the region to the end of the kernel */
+			memblock_reserve(reg->base, vmlinux_end - reg->base);
+			mem_size = min(reg->size, (phys_addr_t)-PAGE_OFFSET);
+		}
+	}
+	BUG_ON(mem_size == 0);
+
+	set_max_mapnr(PFN_DOWN(mem_size));
+	max_low_pfn = pfn_base + PFN_DOWN(mem_size);
+
+#ifdef CONFIG_BLK_DEV_INITRD
+	setup_initrd();
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+	early_init_fdt_reserve_self();
+	early_init_fdt_scan_reserved_mem();
+	memblock_allow_resize();
+	memblock_dump_all();
+}
+
+void __init setup_arch(char **cmdline_p)
+{
+#ifdef CONFIG_CMDLINE_BOOL
+#ifdef CONFIG_CMDLINE_OVERRIDE
+	strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+#else
+	if (builtin_cmdline[0] != '\0') {
+		/* Append bootloader command line to built-in */
+		strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
+		strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+		strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+	}
+#endif /* CONFIG_CMDLINE_OVERRIDE */
+#endif /* CONFIG_CMDLINE_BOOL */
+	*cmdline_p = boot_command_line;
+
+	parse_early_param();
+
+	init_mm.start_code = (unsigned long) _stext;
+	init_mm.end_code   = (unsigned long) _etext;
+	init_mm.end_data   = (unsigned long) _edata;
+	init_mm.brk        = (unsigned long) _end;
+
+	setup_bootmem();
+	paging_init();
+	unflatten_device_tree();
+
+#ifdef CONFIG_SMP
+	setup_smp();
+#endif
+
+#ifdef CONFIG_DUMMY_CONSOLE
+	conswitchp = &dummy_con;
+#endif
+}
+
+static int __init riscv_device_init(void)
+{
+	return of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
+}
+subsys_initcall_sync(riscv_device_init);
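
On the mem_end_override() hook above: the argument is parsed with memparse() and,
as the code reads, treated as a physical end address rather than a size, with the
region starting at __pa(PAGE_OFFSET).  So, assuming a board whose DTS lacks a
usable memory node and whose RAM (and kernel) starts at 0x80000000 (an example
address, not something this patch dictates), a command line containing the
following registers 256 MiB:

  mem_end=0x90000000
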
diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
new file mode 100644
index 000000000000..d71bb3377583
--- /dev/null
+++ b/arch/riscv/kernel/signal.c
@@ -0,0 +1,258 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+ */
+
+#include <linux/signal.h>
+#include <linux/uaccess.h>
+#include <linux/syscalls.h>
+#include <linux/tracehook.h>
+#include <linux/linkage.h>
+
+#include <asm/ucontext.h>
+#include <asm/vdso.h>
+#include <asm/switch_to.h>
+#include <asm/csr.h>
+
+#define DEBUG_SIG 0
+
+struct rt_sigframe {
+	struct siginfo info;
+	struct ucontext uc;
+};
+
+static long restore_sigcontext(struct pt_regs *regs,
+	struct sigcontext __user *sc)
+{
+	struct task_struct *task = current;
+	long err;
+	/* sc_regs is structured the same as the start of pt_regs */
+	err = __copy_from_user(regs, &sc->sc_regs, sizeof(sc->sc_regs));
+	err |= __copy_from_user(&task->thread.fstate, &sc->sc_fpregs,
+		sizeof(sc->sc_fpregs));
+	if (likely(!err))
+		fstate_restore(task, regs);
+	return err;
+}
+
+SYSCALL_DEFINE0(rt_sigreturn)
+{
+	struct pt_regs *regs = current_pt_regs();
+	struct rt_sigframe __user *frame;
+	struct task_struct *task;
+	sigset_t set;
+
+	/* Always make any pending restarted system calls return -EINTR */
+	current->restart_block.fn = do_no_restart_syscall;
+
+	frame = (struct rt_sigframe __user *)regs->sp;
+
+	if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
+		goto badframe;
+
+	if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
+		goto badframe;
+
+	set_current_blocked(&set);
+
+	if (restore_sigcontext(regs, &frame->uc.uc_mcontext))
+		goto badframe;
+
+	if (restore_altstack(&frame->uc.uc_stack))
+		goto badframe;
+
+	return regs->a0;
+
+badframe:
+	task = current;
+	if (show_unhandled_signals) {
+		pr_info_ratelimited("%s[%d]: bad frame in %s: "
+			"frame=%p pc=%p sp=%p\n",
+			task->comm, task_pid_nr(task), __func__,
+			frame, (void *)regs->sepc, (void *)regs->sp);
+	}
+	force_sig(SIGSEGV, task);
+	return 0;
+}
+
+static long setup_sigcontext(struct sigcontext __user *sc,
+	struct pt_regs *regs)
+{
+	struct task_struct *task = current;
+	long err;
+	/* sc_regs is structured the same as the start of pt_regs */
+	err = __copy_to_user(&sc->sc_regs, regs, sizeof(sc->sc_regs));
+	fstate_save(task, regs);
+	err |= __copy_to_user(&sc->sc_fpregs, &task->thread.fstate,
+		sizeof(sc->sc_fpregs));
+	return err;
+}
+
+static inline void __user *get_sigframe(struct ksignal *ksig,
+	struct pt_regs *regs, size_t framesize)
+{
+	unsigned long sp;
+	/* Default to using normal stack */
+	sp = regs->sp;
+
+	/*
+	 * If we are on the alternate signal stack and would overflow it, don't.
+	 * Return an always-bogus address instead so we will die with SIGSEGV.
+	 */
+	if (on_sig_stack(sp) && !likely(on_sig_stack(sp - framesize)))
+		return (void __user __force *)(-1UL);
+
+	/* This is the X/Open sanctioned signal stack switching. */
+	sp = sigsp(sp, ksig) - framesize;
+
+	/* Align the stack frame. */
+	sp &= ~0xfUL;
+
+	return (void __user *)sp;
+}
+
+
+static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
+	struct pt_regs *regs)
+{
+	struct rt_sigframe __user *frame;
+	long err = 0;
+
+	frame = get_sigframe(ksig, regs, sizeof(*frame));
+	if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+		return -EFAULT;
+
+	err |= copy_siginfo_to_user(&frame->info, &ksig->info);
+
+	/* Create the ucontext. */
+	err |= __put_user(0, &frame->uc.uc_flags);
+	err |= __put_user(NULL, &frame->uc.uc_link);
+	err |= __save_altstack(&frame->uc.uc_stack, regs->sp);
+	err |= setup_sigcontext(&frame->uc.uc_mcontext, regs);
+	err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
+	if (err)
+		return -EFAULT;
+
+	/* Set up to return from userspace. */
+	regs->ra = (unsigned long)VDSO_SYMBOL(
+		current->mm->context.vdso, rt_sigreturn);
+
+	/*
+	 * Set up registers for signal handler.
+	 * Registers that we don't modify keep the value they had from
+	 * user-space at the time we took the signal.
+	 * We always pass siginfo and mcontext, regardless of SA_SIGINFO,
+	 * since some things rely on this (e.g. glibc's debug/segfault.c).
+	 */
+	regs->sepc = (unsigned long)ksig->ka.sa.sa_handler;
+	regs->sp = (unsigned long)frame;
+	regs->a0 = ksig->sig;                     /* a0: signal number */
+	regs->a1 = (unsigned long)(&frame->info); /* a1: siginfo pointer */
+	regs->a2 = (unsigned long)(&frame->uc);   /* a2: ucontext pointer */
+
+#if DEBUG_SIG
+	pr_info("SIG deliver (%s:%d): sig=%d pc=%p ra=%p sp=%p\n",
+		current->comm, task_pid_nr(current), ksig->sig,
+		(void *)regs->sepc, (void *)regs->ra, frame);
+#endif
+
+	return 0;
+}
+
+static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
+{
+	sigset_t *oldset = sigmask_to_save();
+	int ret;
+
+	/* Are we from a system call? */
+	if (regs->scause == EXC_SYSCALL) {
+		/* If so, check system call restarting.. */
+		switch (regs->a0) {
+		case -ERESTART_RESTARTBLOCK:
+		case -ERESTARTNOHAND:
+			regs->a0 = -EINTR;
+			break;
+
+		case -ERESTARTSYS:
+			if (!(ksig->ka.sa.sa_flags & SA_RESTART)) {
+				regs->a0 = -EINTR;
+				break;
+			}
+			/* fallthrough */
+		case -ERESTARTNOINTR:
+			regs->sepc -= 0x4;
+			break;
+		}
+	}
+
+	/* Set up the stack frame */
+	ret = setup_rt_frame(ksig, oldset, regs);
+
+	signal_setup_done(ret, ksig, 0);
+}
+
+static void do_signal(struct pt_regs *regs)
+{
+	struct ksignal ksig;
+
+	if (get_signal(&ksig)) {
+		/* Actually deliver the signal */
+		handle_signal(&ksig, regs);
+		return;
+	}
+
+	/* Did we come from a system call? */
+	if (regs->scause == EXC_SYSCALL) {
+		/* Restart the system call - no handlers present */
+		switch (regs->a0) {
+		case -ERESTARTNOHAND:
+		case -ERESTARTSYS:
+		case -ERESTARTNOINTR:
+			regs->sepc -= 0x4;
+			break;
+		case -ERESTART_RESTARTBLOCK:
+			regs->a7 = __NR_restart_syscall;
+			regs->sepc -= 0x4;
+			break;
+		}
+	}
+
+	/* If there is no signal to deliver, we just put the saved
+	   sigmask back. */
+	restore_saved_sigmask();
+}
+
+/*
+ * notification of userspace execution resumption
+ * - triggered by the _TIF_WORK_MASK flags
+ */
+asmlinkage void do_notify_resume(struct pt_regs *regs,
+	unsigned long thread_info_flags)
+{
+	/* Handle pending signal delivery */
+	if (thread_info_flags & _TIF_SIGPENDING) {
+		do_signal(regs);
+	}
+
+	if (thread_info_flags & _TIF_NOTIFY_RESUME) {
+		clear_thread_flag(TIF_NOTIFY_RESUME);
+		tracehook_notify_resume(regs);
+	}
+}
diff --git a/arch/riscv/kernel/smp.c b/arch/riscv/kernel/smp.c
new file mode 100644
index 000000000000..afa262c5bd40
--- /dev/null
+++ b/arch/riscv/kernel/smp.c
@@ -0,0 +1,107 @@
+/*
+ * SMP initialisation and IPI support
+ * Based on arch/arm64/kernel/smp.c
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/interrupt.h>
+#include <linux/smp.h>
+#include <linux/sched.h>
+
+#include <asm/sbi.h>
+#include <asm/tlbflush.h>
+#include <asm/cacheflush.h>
+
+/* A collection of single bit ipi messages.  */
+static struct {
+	unsigned long bits ____cacheline_aligned;
+} ipi_data[NR_CPUS] __cacheline_aligned;
+
+enum ipi_message_type {
+	IPI_RESCHEDULE,
+	IPI_CALL_FUNC,
+	IPI_MAX
+};
+
+irqreturn_t handle_ipi(void)
+{
+	unsigned long *pending_ipis = &ipi_data[smp_processor_id()].bits;
+
+	/* Clear pending IPI */
+	csr_clear(sip, SIE_SSIE);
+
+	while (true) {
+		unsigned long ops;
+
+		mb();	/* Order bit clearing and data access. */
+
+		if ((ops = xchg(pending_ipis, 0)) == 0)
+			return IRQ_HANDLED;
+
+		if (ops & (1 << IPI_RESCHEDULE))
+			scheduler_ipi();
+
+		if (ops & (1 << IPI_CALL_FUNC))
+			generic_smp_call_function_interrupt();
+
+		BUG_ON((ops >> IPI_MAX) != 0);
+
+		mb();	/* Order data access and bit testing. */
+	}
+
+	return IRQ_HANDLED;
+}
+
+static void
+send_ipi_message(const struct cpumask *to_whom, enum ipi_message_type operation)
+{
+	int i;
+
+	mb();
+	for_each_cpu(i, to_whom)
+		set_bit(operation, &ipi_data[i].bits);
+
+	mb();
+	sbi_send_ipi(cpumask_bits(to_whom));
+}
+
+void arch_send_call_function_ipi_mask(struct cpumask *mask)
+{
+	send_ipi_message(mask, IPI_CALL_FUNC);
+}
+
+void arch_send_call_function_single_ipi(int cpu)
+{
+	send_ipi_message(cpumask_of(cpu), IPI_CALL_FUNC);
+}
+
+static void ipi_stop(void *unused)
+{
+	while (1)
+		wait_for_interrupt();
+}
+
+void smp_send_stop(void)
+{
+	on_each_cpu(ipi_stop, NULL, 1);
+}
+
+void smp_send_reschedule(int cpu)
+{
+	send_ipi_message(cpumask_of(cpu), IPI_RESCHEDULE);
+}
diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
new file mode 100644
index 000000000000..1ef2da22fb23
--- /dev/null
+++ b/arch/riscv/kernel/smpboot.c
@@ -0,0 +1,105 @@
+/*
+ * SMP initialisation and IPI support
+ * Based on arch/arm64/kernel/smp.c
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/kernel_stat.h>
+#include <linux/notifier.h>
+#include <linux/cpu.h>
+#include <linux/percpu.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/irq.h>
+#include <linux/of.h>
+#include <linux/sched/task_stack.h>
+#include <asm/mmu_context.h>
+#include <asm/tlbflush.h>
+#include <asm/sections.h>
+#include <asm/sbi.h>
+
+void *__cpu_up_stack_pointer[NR_CPUS];
+
+void __init smp_prepare_boot_cpu(void)
+{
+}
+
+void __init smp_prepare_cpus(unsigned int max_cpus)
+{
+}
+
+void __init setup_smp(void)
+{
+	struct device_node *dn = NULL;
+	int hart, im_okay_therefore_i_am = 0;
+
+	while ((dn = of_find_node_by_type(dn, "cpu"))) {
+		if ((hart = riscv_of_processor_hart(dn)) >= 0) {
+			set_cpu_possible(hart, true);
+			set_cpu_present(hart, true);
+			if (hart == smp_processor_id()) {
+				BUG_ON(im_okay_therefore_i_am);
+				im_okay_therefore_i_am = 1;
+			}
+		}
+	}
+
+	BUG_ON(!im_okay_therefore_i_am);
+}
+
+int __cpu_up(unsigned int cpu, struct task_struct *tidle)
+{
+	/* Signal cpu to start */
+	mb();
+	__cpu_up_stack_pointer[cpu] = task_stack_page(tidle) + THREAD_SIZE;
+
+	while (!cpu_online(cpu))
+		cpu_relax();
+
+	return 0;
+}
+
+void __init smp_cpus_done(unsigned int max_cpus)
+{
+}
+
+/*
+ * C entry point for a secondary processor.
+ */
+asmlinkage void __init smp_callin(void)
+{
+	struct mm_struct *mm = &init_mm;
+
+	/* All kernel threads share the same mm context.  */
+	atomic_inc(&mm->mm_count);
+	current->active_mm = mm;
+
+	trap_init();
+	init_clockevent();
+	notify_cpu_starting(smp_processor_id());
+	set_cpu_online(smp_processor_id(), 1);
+	local_flush_tlb_all();
+	local_irq_enable();
+	preempt_disable();
+	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+}
diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
new file mode 100644
index 000000000000..3633ce052a8b
--- /dev/null
+++ b/arch/riscv/kernel/stacktrace.c
@@ -0,0 +1,183 @@
+/*
+ * Copyright (C) 2008 ARM Limited
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/export.h>
+#include <linux/kallsyms.h>
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
+#include <linux/sched/task_stack.h>
+#include <linux/stacktrace.h>
+
+#ifdef CONFIG_FRAME_POINTER
+
+struct stackframe {
+	unsigned long fp;
+	unsigned long ra;
+};
+
+static void notrace walk_stackframe(struct task_struct *task,
+	struct pt_regs *regs, bool (*fn)(unsigned long, void *), void *arg)
+{
+	unsigned long fp, sp, pc;
+
+	if (regs) {
+		fp = GET_FP(regs);
+		sp = GET_USP(regs);
+		pc = GET_IP(regs);
+	} else if (task == NULL || task == current) {
+		const register unsigned long current_sp __asm__ ("sp");
+		fp = (unsigned long)__builtin_frame_address(0);
+		sp = current_sp;
+		pc = (unsigned long)walk_stackframe;
+	} else {
+		/* task blocked in __switch_to */
+		fp = task->thread.s[0];
+		sp = task->thread.sp;
+		pc = task->thread.ra;
+	}
+
+	for (;;) {
+		unsigned long low, high;
+		struct stackframe *frame;
+
+		if (unlikely(!__kernel_text_address(pc) || fn(pc, arg)))
+			break;
+
+		/* Validate frame pointer */
+		low = sp + sizeof(struct stackframe);
+		high = ALIGN(sp, THREAD_SIZE);
+		if (unlikely(fp < low || fp > high || fp & 0x7))
+			break;
+		/* Unwind stack frame */
+		frame = (struct stackframe *)fp - 1;
+		sp = fp;
+		fp = frame->fp;
+		pc = frame->ra - 0x4;
+	}
+}
+
+#else /* !CONFIG_FRAME_POINTER */
+
+static void notrace walk_stackframe(struct task_struct *task,
+	struct pt_regs *regs, bool (*fn)(unsigned long, void *), void *arg)
+{
+	unsigned long sp, pc;
+	unsigned long *ksp;
+
+	if (regs) {
+		sp = GET_USP(regs);
+		pc = GET_IP(regs);
+	} else if (task == NULL || task == current) {
+		const register unsigned long current_sp __asm__ ("sp");
+		sp = current_sp;
+		pc = (unsigned long)walk_stackframe;
+	} else {
+		/* task blocked in __switch_to */
+		sp = task->thread.sp;
+		pc = task->thread.ra;
+	}
+
+	if (unlikely(sp & 0x7))
+		return;
+
+	ksp = (unsigned long *)sp;
+	while (!kstack_end(ksp)) {
+		if (__kernel_text_address(pc) && unlikely(fn(pc, arg))) {
+			break;
+		}
+		pc = (*ksp++) - 0x4;
+	}
+}
+
+#endif /* CONFIG_FRAME_POINTER */
+
+
+static bool print_trace_address(unsigned long pc, void *arg)
+{
+	print_ip_sym(pc);
+	return false;
+}
+
+void show_stack(struct task_struct *task, unsigned long *sp)
+{
+	printk("Call Trace:\n");
+	walk_stackframe(task, NULL, print_trace_address, NULL);
+}
+
+
+static bool save_wchan(unsigned long pc, void *arg)
+{
+	if (!in_sched_functions(pc)) {
+		unsigned long *p = arg;
+		*p = pc;
+		return true;
+	}
+	return false;
+}
+
+unsigned long get_wchan(struct task_struct *task)
+{
+	unsigned long pc;
+	pc = 0;
+	if (likely(task && task != current && task->state != TASK_RUNNING)) {
+		walk_stackframe(task, NULL, save_wchan, &pc);
+	}
+	return pc;
+}
+
+
+#ifdef CONFIG_STACKTRACE
+
+static bool __save_trace(unsigned long pc, void *arg, bool nosched)
+{
+	struct stack_trace *trace = arg;
+
+	if (unlikely(nosched && in_sched_functions(pc)))
+		return false;
+	if (unlikely(trace->skip > 0)) {
+		trace->skip--;
+		return false;
+	}
+
+	trace->entries[trace->nr_entries++] = pc;
+	return (trace->nr_entries >= trace->max_entries);
+}
+
+static bool save_trace(unsigned long pc, void *arg)
+{
+	return __save_trace(pc, arg, false);
+}
+
+/*
+ * Save stack-backtrace addresses into a stack_trace buffer.
+ */
+void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
+{
+	walk_stackframe(tsk, NULL, save_trace, trace);
+	if (trace->nr_entries < trace->max_entries)
+		trace->entries[trace->nr_entries++] = ULONG_MAX;
+}
+EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
+
+void save_stack_trace(struct stack_trace *trace)
+{
+	save_stack_trace_tsk(NULL, trace);
+}
+EXPORT_SYMBOL_GPL(save_stack_trace);
+
+#endif /* CONFIG_STACKTRACE */
diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
new file mode 100644
index 000000000000..3e07308e24f5
--- /dev/null
+++ b/arch/riscv/kernel/sys_riscv.c
@@ -0,0 +1,85 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2014 Darius Rad <darius@bluespec.com>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/syscalls.h>
+#include <asm/unistd.h>
+
+SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
+	unsigned long, prot, unsigned long, flags,
+	unsigned long, fd, off_t, offset)
+{
+	if (unlikely(offset & (~PAGE_MASK)))
+		return -EINVAL;
+	return sys_mmap_pgoff(addr, len, prot, flags, fd, offset >> PAGE_SHIFT);
+}
+
+#ifndef CONFIG_64BIT
+SYSCALL_DEFINE6(mmap2, unsigned long, addr, unsigned long, len,
+	unsigned long, prot, unsigned long, flags,
+	unsigned long, fd, off_t, offset)
+{
+	/* Note that the shift for mmap2 is constant (12),
+	   regardless of PAGE_SIZE */
+	if (unlikely(offset & (~PAGE_MASK >> 12)))
+		return -EINVAL;
+	return sys_mmap_pgoff(addr, len, prot, flags, fd,
+		offset >> (PAGE_SHIFT - 12));
+}
+#endif /* !CONFIG_64BIT */
+
+#ifdef CONFIG_RV_SYSRISCV_ATOMIC
+SYSCALL_DEFINE4(sysriscv, unsigned long, cmd, unsigned long, arg1,
+	unsigned long, arg2, unsigned long, arg3)
+{
+	unsigned long flags;
+	unsigned long prev;
+	unsigned int *ptr;
+	unsigned int err;
+
+	switch (cmd) {
+	case RISCV_ATOMIC_CMPXCHG:
+		ptr = (unsigned int *)arg1;
+		if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned int)))
+			return -EFAULT;
+
+		preempt_disable();
+		raw_local_irq_save(flags);
+		err = __get_user(prev, ptr);
+		if (likely(!err && prev == arg2))
+			err = __put_user(arg3, ptr);
+		raw_local_irq_restore(flags);
+		preempt_enable();
+
+		return unlikely(err) ? err : prev;
+
+	case RISCV_ATOMIC_CMPXCHG64:
+		ptr = (unsigned int *)arg1;
+		if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned long)))
+			return -EFAULT;
+
+		preempt_disable();
+		raw_local_irq_save(flags);
+		err = __get_user(prev, ptr);
+		if (likely(!err && prev == arg2))
+			err = __put_user(arg3, ptr);
+		raw_local_irq_restore(flags);
+		preempt_enable();
+
+		return unlikely(err) ? err : prev;
+	}
+
+	return -EINVAL;
+}
+#endif /* CONFIG_RV_SYSRISCV_ATOMIC */
diff --git a/arch/riscv/kernel/syscall_table.c b/arch/riscv/kernel/syscall_table.c
new file mode 100644
index 000000000000..6b5505b1dbb5
--- /dev/null
+++ b/arch/riscv/kernel/syscall_table.c
@@ -0,0 +1,26 @@
+/*
+ * Copyright (C) 2009 Arnd Bergmann <arnd@arndb.de>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/syscalls.h>
+
+#include <asm/syscalls.h>
+
+#undef __SYSCALL
+#define __SYSCALL(nr, call) [nr] = (call),
+
+void *sys_call_table[__NR_syscalls] = {
+	[0 ... __NR_syscalls - 1] = sys_ni_syscall,
+#include <asm/unistd.h>
+};
diff --git a/arch/riscv/kernel/time.c b/arch/riscv/kernel/time.c
new file mode 100644
index 000000000000..ce8c459fadaa
--- /dev/null
+++ b/arch/riscv/kernel/time.c
@@ -0,0 +1,116 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/clocksource.h>
+#include <linux/clockchips.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/delay.h>
+#include <linux/of.h>
+
+#include <asm/irq.h>
+#include <asm/csr.h>
+#include <asm/sbi.h>
+#include <asm/delay.h>
+
+unsigned long timebase;
+
+static DEFINE_PER_CPU(struct clock_event_device, clock_event);
+
+static int riscv_timer_set_next_event(unsigned long delta,
+	struct clock_event_device *evdev)
+{
+	sbi_set_timer(get_cycles() + delta);
+	return 0;
+}
+
+static int riscv_timer_set_oneshot(struct clock_event_device *evt)
+{
+	/* no-op; only one mode */
+	return 0;
+}
+
+static int riscv_timer_set_shutdown(struct clock_event_device *evt)
+{
+	/* can't stop the clock! */
+	return 0;
+}
+
+static u64 riscv_rdtime(struct clocksource *cs)
+{
+	return get_cycles();
+}
+
+static struct clocksource riscv_clocksource = {
+	.name = "riscv_clocksource",
+	.rating = 300,
+	.read = riscv_rdtime,
+#ifdef CONFIG_64BIT
+	.mask = CLOCKSOURCE_MASK(64),
+#else
+	.mask = CLOCKSOURCE_MASK(32),
+#endif /* CONFIG_64BIT */
+	.flags = CLOCK_SOURCE_IS_CONTINUOUS,
+};
+
+void riscv_timer_interrupt(void)
+{
+	int cpu = smp_processor_id();
+	struct clock_event_device *evdev = &per_cpu(clock_event, cpu);
+	evdev->event_handler(evdev);
+}
+
+void __init init_clockevent(void)
+{
+	int cpu = smp_processor_id();
+	struct clock_event_device *ce = &per_cpu(clock_event, cpu);
+
+	*ce = (struct clock_event_device){
+		.name = "riscv_timer_clockevent",
+		.features = CLOCK_EVT_FEAT_ONESHOT,
+		.rating = 300,
+		.cpumask = cpumask_of(cpu),
+		.set_next_event = riscv_timer_set_next_event,
+		.set_state_oneshot  = riscv_timer_set_oneshot,
+		.set_state_shutdown = riscv_timer_set_shutdown,
+	};
+
+	/* Enable timer interrupts */
+	csr_set(sie, SIE_STIE);
+
+	clockevents_config_and_register(ce, timebase, 100, 0x7fffffff);
+}
+
+static unsigned long __init of_timebase(void)
+{
+	struct device_node *cpu;
+	const __be32 *prop;
+
+	if ((cpu = of_find_node_by_path("/cpus")) &&
+	    (prop = of_get_property(cpu, "timebase-frequency", NULL))) {
+		return be32_to_cpu(*prop);
+	} else {
+		return 10000000;
+	}
+}
+
+void __init time_init(void)
+{
+	timebase = of_timebase();
+	lpj_fine = timebase / HZ;
+
+	clocksource_register_hz(&riscv_clocksource, timebase);
+	init_clockevent();
+}
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
new file mode 100644
index 000000000000..cbdabfbbc09f
--- /dev/null
+++ b/arch/riscv/kernel/traps.c
@@ -0,0 +1,167 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
+#include <linux/sched/signal.h>
+#include <linux/signal.h>
+#include <linux/kdebug.h>
+#include <linux/uaccess.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/irq.h>
+
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/csr.h>
+
+int show_unhandled_signals = 1;
+
+extern asmlinkage void handle_exception(void);
+
+static DEFINE_SPINLOCK(die_lock);
+
+void die(struct pt_regs *regs, const char *str)
+{
+	static int die_counter;
+	int ret;
+
+	oops_enter();
+
+	spin_lock_irq(&die_lock);
+	console_verbose();
+	bust_spinlocks(1);
+
+	pr_emerg("%s [#%d]\n", str, ++die_counter);
+	print_modules();
+	show_regs(regs);
+
+	ret = notify_die(DIE_OOPS, str, regs, 0, regs->scause, SIGSEGV);
+
+	bust_spinlocks(0);
+	add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
+	spin_unlock_irq(&die_lock);
+	oops_exit();
+
+	if (in_interrupt())
+		panic("Fatal exception in interrupt");
+	if (panic_on_oops)
+		panic("Fatal exception");
+	if (ret != NOTIFY_STOP)
+		do_exit(SIGSEGV);
+}
+
+static inline void do_trap_siginfo(int signo, int code,
+	unsigned long addr, struct task_struct *tsk)
+{
+	siginfo_t info;
+
+	info.si_signo = signo;
+	info.si_errno = 0;
+	info.si_code = code;
+	info.si_addr = (void __user *)addr;
+	force_sig_info(signo, &info, tsk);
+}
+
+void do_trap(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, struct task_struct *tsk)
+{
+	if (show_unhandled_signals && unhandled_signal(tsk, signo)
+	    && printk_ratelimit()) {
+		pr_info("%s[%d]: unhandled signal %d code 0x%x at 0x" REG_FMT,
+			tsk->comm, task_pid_nr(tsk), signo, code, addr);
+		print_vma_addr(KERN_CONT " in ", GET_IP(regs));
+		pr_cont("\n");
+		show_regs(regs);
+	}
+
+	do_trap_siginfo(signo, code, addr, tsk);
+}
+
+static void do_trap_error(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, const char *str)
+{
+	if (user_mode(regs)) {
+		do_trap(regs, signo, code, addr, current);
+	} else {
+		if (!fixup_exception(regs))
+			die(regs, str);
+	}
+}
+
+#define DO_ERROR_INFO(name, signo, code, str)				\
+asmlinkage void name(struct pt_regs *regs)				\
+{									\
+	do_trap_error(regs, signo, code, regs->sepc, "Oops - " str);	\
+}
+
+DO_ERROR_INFO(do_trap_unknown,
+	SIGILL, ILL_ILLTRP, "unknown exception");
+DO_ERROR_INFO(do_trap_amo_misaligned,
+	SIGBUS, BUS_ADRALN, "AMO address misaligned");
+DO_ERROR_INFO(do_trap_insn_misaligned,
+	SIGBUS, BUS_ADRALN, "instruction address misaligned");
+DO_ERROR_INFO(do_trap_insn_illegal,
+	SIGILL, ILL_ILLOPC, "illegal instruction");
+
+asmlinkage void do_trap_break(struct pt_regs *regs)
+{
+#ifdef CONFIG_GENERIC_BUG
+	if (!user_mode(regs)) {
+		enum bug_trap_type type;
+
+		type = report_bug(regs->sepc, regs);
+		switch (type) {
+		case BUG_TRAP_TYPE_NONE:
+			break;
+		case BUG_TRAP_TYPE_WARN:
+			regs->sepc += sizeof(bug_insn_t);
+			return;
+		case BUG_TRAP_TYPE_BUG:
+			die(regs, "Kernel BUG");
+		}
+	}
+#endif /* CONFIG_GENERIC_BUG */
+
+	do_trap_siginfo(SIGTRAP, TRAP_BRKPT, regs->sepc, current);
+	regs->sepc += 0x4;
+}
+
+#ifdef CONFIG_GENERIC_BUG
+int is_valid_bugaddr(unsigned long pc)
+{
+	bug_insn_t insn;
+
+	if (pc < PAGE_OFFSET)
+		return 0;
+	if (probe_kernel_address((bug_insn_t __user *)pc, insn))
+		return 0;
+	return (insn == __BUG_INSN);
+}
+#endif /* CONFIG_GENERIC_BUG */
+
+void __init trap_init(void)
+{
+	int hart = smp_processor_id();
+
+	/* Set sup0 scratch register to 0, indicating to exception vector
+	   that we are presently executing in the kernel */
+	csr_write(sscratch, 0);
+	/* Set the exception vector address */
+	csr_write(stvec, &handle_exception);
+	/* Enable software interrupts and setup initial mask */
+	csr_write(sie, SIE_SSIE | atomic_long_read(&per_cpu(riscv_early_sie, hart)));
+}
diff --git a/arch/riscv/kernel/vdso.c b/arch/riscv/kernel/vdso.c
new file mode 100644
index 000000000000..a4007ec0cb53
--- /dev/null
+++ b/arch/riscv/kernel/vdso.c
@@ -0,0 +1,125 @@
+/*
+ * Copyright (C) 2004 Benjamin Herrenschmidt, IBM Corp.
+ *                    <benh@kernel.crashing.org>
+ * Copyright (C) 2012 ARM Limited
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/binfmts.h>
+#include <linux/err.h>
+
+#include <asm/vdso.h>
+
+extern char vdso_start[], vdso_end[];
+
+static unsigned int vdso_pages;
+static struct page **vdso_pagelist;
+
+/*
+ * The vDSO data page.
+ */
+static union {
+	struct vdso_data	data;
+	u8			page[PAGE_SIZE];
+} vdso_data_store __page_aligned_data;
+struct vdso_data *vdso_data = &vdso_data_store.data;
+
+static int __init vdso_init(void)
+{
+	unsigned int i;
+
+	vdso_pages = (vdso_end - vdso_start) >> PAGE_SHIFT;
+	vdso_pagelist = kzalloc(sizeof(struct page *) * (vdso_pages + 1), GFP_KERNEL);
+	if (unlikely(vdso_pagelist == NULL)) {
+		pr_err("vdso: pagelist allocation failed\n");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < vdso_pages; i++) {
+		struct page *pg;
+		pg = virt_to_page(vdso_start + (i << PAGE_SHIFT));
+		ClearPageReserved(pg);
+		vdso_pagelist[i] = pg;
+	}
+	vdso_pagelist[i] = virt_to_page(vdso_data);
+
+	return 0;
+}
+arch_initcall(vdso_init);
+
+int arch_setup_additional_pages(struct linux_binprm *bprm,
+	int uses_interp)
+{
+	struct mm_struct *mm = current->mm;
+	unsigned long vdso_base, vdso_len;
+	int ret;
+
+	vdso_len = (vdso_pages + 1) << PAGE_SHIFT;
+
+	down_write(&mm->mmap_sem);
+	vdso_base = get_unmapped_area(NULL, 0, vdso_len, 0, 0);
+	if (unlikely(IS_ERR_VALUE(vdso_base))) {
+		ret = vdso_base;
+		goto end;
+	}
+
+	/*
+	 * Put vDSO base into mm struct. We need to do this before calling
+	 * install_special_mapping or the perf counter mmap tracking code
+	 * will fail to recognise it as a vDSO (since arch_vma_name fails).
+	 */
+	mm->context.vdso = (void *)vdso_base;
+
+	ret = install_special_mapping(mm, vdso_base, vdso_len,
+		(VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC),
+		vdso_pagelist);
+
+	if (unlikely(ret)) {
+		mm->context.vdso = NULL;
+	}
+
+end:
+	up_write(&mm->mmap_sem);
+	return ret;
+}
+
+const char *arch_vma_name(struct vm_area_struct *vma)
+{
+	if (vma->vm_mm && (vma->vm_start == (long)vma->vm_mm->context.vdso)) {
+		return "[vdso]";
+	}
+	return NULL;
+}
+
+/*
+ * Function stubs to prevent linker errors when AT_SYSINFO_EHDR is defined
+ */
+
+int in_gate_area_no_mm(unsigned long addr)
+{
+	return 0;
+}
+
+int in_gate_area(struct mm_struct *mm, unsigned long addr)
+{
+	return 0;
+}
+
+struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
+{
+	return NULL;
+}
diff --git a/arch/riscv/kernel/vdso/.gitignore b/arch/riscv/kernel/vdso/.gitignore
new file mode 100644
index 000000000000..f8b69d84238e
--- /dev/null
+++ b/arch/riscv/kernel/vdso/.gitignore
@@ -0,0 +1 @@
+vdso.lds
diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
new file mode 100644
index 000000000000..04f3ec75b217
--- /dev/null
+++ b/arch/riscv/kernel/vdso/Makefile
@@ -0,0 +1,61 @@
+# Derived from arch/{arm64,tile}/kernel/vdso/Makefile
+
+obj-vdso := sigreturn.o
+
+# Build rules
+targets := $(obj-vdso) vdso.so vdso.so.dbg
+obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
+
+#ccflags-y := -shared -fno-common -fno-builtin
+#ccflags-y += -nostdlib -Wl,-soname=linux-vdso.so.1 \
+		$(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
+
+CFLAGS_vdso.so = $(c_flags)
+CFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \
+	$(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
+CFLAGS_vdso_syms.o = -r
+
+obj-y += vdso.o
+
+# We also create a special relocatable object that should mirror the symbol
+# table and layout of the linked DSO.  With ld -R we can then refer to
+# these symbols in the kernel code rather than hand-coded addresses.
+extra-y += vdso.lds vdso-syms.o
+$(obj)/built-in.o: $(obj)/vdso-syms.o
+$(obj)/built-in.o: ld_flags += -R $(obj)/vdso-syms.o
+
+CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
+
+# Force dependency
+$(obj)/vdso.o : $(obj)/vdso.so
+
+# Link rule for the *.so file; *.lds must be first
+$(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso)
+	$(call if_changed,vdsold)
+$(obj)/vdso-syms.o: $(src)/vdso.lds $(obj-vdso)
+	$(call if_changed,vdsold)
+
+# Strip rule for the *.so file
+$(obj)/%.so: OBJCOPYFLAGS := -S
+$(obj)/%.so: $(obj)/%.so.dbg FORCE
+	$(call if_changed,objcopy)
+
+# Assembly rules for the *.S files
+$(obj-vdso): %.o: %.S
+	$(call if_changed_dep,vdsoas)
+
+# Actual build commands
+quiet_cmd_vdsold = VDSOLD  $@
+      cmd_vdsold = $(CC) $(c_flags) -nostdlib $(CFLAGS_$(@F)) -Wl,-n -Wl,-T $^ -o $@
+quiet_cmd_vdsoas = VDSOAS  $@
+      cmd_vdsoas = $(CC) $(a_flags) -c -o $@ $<
+
+# Install commands for the unstripped file
+quiet_cmd_vdso_install = INSTALL $@
+      cmd_vdso_install = cp $(obj)/$@.dbg $(MODLIB)/vdso/$@
+
+vdso.so: $(obj)/vdso.so.dbg
+	@mkdir -p $(MODLIB)/vdso
+	$(call cmd,vdso_install)
+
+vdso_install: vdso.so
diff --git a/arch/riscv/kernel/vdso/sigreturn.S b/arch/riscv/kernel/vdso/sigreturn.S
new file mode 100644
index 000000000000..d5d33dbb5976
--- /dev/null
+++ b/arch/riscv/kernel/vdso/sigreturn.S
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/linkage.h>
+#include <asm/unistd.h>
+
+	.text
+ENTRY(__vdso_rt_sigreturn)
+	.cfi_startproc
+	.cfi_signal_frame
+	li a7, __NR_rt_sigreturn
+	scall
+	.cfi_endproc
+ENDPROC(__vdso_rt_sigreturn)
diff --git a/arch/riscv/kernel/vdso/vdso.S b/arch/riscv/kernel/vdso/vdso.S
new file mode 100644
index 000000000000..e2edb3529286
--- /dev/null
+++ b/arch/riscv/kernel/vdso/vdso.S
@@ -0,0 +1,28 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <asm/page.h>
+
+	__PAGE_ALIGNED_DATA
+
+	.globl vdso_start, vdso_end
+	.balign PAGE_SIZE
+vdso_start:
+	.incbin "arch/riscv/kernel/vdso/vdso.so"
+	.balign PAGE_SIZE
+vdso_end:
+
+	.previous
diff --git a/arch/riscv/kernel/vdso/vdso.lds.S b/arch/riscv/kernel/vdso/vdso.lds.S
new file mode 100644
index 000000000000..4c3d72306ad8
--- /dev/null
+++ b/arch/riscv/kernel/vdso/vdso.lds.S
@@ -0,0 +1,77 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+OUTPUT_ARCH(riscv)
+
+SECTIONS
+{
+	. = SIZEOF_HEADERS;
+
+	.hash		: { *(.hash) }			:text
+	.gnu.hash	: { *(.gnu.hash) }
+	.dynsym		: { *(.dynsym) }
+	.dynstr		: { *(.dynstr) }
+	.gnu.version	: { *(.gnu.version) }
+	.gnu.version_d	: { *(.gnu.version_d) }
+	.gnu.version_r	: { *(.gnu.version_r) }
+
+	.note		: { *(.note.*) }		:text	:note
+	.dynamic	: { *(.dynamic) }		:text	:dynamic
+
+	.eh_frame_hdr	: { *(.eh_frame_hdr) }		:text	:eh_frame_hdr
+	.eh_frame	: { KEEP (*(.eh_frame)) }	:text
+
+	.rodata		: { *(.rodata .rodata.* .gnu.linkonce.r.*) }
+
+	/*
+	 * This linker script is used both with -r and with -shared.
+	 * For the layouts to match, we need to skip more than enough
+	 * space for the dynamic symbol table, etc. If this amount is
+	 * insufficient, ld -shared will error; simply increase it here.
+	 */
+	. = 0x800;
+	.text		: { *(.text .text.*) }		:text
+
+	.data		: {
+		*(.got.plt) *(.got)
+		*(.data .data.* .gnu.linkonce.d.*)
+		*(.dynbss)
+		*(.bss .bss.* .gnu.linkonce.b.*)
+	}
+}
+
+/*
+ * We must supply the ELF program headers explicitly to get just one
+ * PT_LOAD segment, and set the flags explicitly to make segments read-only.
+ */
+PHDRS
+{
+	text		PT_LOAD		FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
+	dynamic		PT_DYNAMIC	FLAGS(4);		/* PF_R */
+	note		PT_NOTE		FLAGS(4);		/* PF_R */
+	eh_frame_hdr	PT_GNU_EH_FRAME;
+}
+
+/*
+ * This controls what symbols we export from the DSO.
+ */
+VERSION
+{
+	LINUX_2.6 {
+	global:
+		__vdso_rt_sigreturn;
+	local: *;
+	};
+}
+
diff --git a/arch/riscv/kernel/vmlinux.lds.S b/arch/riscv/kernel/vmlinux.lds.S
new file mode 100644
index 000000000000..5e13c9103024
--- /dev/null
+++ b/arch/riscv/kernel/vmlinux.lds.S
@@ -0,0 +1,93 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#define LOAD_OFFSET PAGE_OFFSET
+#include <asm/vmlinux.lds.h>
+#include <asm/page.h>
+#include <asm/cache.h>
+#include <asm/thread_info.h>
+
+OUTPUT_ARCH(riscv)
+ENTRY(_start)
+
+jiffies = jiffies_64;
+
+SECTIONS
+{
+	/* Beginning of code and text segment */
+	. = LOAD_OFFSET;
+	_start = .;
+	__init_begin = .;
+	HEAD_TEXT_SECTION
+	INIT_TEXT_SECTION(PAGE_SIZE)
+	INIT_DATA_SECTION(16)
+	/* we have to discard exit text and such at runtime, not link time */
+	.exit.text :
+	{
+		EXIT_TEXT
+	}
+	.exit.data :
+	{
+		EXIT_DATA
+	}
+	PERCPU_SECTION(L1_CACHE_BYTES)
+	__init_end = .;
+
+	.text : {
+		_text = .;
+		_stext = .;
+		TEXT_TEXT
+		SCHED_TEXT
+		CPUIDLE_TEXT
+		LOCK_TEXT
+		KPROBES_TEXT
+		ENTRY_TEXT
+		IRQENTRY_TEXT
+		*(.fixup)
+		_etext = .;
+	}
+
+	/* Start of data section */
+	_sdata = .;
+	RO_DATA_SECTION(L1_CACHE_BYTES)
+	.srodata : {
+		*(.srodata*)
+	}
+
+	RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+	.sdata : {
+		__global_pointer$ = . + 0x800;
+		*(.sdata*)
+		/* End of data section */
+		_edata = .;
+		*(.sbss*)
+	}
+
+	BSS_SECTION(0, 0, 0)
+
+	EXCEPTION_TABLE(0x10)
+	NOTES
+
+	.rel.dyn : {
+		*(.rel.dyn*)
+	}
+
+	_end = .;
+
+	STABS_DEBUG
+	DWARF_DEBUG
+
+	DISCARDS
+}
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 7/7] RISC-V: arch/riscv/mm
  2017-05-23  0:41 RISC-V Linux Port v1 Palmer Dabbelt
                   ` (5 preceding siblings ...)
  2017-05-23  0:41 ` [PATCH 6/7] RISC-V: arch/riscv/kernel Palmer Dabbelt
@ 2017-05-23  0:41 ` Palmer Dabbelt
  2017-05-23  1:28   ` Randy Dunlap
  2017-05-23  1:16 ` RISC-V Linux Port v1 Olof Johansson
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-23  0:41 UTC (permalink / raw)
  To: linux-kernel, Arnd Bergmann, olof; +Cc: albert, Palmer Dabbelt

---
 arch/riscv/mm/Makefile  |   1 +
 arch/riscv/mm/extable.c |  38 +++++++
 arch/riscv/mm/fault.c   | 279 ++++++++++++++++++++++++++++++++++++++++++++++++
 arch/riscv/mm/init.c    |  72 +++++++++++++
 arch/riscv/mm/ioremap.c |  96 +++++++++++++++++
 5 files changed, 486 insertions(+)
 create mode 100644 arch/riscv/mm/Makefile
 create mode 100644 arch/riscv/mm/extable.c
 create mode 100644 arch/riscv/mm/fault.c
 create mode 100644 arch/riscv/mm/init.c
 create mode 100644 arch/riscv/mm/ioremap.c

diff --git a/arch/riscv/mm/Makefile b/arch/riscv/mm/Makefile
new file mode 100644
index 000000000000..36ebe6feb5d6
--- /dev/null
+++ b/arch/riscv/mm/Makefile
@@ -0,0 +1 @@
+obj-y := init.o fault.o extable.o ioremap.o
diff --git a/arch/riscv/mm/extable.c b/arch/riscv/mm/extable.c
new file mode 100644
index 000000000000..811b298c2475
--- /dev/null
+++ b/arch/riscv/mm/extable.c
@@ -0,0 +1,38 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+ */
+
+
+#include <linux/extable.h>
+#include <linux/module.h>
+#include <linux/uaccess.h>
+
+int fixup_exception(struct pt_regs *regs)
+{
+	const struct exception_table_entry *fixup;
+
+	fixup = search_exception_tables(regs->sepc);
+	if (fixup) {
+		regs->sepc = fixup->fixup;
+		return 1;
+	}
+	return 0;
+}
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
new file mode 100644
index 000000000000..f02e286dd1c1
--- /dev/null
+++ b/arch/riscv/mm/fault.c
@@ -0,0 +1,279 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+ */
+
+
+#include <linux/mm.h>
+#include <linux/kernel.h>
+#include <linux/interrupt.h>
+#include <linux/perf_event.h>
+#include <linux/signal.h>
+#include <linux/uaccess.h>
+
+#include <asm/pgalloc.h>
+#include <asm/ptrace.h>
+#include <asm/uaccess.h>
+
+/*
+ * This routine handles page faults.  It determines the address and the
+ * problem, and then passes it off to one of the appropriate routines.
+ */
+asmlinkage void do_page_fault(struct pt_regs *regs)
+{
+	struct task_struct *tsk;
+	struct vm_area_struct *vma;
+	struct mm_struct *mm;
+	unsigned long addr, cause;
+	unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
+	int fault, code = SEGV_MAPERR;
+
+	cause = regs->scause;
+	addr = regs->sbadaddr;
+
+	tsk = current;
+	mm = tsk->mm;
+
+	/*
+	 * Fault-in kernel-space virtual memory on-demand.
+	 * The 'reference' page table is init_mm.pgd.
+	 *
+	 * NOTE! We MUST NOT take any locks for this case. We may
+	 * be in an interrupt or a critical region, and should
+	 * only copy the information from the master page table,
+	 * nothing more.
+	 */
+	if (unlikely((addr >= VMALLOC_START) && (addr <= VMALLOC_END)))
+		goto vmalloc_fault;
+
+	/* Enable interrupts if they were enabled in the parent context. */
+	if (likely(regs->sstatus & SR_PIE))
+		local_irq_enable();
+
+	/*
+	 * If we're in an interrupt, have no user context, or are running
+	 * in an atomic region, then we must not take the fault.
+	 */
+	if (unlikely(faulthandler_disabled() || !mm))
+		goto no_context;
+
+	if (user_mode(regs))
+		flags |= FAULT_FLAG_USER;
+
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
+
+retry:
+	down_read(&mm->mmap_sem);
+	vma = find_vma(mm, addr);
+	if (unlikely(!vma))
+		goto bad_area;
+	if (likely(vma->vm_start <= addr))
+		goto good_area;
+	if (unlikely(!(vma->vm_flags & VM_GROWSDOWN)))
+		goto bad_area;
+	if (unlikely(expand_stack(vma, addr)))
+		goto bad_area;
+
+	/*
+	 * Ok, we have a good vm_area for this memory access, so
+	 * we can handle it.
+	 */
+good_area:
+	code = SEGV_ACCERR;
+
+	switch (cause) {
+	case EXC_INST_PAGE_FAULT:
+		if (!(vma->vm_flags & VM_EXEC))
+			goto bad_area;
+		break;
+	case EXC_LOAD_PAGE_FAULT:
+		if (!(vma->vm_flags & VM_READ))
+			goto bad_area;
+		break;
+	case EXC_STORE_PAGE_FAULT:
+		if (!(vma->vm_flags & VM_WRITE))
+			goto bad_area;
+		flags |= FAULT_FLAG_WRITE;
+		break;
+	default:
+		panic("%s: unhandled cause %lu", __func__, cause);
+	}
+
+	/*
+	 * If for any reason at all we could not handle the fault,
+	 * make sure we exit gracefully rather than endlessly redo
+	 * the fault.
+	 */
+	fault = handle_mm_fault(vma, addr, flags);
+
+	/*
+	 * If we need to retry but a fatal signal is pending, handle the
+	 * signal first. We do not need to release the mmap_sem because it
+	 * would already be released in __lock_page_or_retry in mm/filemap.c.
+	 */
+	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(tsk))
+		return;
+
+	if (unlikely(fault & VM_FAULT_ERROR)) {
+		if (fault & VM_FAULT_OOM)
+			goto out_of_memory;
+		else if (fault & VM_FAULT_SIGBUS)
+			goto do_sigbus;
+		BUG();
+	}
+
+	/*
+	 * Major/minor page fault accounting is only done on the
+	 * initial attempt. If we go through a retry, it is extremely
+	 * likely that the page will be found in page cache at that point.
+	 */
+	if (flags & FAULT_FLAG_ALLOW_RETRY) {
+		if (fault & VM_FAULT_MAJOR) {
+			tsk->maj_flt++;
+			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, addr);
+		} else {
+			tsk->min_flt++;
+			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, addr);
+		}
+		if (fault & VM_FAULT_RETRY) {
+			/*
+			 * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
+			 * of starvation.
+			 */
+			flags &= ~(FAULT_FLAG_ALLOW_RETRY);
+			flags |= FAULT_FLAG_TRIED;
+
+			/*
+			 * No need to up_read(&mm->mmap_sem) as we would
+			 * have already released it in __lock_page_or_retry
+			 * in mm/filemap.c.
+			 */
+			goto retry;
+		}
+	}
+
+	up_read(&mm->mmap_sem);
+	return;
+
+	/*
+	 * Something tried to access memory that isn't in our memory map.
+	 * Fix it, but check if it's kernel or user first.
+	 */
+bad_area:
+	up_read(&mm->mmap_sem);
+	/* User mode accesses just cause a SIGSEGV */
+	if (user_mode(regs)) {
+		do_trap(regs, SIGSEGV, code, addr, tsk);
+		return;
+	}
+
+no_context:
+	/* Are we prepared to handle this kernel fault? */
+	if (fixup_exception(regs)) {
+		return;
+	}
+
+	/*
+	 * Oops. The kernel tried to access some bad page. We'll have to
+	 * terminate things with extreme prejudice.
+	 */
+	bust_spinlocks(1);
+	pr_alert("Unable to handle kernel %s at virtual address " REG_FMT "\n",
+		(addr < PAGE_SIZE) ? "NULL pointer dereference" :
+		"paging request", addr);
+	die(regs, "Oops");
+	do_exit(SIGKILL);
+
+	/*
+	 * We ran out of memory, call the OOM killer, and return the userspace
+	 * (which will retry the fault, or kill us if we got oom-killed).
+	 */
+out_of_memory:
+	up_read(&mm->mmap_sem);
+	if (!user_mode(regs))
+		goto no_context;
+	pagefault_out_of_memory();
+	return;
+
+do_sigbus:
+	up_read(&mm->mmap_sem);
+	/* Kernel mode? Handle exceptions or die */
+	if (!user_mode(regs))
+		goto no_context;
+	do_trap(regs, SIGBUS, BUS_ADRERR, addr, tsk);
+	return;
+
+vmalloc_fault:
+	{
+		pgd_t *pgd, *pgd_k;
+		pud_t *pud, *pud_k;
+		p4d_t *p4d, *p4d_k;
+		pmd_t *pmd, *pmd_k;
+		pte_t *pte_k;
+		int index;
+
+		if (user_mode(regs))
+			goto bad_area;
+
+		/*
+		 * Synchronize this task's top level page-table
+		 * with the 'reference' page table.
+		 *
+		 * Do _not_ use "tsk->active_mm->pgd" here.
+		 * We might be inside an interrupt in the middle
+		 * of a task switch.
+		 */
+		index = pgd_index(addr);
+		pgd = (pgd_t *)pfn_to_virt(csr_read(sptbr)) + index;
+		pgd_k = init_mm.pgd + index;
+
+		if (!pgd_present(*pgd_k))
+			goto no_context;
+		set_pgd(pgd, *pgd_k);
+
+		p4d = p4d_offset(pgd, addr);
+		p4d_k = p4d_offset(pgd_k, addr);
+		if (!p4d_present(*p4d_k))
+			goto no_context;
+
+		pud = pud_offset(p4d, addr);
+		pud_k = pud_offset(p4d_k, addr);
+		if (!pud_present(*pud_k))
+			goto no_context;
+
+		/* Since the vmalloc area is global, it is unnecessary
+		   to copy individual PTEs */
+		pmd = pmd_offset(pud, addr);
+		pmd_k = pmd_offset(pud_k, addr);
+		if (!pmd_present(*pmd_k))
+			goto no_context;
+		set_pmd(pmd, *pmd_k);
+
+		/* Make sure the actual PTE exists as well to
+		 * catch kernel vmalloc-area accesses to non-mapped
+		 * addresses. If we don't do this, this will just
+		 * silently loop forever.
+		 */
+		pte_k = pte_offset_kernel(pmd_k, addr);
+		if (!pte_present(*pte_k))
+			goto no_context;
+		return;
+	}
+}
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
new file mode 100644
index 000000000000..b1bc26d7bdce
--- /dev/null
+++ b/arch/riscv/mm/init.c
@@ -0,0 +1,72 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/bootmem.h>
+#include <linux/initrd.h>
+#include <linux/memblock.h>
+#include <linux/swap.h>
+
+#include <asm/tlbflush.h>
+#include <asm/sections.h>
+#include <asm/pgtable.h>
+#include <asm/io.h>
+
+static void __init zone_sizes_init(void)
+{
+	unsigned long zones_size[MAX_NR_ZONES];
+	memset(zones_size, 0, sizeof(zones_size));
+	zones_size[ZONE_NORMAL] = pfn_base + max_mapnr;
+	free_area_init_node(0, zones_size, pfn_base, NULL);
+}
+
+void setup_zero_page(void)
+{
+	memset((void *)empty_zero_page, 0, PAGE_SIZE);
+}
+
+void __init paging_init(void)
+{
+	init_mm.pgd = (pgd_t *)pfn_to_virt(csr_read(sptbr));
+
+	setup_zero_page();
+	local_flush_tlb_all();
+	zone_sizes_init();
+}
+
+void __init mem_init(void)
+{
+#ifdef CONFIG_FLATMEM
+	BUG_ON(!mem_map);
+#endif /* CONFIG_FLATMEM */
+
+	high_memory = (void *)(__va(PFN_PHYS(max_low_pfn)));
+	free_all_bootmem();
+
+	mem_init_print_info(NULL);
+}
+
+void free_initmem(void)
+{
+	free_initmem_default(0);
+}
+
+#ifdef CONFIG_BLK_DEV_INITRD
+void free_initrd_mem(unsigned long start, unsigned long end)
+{
+	/* free_reserved_area(start, end, 0, "initrd"); */
+}
+#endif /* CONFIG_BLK_DEV_INITRD */
+
diff --git a/arch/riscv/mm/ioremap.c b/arch/riscv/mm/ioremap.c
new file mode 100644
index 000000000000..0e7256441f11
--- /dev/null
+++ b/arch/riscv/mm/ioremap.c
@@ -0,0 +1,96 @@
+/*
+ * (C) Copyright 1995 1996 Linus Torvalds
+ * (C) Copyright 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ *   NON INFRINGEMENT.  See the GNU General Public License for
+ *   more details.
+ */
+
+#include <linux/export.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <linux/io.h>
+
+#include <asm/pgtable.h>
+
+/*
+ * Remap an arbitrary physical address space into the kernel virtual
+ * address space. Needed when the kernel wants to access high addresses
+ * directly.
+ *
+ * NOTE! We need to allow non-page-aligned mappings too: we will obviously
+ * have to convert them into an offset in a page-aligned mapping, but the
+ * caller shouldn't need to know that small detail.
+ */
+static void __iomem *__ioremap_caller(phys_addr_t addr, size_t size,
+	pgprot_t prot, void *caller)
+{
+	phys_addr_t last_addr;
+	unsigned long offset, vaddr;
+	struct vm_struct *area;
+
+	/* Disallow wrap-around or zero size */
+	last_addr = addr + size - 1;
+	if (!size || last_addr < addr) {
+		return NULL;
+	}
+
+	/* Page-align mappings */
+	offset = addr & (~PAGE_MASK);
+	addr &= PAGE_MASK;
+	size = PAGE_ALIGN(size + offset);
+
+	area = get_vm_area_caller(size, VM_IOREMAP, caller);
+	if (!area) {
+		return NULL;
+	}
+	vaddr = (unsigned long)area->addr;
+
+	if (ioremap_page_range(vaddr, vaddr + size, addr, prot)) {
+		free_vm_area(area);
+		return NULL;
+	}
+
+	return (void __iomem *)(vaddr + offset);
+}
+
+/*
+ * ioremap     -   map bus memory into CPU space
+ * @offset:    bus address of the memory
+ * @size:      size of the resource to map
+ *
+ * ioremap performs a platform specific sequence of operations to
+ * make bus memory CPU accessible via the readb/readw/readl/writeb/
+ * writew/writel functions and the other mmio helpers. The returned
+ * address is not guaranteed to be usable directly as a virtual
+ * address.
+ *
+ * Must be freed with iounmap.
+ */
+void __iomem *ioremap(phys_addr_t offset, unsigned long size)
+{
+	return __ioremap_caller(offset, size, PAGE_KERNEL,
+		__builtin_return_address(0));
+}
+EXPORT_SYMBOL(ioremap);
+
+
+/**
+ * iounmap - Free a IO remapping
+ * @addr: virtual address from ioremap_*
+ *
+ * Caller must ensure there is only one unmapping for the same pointer.
+ */
+void iounmap(void __iomem *addr)
+{
+	vunmap((void *)((unsigned long)addr & PAGE_MASK));
+}
+EXPORT_SYMBOL(iounmap);
+
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* Re: RISC-V Linux Port v1
  2017-05-23  0:41 RISC-V Linux Port v1 Palmer Dabbelt
                   ` (6 preceding siblings ...)
  2017-05-23  0:41 ` [PATCH 7/7] RISC-V: arch/riscv/mm Palmer Dabbelt
@ 2017-05-23  1:16 ` Olof Johansson
  2017-05-23  1:25   ` Randy Dunlap
  2017-05-23  3:36   ` Palmer Dabbelt
  2017-05-23  2:16 ` Randy Dunlap
                   ` (3 subsequent siblings)
  11 siblings, 2 replies; 283+ messages in thread
From: Olof Johansson @ 2017-05-23  1:16 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: linux-kernel, Arnd Bergmann, albert

Hi Palmer,

On Mon, May 22, 2017 at 5:41 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> We'd like to submit for inclusion in Linux a port for the RISC-V architecture.
> While it is doubtlessly not complete, we think it is far enough along to start
> the upstreaming process.  Our binutils and GCC ports have been accepted and
> released, and we plan on submitting glibc patches soon.
>
> This port targets Version 1.10 of the RISC-V Privileged ISA, and supports both
> the RV32 and RV64 user ISAs.  The RISC-V community and the 60-some member
> companies of the RISC-V Foundation are quite eager to have a single, standard
> Linux port.  We thank you in advance for your help in this process and for your
> feedback on the software contribution itself.
>
> These patches build and boot on top of 4.12-rc2.  I understand that the merge
> window is closed, but it was suggested that the best time to submit a new
> architecture port would be right after an RC2 as the earliest point at which
> the tree is usually generally churn-free enough.  While we optimistically hope
> that we can get the port in for the 4.13 merge window, we're also eager to
> ensure that the user-visible ABI is sane so we can proceed with our glibc port.
> We'd like to at least get any user ABI issues shaken out as soon as possible,
> even if we don't make it into 4.13.

Time is right for review and eventual merge of this. Whether it makes
4.13 depends on how much discussion ensues. :)

> Albert and I will volunteer to maintain this port if it's OK with everyone.

It always makes sense to have people with architecture knowledge
maintain it; no complaints from me.

What we've found useful on other platforms (i.e. arm/arm64) is to
offload the per-vendor stuff to a separate tree. It might or might not
be needed here; to start out there likely won't be enough material to
need it.

> We'd like to thank the various members of the RISC-V software community who
> have helped us with the port.
>
> Thanks!
>
> In addition to the threaded messages, our port can be found on Git Hib
>
>   https://github.com/riscv/riscv-linux/tree/riscv-for-submission-v1
>
> [PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64}
> [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
> [PATCH 3/7] RISC-V: Device Tree Documentation
> [PATCH 4/7] RISC-V: arch/riscv/include
> [PATCH 5/7] RISC-V: arch/riscv/lib
> [PATCH 6/7] RISC-V: arch/riscv/kernel
> [PATCH 7/7] RISC-V: arch/riscv/mm

So, one overall comment on this patchset is that it's not bisectable
(i.e. early patches add Makefile contents that refer to directories
not yet introduced).

While it's not overly important to really split up a new architecture
introduction into small incremental patches, we generally strive to
have the tree fully buildable at any given commit. Some minor
rearranging would alleviate these problems.

Also, none of the patches seem to have any descriptions. Adding some
high-level descriptions of what's in each patch in the patch itself is
useful both for reviewing now, and for educating anyone coming along
later on trying to learn about the code and why it's been implemented
as it has.

I'll add more comments on some of the individual patches; expect this
review to take a little while. Reposting once or twice a week to show
incorporated changes can be useful; more than that and it can be
harder to follow along in the discussion. It all depends on how many
comments you end up receiving.


-Olof

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: RISC-V Linux Port v1
  2017-05-23  1:16 ` RISC-V Linux Port v1 Olof Johansson
@ 2017-05-23  1:25   ` Randy Dunlap
  2017-05-23  3:36     ` Palmer Dabbelt
  2017-05-23  3:36   ` Palmer Dabbelt
  1 sibling, 1 reply; 283+ messages in thread
From: Randy Dunlap @ 2017-05-23  1:25 UTC (permalink / raw)
  To: Olof Johansson, Palmer Dabbelt; +Cc: linux-kernel, Arnd Bergmann, albert

On 05/22/17 18:16, Olof Johansson wrote:
> Hi Palmer,
> 
> On Mon, May 22, 2017 at 5:41 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> 
>> In addition to the threaded messages, our port can be found on Git Hib
>>
>>   https://github.com/riscv/riscv-linux/tree/riscv-for-submission-v1
>>
>> [PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64}
>> [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
>> [PATCH 3/7] RISC-V: Device Tree Documentation
>> [PATCH 4/7] RISC-V: arch/riscv/include
>> [PATCH 5/7] RISC-V: arch/riscv/lib
>> [PATCH 6/7] RISC-V: arch/riscv/kernel
>> [PATCH 7/7] RISC-V: arch/riscv/mm
> 
> So, one overall comment on this patchset is that it's not bisectable
> (i.e. early patches add Makefile contents that refers to directories
> not yet introduced).
> 
> While it's not overly important to really split up a new architecture
> introduction into small incremental patches, we generally strive to
> have the tree fully buildable at any given commit. Some minor
> rearranging would alleviate these problems.

Neither the email patches nor the git tree has any Signed-off-by:
entries AFAICT.

-- 
~Randy

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
  2017-05-23  0:41 ` [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs Palmer Dabbelt
@ 2017-05-23  1:27   ` Olof Johansson
  2017-05-23  1:31     ` Randy Dunlap
  2017-05-23  4:49     ` Palmer Dabbelt
  2017-05-23  5:23   ` Olof Johansson
                     ` (2 subsequent siblings)
  3 siblings, 2 replies; 283+ messages in thread
From: Olof Johansson @ 2017-05-23  1:27 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: linux-kernel, Arnd Bergmann, albert

Hi,


On Mon, May 22, 2017 at 5:41 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> ---
>  arch/riscv/.gitignore                |  35 ++++
>  arch/riscv/Kconfig                   | 300 +++++++++++++++++++++++++++++++++++
>  arch/riscv/Makefile                  |  64 ++++++++
>  arch/riscv/configs/riscv32_spike     |  47 ++++++
>  arch/riscv/configs/riscv64_freedom-u |  52 ++++++
>  arch/riscv/configs/riscv64_qemu      |  64 ++++++++
>  arch/riscv/configs/riscv64_spike     |  45 ++++++
>  7 files changed, 607 insertions(+)
>  create mode 100644 arch/riscv/.gitignore
>  create mode 100644 arch/riscv/Kconfig
>  create mode 100644 arch/riscv/Makefile
>  create mode 100644 arch/riscv/configs/riscv32_spike
>  create mode 100644 arch/riscv/configs/riscv64_freedom-u
>  create mode 100644 arch/riscv/configs/riscv64_qemu
>  create mode 100644 arch/riscv/configs/riscv64_spike

Nearly all other platforms have _defconfig in the config names. It
might get a bit excessive to prepend riscv{32,64} to all of them
though; most platforms shorten such names, which here would give, for
example, spike_defconfig, spike64_defconfig, qemu_defconfig,
freedom-u_defconfig.

Not going to argue too much about the color of the shed here, but
using the _defconfig naming is recommended.

>
> diff --git a/arch/riscv/.gitignore b/arch/riscv/.gitignore
> new file mode 100644
> index 000000000000..376d06eb5d52
> --- /dev/null
> +++ b/arch/riscv/.gitignore
> @@ -0,0 +1,35 @@
> +# Now un-ignore all files.
> +!*
> +
> +# But then re-ignore the files listed in the Linux .gitignore
> +# Normal rules
> +#
> +.*
> +*.o
> +*.o.*
> +*.a
> +*.s
> +*.ko
> +*.so
> +*.so.dbg
> +*.mod.c
> +*.i
> +*.lst
> +*.symtypes
> +*.order
> +modules.builtin
> +*.elf
> +*.bin
> +*.gz
> +*.bz2
> +*.lzma
> +*.xz
> +*.lzo
> +*.patch
> +*.gcno

I don't think you need to do any of this, just inherit the global one
(by not having one here)?

> +
> +include/generated

This is already covered by the global .gitignore.

> +kernel/vmlinux.lds

And if needed this should be added to the arch/riscv/kernel/.gitignore
file instead.


> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> new file mode 100644
> index 000000000000..510ead1d3343
> --- /dev/null
> +++ b/arch/riscv/Kconfig
> @@ -0,0 +1,300 @@
> +#
> +# For a description of the syntax of this configuration file,
> +# see Documentation/kbuild/kconfig-language.txt.
> +#
> +
> +config RISCV
> +       def_bool y
> +       select OF
> +       select OF_EARLY_FLATTREE
> +       select OF_IRQ
> +       select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
> +       select ARCH_WANT_FRAME_POINTERS
> +       select CLONE_BACKWARDS
> +       select COMMON_CLK
> +       select GENERIC_CLOCKEVENTS
> +       select GENERIC_CPU_DEVICES
> +       select GENERIC_IRQ_SHOW
> +       select GENERIC_PCI_IOMAP
> +       select GENERIC_STRNCPY_FROM_USER
> +       select GENERIC_STRNLEN_USER
> +       select GENERIC_SMP_IDLE_THREAD
> +       select GENERIC_ATOMIC64 if !64BIT || !RV_ATOMIC
> +       select ARCH_WANT_OPTIONAL_GPIOLIB
> +       select HAVE_MEMBLOCK
> +       select HAVE_DMA_API_DEBUG
> +       select HAVE_DMA_CONTIGUOUS
> +       select HAVE_GENERIC_DMA_COHERENT
> +       select IRQ_DOMAIN
> +       select NO_BOOTMEM
> +       select RV_ATOMIC if SMP
> +       select RV_SYSRISCV_ATOMIC if !RV_ATOMIC
> +       select SPARSE_IRQ
> +       select SYSCTL_EXCEPTION_TRACE
> +       select HAVE_ARCH_TRACEHOOK
> +       select MODULES_USE_ELF_RELA if MODULES
> +
> +config MMU
> +       def_bool y
> +
> +# even on 32-bit, physical (and DMA) addresses are > 32-bits
> +config ARCH_PHYS_ADDR_T_64BIT
> +       def_bool y
> +
> +config ARCH_DMA_ADDR_T_64BIT
> +       def_bool y
> +
> +config STACKTRACE_SUPPORT
> +       def_bool y
> +
> +config RWSEM_GENERIC_SPINLOCK
> +       def_bool y
> +
> +config GENERIC_BUG
> +       def_bool y
> +       depends on BUG
> +       select GENERIC_BUG_RELATIVE_POINTERS if 64BIT
> +
> +config GENERIC_BUG_RELATIVE_POINTERS
> +       bool
> +
> +config GENERIC_CALIBRATE_DELAY
> +       def_bool y
> +
> +config GENERIC_CSUM
> +       def_bool y
> +
> +config GENERIC_HWEIGHT
> +       def_bool y
> +
> +config PGTABLE_LEVELS
> +       int
> +       default 3 if 64BIT
> +       default 2
> +
> +config HAVE_KPROBES
> +       def_bool n
> +
> +config DMA_NOOP_OPS
> +       def_bool y
> +
> +menu "Platform type"
> +
> +config SMP
> +       bool "Symmetric Multi-Processing"
> +       help
> +         This enables support for systems with more than one CPU.  If
> +         you say N here, the kernel will run on single and
> +         multiprocessor machines, but will use only one CPU of a
> +         multiprocessor machine. If you say Y here, the kernel will run
> +         on many, but not all, single processor machines. On a single
> +         processor machine, the kernel will run faster if you say N
> +         here.
> +
> +         If you don't know what to do here, say N.
> +
> +config NR_CPUS
> +       int "Maximum number of CPUs (2-32)"
> +       range 2 32
> +       depends on SMP
> +       default "8"
> +
> +choice
> +       prompt "CPU selection"
> +       default CPU_RV_GENERIC
> +
> +config CPU_RV_GENERIC
> +       bool "Generic RISC-V"
> +       select CPU_SUPPORTS_32BIT_KERNEL
> +       select CPU_SUPPORTS_64BIT_KERNEL

Is this even needed at this point? If the only CPU you can pick
supports this, you might as well not make it an option (CPU_RV_GENERIC
that is), and just make CPU_SUPPORTS_{32,64}BIT_KERNEL 'def_bool y'
for now.
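
For illustration, that boils down to (untested sketch):

  config CPU_SUPPORTS_32BIT_KERNEL
          def_bool y

  config CPU_SUPPORTS_64BIT_KERNEL
          def_bool y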

> +
> +endchoice
> +
> +config PLIC
> +       bool "Platform-Level Interrupt Controller"
> +       default y
> +       help
> +          This enables support for the PLIC chip found in standard RISC-V
> +          systems. The PLIC is the top-most interrupt controller found in
> +          the system, connected directly to the core complex. All other
> +          interrupt sources (MSI, GPIO, etc) are subordinate to the PLIC.
> +
> +          If you don't know what to do here, say Y.
> +
> +config CPU_SUPPORTS_32BIT_KERNEL
> +       bool
> +config CPU_SUPPORTS_64BIT_KERNEL
> +       bool
> +
> +config SBI_CONSOLE
> +       tristate "SBI console support"
> +       select TTY
> +       default y

Usually you end up having a DRIVER_FOO and DRIVER_FOO_CONSOLE option
to enable registering it as console.
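
For the SBI console that pattern would look something like this
(sketch only, symbol names are just placeholders):

  config SERIAL_RISCV_SBI
          tristate "RISC-V SBI console support"
          select TTY

  config SERIAL_RISCV_SBI_CONSOLE
          bool "Console on the RISC-V SBI device"
          depends on SERIAL_RISCV_SBI=y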

Also, unless there's strong reason to keep it under arch/, it should
probably go under drivers/tty/.

> +config RVC
> +       bool "Use compressed instructions (RV32C or RV64C)"
> +       default n

What does "use" here mean? Use during build, or allow userspace to use them?

> +
> +config RV_ATOMIC
> +       bool "Use atomic memory instructions (RV32A or RV64A)"
> +       default y

Same for this.

> +config RV_SYSRISCV_ATOMIC
> +       bool "Include support for atomic operation syscalls"
> +       default n
> +       help
> +         If atomic memory instructions are present, i.e.,
> +         CONFIG_RV_ATOMIC, this includes support for the syscall that
> +         provides atomic accesses.  This is only useful to run
> +         binaries that require atomic access but were compiled with
> +         -mno-atomic.
> +
> +         If CONFIG_RV_ATOMIC is unset, this option is mandatory.

If it's mandatory then Kconfig language should make it so.
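
E.g. hide the prompt and force the default when there's no choice
(untested sketch):

  config RV_SYSRISCV_ATOMIC
          bool "Include support for atomic operation syscalls" if RV_ATOMIC
          default y if !RV_ATOMIC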

> +config RV_PUM
> +       def_bool y
> +       prompt "Protect User Memory" if EXPERT
> +       ---help---
> +         Protect User Memory (PUM) prevents the kernel from inadvertently
> +         accessing user-space memory.  There is a small performance cost
> +         and kernel size increase if this is enabled.
> +
> +         If unsure, say Y.
> +
> +endmenu
> +
> +menu "Kernel type"
> +
> +choice
> +       prompt "Kernel code model"
> +       default 64BIT
> +
> +config 32BIT
> +       bool "32-bit kernel"
> +       depends on CPU_SUPPORTS_32BIT_KERNEL
> +       help
> +         Select this option to build a 32-bit kernel.
> +
> +config 64BIT
> +       bool "64-bit kernel"
> +       depends on CPU_SUPPORTS_64BIT_KERNEL
> +       help
> +         Select this option to build a 64-bit kernel.
> +
> +endchoice
> +
[...]


-Olof

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 7/7] RISC-V: arch/riscv/mm
  2017-05-23  0:41 ` [PATCH 7/7] RISC-V: arch/riscv/mm Palmer Dabbelt
@ 2017-05-23  1:28   ` Randy Dunlap
  2017-05-23  2:17     ` Olof Johansson
  0 siblings, 1 reply; 283+ messages in thread
From: Randy Dunlap @ 2017-05-23  1:28 UTC (permalink / raw)
  To: Palmer Dabbelt, linux-kernel, Arnd Bergmann, olof; +Cc: albert


> diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
> new file mode 100644
> index 000000000000..f02e286dd1c1
> --- /dev/null
> +++ b/arch/riscv/mm/fault.c
> @@ -0,0 +1,279 @@
> +/*
> + * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
> + *  Lennox Wu <lennox.wu@sunplusct.com>
> + *  Chen Liqin <liqin.chen@sunplusct.com>

Is "sunplusct.com" correct?  They don't have a web server?

-- 
~Randy

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
  2017-05-23  1:27   ` Olof Johansson
@ 2017-05-23  1:31     ` Randy Dunlap
  2017-05-23  4:49       ` Palmer Dabbelt
  2017-05-23  4:49     ` Palmer Dabbelt
  1 sibling, 1 reply; 283+ messages in thread
From: Randy Dunlap @ 2017-05-23  1:31 UTC (permalink / raw)
  To: Olof Johansson, Palmer Dabbelt; +Cc: linux-kernel, Arnd Bergmann, albert

On 05/22/17 18:27, Olof Johansson wrote:
> Hi,
> 
> 
> On Mon, May 22, 2017 at 5:41 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> ---
>>  arch/riscv/.gitignore                |  35 ++++
>>  arch/riscv/Kconfig                   | 300 +++++++++++++++++++++++++++++++++++
>>  arch/riscv/Makefile                  |  64 ++++++++
>>  arch/riscv/configs/riscv32_spike     |  47 ++++++
>>  arch/riscv/configs/riscv64_freedom-u |  52 ++++++
>>  arch/riscv/configs/riscv64_qemu      |  64 ++++++++
>>  arch/riscv/configs/riscv64_spike     |  45 ++++++
>>  7 files changed, 607 insertions(+)
>>  create mode 100644 arch/riscv/.gitignore
>>  create mode 100644 arch/riscv/Kconfig
>>  create mode 100644 arch/riscv/Makefile
>>  create mode 100644 arch/riscv/configs/riscv32_spike
>>  create mode 100644 arch/riscv/configs/riscv64_freedom-u
>>  create mode 100644 arch/riscv/configs/riscv64_qemu
>>  create mode 100644 arch/riscv/configs/riscv64_spike
> 
> Nearly all other platforms have _defconfig in the config names. It
> might get a bit excessive to prepend riscv{32,64} to all of them
> though. Most other platforms have shortened it to, for example,
> spike_defconfig, spike64_defconfig, qemu_defconfig,
> freedom-u_defconfig.
> 
> Not going to argue too much about the color of the shed here, but
> using the _defconfig naming is recommended.

well, the top-level Makefile looks for "make *config" to detect that a
configuration command is in progress, so the names usually have to end
in the string "config".

Have these been tested?


-- 
~Randy

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 6/7] RISC-V: arch/riscv/kernel
  2017-05-23  0:41 ` [PATCH 6/7] RISC-V: arch/riscv/kernel Palmer Dabbelt
@ 2017-05-23  2:11   ` Olof Johansson
  2017-05-25  1:59     ` Palmer Dabbelt
  2017-05-23 13:35   ` Arnd Bergmann
  2017-05-25 17:05   ` Pavel Machek
  2 siblings, 1 reply; 283+ messages in thread
From: Olof Johansson @ 2017-05-23  2:11 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: linux-kernel, Arnd Bergmann, albert

On Mon, May 22, 2017 at 5:41 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> ---
>  arch/riscv/kernel/Makefile         |  19 ++
>  arch/riscv/kernel/asm-offsets.c    | 113 ++++++++++
>  arch/riscv/kernel/cacheinfo.c      |  82 ++++++++
>  arch/riscv/kernel/cpu.c            |  81 ++++++++
>  arch/riscv/kernel/entry.S          | 414 +++++++++++++++++++++++++++++++++++++
>  arch/riscv/kernel/head.S           | 139 +++++++++++++
>  arch/riscv/kernel/irq.c            | 205 ++++++++++++++++++
>  arch/riscv/kernel/module.c         | 185 +++++++++++++++++
>  arch/riscv/kernel/pci.c            |  36 ++++
>  arch/riscv/kernel/plic.c           | 208 +++++++++++++++++++
>  arch/riscv/kernel/process.c        | 130 ++++++++++++
>  arch/riscv/kernel/ptrace.c         | 148 +++++++++++++
>  arch/riscv/kernel/reset.c          |  33 +++
>  arch/riscv/kernel/riscv_ksyms.c    |  16 ++
>  arch/riscv/kernel/sbi-con.c        | 214 +++++++++++++++++++
>  arch/riscv/kernel/setup.c          | 234 +++++++++++++++++++++
>  arch/riscv/kernel/signal.c         | 258 +++++++++++++++++++++++
>  arch/riscv/kernel/smp.c            | 107 ++++++++++
>  arch/riscv/kernel/smpboot.c        | 105 ++++++++++
>  arch/riscv/kernel/stacktrace.c     | 183 ++++++++++++++++
>  arch/riscv/kernel/sys_riscv.c      |  85 ++++++++
>  arch/riscv/kernel/syscall_table.c  |  26 +++
>  arch/riscv/kernel/time.c           | 116 +++++++++++
>  arch/riscv/kernel/traps.c          | 167 +++++++++++++++
>  arch/riscv/kernel/vdso.c           | 125 +++++++++++
>  arch/riscv/kernel/vdso/.gitignore  |   1 +
>  arch/riscv/kernel/vdso/Makefile    |  61 ++++++
>  arch/riscv/kernel/vdso/sigreturn.S |  25 +++
>  arch/riscv/kernel/vdso/vdso.S      |  28 +++
>  arch/riscv/kernel/vdso/vdso.lds.S  |  77 +++++++
>  arch/riscv/kernel/vmlinux.lds.S    |  93 +++++++++
>  31 files changed, 3714 insertions(+)
>  create mode 100644 arch/riscv/kernel/Makefile
>  create mode 100644 arch/riscv/kernel/asm-offsets.c
>  create mode 100644 arch/riscv/kernel/cacheinfo.c
>  create mode 100644 arch/riscv/kernel/cpu.c
>  create mode 100644 arch/riscv/kernel/entry.S
>  create mode 100644 arch/riscv/kernel/head.S
>  create mode 100644 arch/riscv/kernel/irq.c
>  create mode 100644 arch/riscv/kernel/module.c
>  create mode 100644 arch/riscv/kernel/pci.c
>  create mode 100644 arch/riscv/kernel/plic.c
>  create mode 100644 arch/riscv/kernel/process.c
>  create mode 100644 arch/riscv/kernel/ptrace.c
>  create mode 100644 arch/riscv/kernel/reset.c
>  create mode 100644 arch/riscv/kernel/riscv_ksyms.c
>  create mode 100644 arch/riscv/kernel/sbi-con.c
>  create mode 100644 arch/riscv/kernel/setup.c
>  create mode 100644 arch/riscv/kernel/signal.c
>  create mode 100644 arch/riscv/kernel/smp.c
>  create mode 100644 arch/riscv/kernel/smpboot.c
>  create mode 100644 arch/riscv/kernel/stacktrace.c
>  create mode 100644 arch/riscv/kernel/sys_riscv.c
>  create mode 100644 arch/riscv/kernel/syscall_table.c
>  create mode 100644 arch/riscv/kernel/time.c
>  create mode 100644 arch/riscv/kernel/traps.c
>  create mode 100644 arch/riscv/kernel/vdso.c
>  create mode 100644 arch/riscv/kernel/vdso/.gitignore
>  create mode 100644 arch/riscv/kernel/vdso/Makefile
>  create mode 100644 arch/riscv/kernel/vdso/sigreturn.S
>  create mode 100644 arch/riscv/kernel/vdso/vdso.S
>  create mode 100644 arch/riscv/kernel/vdso/vdso.lds.S
>  create mode 100644 arch/riscv/kernel/vmlinux.lds.S

What's missing from this patchset (ideally) is a good writeup under
Documentation/ on the expected system state (and/or configuration) at
kernel entry. For comparison, see the arm64 documentation, which is
quite specific about this.


This patch is also pushing size limits, and is getting unwieldy to
comment on. I'll point out a few things below with plenty of snipped
out lines.

>
> diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
> new file mode 100644
> index 000000000000..94ac2931c56a
> --- /dev/null
> +++ b/arch/riscv/kernel/Makefile
> @@ -0,0 +1,19 @@
> +#
> +# Makefile for the RISC-V Linux kernel
> +#
> +
> +extra-y := head.o vmlinux.lds
> +
> +obj-y  := cpu.o entry.o irq.o process.o ptrace.o reset.o setup.o \
> +          signal.o syscall_table.o sys_riscv.o time.o traps.o \
> +          riscv_ksyms.o stacktrace.o vdso.o cacheinfo.o vdso/
> +
> +CFLAGS_setup.o := -mcmodel=medany
> +
> +obj-$(CONFIG_SMP)              += smpboot.o smp.o
> +obj-$(CONFIG_SBI_CONSOLE)      += sbi-con.o
> +obj-$(CONFIG_PCI)              += pci.o
> +obj-$(CONFIG_MODULES)          += module.o
> +obj-$(CONFIG_PLIC)             += plic.o
> +
> +clean:
> diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
> new file mode 100644
> index 000000000000..ac2e0cfaf8a3
> --- /dev/null
> +++ b/arch/riscv/kernel/asm-offsets.c
> @@ -0,0 +1,113 @@
> +/*
> + * Copyright (C) 2012 Regents of the University of California
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful, but
> + *   WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
> + *   NON INFRINGEMENT.  See the GNU General Public License for
> + *   more details.

Hmm, I haven't seen these terms used often, but they seem to exist
around the tree in a few places. arch/tile is littered with them.

I am not a lawyer, but I can't find any reference to "good title" in
the GPLv2 text.

Rather than having to go through the process of figuring out if this
license header is acceptable or not, you might find it easier to just
go with something more established.

> diff --git a/arch/riscv/kernel/cacheinfo.c b/arch/riscv/kernel/cacheinfo.c
> new file mode 100644
> index 000000000000..a22ea8abbf3c
> --- /dev/null
> +++ b/arch/riscv/kernel/cacheinfo.c
> @@ -0,0 +1,82 @@
> +/*
> + * Copyright (C) 2017 SiFive
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful, but
> + *   WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
> + *   NON INFRINGEMENT.  See the GNU General Public License for
> + *   more details.
> + */
> +
> +#include <linux/cacheinfo.h>
> +#include <linux/cpu.h>
> +#include <linux/of.h>
> +#include <linux/of_device.h>
> +
> +static void ci_leaf_init(struct cacheinfo *this_leaf,
> +                         struct device_node *node,
> +                         enum cache_type type, unsigned int level)
> +{
> +        this_leaf->of_node = node;
> +        this_leaf->level = level;
> +        this_leaf->type = type;
> +        this_leaf->physical_line_partition = 1; // not a sector cache
> +        this_leaf->attributes = CACHE_WRITE_BACK | CACHE_READ_ALLOCATE | CACHE_WRITE_ALLOCATE; // TODO: add to DTS
> +}
> +
> +static int __init_cache_level(unsigned int cpu)
> +{
> +       struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
> +       struct device_node *np = of_cpu_device_node_get(cpu);
> +       int levels = 0, leaves = 0, level;
> +
> +       if (of_property_read_bool(np, "cache-size")) ++leaves;
> +       if (of_property_read_bool(np, "i-cache-size")) ++leaves;
> +       if (of_property_read_bool(np, "d-cache-size")) ++leaves;
> +       if (leaves > 0) levels = 1;
> +
> +       while ((np = of_find_next_cache_node(np))) {
> +               if (!of_device_is_compatible(np, "cache")) break;
> +               if (of_property_read_u32(np, "cache-level", &level)) break;
> +               if (level <= levels) break;
> +               if (of_property_read_bool(np, "cache-size")) ++leaves;
> +               if (of_property_read_bool(np, "i-cache-size")) ++leaves;
> +               if (of_property_read_bool(np, "d-cache-size")) ++leaves;
> +               levels = level;
> +       }
> +
> +       this_cpu_ci->num_levels = levels;
> +       this_cpu_ci->num_leaves = leaves;
> +       return 0;
> +}
> +
> +static int __populate_cache_leaves(unsigned int cpu)
> +{
> +       struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
> +       struct cacheinfo *this_leaf = this_cpu_ci->info_list;
> +       struct device_node *np = of_cpu_device_node_get(cpu);
> +       int levels = 1, level = 1;
> +
> +       if (of_property_read_bool(np, "cache-size"))   ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
> +       if (of_property_read_bool(np, "i-cache-size")) ci_leaf_init(this_leaf++, np, CACHE_TYPE_INST, level);
> +       if (of_property_read_bool(np, "d-cache-size")) ci_leaf_init(this_leaf++, np, CACHE_TYPE_DATA, level);

Please run checkpatch; kernel coding style doesn't use one-line ifs
(here or elsewhere).
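
I.e. each of these wants to become something like:

    if (of_property_read_bool(np, "cache-size"))
            ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);

and so on for the other checks.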

> +
> +       while ((np = of_find_next_cache_node(np))) {
> +               if (!of_device_is_compatible(np, "cache")) break;
> +               if (of_property_read_u32(np, "cache-level", &level)) break;
> +               if (level <= levels) break;
> +               if (of_property_read_bool(np, "cache-size"))   ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
> +               if (of_property_read_bool(np, "i-cache-size")) ci_leaf_init(this_leaf++, np, CACHE_TYPE_INST, level);
> +               if (of_property_read_bool(np, "d-cache-size")) ci_leaf_init(this_leaf++, np, CACHE_TYPE_DATA, level);
> +               levels = level;
> +       }
> +
> +       return 0;
> +}
> +
> +DEFINE_SMP_CALL_CACHE_FUNCTION(init_cache_level)
> +DEFINE_SMP_CALL_CACHE_FUNCTION(populate_cache_leaves)
> diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
> new file mode 100644
> index 000000000000..9cbf53eb58be
> --- /dev/null
> +++ b/arch/riscv/kernel/cpu.c
> @@ -0,0 +1,81 @@
> +/*
> + * Copyright (C) 2012 Regents of the University of California
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful, but
> + *   WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
> + *   NON INFRINGEMENT.  See the GNU General Public License for
> + *   more details.
> + */
> +
> +#include <linux/init.h>
> +#include <linux/seq_file.h>
> +#include <linux/of.h>
> +
> +/* Return -1 if not a valid hart */
> +int riscv_of_processor_hart(struct device_node *node)
> +{
> +       const char *isa, *status;
> +       u32 hart;
> +
> +       if (!of_device_is_compatible(node, "riscv")) return -1;
> +       if (of_property_read_u32(node, "reg", &hart) || hart >= NR_CPUS) return -1;
> +       if (of_property_read_string(node, "status", &status) || strcmp(status, "okay")) return -1;
> +       if (of_property_read_string(node, "riscv,isa", &isa) || isa[0] != 'r' || isa[1] != 'v') return -1;
> +
> +       return hart;
> +}

We usually prefer to see real -E<foo> returns instead of -1 in the kernel.
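
E.g. (picking -ENODEV arbitrarily; callers that just check for a
negative value keep working):

    int riscv_of_processor_hart(struct device_node *node)
    {
            const char *isa, *status;
            u32 hart;

            if (!of_device_is_compatible(node, "riscv"))
                    return -ENODEV;
            if (of_property_read_u32(node, "reg", &hart) || hart >= NR_CPUS)
                    return -ENODEV;
            if (of_property_read_string(node, "status", &status) ||
                strcmp(status, "okay"))
                    return -ENODEV;
            if (of_property_read_string(node, "riscv,isa", &isa) ||
                isa[0] != 'r' || isa[1] != 'v')
                    return -ENODEV;

            return hart;
    }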

[...]

> diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
> new file mode 100644
> index 000000000000..52d574206d76
> --- /dev/null
> +++ b/arch/riscv/kernel/head.S
> @@ -0,0 +1,139 @@
> +/*
> + * Copyright (C) 2012 Regents of the University of California
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful, but
> + *   WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
> + *   NON INFRINGEMENT.  See the GNU General Public License for
> + *   more details.
> + */
> +
> +#include <asm/thread_info.h>
> +#include <asm/asm-offsets.h>
> +#include <asm/asm.h>
> +#include <linux/init.h>
> +#include <linux/linkage.h>
> +#include <asm/thread_info.h>
> +#include <asm/page.h>
> +#include <asm/csr.h>
> +
> +__INIT
> +ENTRY(_start)
> +       /* Mask all interrupts */
> +       csrw sie, zero
> +
> +       /* Disable FPU to detect illegal usage of
> +          floating point in kernel space */
> +       li t0, SR_FS
> +       csrc sstatus, t0
> +
> +#ifndef CONFIG_RV_PUM
> +       /* Allow access to user memory */
> +       li t0, SR_SUM
> +       csrs sstatus, t0
> +#endif
> +
> +       /* Pick one hart to run the main boot sequence */
> +       la a3, hart_lottery
> +       li a2, 1
> +       amoadd.w a3, a2, (a3)
> +       bnez a3, .Lsecondary_start
> +
> +       /* Save hart ID and DTB physical address */
> +       mv s0, a0
> +       mv s1, a1
> +
> +       /* Initialize page tables and relocate to virtual addresses */
> +       la sp, init_thread_union + THREAD_SIZE
> +       call setup_vm
> +       call relocate
> +
> +       /* Restore C environment */
> +       la tp, init_thread_union
> +       li sp, THREAD_SIZE
> +       add sp, sp, tp
> +
> +       /* Start the kernel */
> +       mv a0, s0
> +       mv a1, s1
> +       call sbi_save
> +       tail start_kernel
> +
> +relocate:
> +       /* Relocate return address */
> +       li a1, PAGE_OFFSET
> +       la a0, _start
> +       sub a1, a1, a0
> +       add ra, ra, a1
> +
> +       /* Point stvec to virtual address of intruction after sptbr write */
> +       la a0, 1f
> +       add a0, a0, a1
> +       csrw stvec, a0
> +
> +       /* Compute sptbr for kernel page tables, but don't load it yet */
> +       la a2, swapper_pg_dir
> +       srl a2, a2, PAGE_SHIFT
> +       li a1, SPTBR_MODE
> +       or a2, a2, a1
> +
> +       /* Load trampoline page directory, which will cause us to trap to
> +          stvec if VA != PA, or simply fall through if VA == PA */
> +       la a0, trampoline_pg_dir
> +       srl a0, a0, PAGE_SHIFT
> +       or a0, a0, a1
> +       sfence.vma
> +       csrw sptbr, a0
> +1:
> +       /* Set trap vector to spin forever to help debug */
> +       la a0, .Lsecondary_park
> +       csrw stvec, a0
> +
> +       /* Load the global pointer */
> +       la gp, __global_pointer$
> +
> +       /* Switch to kernel page tables */
> +       csrw sptbr, a2
> +
> +       ret
> +
> +.Lsecondary_start:
> +#ifdef CONFIG_SMP
> +       li a1, CONFIG_NR_CPUS
> +       bgeu a0, a1, .Lsecondary_park
> +
> +       la a1, __cpu_up_stack_pointer
> +       slli a0, a0, LGREG
> +       add a0, a0, a1
> +
> +.Lwait_for_cpu_up:
> +       REG_L sp, (a0)
> +       beqz sp, .Lwait_for_cpu_up
> +       fence
> +
> +       /* Enable virtual memory and relocate to virtual address */
> +       call relocate
> +
> +       /* Initialize task_struct pointer */
> +       li tp, -THREAD_SIZE
> +       add tp, tp, sp
> +
> +       tail smp_callin
> +#endif
> +
> +.Lsecondary_park:
> +       /* We lack SMP support or have too many harts, so park this hart */
> +       wfi
> +       j .Lsecondary_park
> +END(_start)
> +
> +__PAGE_ALIGNED_BSS
> +       /* Empty zero page */
> +       .balign PAGE_SIZE
> +ENTRY(empty_zero_page)
> +       .fill (empty_zero_page + PAGE_SIZE) - ., 1, 0x00
> +END(empty_zero_page)
> diff --git a/arch/riscv/kernel/irq.c b/arch/riscv/kernel/irq.c
> new file mode 100644
> index 000000000000..b772bb9539cf
> --- /dev/null
> +++ b/arch/riscv/kernel/irq.c
> @@ -0,0 +1,205 @@
> +/*
> + * Copyright (C) 2012 Regents of the University of California
> + * Copyright (C) 2017 SiFive
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful, but
> + *   WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
> + *   NON INFRINGEMENT.  See the GNU General Public License for
> + *   more details.
> + */
> +
> +#include <linux/irq.h>
> +#include <linux/irqchip.h>
> +#include <linux/irqdomain.h>
> +#include <linux/interrupt.h>
> +#include <linux/ftrace.h>
> +#include <linux/of.h>
> +#include <linux/seq_file.h>
> +
> +#include <asm/ptrace.h>
> +#include <asm/sbi.h>
> +#include <asm/smp.h>
> +
> +struct riscv_irq_data {
> +       struct irq_chip         chip;
> +       struct irq_domain       *domain;
> +       int                     hart;
> +       char                    name[20];
> +};
> +DEFINE_PER_CPU(struct riscv_irq_data, riscv_irq_data);
> +DEFINE_PER_CPU(atomic_long_t, riscv_early_sie);
> +
> +static void riscv_software_interrupt(void)
> +{
> +#ifdef CONFIG_SMP
> +       irqreturn_t ret;
> +
> +       ret = handle_ipi();
> +       if (ret != IRQ_NONE)
> +               return;
> +#endif
> +
> +       BUG();
> +}
> +
> +asmlinkage void __irq_entry do_IRQ(unsigned int cause, struct pt_regs *regs)
> +{
> +       struct pt_regs *old_regs = set_irq_regs(regs);
> +       irq_enter();
> +
> +       /* There are three classes of interrupt: timer, software, and
> +          external devices.  We dispatch between them here.  External
> +          device interrupts use the generic IRQ mechanisms. */
> +       switch (cause) {
> +               case INTERRUPT_CAUSE_TIMER:
> +                       riscv_timer_interrupt();
> +                       break;
> +               case INTERRUPT_CAUSE_SOFTWARE:
> +                       riscv_software_interrupt();
> +                       break;
> +               default: {
> +                       struct irq_domain *domain = per_cpu(riscv_irq_data, smp_processor_id()).domain;

Move this up to the top of the function and remove the { } wrap, please.
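
E.g. something along these lines (untested):

    asmlinkage void __irq_entry do_IRQ(unsigned int cause, struct pt_regs *regs)
    {
            struct pt_regs *old_regs = set_irq_regs(regs);
            struct irq_domain *domain;

            irq_enter();

            switch (cause) {
            case INTERRUPT_CAUSE_TIMER:
                    riscv_timer_interrupt();
                    break;
            case INTERRUPT_CAUSE_SOFTWARE:
                    riscv_software_interrupt();
                    break;
            default:
                    domain = per_cpu(riscv_irq_data, smp_processor_id()).domain;
                    generic_handle_irq(irq_find_mapping(domain, cause));
                    break;
            }

            irq_exit();
            set_irq_regs(old_regs);
    }

(the case labels also want to line up with the switch, as above)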

> +                       generic_handle_irq(irq_find_mapping(domain, cause));
> +                       break;
> +               }
> +       }
> +
> +       irq_exit();
> +       set_irq_regs(old_regs);
> +}
> +
> +static int riscv_irqdomain_map(struct irq_domain *d, unsigned int irq, irq_hw_number_t hwirq)
> +{
> +       struct riscv_irq_data *data = d->host_data;
> +
> +        irq_set_chip_and_handler(irq, &data->chip, handle_simple_irq);
> +        irq_set_chip_data(irq, data);
> +        irq_set_noprobe(irq);
> +
> +        return 0;
> +}
> +
> +static const struct irq_domain_ops riscv_irqdomain_ops = {
> +       .map    = riscv_irqdomain_map,
> +       .xlate  = irq_domain_xlate_onecell,
> +};
> +
> +static void riscv_irq_mask(struct irq_data *d)
> +{
> +       struct riscv_irq_data *data = irq_data_get_irq_chip_data(d);
> +       BUG_ON(smp_processor_id() != data->hart);
> +       csr_clear(sie, 1 << (long)d->hwirq);
> +}
> +
> +static void riscv_irq_unmask(struct irq_data *d)
> +{
> +       struct riscv_irq_data *data = irq_data_get_irq_chip_data(d);
> +       BUG_ON(smp_processor_id() != data->hart);
> +       csr_set(sie, 1 << (long)d->hwirq);
> +}
> +
> +static void riscv_irq_enable_helper(void *d)
> +{
> +       riscv_irq_unmask(d);
> +}
> +
> +static void riscv_irq_enable(struct irq_data *d)
> +{
> +       struct riscv_irq_data *data = irq_data_get_irq_chip_data(d);
> +       atomic_long_or((1 << (long)d->hwirq), &per_cpu(riscv_early_sie, data->hart));

This is a bit dense to follow without a few words on how it's
expected to work.

> +       if (data->hart == smp_processor_id()) {
> +               riscv_irq_unmask(d);
> +       } else if (cpu_online(data->hart)) {
> +               smp_call_function_single(data->hart, riscv_irq_enable_helper, d, true);
> +       }
> +}
> +
> +static void riscv_irq_disable_helper(void *d)
> +{
> +       riscv_irq_mask(d);
> +}
> +
> +static void riscv_irq_disable(struct irq_data *d)
> +{
> +       struct riscv_irq_data *data = irq_data_get_irq_chip_data(d);
> +       atomic_long_and(~(1 << (long)d->hwirq), &per_cpu(riscv_early_sie, data->hart));
> +       if (data->hart == smp_processor_id()) {
> +               riscv_irq_mask(d);
> +       } else if (cpu_online(data->hart)) {
> +               smp_call_function_single(data->hart, riscv_irq_disable_helper, d, true);
> +       }
> +}
> +
> +static void riscv_irq_mask_noop(struct irq_data *d) { }
> +
> +static void riscv_irq_unmask_noop(struct irq_data *d) { }
> +
> +static void riscv_irq_enable_noop(struct irq_data *d)
> +{
> +       struct device_node *data = irq_data_get_irq_chip_data(d);
> +       u32 hart;
> +
> +       if (!of_property_read_u32(data, "reg", &hart)) {
> +               printk("WARNING: enabled interrupt %d for missing hart %d (this interrupt has no handler)\n", (int)d->hwirq, hart);
> +       }
> +}
> +
> +static struct irq_chip riscv_noop_chip = {
> +       .name = "riscv,cpu-intc,noop",
> +       .irq_mask = riscv_irq_mask_noop,
> +       .irq_unmask = riscv_irq_unmask_noop,
> +       .irq_enable = riscv_irq_enable_noop,
> +};
> +
> +static int riscv_irqdomain_map_noop(struct irq_domain *d, unsigned int irq, irq_hw_number_t hwirq)
> +{
> +       struct device_node *data = d->host_data;
> +       irq_set_chip_and_handler(irq, &riscv_noop_chip, handle_simple_irq);
> +       irq_set_chip_data(irq, data);
> +       return 0;
> +}
> +
> +static const struct irq_domain_ops riscv_irqdomain_ops_noop = {
> +       .map    = riscv_irqdomain_map_noop,
> +       .xlate  = irq_domain_xlate_onecell,
> +};
> +
> +static int riscv_intc_init(struct device_node *node, struct device_node *parent)
> +{
> +       int hart;
> +
> +       if (parent) return 0; // should have no interrupt parent
> +
> +       if ((hart = riscv_of_processor_hart(node->parent)) >= 0) {

A common pattern in the kernel is to detect the error first instead:
    hart = riscv_of_processor_hart(node->parent);
    if (hart < 0) {
        <from your else side here>
        return 0;
    }
    <body of your if statement here>

    return 0;
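
Concretely, for this function that would come out roughly as (untested):

    static int riscv_intc_init(struct device_node *node, struct device_node *parent)
    {
            struct riscv_irq_data *data;
            int hart;

            if (parent)
                    return 0;       /* should have no interrupt parent */

            hart = riscv_of_processor_hart(node->parent);
            if (hart < 0) {
                    /* disabled hart: fall back to the no-op domain */
                    irq_domain_add_linear(node, 8 * sizeof(uintptr_t),
                                          &riscv_irqdomain_ops_noop,
                                          node->parent);
                    return 0;
            }

            data = &per_cpu(riscv_irq_data, hart);
            /* ... rest of the enabled-hart setup as in your patch ... */
            return 0;
    }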

> +               struct riscv_irq_data *data = &per_cpu(riscv_irq_data, hart);
> +               snprintf(data->name, sizeof(data->name), "riscv,cpu_intc,%d", hart);
> +               data->hart = hart;
> +               data->chip.name = data->name;
> +               data->chip.irq_mask = riscv_irq_mask;
> +               data->chip.irq_unmask = riscv_irq_unmask;
> +               data->chip.irq_enable = riscv_irq_enable;
> +               data->chip.irq_disable = riscv_irq_disable;
> +               data->domain = irq_domain_add_linear(node, 8*sizeof(uintptr_t), &riscv_irqdomain_ops, data);
> +               WARN_ON(!data->domain);
> +               printk("%s: %d local interrupts mapped\n", data->name, 8*(int)sizeof(uintptr_t));
> +       } else {
> +               /* If a hart is disabled, create a no-op irq domain.
> +                * Devices may still have interrupts connected to those harts.
> +                * This is not wrong... unless they actually load a driver that needs it!
> +                */
> +               irq_domain_add_linear(node, 8*sizeof(uintptr_t), &riscv_irqdomain_ops_noop, node->parent);
> +       }
> +       return 0;
> +}
> +
> +IRQCHIP_DECLARE(riscv, "riscv,cpu-intc", riscv_intc_init);
> +
> +void __init init_IRQ(void)
> +{
> +       irqchip_init();
> +}

> diff --git a/arch/riscv/kernel/pci.c b/arch/riscv/kernel/pci.c
> new file mode 100644
> index 000000000000..4191a5ffdd67
> --- /dev/null
> +++ b/arch/riscv/kernel/pci.c
> @@ -0,0 +1,36 @@
> +/*
> + * Code borrowed from arch/arm64/kernel/pci.c

So, you should add the recursive reference from there (i.e. powerpc)
as well.

But in the end, there's essentially no code in this file. :)


> + *
> + * Copyright (C) 2003 Anton Blanchard <anton@au.ibm.com>, IBM
> + * Copyright (C) 2014 ARM Ltd.
> + * Copyright (C) 2017 SiFive
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License
> + * version 2 as published by the Free Software Foundation.
> + *
> + */
> +
> +#include <linux/init.h>
> +#include <linux/io.h>
> +#include <linux/kernel.h>
> +#include <linux/mm.h>
> +#include <linux/slab.h>
> +#include <linux/pci.h>
> +
> +/*
> + * Called after each bus is probed, but before its children are examined
> + */
> +void pcibios_fixup_bus(struct pci_bus *bus)
> +{
> +       /* nothing to do, expected to be removed in the future */
> +}
> +
> +/*
> + * We don't have to worry about legacy ISA devices, so nothing to do here
> + */
> +resource_size_t pcibios_align_resource(void *data, const struct resource *res,
> +                               resource_size_t size, resource_size_t align)
> +{
> +       return res->start;
> +}
> diff --git a/arch/riscv/kernel/plic.c b/arch/riscv/kernel/plic.c
> new file mode 100644
> index 000000000000..5b3d4241f4e2
> --- /dev/null
> +++ b/arch/riscv/kernel/plic.c
> @@ -0,0 +1,208 @@
> +/*
> + * Copyright (C) 2017 SiFive
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful, but
> + *   WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
> + *   NON INFRINGEMENT.  See the GNU General Public License for
> + *   more details.
> + */
> +
> +#include <linux/interrupt.h>
> +#include <linux/io.h>
> +#include <linux/irq.h>
> +#include <linux/irqchip.h>
> +#include <linux/irqchip/chained_irq.h>
> +#include <linux/irqdomain.h>
> +#include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/of_address.h>
> +#include <linux/of_irq.h>
> +#include <linux/platform_device.h>
> +
> +#define MAX_DEVICES    1024 // 0 is reserved

Seems like an odd comment to have here (and it should probably not go
at the end of the line).
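
E.g.:

    /* 0 is reserved */
    #define MAX_DEVICES     1024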

> +#define MAX_CONTEXTS   15872
> +
> +#define PRIORITY_BASE  0
> +#define ENABLE_BASE    0x2000
> +#define ENABLE_SIZE    0x80
> +#define HART_BASE      0x200000
> +#define HART_SIZE      0x1000
> +
> +#define PLIC_HART_CONTEXT(data, i)     (struct plic_hart_context *)((char*)data->reg + HART_BASE + HART_SIZE*i)
> +#define PLIC_ENABLE_CONTEXT(data, i)   (struct plic_enable_context *)((char*)data->reg + ENABLE_BASE + ENABLE_SIZE*i)
> +#define PLIC_PRIORITY(data)            (struct plic_priority *)((char *)data->reg + PRIORITY_BASE)

Since you have typecasting and such here, small static inlines with
appropriate return types seem slightly tidier.
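
E.g. (sketch):

    static inline struct plic_hart_context *
    plic_hart_context(struct plic_data *data, int i)
    {
            return (struct plic_hart_context *)((char *)data->reg
                                                + HART_BASE + HART_SIZE * i);
    }

and similarly for the enable-context and priority accessors.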

> +
> +struct plic_hart_context {
> +       volatile u32 threshold;
> +       volatile u32 claim;
> +};
> +
> +struct plic_enable_context {
> +       atomic_t mask[32]; // 32-bit * 32-entry
> +};
> +
> +struct plic_priority {
> +       volatile u32 prio[MAX_DEVICES];
> +};
> +
> +struct plic_data {
> +       struct irq_chip         chip;
> +       struct irq_domain       *domain;
> +       u32                     ndev;
> +       void __iomem            *reg;
> +       int                     handlers;
> +       struct plic_handler     *handler;
> +       char                    name[30];
> +};
> +
> +struct plic_handler {
> +       struct plic_hart_context        *context;
> +       struct plic_data                *data;
> +};
> +
> +static void plic_disable(struct plic_data *data, int i, int hwirq)
> +{
> +       struct plic_enable_context *enable = PLIC_ENABLE_CONTEXT(data, i);
> +       atomic_and(~(1 << (hwirq % 32)), &enable->mask[hwirq / 32]);
> +}
> +
> +static void plic_enable(struct plic_data *data, int i, int hwirq)
> +{
> +       struct plic_enable_context *enable = PLIC_ENABLE_CONTEXT(data, i);
> +       atomic_or((1 << (hwirq % 32)), &enable->mask[hwirq / 32]);
> +}
> +
> +// There is no need to mask/unmask PLIC interrupts
> +// They are "masked" by reading claim and "unmasked" when writing it back.
> +static void plic_irq_mask(struct irq_data *d) { }
> +static void plic_irq_unmask(struct irq_data *d) { }
> +
> +static void plic_irq_enable(struct irq_data *d)
> +{
> +       struct plic_data *data = irq_data_get_irq_chip_data(d);
> +       struct plic_priority *priority = PLIC_PRIORITY(data);
> +       int i;
> +       iowrite32(1, &priority->prio[d->hwirq]);
> +       for (i = 0; i < data->handlers; ++i)
> +               if (data->handler[i].context)
> +                       plic_enable(data, i, d->hwirq);
> +}
> +
> +static void plic_irq_disable(struct irq_data *d)
> +{
> +       struct plic_data *data = irq_data_get_irq_chip_data(d);
> +       struct plic_priority *priority = PLIC_PRIORITY(data);
> +       int i;
> +       iowrite32(0, &priority->prio[d->hwirq]);
> +       for (i = 0; i < data->handlers; ++i)
> +               if (data->handler[i].context)
> +                       plic_disable(data, i, d->hwirq);
> +}
> +
> +static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq, irq_hw_number_t hwirq)
> +{
> +       struct plic_data *data = d->host_data;
> +
> +        irq_set_chip_and_handler(irq, &data->chip, handle_simple_irq);
> +        irq_set_chip_data(irq, data);
> +        irq_set_noprobe(irq);
> +
> +        return 0;
> +}
> +
> +static const struct irq_domain_ops plic_irqdomain_ops = {
> +       .map    = plic_irqdomain_map,
> +       .xlate  = irq_domain_xlate_onecell,
> +};
> +
> +static void plic_chained_handle_irq(struct irq_desc *desc)
> +{
> +        struct plic_handler *handler = irq_desc_get_handler_data(desc);
> +       struct irq_chip *chip = irq_desc_get_chip(desc);

Whitespace.

> +       struct irq_domain *domain = handler->data->domain;
> +       u32 what;
> +
> +       chained_irq_enter(chip, desc);
> +
> +       while ((what = ioread32(&handler->context->claim))) {
> +               int irq = irq_find_mapping(domain, what);
> +               if (irq > 0) {
> +                       generic_handle_irq(irq);
> +               } else {
> +                       handle_bad_irq(desc);
> +               }
> +               iowrite32(what, &handler->context->claim);
> +       }
> +
> +       chained_irq_exit(chip, desc);
> +}
> +
> +// TODO: add a /sys interface to set priority + per-hart enables for steering
> +
> +static int plic_init(struct device_node *node, struct device_node *parent)
> +{
> +       struct plic_data *data;
> +       struct resource resource;
> +       int i, ok = 0;
> +
> +       data = kzalloc(sizeof(*data), GFP_KERNEL);
> +       if (WARN_ON(!data)) return -ENOMEM;
> +
> +       data->reg = of_iomap(node, 0);
> +       if (WARN_ON(!data->reg)) return -EIO;
> +
> +       of_property_read_u32(node, "riscv,ndev", &data->ndev);
> +       if (WARN_ON(!data->ndev)) return -EINVAL;
> +
> +       data->handlers = of_irq_count(node);
> +       if (WARN_ON(!data->handlers)) return -EINVAL;
> +
> +       data->handler = kzalloc(sizeof(*data->handler)*data->handlers, GFP_KERNEL);
> +       if (WARN_ON(!data->handler)) return -ENOMEM;
> +
> +       data->domain = irq_domain_add_linear(node, data->ndev+1, &plic_irqdomain_ops, data);
> +       if (WARN_ON(!data->domain)) return -ENOMEM;
> +
> +       of_address_to_resource(node, 0, &resource);
> +       snprintf(data->name, sizeof(data->name), "riscv,plic0,%llx", resource.start);
> +       data->chip.name = data->name;
> +       data->chip.irq_mask = plic_irq_mask;
> +       data->chip.irq_unmask = plic_irq_unmask;
> +       data->chip.irq_enable = plic_irq_enable;
> +       data->chip.irq_disable = plic_irq_disable;
> +
> +       for (i = 0; i < data->handlers; ++i) {
> +               struct plic_handler *handler = &data->handler[i];
> +               struct of_phandle_args parent;
> +               int parent_irq, hwirq;
> +
> +               if (of_irq_parse_one(node, i, &parent)) continue;
> +               if (parent.args[0] == -1) continue; // skip context holes
> +
> +               // skip any contexts that lead to inactive harts
> +               if (of_device_is_compatible(parent.np, "riscv,cpu-intc") &&
> +                   parent.np->parent &&
> +                   riscv_of_processor_hart(parent.np->parent) < 0) continue;
> +
> +               parent_irq = irq_create_of_mapping(&parent);
> +               if (!parent_irq) continue;
> +
> +               handler->context = PLIC_HART_CONTEXT(data, i);
> +               handler->data = data;
> +               iowrite32(0, &handler->context->threshold); // hwirq prio must be > this to trigger an interrupt
> +               for (hwirq = 1; hwirq <= data->ndev; ++hwirq) plic_disable(data, i, hwirq);
> +               irq_set_chained_handler_and_data(parent_irq, plic_chained_handle_irq, handler);
> +               ++ok;
> +       }
> +
> +       printk("%s: mapped %d interrupts to %d/%d handlers\n", data->name, data->ndev, ok, data->handlers);
> +       WARN_ON(!ok);
> +       return 0;
> +}
> +
> +IRQCHIP_DECLARE(plic0, "riscv,plic0", plic_init);


[wrapping up review of this patch at this point to keep size down]


-Olof

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: RISC-V Linux Port v1
  2017-05-23  0:41 RISC-V Linux Port v1 Palmer Dabbelt
                   ` (7 preceding siblings ...)
  2017-05-23  1:16 ` RISC-V Linux Port v1 Olof Johansson
@ 2017-05-23  2:16 ` Randy Dunlap
  2017-05-23  4:49   ` Palmer Dabbelt
  2017-06-06 22:59   ` Palmer Dabbelt
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 283+ messages in thread
From: Randy Dunlap @ 2017-05-23  2:16 UTC (permalink / raw)
  To: Palmer Dabbelt, linux-kernel, Arnd Bergmann, olof; +Cc: albert

On 05/22/17 17:41, Palmer Dabbelt wrote:

> [PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64}
> [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
> [PATCH 3/7] RISC-V: Device Tree Documentation
> [PATCH 4/7] RISC-V: arch/riscv/include
> [PATCH 5/7] RISC-V: arch/riscv/lib
> [PATCH 6/7] RISC-V: arch/riscv/kernel
> [PATCH 7/7] RISC-V: arch/riscv/mm
> 

Now at patch 0004-riscv.patch
Warning: trailing whitespace in line 3 of arch/riscv/include/asm/asm.h
Warning: trailing whitespace in line 115 of arch/riscv/include/asm/spinlock.h

Now at patch 0006-riscv.patch
Warning: trailing whitespace in line 271 of arch/riscv/kernel/entry.S
Warning: trailing whitespace in line 77 of arch/riscv/kernel/process.c
Warning: trailing whitespace in line 78 of arch/riscv/kernel/smpboot.c


-- 
~Randy

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 7/7] RISC-V: arch/riscv/mm
  2017-05-23  1:28   ` Randy Dunlap
@ 2017-05-23  2:17     ` Olof Johansson
  2017-05-23  3:36       ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Olof Johansson @ 2017-05-23  2:17 UTC (permalink / raw)
  To: Randy Dunlap; +Cc: Palmer Dabbelt, linux-kernel, Arnd Bergmann, albert

On Mon, May 22, 2017 at 6:28 PM, Randy Dunlap <rdunlap@infradead.org> wrote:
>
>> diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
>> new file mode 100644
>> index 000000000000..f02e286dd1c1
>> --- /dev/null
>> +++ b/arch/riscv/mm/fault.c
>> @@ -0,0 +1,279 @@
>> +/*
>> + * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
>> + *  Lennox Wu <lennox.wu@sunplusct.com>
>> + *  Chen Liqin <liqin.chen@sunplusct.com>
>
> is "sunplusct.com" correct?  They don't have a web server?

Seems to come from arch/score/ contents. Looks like true genealogy is
from MIPS though.


-Olof

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: RISC-V Linux Port v1
  2017-05-23  1:16 ` RISC-V Linux Port v1 Olof Johansson
  2017-05-23  1:25   ` Randy Dunlap
@ 2017-05-23  3:36   ` Palmer Dabbelt
  2017-05-23  6:45     ` Tobias Klauser
  1 sibling, 1 reply; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-23  3:36 UTC (permalink / raw)
  To: olof; +Cc: linux-kernel, Arnd Bergmann, albert

On Mon, 22 May 2017 18:16:20 PDT (-0700), olof@lixom.net wrote:
> Hi Palmer,
>
> On Mon, May 22, 2017 at 5:41 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> We'd like to submit for inclusion in Linux a port for the RISC-V architecture.
>> While it is doubtlessly not complete, we think it is far enough along to start
>> the upstreaming process.  Our binutils and GCC ports have been accepted and
>> released, and we plan on submitting glibc patches soon.
>>
>> This port targets Version 1.10 of the RISC-V Privileged ISA, and supports both
>> the RV32 and RV64 user ISAs.  The RISC-V community and the 60-some member
>> companies of the RISC-V Foundation are quite eager to have a single, standard
>> Linux port.  We thank you in advance for your help in this process and for your
>> feedback on the software contribution itself.
>>
>> These patches build and boot on top of 4.12-rc2.  I understand that the merge
>> window is closed, but it was suggested that the best time to submit a new
>> architecture port would be right after an RC2 as the earliest point at which
>> the tree is usually generally churn-free enough.  While we optimistically hope
>> that we can get the port in for the 4.13 merge window, we're also eager to
>> ensure that the user-visible ABI is sane so we can proceed with our glibc port.
>> We'd like to at least get any user ABI issues shaken out as soon as possible,
>> even if we don't make it into 4.13.
>
> Time is right for review and eventual merge of this. Whether it makes
> 4.13 depends on how much discussion ensues. :)
>
>> Albert and I will volunteer to maintain this port if it's OK with everyone.
>
> It always makes sense to have architecture-knowledge people maintain
> it; no complaints from me.
>
> What we've seen been useful on other platforms (i.e. arm/arm64) is to
> offload the per-vendor stuff to a separate tree. It might or might not
> be needed here; likely to start out it won't be enough material to
> need it.

I'm OK with that.  We've been using http://github.com/riscv to hold all our
other "riscv-next" branches, but if you think another place is more appropriate
then I'm OK with that as well.

>> We'd like to thank the various members of the RISC-V software community who
>> have helped us with the port.
>>
>> Thanks!
>>
>> In addition to the threaded messages, our port can be found on Git Hib
>>
>>   https://github.com/riscv/riscv-linux/tree/riscv-for-submission-v1
>>
>> [PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64}
>> [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
>> [PATCH 3/7] RISC-V: Device Tree Documentation
>> [PATCH 4/7] RISC-V: arch/riscv/include
>> [PATCH 5/7] RISC-V: arch/riscv/lib
>> [PATCH 6/7] RISC-V: arch/riscv/kernel
>> [PATCH 7/7] RISC-V: arch/riscv/mm
>
> So, one overall comment on this patchset is that it's not bisectable
> (i.e. early patches add Makefile contents that refers to directories
> not yet introduced).
>
> While it's not overly important to really split up a new architecture
> introduction into small incremental patches, we generally strive to
> have the tree fully buildable at any given commit. Some minor
> rearranging would alleviate these problems.

I only really split things up so they'd get through the various mailing
lists; I think of this as one logical commit, so I didn't really worry
about ordering.
I'll swizzle them around next time so everything always builds.

> Also, none of the patches seem to have any descriptions. Adding some
> high-level descriptions of what's in each patch in the patch itself is
> useful both for reviewing now, and for educating anyone coming along
> later on trying to learn about the code and why it's been implemented
> as it has.

I guess I just wasn't really sure what to say in the big code dumps.  I'll look
through and try to come up with something better for next time.

>
> I'll add more comments on some of the individual patches; expect this
> review to take a little while. Reposting once or twice a week to show
> incorporated changes can be useful; more than that and it can be
> harder to follow along in the discussion. It all depends on how much
> comments you end up receiving.

OK.  I'll incorporate all the feedback I get over the next week or so into a v2
patch set.

Thanks!

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: RISC-V Linux Port v1
  2017-05-23  1:25   ` Randy Dunlap
@ 2017-05-23  3:36     ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-23  3:36 UTC (permalink / raw)
  To: rdunlap; +Cc: olof, linux-kernel, Arnd Bergmann, albert

On Mon, 22 May 2017 18:25:41 PDT (-0700), rdunlap@infradead.org wrote:
> On 05/22/17 18:16, Olof Johansson wrote:
>> Hi Palmer,
>>
>> On Mon, May 22, 2017 at 5:41 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>
>>> In addition to the threaded messages, our port can be found on Git Hib
>>>
>>>   https://github.com/riscv/riscv-linux/tree/riscv-for-submission-v1
>>>
>>> [PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64}
>>> [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
>>> [PATCH 3/7] RISC-V: Device Tree Documentation
>>> [PATCH 4/7] RISC-V: arch/riscv/include
>>> [PATCH 5/7] RISC-V: arch/riscv/lib
>>> [PATCH 6/7] RISC-V: arch/riscv/kernel
>>> [PATCH 7/7] RISC-V: arch/riscv/mm
>>
>> So, one overall comment on this patchset is that it's not bisectable
>> (i.e. early patches add Makefile contents that refers to directories
>> not yet introduced).
>>
>> While it's not overly important to really split up a new architecture
>> introduction into small incremental patches, we generally strive to
>> have the tree fully buildable at any given commit. Some minor
>> rearranging would alleviate these problems.
>
> Neither the email patches nor the git tree have any Signed-off-by:
> entries AFAICT.

Makes sense.  I went through and checked everything for copyright, so I'll sign
off on the next patch set.

Thanks!

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 7/7] RISC-V: arch/riscv/mm
  2017-05-23  2:17     ` Olof Johansson
@ 2017-05-23  3:36       ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-23  3:36 UTC (permalink / raw)
  To: olof; +Cc: rdunlap, linux-kernel, Arnd Bergmann, albert

On Mon, 22 May 2017 19:17:52 PDT (-0700), olof@lixom.net wrote:
> On Mon, May 22, 2017 at 6:28 PM, Randy Dunlap <rdunlap@infradead.org> wrote:
>>
>>> diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
>>> new file mode 100644
>>> index 000000000000..f02e286dd1c1
>>> --- /dev/null
>>> +++ b/arch/riscv/mm/fault.c
>>> @@ -0,0 +1,279 @@
>>> +/*
>>> + * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
>>> + *  Lennox Wu <lennox.wu@sunplusct.com>
>>> + *  Chen Liqin <liqin.chen@sunplusct.com>
>>
>> is "sunplusct.com" correct?  They don't have a web server?
>
> Seems to come from arch/score/ contents. Looks like true genealogy is
> from MIPS though.

Albert based the original port on score, so I copied their license header files
and appended to them.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
  2017-05-23  1:27   ` Olof Johansson
  2017-05-23  1:31     ` Randy Dunlap
@ 2017-05-23  4:49     ` Palmer Dabbelt
  2017-05-23  5:16       ` [patches] " Olof Johansson
  1 sibling, 1 reply; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-23  4:49 UTC (permalink / raw)
  To: olof; +Cc: patches, linux-kernel, Arnd Bergmann, albert

On Mon, 22 May 2017 18:27:21 PDT (-0700), olof@lixom.net wrote:
> Hi,
>
>
> On Mon, May 22, 2017 at 5:41 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> ---
>>  arch/riscv/.gitignore                |  35 ++++
>>  arch/riscv/Kconfig                   | 300 +++++++++++++++++++++++++++++++++++
>>  arch/riscv/Makefile                  |  64 ++++++++
>>  arch/riscv/configs/riscv32_spike     |  47 ++++++
>>  arch/riscv/configs/riscv64_freedom-u |  52 ++++++
>>  arch/riscv/configs/riscv64_qemu      |  64 ++++++++
>>  arch/riscv/configs/riscv64_spike     |  45 ++++++
>>  7 files changed, 607 insertions(+)
>>  create mode 100644 arch/riscv/.gitignore
>>  create mode 100644 arch/riscv/Kconfig
>>  create mode 100644 arch/riscv/Makefile
>>  create mode 100644 arch/riscv/configs/riscv32_spike
>>  create mode 100644 arch/riscv/configs/riscv64_freedom-u
>>  create mode 100644 arch/riscv/configs/riscv64_qemu
>>  create mode 100644 arch/riscv/configs/riscv64_spike
>
> Nearly all other platforms have _defconfig in the config names. It
> might get a bit excessive to prepend riscv{32,64} to all of them
> though. Most other platforms have shortened it to, for example,
> spike_defconfig, spike64_defconfig, qemu_defconfig,
> freedom-u_defconfig.
>
> Not going to argue too much about the color of the shed here, but
> using the _defconfig naming is recommended.

Works for me <https://github.com/riscv/riscv-linux/commit/b1165397ba6cb54f23910537c4bf4c3488ef9aad>

I'll squash all the CR comments into a v2.

>
>>
>> diff --git a/arch/riscv/.gitignore b/arch/riscv/.gitignore
>> new file mode 100644
>> index 000000000000..376d06eb5d52
>> --- /dev/null
>> +++ b/arch/riscv/.gitignore
>> @@ -0,0 +1,35 @@
>> +# Now un-ignore all files.
>> +!*
>> +
>> +# But then re-ignore the files listed in the Linux .gitignore
>> +# Normal rules
>> +#
>> +.*
>> +*.o
>> +*.o.*
>> +*.a
>> +*.s
>> +*.ko
>> +*.so
>> +*.so.dbg
>> +*.mod.c
>> +*.i
>> +*.lst
>> +*.symtypes
>> +*.order
>> +modules.builtin
>> +*.elf
>> +*.bin
>> +*.gz
>> +*.bz2
>> +*.lzma
>> +*.xz
>> +*.lzo
>> +*.patch
>> +*.gcno
>
> I don't think you need to do any of this, just inherit the global one
> (by not having one here)?

Sorry, that's a holdover from how we used to manage our out-of-tree port and
can just be deleted.

  https://github.com/riscv/riscv-linux/commit/68032fb592297331a2f2caf246968da7b70373fe

>> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
>> new file mode 100644
>> index 000000000000..510ead1d3343
>> --- /dev/null
>> +++ b/arch/riscv/Kconfig
>> +config CPU_RV_GENERIC
>> +       bool "Generic RISC-V"
>> +       select CPU_SUPPORTS_32BIT_KERNEL
>> +       select CPU_SUPPORTS_64BIT_KERNEL
>
> Is this even needed at this point? If the only CPU you can pick
> supports this, you might as well not make it an option (CPU_RV_GENERIC
> that is), and just make CPU_SUPPORTS_{32,64}BIT_KERNEL 'def_bool y'
> for now.

I think this is actually broken in the opposite direction: we only support
32-bit kernels on RV32 and 64-bit kernels on RV64, not both at the same time.
I don't really think it makes sense to build a kernel that supports both 32-bit
and 64-bit on RISC-V (as they're different base ISAs), so I think it'd be
better to just have a "base ISA" configuration.

  https://github.com/riscv/riscv-linux/commit/695d428d65bf6fe1382d34393e9d07f40b74e1b2

We'll think on this a bit more and get something saner.

>> +config SBI_CONSOLE
>> +       tristate "SBI console support"
>> +       select TTY
>> +       default y
>
> Usually you end up having a DRIVER_FOO and DRIVER_FOO_CONSOLE option
> to enable registering it as console.
>
> Also, unless there's strong reason to keep it under arch/, it should
> probably go under drivers/tty/.

In this case "SBI" is the "Supervisor Binary Interface".  This is a set of
routines that are provided by the platform for OS use that do things like
writing to the boot console or TLB shootdowns.  The SBI is part of the RISC-V
ISA, so there isn't a config option for turning it off.  SBI_CONSOLE
enables/disables the SBI's console support, so I think this option is sane.

It's in arch/riscv because the SBI is part of the RISC-V ISA -- essentially
there's special SBI instructions that mean "write some register to the console"
(there's some implementation tricks behind this, so it's really just a
specification).  That said, I'm fine moving this to drivers.

>> +config RVC
>> +       bool "Use compressed instructions (RV32C or RV64C)"
>> +       default n
>
> What does "use" here mean? Use during build, or allow userspace to use them?
>
>> +
>> +config RV_ATOMIC
>> +       bool "Use atomic memory instructions (RV32A or RV64A)"
>> +       default y
>
> Same for this.

These mean "tell the compiler that it can emit these instructions when building
Linux".  Userspace applications can still use these instructions either way.
How does "Emit compressed instructions when building Linux" sound?

  https://github.com/riscv/riscv-linux/commit/d6e65bd8b7dcfa72578d62e5eb367f680b55f5a8

>
>> +config RV_SYSRISCV_ATOMIC
>> +       bool "Include support for atomic operation syscalls"
>> +       default n
>> +       help
>> +         If atomic memory instructions are present, i.e.,
>> +         CONFIG_RV_ATOMIC, this includes support for the syscall that
>> +         provides atomic accesses.  This is only useful to run
>> +         binaries that require atomic access but were compiled with
>> +         -mno-atomic.
>> +
>> +         If CONFIG_RV_ATOMIC is unset, this option is mandatory.
>
> If it's mandatory then Kconfig language should make it so.

I'm not sure what you mean by this.  We have

  config RISCV
  	...
  	select RV_SYSRISCV_ATOMIC if !RV_ATOMIC

Should this constraint just live within "config RV_SYSRISCV_ATOMIC"?  It seems
cleaner to have the constraints next to the config definitions.

Thanks!

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: RISC-V Linux Port v1
  2017-05-23  2:16 ` Randy Dunlap
@ 2017-05-23  4:49   ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-23  4:49 UTC (permalink / raw)
  To: rdunlap; +Cc: linux-kernel, Arnd Bergmann, olof, albert

On Mon, 22 May 2017 19:16:43 PDT (-0700), rdunlap@infradead.org wrote:
> On 05/22/17 17:41, Palmer Dabbelt wrote:
>
>> [PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64}
>> [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
>> [PATCH 3/7] RISC-V: Device Tree Documentation
>> [PATCH 4/7] RISC-V: arch/riscv/include
>> [PATCH 5/7] RISC-V: arch/riscv/lib
>> [PATCH 6/7] RISC-V: arch/riscv/kernel
>> [PATCH 7/7] RISC-V: arch/riscv/mm
>>
>
> Now at patch 0004-riscv.patch
> Warning: trailing whitespace in line 3 of arch/riscv/include/asm/asm.h
> Warning: trailing whitespace in line 115 of arch/riscv/include/asm/spinlock.h
>
> Now at patch 0006-riscv.patch
> Warning: trailing whitespace in line 271 of arch/riscv/kernel/entry.S
> Warning: trailing whitespace in line 77 of arch/riscv/kernel/process.c
> Warning: trailing whitespace in line 78 of arch/riscv/kernel/smpboot.c

Sorry about that, I'll fix these for v2.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
  2017-05-23  1:31     ` Randy Dunlap
@ 2017-05-23  4:49       ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-23  4:49 UTC (permalink / raw)
  To: rdunlap; +Cc: olof, linux-kernel, Arnd Bergmann, albert, patches

On Mon, 22 May 2017 18:31:08 PDT (-0700), rdunlap@infradead.org wrote:
> On 05/22/17 18:27, Olof Johansson wrote:
>> Hi,
>>
>>
>> On Mon, May 22, 2017 at 5:41 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>> ---
>>>  arch/riscv/.gitignore                |  35 ++++
>>>  arch/riscv/Kconfig                   | 300 +++++++++++++++++++++++++++++++++++
>>>  arch/riscv/Makefile                  |  64 ++++++++
>>>  arch/riscv/configs/riscv32_spike     |  47 ++++++
>>>  arch/riscv/configs/riscv64_freedom-u |  52 ++++++
>>>  arch/riscv/configs/riscv64_qemu      |  64 ++++++++
>>>  arch/riscv/configs/riscv64_spike     |  45 ++++++
>>>  7 files changed, 607 insertions(+)
>>>  create mode 100644 arch/riscv/.gitignore
>>>  create mode 100644 arch/riscv/Kconfig
>>>  create mode 100644 arch/riscv/Makefile
>>>  create mode 100644 arch/riscv/configs/riscv32_spike
>>>  create mode 100644 arch/riscv/configs/riscv64_freedom-u
>>>  create mode 100644 arch/riscv/configs/riscv64_qemu
>>>  create mode 100644 arch/riscv/configs/riscv64_spike
>>
>> Nearly all other platforms have _defconfig in the config names. It
>> might get a bit excessive to prepend riscv{32,64} to all of them
>> though. Most other platforms have shortened it to, for example,
>> spike_defconfig, spike64_defconfig, qemu_defconfig,
>> freedom-u_defconfig.
>>
>> Not going to argue too much about the color of the shed here, but
>> using the _defconfig naming is recommended.
>
> well, the top-level Makefile looks for "make *config" to detect that a
> configuration command is in progress, so the names usually have to end
> in the string "config".
>
> Have these been tested?

Ah, that's much better -- I've just been copying them to .config :).  I've
already renamed them to things that end in defconfig.

Thanks!

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [patches] Re: [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
  2017-05-23  4:49     ` Palmer Dabbelt
@ 2017-05-23  5:16       ` Olof Johansson
  2017-05-23 21:07         ` Benjamin Herrenschmidt
  0 siblings, 1 reply; 283+ messages in thread
From: Olof Johansson @ 2017-05-23  5:16 UTC (permalink / raw)
  To: patches; +Cc: linux-kernel, Arnd Bergmann, albert

On Mon, May 22, 2017 at 9:49 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> On Mon, 22 May 2017 18:27:21 PDT (-0700), olof@lixom.net wrote:
>> Hi,
>>
>>
>> On Mon, May 22, 2017 at 5:41 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>> ---
>>>  arch/riscv/.gitignore                |  35 ++++
>>>  arch/riscv/Kconfig                   | 300 +++++++++++++++++++++++++++++++++++
>>>  arch/riscv/Makefile                  |  64 ++++++++
>>>  arch/riscv/configs/riscv32_spike     |  47 ++++++
>>>  arch/riscv/configs/riscv64_freedom-u |  52 ++++++
>>>  arch/riscv/configs/riscv64_qemu      |  64 ++++++++
>>>  arch/riscv/configs/riscv64_spike     |  45 ++++++
>>>  7 files changed, 607 insertions(+)
>>>  create mode 100644 arch/riscv/.gitignore
>>>  create mode 100644 arch/riscv/Kconfig
>>>  create mode 100644 arch/riscv/Makefile
>>>  create mode 100644 arch/riscv/configs/riscv32_spike
>>>  create mode 100644 arch/riscv/configs/riscv64_freedom-u
>>>  create mode 100644 arch/riscv/configs/riscv64_qemu
>>>  create mode 100644 arch/riscv/configs/riscv64_spike
>>
>> Nearly all other platforms have _defconfig in the config names. It
>> might get a bit excessive to prepend riscv{32,64} to all of them
>> though. Most other platforms have shortened it to, for example,
>> spike_defconfig, spike64_defconfig, qemu_defconfig,
>> freedom-u_defconfig.
>>
>> Not going to argue too much about the color of the shed here, but
>> using the _defconfig naming is recommended.
>
> Works for me <https://github.com/riscv/riscv-linux/commit/b1165397ba6cb54f23910537c4bf4c3488ef9aad>
>
> I'll squash all the CR comments into a v2.
>
>>
>>>
>>> diff --git a/arch/riscv/.gitignore b/arch/riscv/.gitignore
>>> new file mode 100644
>>> index 000000000000..376d06eb5d52
>>> --- /dev/null
>>> +++ b/arch/riscv/.gitignore
>>> @@ -0,0 +1,35 @@
>>> +# Now un-ignore all files.
>>> +!*
>>> +
>>> +# But then re-ignore the files listed in the Linux .gitignore
>>> +# Normal rules
>>> +#
>>> +.*
>>> +*.o
>>> +*.o.*
>>> +*.a
>>> +*.s
>>> +*.ko
>>> +*.so
>>> +*.so.dbg
>>> +*.mod.c
>>> +*.i
>>> +*.lst
>>> +*.symtypes
>>> +*.order
>>> +modules.builtin
>>> +*.elf
>>> +*.bin
>>> +*.gz
>>> +*.bz2
>>> +*.lzma
>>> +*.xz
>>> +*.lzo
>>> +*.patch
>>> +*.gcno
>>
>> I don't think you need to do any of this, just inherit the global one
>> (by not having one here)?
>
> Sorry, that's a holdover from how we used to manage our out-of-tree port and
> can just be deleted.
>
>   https://github.com/riscv/riscv-linux/commit/68032fb592297331a2f2caf246968da7b70373fe
>
>>> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
>>> new file mode 100644
>>> index 000000000000..510ead1d3343
>>> --- /dev/null
>>> +++ b/arch/riscv/Kconfig
>>> +config CPU_RV_GENERIC
>>> +       bool "Generic RISC-V"
>>> +       select CPU_SUPPORTS_32BIT_KERNEL
>>> +       select CPU_SUPPORTS_64BIT_KERNEL
>>
>> Is this even needed at this point? If the only CPU you can pick
>> supports this, you might as well not make it an option (CPU_RV_GENERIC
>> that is), and just make CPU_SUPPORTS_{32,64}BIT_KERNEL 'def_bool y'
>> for now.
>
> I think this is actually broken in the opposite direction: we only support
> 32-bit kernels on RV32 and 64-bit kernels on RV64, not both at the same time.
> I don't really think it makes sense to build a kernel that supports both 32-bit
> and 64-bit on RISC-V (as they're different base ISAs), so I think it'd be
> better to just have a "base ISA" configuration.
>
>   https://github.com/riscv/riscv-linux/commit/695d428d65bf6fe1382d34393e9d07f40b74e1b2
>
> We'll think on this a bit more and get something saner.

Sure, sounds good overall though.

>>> +config SBI_CONSOLE
>>> +       tristate "SBI console support"
>>> +       select TTY
>>> +       default y
>>
>> Usually you end up having a DRIVER_FOO and DRIVER_FOO_CONSOLE option
>> to enable registering it as console.
>>
>> Also, unless there's strong reason to keep it under arch/, it should
>> probably go under drivers/tty/.
>
> In this case "SBI" is the "Supervisor Binary Interface".  This is a set of
> routines that are provided by the platform for OS use that do things like
> writing to the boot console or TLB shootdowns.  The SBI is part of the RISC-V
> ISA, so there isn't a config option for turning it off.  SBI_CONSOLE
> enables/disables the SBI's console support, so I think this option is sane.
>
> It's in arch/riscv because the SBI is part of the RISC-V ISA -- essentially
> there's special SBI instructions that mean "write some register to the console"
> (there's some implementation tricks behind this, so it's really just a
> specification).  That said, I'm fine moving this to drivers.

The same is true for some other drivers. Actually, I wonder if it
might be just as easy to implement an SBI backend for hvc -- see
hvc_udbg.c for an example where, on power, a simple get/put-char
hypervisor call is used in a very similar manner.
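
Roughly, an hvc backend ends up being just the two callbacks plus a
registration call.  Here's a quick sketch -- sbi_console_putchar() and
sbi_console_getchar() stand in for whatever your arch-side SBI console
wrappers end up being called, and console setup via hvc_instantiate()
is left out:

/* sketch only, not tested */
#include <linux/err.h>
#include <linux/init.h>
#include <linux/types.h>
#include "hvc_console.h"

static int hvc_sbi_put_chars(uint32_t vtermno, const char *buf, int count)
{
	int i;

	for (i = 0; i < count; i++)
		sbi_console_putchar(buf[i]);
	return count;
}

static int hvc_sbi_get_chars(uint32_t vtermno, char *buf, int count)
{
	int i, c;

	for (i = 0; i < count; i++) {
		c = sbi_console_getchar();
		if (c < 0)
			break;
		buf[i] = c;
	}
	return i;
}

static const struct hv_ops hvc_sbi_ops = {
	.get_chars = hvc_sbi_get_chars,
	.put_chars = hvc_sbi_put_chars,
};

static int __init hvc_sbi_init(void)
{
	return PTR_ERR_OR_ZERO(hvc_alloc(0, 0, &hvc_sbi_ops, 16));
}
device_initcall(hvc_sbi_init);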

Either way (keeping a discrete SBI driver or implementing an hvc
backend), moving to drivers/tty is the right thing here -- we've worked
hard on ARM to get rid of random drivers under arch/ and it'd be nice
to not see new ones introduced here.


>
>>> +config RVC
>>> +       bool "Use compressed instructions (RV32C or RV64C)"
>>> +       default n
>>
>> What does "use" here mean? Use during build, or allow userspace to use them?
>>
>>> +
>>> +config RV_ATOMIC
>>> +       bool "Use atomic memory instructions (RV32A or RV64A)"
>>> +       default y
>>
>> Same for this.
>
> These mean "tell the compiler that it can emit these instructions when building
> Linux".  Userspace applications can still use these instructions either way.
> How does "Emit compressed instructions when building Linux" sound?
>
>   https://github.com/riscv/riscv-linux/commit/d6e65bd8b7dcfa72578d62e5eb367f680b55f5a8

Sounds good, you can still reference the ISA extensions if you want though.

>>
>>> +config RV_SYSRISCV_ATOMIC
>>> +       bool "Include support for atomic operation syscalls"
>>> +       default n
>>> +       help
>>> +         If atomic memory instructions are present, i.e.,
>>> +         CONFIG_RV_ATOMIC, this includes support for the syscall that
>>> +         provides atomic accesses.  This is only useful to run
>>> +         binaries that require atomic access but were compiled with
>>> +         -mno-atomic.
>>> +
>>> +         If CONFIG_RV_ATOMIC is unset, this option is mandatory.
>>
>> If it's mandatory then Kconfig language should make it so.
>
> I'm not sure what you mean by this.  We have
>
>   config RISCV
>         ...
>         select RV_SYSRISCV_ATOMIC if !RV_ATOMIC
>
> Should this constraint just live within "config RV_SYSRISCV_ATOMIC"?  It seems
> cleaner to have the constraints next to the config definitions.

Ah, I just missed the select.


-Olof

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
  2017-05-23  0:41 ` [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs Palmer Dabbelt
  2017-05-23  1:27   ` Olof Johansson
@ 2017-05-23  5:23   ` Olof Johansson
  2017-05-23 15:29     ` Palmer Dabbelt
  2017-05-23 10:51   ` Geert Uytterhoeven
  2017-05-23 11:46   ` Arnd Bergmann
  3 siblings, 1 reply; 283+ messages in thread
From: Olof Johansson @ 2017-05-23  5:23 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: linux-kernel, Arnd Bergmann, albert

(new top-level subthread here since this is a separate topic):

On Mon, May 22, 2017 at 5:41 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:

> diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
> new file mode 100644
> index 000000000000..07ef200e0675
> --- /dev/null
> +++ b/arch/riscv/Makefile
> @@ -0,0 +1,64 @@
> +# This file is included by the global makefile so that you can add your own
> +# architecture-specific flags and dependencies. Remember to do have actions
> +# for "archclean" and "archdep" for cleaning up and making dependencies for
> +# this architecture
> +#
> +# This file is subject to the terms and conditions of the GNU General Public
> +# License.  See the file "COPYING" in the main directory of this archive
> +# for more details.
> +#
> +
> +LDFLAGS         :=
> +OBJCOPYFLAGS    := -O binary
> +LDFLAGS_vmlinux :=
> +KBUILD_AFLAGS_MODULE += -fPIC
> +KBUILD_CFLAGS_MODULE += -fPIC
> +
> +ifeq ($(ARCH),riscv)
> +       KBUILD_DEFCONFIG = riscv64_spike
> +else
> +       KBUILD_DEFCONFIG = $(ARCH)_spike
> +endif
> +
> +export BITS
> +ifeq ($(CONFIG_64BIT),y)
> +       BITS := 64
> +       UTS_MACHINE := riscv64
> +
> +       KBUILD_CFLAGS += -mabi=lp64
> +       KBUILD_AFLAGS += -mabi=lp64
> +       KBUILD_MARCH = rv64im
> +       LDFLAGS += -melf64lriscv
> +else
> +       BITS := 32
> +       UTS_MACHINE := riscv32
> +
> +       KBUILD_CFLAGS += -mabi=ilp32
> +       KBUILD_AFLAGS += -mabi=ilp32
> +       KBUILD_MARCH = rv32im
> +       LDFLAGS += -melf32lriscv
> +endif
> +
> +ifeq ($(CONFIG_RV_ATOMIC),y)
> +       KBUILD_RV_ATOMIC = a
> +endif
> +
> +KBUILD_CFLAGS += -Wall
> +
> +ifeq ($(CONFIG_RVC),y)
> +       KBUILD_RVC = c
> +endif
> +
> +KBUILD_AFLAGS += -march=$(KBUILD_MARCH)$(KBUILD_RV_ATOMIC)fd$(KBUILD_RVC)
> +
> +KBUILD_CFLAGS += -march=$(KBUILD_MARCH)$(KBUILD_RV_ATOMIC)$(KBUILD_RVC)
> +KBUILD_CFLAGS += -mno-save-restore
> +KBUILD_CFLAGS += -mstrict-align

I built a vanilla gcc-7.1.0 here, with 'riscv64-linux' as target, and I get:

riscv64-linux-gcc: error: unrecognized command line option
'-mstrict-align'; did you mean '-Wstrict-aliasing'?


The suggestion seems completely bogus, but the error is real. Looking
at the gcc sources, I only see strict-align plumbed up on rs6000,
aarch64, m68k(!) and v850. Or am I missing something here?



-Olof

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: RISC-V Linux Port v1
  2017-05-23  3:36   ` Palmer Dabbelt
@ 2017-05-23  6:45     ` Tobias Klauser
  2017-05-23 15:44       ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Tobias Klauser @ 2017-05-23  6:45 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: olof, linux-kernel, Arnd Bergmann, albert

Hi Palmer,

On 2017-05-23 at 05:36:55 +0200, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> On Mon, 22 May 2017 18:16:20 PDT (-0700), olof@lixom.net wrote:
> > On Mon, May 22, 2017 at 5:41 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> >> We'd like to submit for inclusion in Linux a port for the RISC-V architecture.
> >> While it is doubtlessly not complete, we think it is far enough along to start
> >> the upstreaming process.  Our binutils and GCC ports have been accepted and
> >> released, and we plan on submitting glibc patches soon.
> >>
> >> This port targets Version 1.10 of the RISC-V Privileged ISA, and supports both
> >> the RV32 and RV64 user ISAs.  The RISC-V community and the 60-some member
> >> companies of the RISC-V Foundation are quite eager to have a single, standard
> >> Linux port.  We thank you in advance for your help in this process and for your
> >> feedback on the software contribution itself.
> >>
> >> These patches build and boot on top of 4.12-rc2.  I understand that the merge
> >> window is closed, but it was suggested that the best time to submit a new
> >> architecture port would be right after an RC2 as the earliest point at which
> >> the tree is usually generally churn-free enough.  While we optimistically hope
> >> that we can get the port in for the 4.13 merge window, we're also eager to
> >> ensure that the user-visible ABI is sane so we can proceed with our glibc port.
> >> We'd like to at least get any user ABI issues shaken out as soon as possible,
> >> even if we don't make it into 4.13.

[...]

> > I'll add more comments on some of the individual patches; expect this
> > review to take a little while. Reposting once or twice a week to show
> > incorporated changes can be useful; more than that and it can be
> > harder to follow along in the discussion. It all depends on how much
> > comments you end up receiving.
> 
> OK.  I'll incorporate all the feedback I get over the next week or so into a v2
> patch set.

You might want to Cc linux-arch@vger.kernel.org on future iterations of
this patchset where there's less "noise" than on LKML and the relevant
people are more likely to notice ;) Likewise, the device-tree specific
bits (e.g.  the bindings documentation) should probably be Cc'ed to
devicetree@vger.kernel.org

Tobias

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 5/7] RISC-V: arch/riscv/lib
  2017-05-23  0:41 ` [PATCH 5/7] RISC-V: arch/riscv/lib Palmer Dabbelt
@ 2017-05-23 10:47   ` Geert Uytterhoeven
  2017-05-23 22:07     ` Palmer Dabbelt
  2017-05-23 11:19   ` Arnd Bergmann
  1 sibling, 1 reply; 283+ messages in thread
From: Geert Uytterhoeven @ 2017-05-23 10:47 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: linux-kernel, Arnd Bergmann, Olof Johansson, albert

Hi Palmer,

On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>  arch/riscv/lib/Makefile  |   5 ++
>  arch/riscv/lib/ashldi3.c |  42 ++++++++++++++++

At least this one already has two identical copies, in arch/score/lib/ashldi3.c
and arch/sh/lib/ashldi3.c. Probably these should be moved to lib/, and built
depending on a new config symbol that is selected on score, sh, and riscv.

Didn't check the others.

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
  2017-05-23  0:41 ` [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs Palmer Dabbelt
  2017-05-23  1:27   ` Olof Johansson
  2017-05-23  5:23   ` Olof Johansson
@ 2017-05-23 10:51   ` Geert Uytterhoeven
  2017-05-25  1:59     ` Palmer Dabbelt
  2017-05-23 11:46   ` Arnd Bergmann
  3 siblings, 1 reply; 283+ messages in thread
From: Geert Uytterhoeven @ 2017-05-23 10:51 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: linux-kernel, Arnd Bergmann, Olof Johansson, albert

Hi Palmer,

On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> --- /dev/null
> +++ b/arch/riscv/Kconfig
> @@ -0,0 +1,300 @@

> +config PLIC
> +       bool "Platform-Level Interrupt Controller"
> +       default y
> +       help
> +          This enables support for the PLIC chip found in standard RISC-V
> +          systems. The PLIC is the top-most interrupt controller found in
> +          the system, connected directly to the core complex. All other
> +          interrupt sources (MSI, GPIO, etc) are subordinate to the PLIC.
> +
> +          If you don't know what to do here, say Y.
> +

I think this symbol belongs in drivers/irqchip/Kconfig, and the corresponding
driver in drivers/irqchip/.

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 5/7] RISC-V: arch/riscv/lib
  2017-05-23  0:41 ` [PATCH 5/7] RISC-V: arch/riscv/lib Palmer Dabbelt
  2017-05-23 10:47   ` Geert Uytterhoeven
@ 2017-05-23 11:19   ` Arnd Bergmann
  2017-05-25  1:59     ` Palmer Dabbelt
  1 sibling, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-05-23 11:19 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: Linux Kernel Mailing List, Olof Johansson, albert

On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
> new file mode 100644
> index 000000000000..f644e582f4b8
> --- /dev/null
> +++ b/arch/riscv/lib/Makefile

> +
> +void __delay(unsigned long cycles)
> +{
> +       u64 t0 = get_cycles();
> +
> +       while ((unsigned long)(get_cycles() - t0) < cycles)
> +               cpu_relax();
> +}
> +
> +void udelay(unsigned long usecs)
> +{
> +       u64 ucycles = (u64)usecs * timebase;
> +       do_div(ucycles, 1000000U);
> +       __delay((unsigned long)ucycles);
> +}
> +EXPORT_SYMBOL(udelay);
> +
> +void ndelay(unsigned long nsecs)
> +{
> +       u64 ncycles = (u64)nsecs * timebase;
> +       do_div(ncycles, 1000000000U);
> +       __delay((unsigned long)ncycles);
> +}

I'd be slightly worried about a global 'timebase' identifier that
might conflict with a variable in some random driver.

Also, it would be good to replace the multiply+div64
with a single multiplication here, see how x86 and arm do it
(for the tsc/__timer_delay case).
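
i.e. precompute the scale factor once at boot (that's where the div64
goes) and make the fast path a single multiply plus shift.  Roughly
(just a sketch, the names are made up):

static u64 udelay_mult;	/* set once at boot: (timebase << 32) / USEC_PER_SEC */

void udelay(unsigned long usecs)
{
	__delay((unsigned long)(((u64)usecs * udelay_mult) >> 32));
}
EXPORT_SYMBOL(udelay);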

      Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64}
  2017-05-23  0:41 ` [PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64} Palmer Dabbelt
@ 2017-05-23 11:30   ` Arnd Bergmann
  2017-05-27  0:57     ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-05-23 11:30 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: Linux Kernel Mailing List, Olof Johansson, albert

On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> RISC-V has both 32-bit and 64-bit base ISAs, but they are very similar.
> Like some other platforms, we'd like to share one arch directory between
> the two of them.

I think we mainly do the others for backwards-compatibility with ancient
build scripts, and we don't need that here. Instead, you could add one more
line to the 'SUBARCH:=' statement that interprets the uname output.

        Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
  2017-05-23  0:41 ` [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs Palmer Dabbelt
                     ` (2 preceding siblings ...)
  2017-05-23 10:51   ` Geert Uytterhoeven
@ 2017-05-23 11:46   ` Arnd Bergmann
  2017-05-27  0:57     ` Palmer Dabbelt
  3 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-05-23 11:46 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: Linux Kernel Mailing List, Olof Johansson, albert

On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> ---
>  arch/riscv/.gitignore                |  35 ++++
>  arch/riscv/Kconfig                   | 300 +++++++++++++++++++++++++++++++++++
>  arch/riscv/Makefile                  |  64 ++++++++
>  arch/riscv/configs/riscv32_spike     |  47 ++++++
>  arch/riscv/configs/riscv64_freedom-u |  52 ++++++
>  arch/riscv/configs/riscv64_qemu      |  64 ++++++++
>  arch/riscv/configs/riscv64_spike     |  45 ++++++
>  7 files changed, 607 insertions(+)
>  create mode 100644 arch/riscv/.gitignore
>  create mode 100644 arch/riscv/Kconfig
>  create mode 100644 arch/riscv/Makefile
>  create mode 100644 arch/riscv/configs/riscv32_spike
>  create mode 100644 arch/riscv/configs/riscv64_freedom-u
>  create mode 100644 arch/riscv/configs/riscv64_qemu
>  create mode 100644 arch/riscv/configs/riscv64_spike
>
> diff --git a/arch/riscv/.gitignore b/arch/riscv/.gitignore
> new file mode 100644
> index 000000000000..376d06eb5d52
> --- /dev/null
> +++ b/arch/riscv/.gitignore
> @@ -0,0 +1,35 @@
> +# Now un-ignore all files.
> +!*
> +
> +# But then re-ignore the files listed in the Linux .gitignore
> +# Normal rules
> +#
> +.*
> +*.o
> +*.o.*
> +*.a

This doesn't seem to belong here: There is no reason for riscv
to be different from all other architectures. Is something wrong
with the top-level .gitignore? If so, we should just fix it there.

> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> new file mode 100644
> index 000000000000..510ead1d3343
> --- /dev/null
> +++ b/arch/riscv/Kconfig
> @@ -0,0 +1,300 @@
> +#
> +# For a description of the syntax of this configuration file,
> +# see Documentation/kbuild/kconfig-language.txt.
> +#
> +
> +config RISCV
> +       def_bool y
> +       select OF
> +       select OF_EARLY_FLATTREE
> +       select OF_IRQ
> +       select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
> +       select ARCH_WANT_FRAME_POINTERS
> +       select CLONE_BACKWARDS
> +       select COMMON_CLK
> +       select GENERIC_CLOCKEVENTS
> +       select GENERIC_CPU_DEVICES
> +       select GENERIC_IRQ_SHOW
> +       select GENERIC_PCI_IOMAP

You normally don't want GENERIC_PCI_IOMAP, unless your
inb()/outb() uses other instructions than your readl()/writel()

> +config MMU
> +       def_bool y

Just a general question: has there been any interest in a no-MMU
version?

> +# even on 32-bit, physical (and DMA) addresses are > 32-bits
> +config ARCH_PHYS_ADDR_T_64BIT
> +       def_bool y
> +
> +config ARCH_DMA_ADDR_T_64BIT
> +       def_bool y

Are you required to use 64-bit addressing for RAM on 32-bit
architectures though? Using 32-bit dma_addr_t and phys_addr_t
when possible makes some code noticeably more efficient.

> +config PGTABLE_LEVELS
> +       int
> +       default 3 if 64BIT
> +       default 2

With 2-level page tables, you usually can't address much more
than 32-bit physical memory anyway, so I'd guess that most
32-bit chips would actually put their RAM under the 4GB boundary.

> +config RV_ATOMIC
> +       bool "Use atomic memory instructions (RV32A or RV64A)"
> +       default y
> +
> +config RV_SYSRISCV_ATOMIC
> +       bool "Include support for atomic operation syscalls"
> +       default n
> +       help
> +         If atomic memory instructions are present, i.e.,
> +         CONFIG_RV_ATOMIC, this includes support for the syscall that
> +         provides atomic accesses.  This is only useful to run
> +         binaries that require atomic access but were compiled with
> +         -mno-atomic.
> +
> +         If CONFIG_RV_ATOMIC is unset, this option is mandatory.

Just express this in Kconfig terms to prevent misconfiguration:

config RV_SYSRISCV_ATOMIC
       bool "Include support for atomic operation syscalls" if RV_ATOMIC
       default !RV_ATOMIC

I wonder what the cost would be of always providing the syscalls
for compatibility. This is also something worth putting into a VDSO
instead of exposing the syscall:

That way, user space that is built with -mno-atomic can call into
the vdso, which depending on the hardware support will perform
the atomic operation directly or enter the syscall.

> +config PCI_DOMAINS
> +       def_bool PCI
> +
> +config PCI_DOMAINS_GENERIC
> +       def_bool PCI
> +
> +config PCI_SYSCALL
> +       def_bool PCI

I don't think you want PCI_SYSCALL

        Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 3/7] RISC-V: Device Tree Documentation
  2017-05-23  0:41 ` [PATCH 3/7] RISC-V: Device Tree Documentation Palmer Dabbelt
@ 2017-05-23 12:03   ` Arnd Bergmann
  2017-05-27  0:57     ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-05-23 12:03 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: Linux Kernel Mailing List, Olof Johansson, albert

On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> ---
>  .../interrupt-controller/riscv,cpu-intc.txt        | 46 ++++++++++++++++++++++
>  .../bindings/interrupt-controller/riscv,plic0.txt  | 44 +++++++++++++++++++++

The patch needs a description, and should be sent to the irqchip maintainers
and the devicetree maintainers for review, along for the respective
drivers/irqchip/
patch.

       Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 4/7] RISC-V: arch/riscv/include
  2017-05-23  0:41 ` [PATCH 4/7] RISC-V: arch/riscv/include Palmer Dabbelt
@ 2017-05-23 12:55   ` Arnd Bergmann
  2017-05-23 21:23     ` Benjamin Herrenschmidt
  2017-06-01  0:56     ` Palmer Dabbelt
  0 siblings, 2 replies; 283+ messages in thread
From: Arnd Bergmann @ 2017-05-23 12:55 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: Linux Kernel Mailing List, Olof Johansson, albert

On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> +/**
> + * atomic_read - read atomic variable
> + * @v: pointer of type atomic_t
> + *
> + * Atomically reads the value of @v.
> + */
> +static inline int atomic_read(const atomic_t *v)
> +{
> +       return *((volatile int *)(&(v->counter)));
> +}
> +/**
> + * atomic_set - set atomic variable
> + * @v: pointer of type atomic_t
> + * @i: required value
> + *
> + * Atomically sets the value of @v to @i.
> + */
> +static inline void atomic_set(atomic_t *v, int i)
> +{
> +       v->counter = i;
> +}

These commonly use READ_ONCE() and WRITE_ONCE(),
I'd recommend doing the same here to be on the safe side.
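
i.e. something along these lines (and the same for the 64-bit
variants):

static inline int atomic_read(const atomic_t *v)
{
	return READ_ONCE(v->counter);
}

static inline void atomic_set(atomic_t *v, int i)
{
	WRITE_ONCE(v->counter, i);
}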

> +/**
> + * atomic64_read - read atomic64 variable
> + * @v: pointer of type atomic64_t
> + *
> + * Atomically reads the value of @v.
> + */
> +static inline s64 atomic64_read(const atomic64_t *v)
> +{
> +       return *((volatile long *)(&(v->counter)));
> +}
> +
> +/**
> + * atomic64_set - set atomic64 variable
> + * @v: pointer to type atomic64_t
> + * @i: required value
> + *
> + * Atomically sets the value of @v to @i.
> + */
> +static inline void atomic64_set(atomic64_t *v, s64 i)
> +{
> +       v->counter = i;
> +}

same here

> diff --git a/arch/riscv/include/asm/bug.h b/arch/riscv/include/asm/bug.h
> new file mode 100644
> index 000000000000..10d894ac3137
> --- /dev/null
> +++ b/arch/riscv/include/asm/bug.h
> @@ -0,0 +1,81 @@
> +/*
>
> +#ifndef _ASM_RISCV_BUG_H
> +#define _ASM_RISCV_BUG_H

> +#ifdef CONFIG_GENERIC_BUG
> +#define __BUG_INSN     _AC(0x00100073, UL) /* sbreak */

Please have a look at the modifications I did for !CONFIG_BUG
on x86, arm and arm64. It's generally better to define BUG to a
trap even when CONFIG_BUG is disabled, otherwise you run
into undefined behavior in some code, and gcc will print annoying
warnings about that.
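
i.e. keep the trapping instruction and only drop the bug-table entry
when CONFIG_GENERIC_BUG is off -- roughly (sketch):

#else /* !CONFIG_GENERIC_BUG */
#define BUG() do {						\
	__asm__ __volatile__ ("sbreak");			\
	unreachable();						\
} while (0)
#endif /* CONFIG_GENERIC_BUG */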

> +#ifndef _ASM_RISCV_CACHE_H
> +#define _ASM_RISCV_CACHE_H
> +
> +#define L1_CACHE_SHIFT         6
> +
> +#define L1_CACHE_BYTES         (1 << L1_CACHE_SHIFT)

Is this the only valid cache line size on riscv, or just the largest
one that is allowed?

> +
> +static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
> +{
> +       return (dma_addr_t)paddr;
> +}
> +
> +static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t dev_addr)
> +{
> +       return (phys_addr_t)dev_addr;
> +}

What do you need these for? If possible, try to remove them.

> +static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size, enum dma_data_direction dir)
> +{
> +       /*
> +        * RISC-V is cache-coherent, so this is mostly a no-op.
> +        * However, we do need to ensure that dma_cache_sync()
> +        * enforces order, hence the mb().
> +        */
> +       mb();
> +}

Do you even support any drivers that use
dma_alloc_noncoherent()/dma_cache_sync()?

I would guess you can just leave this out.

> diff --git a/arch/riscv/include/asm/io.h b/arch/riscv/include/asm/io.h
> new file mode 100644
> index 000000000000..d942555a7a08
> --- /dev/null
> +++ b/arch/riscv/include/asm/io.h
> @@ -0,0 +1,36 @@

> +#ifndef _ASM_RISCV_IO_H
> +#define _ASM_RISCV_IO_H
> +
> +#include <asm-generic/io.h>

I would recommend providing your own {read,write}{b,w,l,q}{,_relaxed}
helpers using inline assembly, to prevent the compiler from breaking
up accesses into byte accesses.

Also, most architectures require some synchronization after a
non-relaxed readl() to prevent prefetching of DMA buffers, and
before a writel() to flush write buffers when a DMA gets triggered.
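
Very roughly, something like the following -- this is only a sketch to
illustrate the shape, and the right fence arguments depend on your
memory model:

static inline u32 __raw_readl(const volatile void __iomem *addr)
{
	u32 val;

	asm volatile("lw %0, 0(%1)" : "=r" (val) : "r" (addr));
	return val;
}

static inline void __raw_writel(u32 val, volatile void __iomem *addr)
{
	asm volatile("sw %0, 0(%1)" : : "r" (val), "r" (addr));
}

/* illustrative fences: keep later reads (e.g. of a DMA buffer) after
 * the MMIO read, and flush prior memory writes before the MMIO write */
#define readl(c)	({ u32 __v = __raw_readl(c); \
			   __asm__ __volatile__ ("fence i,r" : : : "memory"); __v; })
#define writel(v, c)	({ __asm__ __volatile__ ("fence w,o" : : : "memory"); \
			   __raw_writel((v), (c)); })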

> +#ifdef __KERNEL__
> +
> +#ifdef CONFIG_MMU
> +
> +extern void __iomem *ioremap(phys_addr_t offset, unsigned long size);
> +
> +#define ioremap_nocache(addr, size) ioremap((addr), (size))
> +#define ioremap_wc(addr, size) ioremap((addr), (size))
> +#define ioremap_wt(addr, size) ioremap((addr), (size))

Is this a hard architecture limitation? Normally you really want
write-combined access on frame buffer memory and a few other
cases for performance reasons, and ioremap_wc() gets used by
memremap() for addressing RAM in some cases, and you normally
don't want to have PTEs for the same memory using cached and
uncached page flags.

> diff --git a/arch/riscv/include/asm/serial.h b/arch/riscv/include/asm/serial.h
> new file mode 100644
> index 000000000000..d783dbe80a4b
> --- /dev/null
> +++ b/arch/riscv/include/asm/serial.h
> @@ -0,0 +1,43 @@
> +/*
> + * Copyright (C) 2014 Regents of the University of California
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful, but
> + *   WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
> + *   NON INFRINGEMENT.  See the GNU General Public License for
> + *   more details.
> + */
> +
> +#ifndef _ASM_RISCV_SERIAL_H
> +#define _ASM_RISCV_SERIAL_H
> +
> +/*
> + * FIXME: interim serial support for riscv-qemu
> + *
> + * Currently requires that the emulator itself create a hole at addresses
> + * 0x3f8 - 0x3ff without looking through page tables.

This sounds like something we want to fix in qemu and not have in the
mainline kernel. In particular, something seems really wrong if your
inb()/outb() get remapped to physical CPU address 0+offset.

> diff --git a/arch/riscv/include/asm/setup.h b/arch/riscv/include/asm/setup.h
> new file mode 100644
> index 000000000000..e457854e9988
> --- /dev/null
> +++ b/arch/riscv/include/asm/setup.h
> @@ -0,0 +1,20 @@
> +/*
> + * Copyright (C) 2012 Regents of the University of California
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful, but
> + *   WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
> + *   NON INFRINGEMENT.  See the GNU General Public License for
> + *   more details.
> + */
> +
> +#ifndef _ASM_RISCV_SETUP_H
> +#define _ASM_RISCV_SETUP_H
> +
> +#include <asm-generic/setup.h>
> +
> +#endif /* _ASM_RISCV_SETUP_H */

Can you remove this file and add it to asm/Kbuild as generic-y instead?

> +/*
> + * low level task data that entry.S needs immediate access to
> + * - this struct should fit entirely inside of one cache line
> + * - this struct resides at the bottom of the supervisor stack
> + * - if the members of this struct changes, the assembly constants
> + *   in asm-offsets.c must be updated accordingly
> + */
> +struct thread_info {
> +       struct task_struct      *task;          /* main task structure */
> +       unsigned long           flags;          /* low level flags */
> +       __u32                   cpu;            /* current CPU */
> +       int                     preempt_count;  /* 0 => preemptable, <0 => BUG */
> +       mm_segment_t            addr_limit;
> +};

Please see 15f4eae70d36 ("x86: Move thread_info into task_struct")
and try to do the same.
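
With that done, the riscv thread_info would shrink to roughly this
(sketch -- the task pointer and cpu fields move into task_struct once
you select THREAD_INFO_IN_TASK):

struct thread_info {
	unsigned long		flags;		/* low level flags */
	int			preempt_count;	/* 0 => preemptable, <0 => BUG */
	mm_segment_t		addr_limit;
};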

> +#else /* !CONFIG_MMU */
> +
> +static inline void flush_tlb_all(void)
> +{
> +       BUG();
> +}
> +
> +static inline void flush_tlb_mm(struct mm_struct *mm)
> +{
> +       BUG();
> +}

The NOMMU support is rather incomplete and CONFIG_MMU is
hard-enabled, so I'd just drop any !CONFIG_MMU #ifdefs.

> diff --git a/arch/riscv/include/uapi/asm/Kbuild b/arch/riscv/include/uapi/asm/Kbuild
> new file mode 100644
> index 000000000000..276b6dae745c
> --- /dev/null
> +++ b/arch/riscv/include/uapi/asm/Kbuild
> @@ -0,0 +1,10 @@
> +# UAPI Header export list
> +include include/uapi/asm-generic/Kbuild.asm
> +
> +header-y += auxvec.h
> +header-y += bitsperlong.h
> +header-y += byteorder.h
> +header-y += ptrace.h
> +header-y += sigcontext.h
> +header-y += siginfo.h
> +header-y += unistd.h

Please see
fcc8487d477a ("uapi: export all headers under uapi directories")

and adapt the file accordingly

> +#include <asm-generic/unistd.h>
> +
> +#define __NR_sysriscv  __NR_arch_specific_syscall
> +#ifndef __riscv_atomic
> +__SYSCALL(__NR_sysriscv, sys_sysriscv)
> +#endif

Please make this a straight cmpxchg syscall and remove the multiplexer.
Why does the definition depend on __riscv_atomic rather than the
Kconfig symbol?

       Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 6/7] RISC-V: arch/riscv/kernel
  2017-05-23  0:41 ` [PATCH 6/7] RISC-V: arch/riscv/kernel Palmer Dabbelt
  2017-05-23  2:11   ` Olof Johansson
@ 2017-05-23 13:35   ` Arnd Bergmann
  2017-06-02 23:56     ` Palmer Dabbelt
  2017-05-25 17:05   ` Pavel Machek
  2 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-05-23 13:35 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: Linux Kernel Mailing List, Olof Johansson, albert

> +IRQCHIP_DECLARE(riscv, "riscv,cpu-intc", riscv_intc_init);

Please move the majority of this file into drivers/irqchip as a
standalone driver.

> diff --git a/arch/riscv/kernel/pci.c b/arch/riscv/kernel/pci.c
> new file mode 100644
> index 000000000000..4191a5ffdd67
> --- /dev/null
> +++ b/arch/riscv/kernel/pci.c
> @@ -0,0 +1,36 @@
> +/*
> + * Code borrowed from arch/arm64/kernel/pci.c
> + *
> + * Copyright (C) 2003 Anton Blanchard <anton@au.ibm.com>, IBM
> + * Copyright (C) 2014 ARM Ltd.
> + * Copyright (C) 2017 SiFive
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License
> + * version 2 as published by the Free Software Foundation.
> + *
> + */
> +
> +#include <linux/init.h>
> +#include <linux/io.h>
> +#include <linux/kernel.h>
> +#include <linux/mm.h>
> +#include <linux/slab.h>
> +#include <linux/pci.h>
> +
> +/*
> + * Called after each bus is probed, but before its children are examined
> + */
> +void pcibios_fixup_bus(struct pci_bus *bus)
> +{
> +       /* nothing to do, expected to be removed in the future */
> +}
> +/*
> + * We don't have to worry about legacy ISA devices, so nothing to do here
> + */
> +resource_size_t pcibios_align_resource(void *data, const struct resource *res,
> +                               resource_size_t size, resource_size_t align)
> +{
> +       return res->start;
> +}

Can you add a patch to remove the need for this, and send that to the
PCI maintainers?

In the long run, I think we want both of these to be pci host bridge
driver specific callbacks rather than per-architecture definitions, but
for the moment, moving the empty version as a __weak copy
into drivers/pci/ should be sufficient.

[note: don't ever use __weak elsewhere, the use in PCI is
 only done for historic reasons and we want to get rid of that
 too, but for now it's more important to avoid adding yet another
 pointless copy]

If you don't care about LPC/ISA devices, then your PCI_MIN_IO
should also be zero instead of 0x1000

> diff --git a/arch/riscv/kernel/plic.c b/arch/riscv/kernel/plic.c
> new file mode 100644
> index 000000000000..5b3d4241f4e2
> --- /dev/null
> +++ b/arch/riscv/kernel/plic.c

drivers/irqchip/riscv-plic.c

The file needs some work for following coding style, once that
is done, please submit to the irqchip maintainers.

> +#define PLIC_HART_CONTEXT(data, i)     (struct plic_hart_context *)((char*)data->reg + HART_BASE + HART_SIZE*i)
> +#define PLIC_ENABLE_CONTEXT(data, i)   (struct plic_enable_context *)((char*)data->reg + ENABLE_BASE + ENABLE_SIZE*i)
> +#define PLIC_PRIORITY(data)            (struct plic_priority *)((char *)data->reg + PRIORITY_BASE)
> +
> +struct plic_hart_context {
> +       volatile u32 threshold;
> +       volatile u32 claim;
> +};
> +
> +struct plic_enable_context {
> +       atomic_t mask[32]; // 32-bit * 32-entry
> +};
> +
> +struct plic_priority {
> +       volatile u32 prio[MAX_DEVICES];
> +};

The 'volatile' seems misplaced here. What is it for?

> +// TODO: add a /sys interface to set priority + per-hart enables for steering

No driver-private sysfs interfaces for irqchips, please.
See http://elixir.free-electrons.com/linux/latest/source/Documentation/IRQ-affinity.txt
for setting the affinity.

> diff --git a/arch/riscv/kernel/reset.c b/arch/riscv/kernel/reset.c
> new file mode 100644
> index 000000000000..58bad9598e21
> --- /dev/null
> +++ b/arch/riscv/kernel/reset.c
> @@ -0,0 +1,33 @@
> +/*
> + * Copyright (C) 2012 Regents of the University of California
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful, but
> + *   WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
> + *   NON INFRINGEMENT.  See the GNU General Public License for
> + *   more details.
> + */
> +
> +#include <linux/reboot.h>
> +#include <linux/export.h>
> +#include <asm/sbi.h>
> +
> +void (*pm_power_off)(void) = machine_power_off;
> +EXPORT_SYMBOL(pm_power_off);
> +
> +void machine_restart(char *cmd)
> +{
> +}

Call do_kernel_restart(cmd) here.
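
i.e. (sketch):

void machine_restart(char *cmd)
{
	do_kernel_restart(cmd);
	while (1);
}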

> +void machine_halt(void)
> +{
> +}

This should not return. Either make it call sbi_shutdown as well,
or use the ARM implementation:

void machine_halt(void)
{
        local_irq_disable();
        smp_send_stop();
        while (1);
}

> diff --git a/arch/riscv/kernel/sbi-con.c b/arch/riscv/kernel/sbi-con.c
> new file mode 100644
> index 000000000000..86baeb5ef0cd
> --- /dev/null
> +++ b/arch/riscv/kernel/sbi-con.c

As Olof said, move it to drivers/tty/hvc/ and use those helpers.

> diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
> new file mode 100644
> index 000000000000..3e07308e24f5
> --- /dev/null
> +++ b/arch/riscv/kernel/sys_riscv.c
> @@ -0,0 +1,85 @@
> +/*
> + * Copyright (C) 2012 Regents of the University of California
> + * Copyright (C) 2014 Darius Rad <darius@bluespec.com>
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful, but
> + *   WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
> + *   NON INFRINGEMENT.  See the GNU General Public License for
> + *   more details.
> + */
> +
> +#include <linux/syscalls.h>
> +#include <asm/unistd.h>
> +
> +SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
> +       unsigned long, prot, unsigned long, flags,
> +       unsigned long, fd, off_t, offset)
> +{
> +       if (unlikely(offset & (~PAGE_MASK)))
> +               return -EINVAL;
> +       return sys_mmap_pgoff(addr, len, prot, flags, fd, offset >> PAGE_SHIFT);
> +}
> +
> +#ifndef CONFIG_64BIT
> +SYSCALL_DEFINE6(mmap2, unsigned long, addr, unsigned long, len,
> +       unsigned long, prot, unsigned long, flags,
> +       unsigned long, fd, off_t, offset)
> +{
> +       /* Note that the shift for mmap2 is constant (12),
> +          regardless of PAGE_SIZE */
> +       if (unlikely(offset & (~PAGE_MASK >> 12)))
> +               return -EINVAL;
> +       return sys_mmap_pgoff(addr, len, prot, flags, fd,
> +               offset >> (PAGE_SHIFT - 12));
> +}
> +#endif /* !CONFIG_64BIT */

The first one should be CONFIG_64BIT only.

> +#ifdef CONFIG_RV_SYSRISCV_ATOMIC
> +SYSCALL_DEFINE4(sysriscv, unsigned long, cmd, unsigned long, arg1,
> +       unsigned long, arg2, unsigned long, arg3)
> +{
> +       unsigned long flags;
> +       unsigned long prev;
> +       unsigned int *ptr;
> +       unsigned int err;
> +
> +       switch (cmd) {
> +       case RISCV_ATOMIC_CMPXCHG:
> +               ptr = (unsigned int *)arg1;
> +               if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned int)))
> +                       return -EFAULT;
> +
> +               preempt_disable();
> +               raw_local_irq_save(flags);
> +               err = __get_user(prev, ptr);
> +               if (likely(!err && prev == arg2))
> +                       err = __put_user(arg3, ptr);
> +               raw_local_irq_restore(flags);
> +               preempt_enable();
> +
> +               return unlikely(err) ? err : prev;
> +
> +       case RISCV_ATOMIC_CMPXCHG64:

Make these two separate syscalls and get rid of the wrapper
(I already mentioned it in the header file comments, but it
fits better here).

It may be good to have an optimized version in the vdso
that does an atomic operation directly if the CPU supports
it.
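
For the syscall side, you can basically keep the body you have and just
give each operation its own entry point, e.g. (sketch, the syscall name
is made up):

SYSCALL_DEFINE3(riscv_cmpxchg32, u32 __user *, ptr, u32, oldval, u32, newval)
{
	unsigned long flags;
	u32 prev;
	int err;

	if (!access_ok(VERIFY_WRITE, ptr, sizeof(u32)))
		return -EFAULT;

	preempt_disable();
	raw_local_irq_save(flags);
	err = __get_user(prev, ptr);
	if (likely(!err && prev == oldval))
		err = __put_user(newval, ptr);
	raw_local_irq_restore(flags);
	preempt_enable();

	return unlikely(err) ? err : prev;
}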

> diff --git a/arch/riscv/kernel/time.c b/arch/riscv/kernel/time.c
> new file mode 100644
> index 000000000000..ce8c459fadaa
> --- /dev/null
> +++ b/arch/riscv/kernel/time.c

drivers/clocksource/riscv-timer.c, and submit it to the
respective maintainers.

      Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
  2017-05-23  5:23   ` Olof Johansson
@ 2017-05-23 15:29     ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-23 15:29 UTC (permalink / raw)
  To: olof; +Cc: linux-kernel, Arnd Bergmann, albert

On Mon, 22 May 2017 22:23:17 PDT (-0700), olof@lixom.net wrote:
> (new top-level subthread here since this is a separate topic):
>
> On Mon, May 22, 2017 at 5:41 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>
>> diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
>> new file mode 100644
>> index 000000000000..07ef200e0675
>> --- /dev/null
>> +++ b/arch/riscv/Makefile
>> @@ -0,0 +1,64 @@
>> +# This file is included by the global makefile so that you can add your own
>> +# architecture-specific flags and dependencies. Remember to do have actions
>> +# for "archclean" and "archdep" for cleaning up and making dependencies for
>> +# this architecture
>> +#
>> +# This file is subject to the terms and conditions of the GNU General Public
>> +# License.  See the file "COPYING" in the main directory of this archive
>> +# for more details.
>> +#
>> +
>> +LDFLAGS         :=
>> +OBJCOPYFLAGS    := -O binary
>> +LDFLAGS_vmlinux :=
>> +KBUILD_AFLAGS_MODULE += -fPIC
>> +KBUILD_CFLAGS_MODULE += -fPIC
>> +
>> +ifeq ($(ARCH),riscv)
>> +       KBUILD_DEFCONFIG = riscv64_spike
>> +else
>> +       KBUILD_DEFCONFIG = $(ARCH)_spike
>> +endif
>> +
>> +export BITS
>> +ifeq ($(CONFIG_64BIT),y)
>> +       BITS := 64
>> +       UTS_MACHINE := riscv64
>> +
>> +       KBUILD_CFLAGS += -mabi=lp64
>> +       KBUILD_AFLAGS += -mabi=lp64
>> +       KBUILD_MARCH = rv64im
>> +       LDFLAGS += -melf64lriscv
>> +else
>> +       BITS := 32
>> +       UTS_MACHINE := riscv32
>> +
>> +       KBUILD_CFLAGS += -mabi=ilp32
>> +       KBUILD_AFLAGS += -mabi=ilp32
>> +       KBUILD_MARCH = rv32im
>> +       LDFLAGS += -melf32lriscv
>> +endif
>> +
>> +ifeq ($(CONFIG_RV_ATOMIC),y)
>> +       KBUILD_RV_ATOMIC = a
>> +endif
>> +
>> +KBUILD_CFLAGS += -Wall
>> +
>> +ifeq ($(CONFIG_RVC),y)
>> +       KBUILD_RVC = c
>> +endif
>> +
>> +KBUILD_AFLAGS += -march=$(KBUILD_MARCH)$(KBUILD_RV_ATOMIC)fd$(KBUILD_RVC)
>> +
>> +KBUILD_CFLAGS += -march=$(KBUILD_MARCH)$(KBUILD_RV_ATOMIC)$(KBUILD_RVC)
>> +KBUILD_CFLAGS += -mno-save-restore
>> +KBUILD_CFLAGS += -mstrict-align
>
> I built a vanilla gcc-7.1.0 here, with 'riscv64-linux' as target, and I get:
>
> riscv64-linux-gcc: error: unrecognized command line option
> '-mstrict-align'; did you mean '-Wstrict-aliasing'?
>
>
> The suggestion seems completely bogus, but the error is real. Looking
> at the gcc sources, I only see strict-align plumbed up on rs6000,
> aarch64, m68k(!) and v850. Or am I missing something here?

We didn't get the "-mstrict-align" patch for RISC-V into the 7.1.0 release.
The boot loader install machine-mode unaligned access handlers, so in theory
you shouldn't need this at all -- it snuck its way in as a debugging attempt
because the Radeon driver was exhibiting some weird behavior.

I'll drop this from the next patch set.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: RISC-V Linux Port v1
  2017-05-23  6:45     ` Tobias Klauser
@ 2017-05-23 15:44       ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-23 15:44 UTC (permalink / raw)
  To: tklauser; +Cc: olof, linux-kernel, Arnd Bergmann, albert, patches

On Mon, 22 May 2017 23:45:34 PDT (-0700), tklauser@distanz.ch wrote:
> Hi Palmer,
>
> On 2017-05-23 at 05:36:55 +0200, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> On Mon, 22 May 2017 18:16:20 PDT (-0700), olof@lixom.net wrote:
>> > On Mon, May 22, 2017 at 5:41 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> >> We'd like to submit for inclusion in Linux a port for the RISC-V architecture.
>> >> While it is doubtlessly not complete, we think it is far enough along to start
>> >> the upstreaming process.  Our binutils and GCC ports have been accepted and
>> >> released, and we plan on submitting glibc patches soon.
>> >>
>> >> This port targets Version 1.10 of the RISC-V Privileged ISA, and supports both
>> >> the RV32 and RV64 user ISAs.  The RISC-V community and the 60-some member
>> >> companies of the RISC-V Foundation are quite eager to have a single, standard
>> >> Linux port.  We thank you in advance for your help in this process and for your
>> >> feedback on the software contribution itself.
>> >>
>> >> These patches build and boot on top of 4.12-rc2.  I understand that the merge
>> >> window is closed, but it was suggested that the best time to submit a new
>> >> architecture port would be right after an RC2 as the earliest point at which
>> >> the tree is usually generally churn-free enough.  While we optimistically hope
>> >> that we can get the port in for the 4.13 merge window, we're also eager to
>> >> ensure that the user-visible ABI is sane so we can proceed with our glibc port.
>> >> We'd like to at least get any user ABI issues shaken out as soon as possible,
>> >> even if we don't make it into 4.13.
>
> [...]
>
>> > I'll add more comments on some of the individual patches; expect this
>> > review to take a little while. Reposting once or twice a week to show
>> > incorporated changes can be useful; more than that and it can be
>> > harder to follow along in the discussion. It all depends on how much
>> > comments you end up receiving.
>>
>> OK.  I'll incorporate all the feedback I get over the next week or so into a v2
>> patch set.
>
> You might want to Cc linux-arch@vger.kernel.org on future iterations of
> this patchset where there's less "noise" than on LKML and the relevant
> people are more likely to notice ;) Likewise, the device-tree specific
> bits (e.g.  the bindings documentation) should probably be Cc'ed to
> devicetree@vger.kernel.org

OK, thanks.  I'll do that for the v2.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [patches] Re: [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
  2017-05-23  5:16       ` [patches] " Olof Johansson
@ 2017-05-23 21:07         ` Benjamin Herrenschmidt
  0 siblings, 0 replies; 283+ messages in thread
From: Benjamin Herrenschmidt @ 2017-05-23 21:07 UTC (permalink / raw)
  To: Olof Johansson, patches; +Cc: linux-kernel, Arnd Bergmann, albert

On Mon, 2017-05-22 at 22:16 -0700, Olof Johansson wrote:
> The same is true for some other drivers. Actually, I wonder if it
> might be just as easy to implement a sbi backend for hvc -- see
> hvc_udbg.c for an example where, on power, you have a simple get/put
> char hypervisor call in a very similar manner.

Rather look at hvc_opal. This is the console driver we use on native
POWER servers with the OPAL firmware, and it calls into a firmware
in a very similar way.

The driver lives in drivers/tty/hvc. It does call some "helpers" in
the arch code that wrap the actual FW calls.

> Either way (keeping discrete sbi driver or implementing hvc backend),
> moving to drivers/tty is the right thing here -- we've worked hard on
> ARM to get rid of random drivers under arch/ and it'd be nice to not
> see new ones intoduced here.

Yup. The reason mostly is that if the tty maintainer needs to do a
subsystem-wide change, he can address all drivers in drivers/tty and
doesn't have to look for others elsewhere in the tree. This is the same
for all subsystems.

Cheers,
Ben.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 4/7] RISC-V: arch/riscv/include
  2017-05-23 12:55   ` Arnd Bergmann
@ 2017-05-23 21:23     ` Benjamin Herrenschmidt
  2017-06-03  2:00       ` Palmer Dabbelt
  2017-06-01  0:56     ` Palmer Dabbelt
  1 sibling, 1 reply; 283+ messages in thread
From: Benjamin Herrenschmidt @ 2017-05-23 21:23 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Linux Kernel Mailing List, Olof Johansson, albert, Arnd Bergmann

On Tue, 2017-05-23 at 14:55 +0200, Arnd Bergmann wrote:
> > +
> > +#include <asm-generic/io.h>
> 
> I would recommend providing your own {read,write}{b,w,l,q}{,_relaxed}
> helpers using inline assembly, to prevent the compiler for breaking
> up accesses into byte accesses.
> 
> Also, most architectures require to some synchronization after a
> non-relaxed readl() to prevent prefetching of DMA buffers, and
> before a writel() to flush write buffers when a DMA gets triggered.

Right, I was about to comment on that one.

The question, Palmer, is about the ordering semantics of non-cached
storage.

What kind of ordering is provided architecturally ? Especially
between cachable and non-cachable loads and stores ?

Also, you have PCIe right ? What is the behaviour of MSIs ?

Does your HW provide a guarantee that in the case of a series of DMA
writes to memory by a device followed by an MSI, the CPU getting the
MSI will only get it after all the previous DMA writes have reached
coherency ? (Unlike LSIs where the driver is required to do an MMIO
read from the device, MSIs are expected to be ordered with data).

Another things with the read*() accessors. It's not uncommon for
a driver to do:

	writel(1, reset_reg);
	readl(reset_reg); /* flush posted writes */
	udelay(10);
	writel(0, reset_reg);

Now, in the above case, what can typically happen if you aren't careful
is that the readl, which is intended to "push" the previous writel, will
not actually do its job because the return value hasn't been "consumed"
by the processor. Thus, the CPU will stick that on some kind of load
queue and won't actually wait for the return value before hitting the
delay loop.

Thus you might end up in a situation where the writel of 1 to the
device is itself reaching the device way after you started the delay
loop, and thus end up violating the delay requirement of the HW.

On powerpc we solve that by using a special instruction construct
inside the read* accessors that prevents the CPU from executing
subsequent instructions until the read value has been returned.

You may want to consider something similar.
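
For reference, the powerpc accessors do roughly the following
(paraphrased and simplified, not the literal arch/powerpc source): the
"twi 0,%0,0" is a trap that never fires but consumes the loaded value,
and the "isync" keeps later instructions from issuing until that trap
condition has been resolved, which requires the load data to be back.

static inline u32 mmio_read32(const volatile void __iomem *addr)
{
	u32 val;

	__asm__ __volatile__("sync; lwzx %0,0,%1; twi 0,%0,0; isync"
			     : "=r" (val) : "r" (addr) : "memory");
	return val;
}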

Cheers,
Ben.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 5/7] RISC-V: arch/riscv/lib
  2017-05-23 10:47   ` Geert Uytterhoeven
@ 2017-05-23 22:07     ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-23 22:07 UTC (permalink / raw)
  To: geert; +Cc: linux-kernel, Arnd Bergmann, olof, albert

On Tue, 23 May 2017 03:47:34 PDT (-0700), geert@linux-m68k.org wrote:
> Hi Palmer,
>
> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>  arch/riscv/lib/Makefile  |   5 ++
>>  arch/riscv/lib/ashldi3.c |  42 ++++++++++++++++
>
> At least this one has already two identical copies in arch/score/lib/ashldi3.c
> and arch/sh/lib/ashldi3.c. Probably these should be moved to lib/, and built
> depending on a new config symbol that is selected on score, sh, and riscv.
>
> Didn't check the others.

Thanks.  It looks like there's actually a lot of these on many architectures.
I have another patch set to correct all of these, which you're To'd on; I'll just
pick up the first patch in the set for v2:

https://lkml.org/lkml/2017/5/23/1280

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 6/7] RISC-V: arch/riscv/kernel
  2017-05-23  2:11   ` Olof Johansson
@ 2017-05-25  1:59     ` Palmer Dabbelt
  2017-05-25 19:51       ` Arnd Bergmann
  0 siblings, 1 reply; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-25  1:59 UTC (permalink / raw)
  To: olof; +Cc: linux-kernel, Arnd Bergmann, albert

On Mon, 22 May 2017 19:11:35 PDT (-0700), olof@lixom.net wrote:
> On Mon, May 22, 2017 at 5:41 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> What's missing from this patchset (ideally) is a good writeup under
> DOcumentation/ on expectations of system state (and/or configuration)
> upon entry of the kernel. For comparison, see the arm64 documentation
> where they were quite specific in this.

We don't have a spec written for this yet, but one is being written.  Is it OK
if I wait until there's a spec?

> This patch is also pushing size limits, and is getting unwieldy to
> comment on. I'll point out a few things below with plenty of snipped
> out lines.

I'm also going to start dropping diffs, as this is very big.

>> +++ b/arch/riscv/kernel/asm-offsets.c
>> @@ -0,0 +1,113 @@
>> +/*
>> + * Copyright (C) 2012 Regents of the University of California
>> + *
>> + *   This program is free software; you can redistribute it and/or
>> + *   modify it under the terms of the GNU General Public License
>> + *   as published by the Free Software Foundation, version 2.
>> + *
>> + *   This program is distributed in the hope that it will be useful, but
>> + *   WITHOUT ANY WARRANTY; without even the implied warranty of
>> + *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
>> + *   NON INFRINGEMENT.  See the GNU General Public License for
>> + *   more details.
>
> Hmm, I haven't seen these terms used often, but they seem to exist
> around the tree in a few places. arch/tile is littered with them.
>
> I am not a lawyer, but I can't seem to find any reference to "good title" in
> the GPLv2 text.
>
> Rather than having to go through the process of figuring out if this
> license header is acceptable or not, you might find it easier to just
> go with something more established.

This almost certainly came from Tilera: I stole our ptrace from there because
that was the ISA I understood best, and that header must have proliferated
everywhere else.

I've changed it to a header I copied from ARM

  https://github.com/riscv/riscv-linux/commit/ccc4f51b40b28adf01b14ed6578bf26dc02f1425

>> +static int __populate_cache_leaves(unsigned int cpu)
>> +{
>> +       struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
>> +       struct cacheinfo *this_leaf = this_cpu_ci->info_list;
>> +       struct device_node *np = of_cpu_device_node_get(cpu);
>> +       int levels = 1, level = 1;
>> +
>> +       if (of_property_read_bool(np, "cache-size"))   ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
>> +       if (of_property_read_bool(np, "i-cache-size")) ci_leaf_init(this_leaf++, np, CACHE_TYPE_INST, level);
>> +       if (of_property_read_bool(np, "d-cache-size")) ci_leaf_init(this_leaf++, np, CACHE_TYPE_DATA, level);
>
> Please run checkpatch, kernel coding style doesn't use one-line ifs
> (here nor elsewhere).

I went through and fixed many of the checkpatch messages.  The ones that are
left fall into the following categories:

 * Lots of uses of BUG/BUG_ON instead of WARN_ON.  Lots of these are in boot
   code, but some of them can probably be fixed.

 * Parens around single-statement __asm__ macros.  For these I also get a
   message when they're wrapped in "do {} while (0)", so I'm not sure what else
   to do.

 * Parens around macros like "#define RISCV_PTR .dword".  These can't have
   parens because they go directly to the assembler, so I'm considering this a
   false-positive.

 * Warnings about volatile in function declarations in bitops.h.  These are
   copied from other architectures.  There were a handful of other volatiles
   that I fixed, but I think these should stay.

 * Definitions like ARCH_HAS_SETUP_ADDITIONAL_PAGES, these are also present in
   other architectures.

 * We added new typedefs, I can remove these if that's a problem.  They're
   there to match our other code (bootloader and simulator).

 * A handful of lines over 80 characters that I think are onerous to break any
   more.

 * Some warnings about printk() not having a KERN_ prefix.  I fixed a handful
   of these, but the remaining ones I don't know how to fix (in show_regs, for
   example, where arm64 also has them).

 * Extern declarations in C files, all of which link to symbols in assembly or
   linker scripts.  These were copied from other architectures.

There's also a bunch of false positives:

 * The spelling of SEPC, which is correct (Supervisor Exception Program
   Counter).

 * Fall-through warnings, probably getting confused by the break looking like
   "break; \" (they're in macros).

I'll make another pass on these before a v2 patch set.

>> +/* Return -1 if not a valid hart */
>> +int riscv_of_processor_hart(struct device_node *node)
>> +{
>> +       const char *isa, *status;
>> +       u32 hart;
>> +
>> +       if (!of_device_is_compatible(node, "riscv")) return -1;
>> +       if (of_property_read_u32(node, "reg", &hart) || hart >= NR_CPUS) return -1;
>> +       if (of_property_read_string(node, "status", &status) || strcmp(status, "okay")) return -1;
>> +       if (of_property_read_string(node, "riscv,isa", &isa) || isa[0] != 'r' || isa[1] != 'v') return -1;
>> +
>> +       return hart;
>> +}
>
> We usually prefer to see real -E<foo> returns instead of -1 in the kernel.

Makes sense.  https://github.com/riscv/riscv-linux/commit/10ef72b2aa16b2b69f9f349cffc06d12e183a56e

>> +asmlinkage void __irq_entry do_IRQ(unsigned int cause, struct pt_regs *regs)
>> +{
>> +       struct pt_regs *old_regs = set_irq_regs(regs);
>> +       irq_enter();
>> +
>> +       /* There are three classes of interrupt: timer, software, and
>> +          external devices.  We dispatch between them here.  External
>> +          device interrupts use the generic IRQ mechanisms. */
>> +       switch (cause) {
>> +               case INTERRUPT_CAUSE_TIMER:
>> +                       riscv_timer_interrupt();
>> +                       break;
>> +               case INTERRUPT_CAUSE_SOFTWARE:
>> +                       riscv_software_interrupt();
>> +                       break;
>> +               default: {
>> +                       struct irq_domain *domain = per_cpu(riscv_irq_data, smp_processor_id()).domain;
>
> Move this up to top of function and remove the { } wrap, please.

https://github.com/riscv/riscv-linux/commit/17879136caa05ed9f686736b3343f4b2063920ab

>> +static void riscv_irq_enable(struct irq_data *d)
>> +{
>> +       struct riscv_irq_data *data = irq_data_get_irq_chip_data(d);
>> +       atomic_long_or((1 << (long)d->hwirq), &per_cpu(riscv_early_sie, data->hart));
>
> This is a bit dense to get into without a few words of how it's
> expected to work.

OK, how does this look? https://github.com/riscv/riscv-linux/commit/112fd2d882c2363508a660061da558d772a4ff0b

>> +static int riscv_intc_init(struct device_node *node, struct device_node *parent)
>> +{
>> +       int hart;
>> +
>> +       if (parent) return 0; // should have no interrupt parent
>> +
>> +       if ((hart = riscv_of_processor_hart(node->parent)) >= 0) {
>
> Common pattern in kernel is to detect error instead:
>     hart = riscv_of_processor_hart(node->parent);
>     if (hart < 0) {
>         <from your else side here>
>         return 0;
>     }
>     <body if your if statement here>
>
>     return 0;

OK.  I've fixed this one here

  https://github.com/riscv/riscv-linux/commit/5f48d9ba0d3cd19dc0bf95f66370de4cfcd84cca

I'll try to remember to fix any others that I come across.
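
With that pattern the init function ends up looking roughly like this (body
elided, just the control flow):

  static int riscv_intc_init(struct device_node *node,
                             struct device_node *parent)
  {
          int hart;

          /* this controller should have no interrupt parent */
          if (parent)
                  return 0;

          hart = riscv_of_processor_hart(node->parent);
          if (hart < 0)
                  return 0;

          /* ... per-hart interrupt controller setup as before ... */
          return 0;
  }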

>> diff --git a/arch/riscv/kernel/pci.c b/arch/riscv/kernel/pci.c
>> new file mode 100644
>> index 000000000000..4191a5ffdd67
>> --- /dev/null
>> +++ b/arch/riscv/kernel/pci.c
>> @@ -0,0 +1,36 @@
>> +/*
>> + * Code borrowed from arch/arm64/kernel/pci.c
>
> So, you should add recursive reference from there (i.e. powerpc).
>
> But in the end, there's essentially no code in this file. :)

Well, now there's more copyright notices than lines of code... :)

  https://github.com/riscv/riscv-linux/commit/5ad312f755935319fdbb6739377b400ea81cd2ec

>> +#define MAX_DEVICES    1024 // 0 is reserved
>
> Seems like an odd comment to have here (and should probably not go at
> the end of the line)

Device 0 in the PLIC is reserved to mean "no device", which means "MAX_DEVICES"
is a bit of an odd name (there can only be MAX_DEVICES-1 devices).  I've added
a larger comment to describe this better.

  https://github.com/riscv/riscv-linux/commit/9d16413051dd86db0fb8a792f3d5f05ce788d145
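
Paraphrasing the new comment:

  /*
   * Device 0 in the PLIC is reserved to mean "no device", so while the
   * register layout has room for MAX_DEVICES interrupt sources, only
   * MAX_DEVICES - 1 of them can correspond to real devices.
   */
  #define MAX_DEVICES     1024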

>> +#define PLIC_HART_CONTEXT(data, i)     (struct plic_hart_context *)((char*)data->reg + HART_BASE + HART_SIZE*i)
>> +#define PLIC_ENABLE_CONTEXT(data, i)   (struct plic_enable_context *)((char*)data->reg + ENABLE_BASE + ENABLE_SIZE*i)
>> +#define PLIC_PRIORITY(data)            (struct plic_priority *)((char *)data->reg + PRIORITY_BASE)
>
> Since you have typecasting and stuff here, small static inlines with
> appropriate return types seems slightly tidier.

OK.  https://github.com/riscv/riscv-linux/commit/c416649b203276d944be595d87caf042c998727f
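
Roughly, the macros become (a sketch; the struct plic_data parameter type is
my assumption here, and the offsets are the ones from the macros above):

  static inline struct plic_hart_context *
  plic_hart_context(struct plic_data *data, int i)
  {
          return (struct plic_hart_context *)
                  ((char *)data->reg + HART_BASE + HART_SIZE * i);
  }

  static inline struct plic_enable_context *
  plic_enable_context(struct plic_data *data, int i)
  {
          return (struct plic_enable_context *)
                  ((char *)data->reg + ENABLE_BASE + ENABLE_SIZE * i);
  }

  static inline struct plic_priority *plic_priority(struct plic_data *data)
  {
          return (struct plic_priority *)((char *)data->reg + PRIORITY_BASE);
  }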

>> +static void plic_chained_handle_irq(struct irq_desc *desc)
>> +{
>> +        struct plic_handler *handler = irq_desc_get_handler_data(desc);
>> +       struct irq_chip *chip = irq_desc_get_chip(desc);
>
> Whitespace.

These were fixed along with the other checkpatch messages.


> [wrapping up review of this patch at this point to keep size down]
>
>
> -Olof

Thanks for the comments!  I'll batch these up into a v2 when I'm done with
everyone's comments from this round.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
  2017-05-23 10:51   ` Geert Uytterhoeven
@ 2017-05-25  1:59     ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-25  1:59 UTC (permalink / raw)
  To: geert; +Cc: linux-kernel, Arnd Bergmann, olof, albert

On Tue, 23 May 2017 03:51:24 PDT (-0700), geert@linux-m68k.org wrote:
> Hi Palmer,
>
> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> --- /dev/null
>> +++ b/arch/riscv/Kconfig
>> @@ -0,0 +1,300 @@
>
>> +config PLIC
>> +       bool "Platform-Level Interrupt Controller"
>> +       default y
>> +       help
>> +          This enables support for the PLIC chip found in standard RISC-V
>> +          systems. The PLIC is the top-most interrupt controller found in
>> +          the system, connected directly to the core complex. All other
>> +          interrupt sources (MSI, GPIO, etc) are subordinate to the PLIC.
>> +
>> +          If you don't know what to do here, say Y.
>> +
>
> I think this symbol belongs in drivers/irqchip/Kconfig, and the corresponding
> driver in drivers/irqchip/.

OK, makes sense.

  https://github.com/riscv/riscv-linux/commit/81e5c9e581c318ea99192d5c1aaf97c197e79272

I'll batch this up with all the other code review comments for a v2.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 5/7] RISC-V: arch/riscv/lib
  2017-05-23 11:19   ` Arnd Bergmann
@ 2017-05-25  1:59     ` Palmer Dabbelt
  2017-05-26  9:06       ` Arnd Bergmann
  0 siblings, 1 reply; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-25  1:59 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: linux-kernel, olof, albert

On Tue, 23 May 2017 04:19:42 PDT (-0700), Arnd Bergmann wrote:
> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
>> new file mode 100644
>> index 000000000000..f644e582f4b8
>> --- /dev/null
>> +++ b/arch/riscv/lib/Makefile
>
>> +
>> +void __delay(unsigned long cycles)
>> +{
>> +       u64 t0 = get_cycles();
>> +
>> +       while ((unsigned long)(get_cycles() - t0) < cycles)
>> +               cpu_relax();
>> +}
>> +
>> +void udelay(unsigned long usecs)
>> +{
>> +       u64 ucycles = (u64)usecs * timebase;
>> +       do_div(ucycles, 1000000U);
>> +       __delay((unsigned long)ucycles);
>> +}
>> +EXPORT_SYMBOL(udelay);
>> +
>> +void ndelay(unsigned long nsecs)
>> +{
>> +       u64 ncycles = (u64)nsecs * timebase;
>> +       do_div(ncycles, 1000000000U);
>> +       __delay((unsigned long)ncycles);
>> +}
>
> I'd be slightly worried about a global 'timebase' identifier that
> might conflict with a variable in some random driver.

Makes sense.  I've renamed it to riscv_timebase

  https://github.com/riscv/riscv-linux/commit/ed7d769e2c14e8809c3c125e0bba2978cb6fd37b

> Also, it would be good to replace the multiply+div64
> with a single multiplication here, see how x86 and arm do it
> (for the tsc/__timer_delay case).

Makes sense.  I think this should do it

  https://github.com/riscv/riscv-linux/commit/d397332f6ebff42f3ecb385e9cf3284fdeda6776

but I'm finding this hard to test as this only works for 2ms sleeps.  It seems
at least in the right ballpark

  [    0.048000] before 1000x usleep 1000
  [    1.060000] before 1000x nsleep 1000000
  [    2.072000] done

>       Arnd

Thanks for the feedback.  I'll incorporate this along with all the other
feedback into a v2.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 6/7] RISC-V: arch/riscv/kernel
  2017-05-23  0:41 ` [PATCH 6/7] RISC-V: arch/riscv/kernel Palmer Dabbelt
  2017-05-23  2:11   ` Olof Johansson
  2017-05-23 13:35   ` Arnd Bergmann
@ 2017-05-25 17:05   ` Pavel Machek
  2017-06-03  3:32     ` Palmer Dabbelt
  2 siblings, 1 reply; 283+ messages in thread
From: Pavel Machek @ 2017-05-25 17:05 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: linux-kernel, Arnd Bergmann, olof, albert

[-- Attachment #1: Type: text/plain, Size: 1346 bytes --]

Hi!

> +static void ci_leaf_init(struct cacheinfo *this_leaf,
> +                         struct device_node *node,
> +                         enum cache_type type, unsigned int level)
> +{
> +        this_leaf->of_node = node;
> +        this_leaf->level = level;
> +        this_leaf->type = type;
> +        this_leaf->physical_line_partition = 1; // not a sector cache
> +        this_leaf->attributes = CACHE_WRITE_BACK | CACHE_READ_ALLOCATE | CACHE_WRITE_ALLOCATE; // TODO: add to DTS
> +}

You may want to run the patches through checkpatch. (Comment style,
long lines).

> +static int __populate_cache_leaves(unsigned int cpu)
> +{
> +	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
> +	struct cacheinfo *this_leaf = this_cpu_ci->info_list;
> +	struct device_node *np = of_cpu_device_node_get(cpu);
> +	int levels = 1, level = 1;
> +
> +	if (of_property_read_bool(np, "cache-size"))   ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
> +	if (of_property_read_bool(np, "i-cache-size")) ci_leaf_init(this_leaf++, np, CACHE_TYPE_INST, level);
> +	if (of_property_read_bool(np, "d-cache-size")) ci_leaf_init(this_leaf++, np, CACHE_TYPE_DATA, level);


-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 6/7] RISC-V: arch/riscv/kernel
  2017-05-25  1:59     ` Palmer Dabbelt
@ 2017-05-25 19:51       ` Arnd Bergmann
  2017-06-06  4:56         ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-05-25 19:51 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: Olof Johansson, Linux Kernel Mailing List, albert

On Thu, May 25, 2017 at 3:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> On Mon, 22 May 2017 19:11:35 PDT (-0700), olof@lixom.net wrote:

>  * Parens around single-statement __asm__ macros.  For these I also get a
>    message when they're wrapped in "do {} while (0)", so I'm not sure what else
>    to do.

I would generally recommend using inline functions for those, and only do
macros when you need them.
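
For a single-statement asm that would be e.g. (wfi used purely as an
illustrative example here):

  static inline void wait_for_interrupt(void)
  {
          /* same effect as the macro, but type-checked and free of
           * parenthesization issues */
          __asm__ __volatile__ ("wfi");
  }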

>  * Parens around macros like "#define RISCV_PTR .dword".  These can't have
>    parens because they go directly to the assembler, so I'm considering this a
>    false-positive.

agreed

>  * Warnings about volatile in function declarations in bitops.h.  These are
>    copied from other architectures.  There were a handful of other volatiles
>    that I fixed, but I think these should stay.

Agreed, bitops.h is one of the few headers that should use 'volatile'.

>  * Definitions like ARCH_HAS_SETUP_ADDITIONAL_PAGES, these are also present in
>    other architectures.

What is the warning here? I would assume that you should leave this
unchanged as well.

>  * We added new typedefs, I can remove these if that's a problem.  They're
>    there to match our other code (bootloader and simulator).

It depends. What typedefs are those? Removing the typedefs in both
the kernel and the other code that uses the same types is likely the
best option here.

>  * A handful of lines over 80 characters that I think are onerous to break any
>    more.

Right, don't worry about it too much, and use common sense for this
warning.

>  * Some warnings about printk() not having a KERN_ prefix.  I fixed a handful
>    of these, but the remaining ones I don't know how to fix (in show_regs, for
>    example, where arm64 also has them).

KERN_CONT
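
i.e. mark the continuation pieces explicitly, something like (a generic
illustration, the fields printed are up to you):

  /* continuation of the current line instead of an unprefixed printk() */
  printk(KERN_CONT " ra : %016lx sp : %016lx\n", regs->ra, regs->sp);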

>  * Extern declarations in C files, all of which link to symbols in assembly or
>    linker scripts.  These were copied from other architectures.

I would try to fix those by using a header even if there is only one user.
I'd actually like to get a compile-time warning for those in the long run,
maybe with 'make W=1', so better don't introduce new ones.

     Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 5/7] RISC-V: arch/riscv/lib
  2017-05-25  1:59     ` Palmer Dabbelt
@ 2017-05-26  9:06       ` Arnd Bergmann
  2017-06-06  4:56         ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-05-26  9:06 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: Linux Kernel Mailing List, Olof Johansson, albert

On Thu, May 25, 2017 at 3:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> On Tue, 23 May 2017 04:19:42 PDT (-0700), Arnd Bergmann wrote:
>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:

>> Also, it would be good to replace the multiply+div64
>> with a single multiplication here, see how x86 and arm do it
>> (for the tsc/__timer_delay case).
>
> Makes sense.  I think this should do it
>
>   https://github.com/riscv/riscv-linux/commit/d397332f6ebff42f3ecb385e9cf3284fdeda6776
>
> but I'm finding this hard to test as this only works for 2ms sleeps.  It seems
> at least in the right ballpark

+ if (usecs > MAX_UDELAY_US) {
+ __delay((u64)usecs * riscv_timebase / 1000000ULL);
+ return;
+ }

You still do the 64-bit division here. What I meant is to completely
avoid the division and use a multiply+shift.
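
Roughly something like this (only a sketch; the precomputed multiplier and its
name are made up for illustration, computed once at boot as
(riscv_timebase << 16) / USEC_PER_SEC):

  void udelay(unsigned long usecs)
  {
          /* a single 64-bit multiply and a shift per call, no div64 */
          __delay(((u64)usecs * riscv_udelay_mult) >> 16);
  }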

Also, you don't need to base anything on HZ, as you do not rely
on the delay calibration but always use a timer.

       Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64}
  2017-05-23 11:30   ` Arnd Bergmann
@ 2017-05-27  0:57     ` Palmer Dabbelt
  2017-05-29 10:50       ` Arnd Bergmann
  0 siblings, 1 reply; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-27  0:57 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: linux-kernel, olof, albert

On Tue, 23 May 2017 04:30:50 PDT (-0700), Arnd Bergmann wrote:
> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> RISC-V has both 32-bit and 64-bit base ISAs, but they are very similar.
>> Like some other platforms, we'd like to share one arch directory between
>> the two of them.
>
> I think we mainly do the others for backwards-compatibility with ancient
> build scripts, and we don't need that here. Instead, you could add one more
> line to the 'SUBARCH:=' statement that interprets the uname output.

I don't think that does the same thing.  The desired effect of this diff is:

 * "uname -m" when running on a RISC-V machine returns either riscv32 or
   riscv64, as that's what tools like autoconf expect when trying to find
   tuples.

 * I can cross compile for riscv32 and riscv64.  That's currently controlled by
   a Kconfig setting, but ARCH=riscv32 vs ARCH=riscv64 controls what defconfig
   sets.

 * I can natively compile for riscv32 and riscv64.  That uses the same Kconfig
   setting, and the same ARCH=riscv32 vs ARCH=riscv64 switch for defconfig.

Neither of the two Kconfig issues is a big deal, but we do need "uname -m" to
return "riscv64" or "riscv32" not "riscv".  I think the only way to do that is
to set SRCARCH, but I'd be happy to change it if there's a better way.  I think
if I just do this

diff --git a/Makefile b/Makefile
index 0606f28..4adc609 100644
--- a/Makefile
+++ b/Makefile
@@ -232,7 +232,8 @@ SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
                                  -e s/arm.*/arm/ -e s/sa110/arm/ \
                                  -e s/s390x/s390/ -e s/parisc64/parisc/ \
                                  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
-                                 -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ )
+                                 -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
+                                 -e s/riscv.*/riscv/ )

 # Cross compiling and selecting different set of gcc/bin-utils
 # ---------------------------------------------------------------------------
@@ -269,14 +270,6 @@ ifeq ($(ARCH),x86_64)
         SRCARCH := x86
 endif

-# Additional ARCH settings for RISC-V
-ifeq ($(ARCH),riscv32)
-       SRCARCH := riscv
-endif
-ifeq ($(ARCH),riscv64)
-       SRCARCH := riscv
-endif
-
 # Additional ARCH settings for sparc
 ifeq ($(ARCH),sparc32)
        SRCARCH := sparc

then I'll end up with "uname -m" as "riscv" -- I haven't tried it, but that's
why we ended up with this diff in the first place.

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* Re: [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
  2017-05-23 11:46   ` Arnd Bergmann
@ 2017-05-27  0:57     ` Palmer Dabbelt
  2017-05-29 11:17       ` Arnd Bergmann
  0 siblings, 1 reply; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-27  0:57 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: linux-kernel, olof, albert

On Tue, 23 May 2017 04:46:22 PDT (-0700), Arnd Bergmann wrote:
> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> ---
>>  arch/riscv/.gitignore                |  35 ++++
>>  arch/riscv/Kconfig                   | 300 +++++++++++++++++++++++++++++++++++
>>  arch/riscv/Makefile                  |  64 ++++++++
>>  arch/riscv/configs/riscv32_spike     |  47 ++++++
>>  arch/riscv/configs/riscv64_freedom-u |  52 ++++++
>>  arch/riscv/configs/riscv64_qemu      |  64 ++++++++
>>  arch/riscv/configs/riscv64_spike     |  45 ++++++
>>  7 files changed, 607 insertions(+)
>>  create mode 100644 arch/riscv/.gitignore
>>  create mode 100644 arch/riscv/Kconfig
>>  create mode 100644 arch/riscv/Makefile
>>  create mode 100644 arch/riscv/configs/riscv32_spike
>>  create mode 100644 arch/riscv/configs/riscv64_freedom-u
>>  create mode 100644 arch/riscv/configs/riscv64_qemu
>>  create mode 100644 arch/riscv/configs/riscv64_spike
>>
>> diff --git a/arch/riscv/.gitignore b/arch/riscv/.gitignore
>> new file mode 100644
>> index 000000000000..376d06eb5d52
>> --- /dev/null
>> +++ b/arch/riscv/.gitignore
>> @@ -0,0 +1,35 @@
>> +# Now un-ignore all files.
>> +!*
>> +
>> +# But then re-ignore the files listed in the Linux .gitignore
>> +# Normal rules
>> +#
>> +.*
>> +*.o
>> +*.o.*
>> +*.a
>
> This doesn't seem to belong here: There is no reason for riscv
> to be different from all other architectures. Is something wrong
> with the top-level .gitignore? If so, we should just fix it there.

Sorry, this snuck in from how we used to track the kernel.  It's been fixed.

>> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
>> new file mode 100644
>> index 000000000000..510ead1d3343
>> --- /dev/null
>> +++ b/arch/riscv/Kconfig
>> @@ -0,0 +1,300 @@
>> +#
>> +# For a description of the syntax of this configuration file,
>> +# see Documentation/kbuild/kconfig-language.txt.
>> +#
>> +
>> +config RISCV
>> +       def_bool y
>> +       select OF
>> +       select OF_EARLY_FLATTREE
>> +       select OF_IRQ
>> +       select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
>> +       select ARCH_WANT_FRAME_POINTERS
>> +       select CLONE_BACKWARDS
>> +       select COMMON_CLK
>> +       select GENERIC_CLOCKEVENTS
>> +       select GENERIC_CPU_DEVICES
>> +       select GENERIC_IRQ_SHOW
>> +       select GENERIC_PCI_IOMAP
>
> You normally don't want GENERIC_PCI_IOMAP, unless your
> inb()/outb() uses other instructions than your readl()/writel()

We don't have any special instructions for inb/outb, but there is a special
fence to ensure ordering.

It looks like most of my candidates for patterning things after use
GENERIC_PCI_IOMAP, but I ended up setting GENERIC_IOMAP and things still at
least build and boot without any PCI

  https://github.com/riscv/riscv-linux/commit/c43c599b0dd9d15886c03a9dd179f8936b0cbb2e

I'll be sure to check if I broke PCI, as I don't actually know what's going on
here.

>> +config MMU
>> +       def_bool y
>
> Just a general question: has there been any interest in a no-MMU
> version?

One of the fun things about RISC-V is that there's at least some interest in
_everything_... :).  There hasn't been any serious interest, and I don't know
of anyone building systems where no-MMU Linux would run (our DDR phy is a lot
bigger than our MMU), but I wouldn't rule it out.

>> +# even on 32-bit, physical (and DMA) addresses are > 32-bits
>> +config ARCH_PHYS_ADDR_T_64BIT
>> +       def_bool y
>> +
>> +config ARCH_DMA_ADDR_T_64BIT
>> +       def_bool y
>
> Are you required to use 64-bit addressing for RAM on 32-bit
> architectures though? Using 32-bit dma_addr_t and phys_addr_t
> when possible makes some code noticeably more efficient.
>
>> +config PGTABLE_LEVELS
>> +       int
>> +       default 3 if 64BIT
>> +       default 2
>
> With 2-level page tables, you usually can't address much more
> than 32-bit physical memory anyway, so I'd guess that most
> 32-bit chips would actually put their RAM under the 4GB boundary.

We can address 34 bits of physical address space on Sv32 (the 32-bit virtual
addressing scheme in RV32).  If this is a meaningful performance constraint
then we could always make this configurable.

>> +config RV_ATOMIC
>> +       bool "Use atomic memory instructions (RV32A or RV64A)"
>> +       default y
>> +
>> +config RV_SYSRISCV_ATOMIC
>> +       bool "Include support for atomic operation syscalls"
>> +       default n
>> +       help
>> +         If atomic memory instructions are present, i.e.,
>> +         CONFIG_RV_ATOMIC, this includes support for the syscall that
>> +         provides atomic accesses.  This is only useful to run
>> +         binaries that require atomic access but were compiled with
>> +         -mno-atomic.
>> +
>> +         If CONFIG_RV_ATOMIC is unset, this option is mandatory.
>
> Just express this in Kconfig terms to prevent misconfiguration:
>
> config RV_SYSRISCV_ATOMIC
>        bool "Include support for atomic operation syscalls" if RV_ATOMIC
>        default !RV_ATOMIC

Done

  https://github.com/riscv/riscv-linux/commit/921703e0a6639d10826e5bbad73f4998977dc244

> I wonder what the cost would be of always providing the syscalls
> for compatibility. This is also something worth putting into a VDSO
> instead of exposing the syscall:
>
> That way, user space that is built with -mno-atomic can call into
> the vdso, which depending on the hardware support will perform
> the atomic operation directly or enter the syscall.

I think the theory is that most Linux-capable systems are going to have the A
extension, and that this is really there for educational/hobbyist use.

The actual implementation isn't that expensive: it's no-SMP-only, so it's just
a disable/enable interrupts that wraps a compare+exchange.  I think putting it
in the VDSO is actually the right thing to do: that way non-A user code will
still have reasonable performance.

I'll add it to my TODO list.

>> +config PCI_DOMAINS
>> +       def_bool PCI
>> +
>> +config PCI_DOMAINS_GENERIC
>> +       def_bool PCI
>> +
>> +config PCI_SYSCALL
>> +       def_bool PCI
>
> I don't think you want PCI_SYSCALL

OK.  I'm a bit worried we need this to run DOOM, but I'll give everything a
shot without it and see what happens

  https://github.com/riscv/riscv-linux/commit/710b5bb81ab99cf1d49a74bb706f6aae179fc111

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 3/7] RISC-V: Device Tree Documentation
  2017-05-23 12:03   ` Arnd Bergmann
@ 2017-05-27  0:57     ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-05-27  0:57 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: linux-kernel, olof, albert

On Tue, 23 May 2017 05:03:17 PDT (-0700), Arnd Bergmann wrote:
> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> ---
>>  .../interrupt-controller/riscv,cpu-intc.txt        | 46 ++++++++++++++++++++++
>>  .../bindings/interrupt-controller/riscv,plic0.txt  | 44 +++++++++++++++++++++
>
> The patch needs a description, and should be sent to the irqchip maintainers
> and the devicetree maintainers for review, along for the respective
> drivers/irqchip/
> patch.

OK, I'll include that as part of my v2.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64}
  2017-05-27  0:57     ` Palmer Dabbelt
@ 2017-05-29 10:50       ` Arnd Bergmann
  2017-06-06  4:56         ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-05-29 10:50 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: Linux Kernel Mailing List, Olof Johansson, albert

On Sat, May 27, 2017 at 2:57 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> On Tue, 23 May 2017 04:30:50 PDT (-0700), Arnd Bergmann wrote:
>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>> RISC-V has both 32-bit and 64-bit base ISAs, but they are very similar.
>>> Like some other platforms, we'd like to share one arch directory between
>>> the two of them.
>>
>> I think we mainly do the others for backwards-compatibility with ancient
>> build scripts, and we don't need that here. Instead, you could add one more
>> line to the 'SUBARCH:=' statement that interprets the uname output.
>
> I don't think that does the same thing.  The desired effect of this diff is:
>
>  * "uname -m" when running on a RISC-V machine returns either riscv32 or
>    riscv64, as that's what tools like autoconf expect when trying to find
>    tuples.
>
>  * I can cross compile for riscv32 and riscv64.  That's currently controlled by
>    a Kconfig setting, but ARCH=riscv32 vs ARCH=riscv64 controls what defconfig
>    sets.
>
>  * I can natively compile for riscv32 and riscv64.  That uses the same Kconfig
>    setting, and the same ARCH=riscv32 vs ARCH=riscv64 switch for defconfig.

Right, but my point is that a new architecture should not rely on 'ARCH='
to pick the defconfig, we only do that on a couple of architectures for
backwards compatibility with old scripts.

> Neither of the two Kconfig issues is a big deal, but we do need "uname -m" to
> return "riscv64" or "riscv32" not "riscv".  I think the only way to do that is
> to set SRCARCH, but I'd be happy to change it if there's a better way.  I think
> if I just do this
>
> diff --git a/Makefile b/Makefile
> index 0606f28..4adc609 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -232,7 +232,8 @@ SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
>                                   -e s/arm.*/arm/ -e s/sa110/arm/ \
>                                   -e s/s390x/s390/ -e s/parisc64/parisc/ \
>                                   -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
> -                                 -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ )
> +                                 -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
> +                                 -e s/riscv.*/riscv/ )
>
>  # Cross compiling and selecting different set of gcc/bin-utils
>  # ---------------------------------------------------------------------------
> @@ -269,14 +270,6 @@ ifeq ($(ARCH),x86_64)
>          SRCARCH := x86
>  endif
>
> -# Additional ARCH settings for RISC-V
> -ifeq ($(ARCH),riscv32)
> -       SRCARCH := riscv
> -endif
> -ifeq ($(ARCH),riscv64)
> -       SRCARCH := riscv
> -endif
> -
>  # Additional ARCH settings for sparc
>  ifeq ($(ARCH),sparc32)
>         SRCARCH := sparc
>
> then I'll end up with "uname -m" as "riscv" -- I haven't tried it, but that's
> why we ended up with this diff in the first place.

Do you mean the "uname -m" output comes from "${SRCARCH}" at
the time of the kernel build? That would be easy enough to change
by simply hardcoding it depending on CONFIG_64BIT.

         Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
  2017-05-27  0:57     ` Palmer Dabbelt
@ 2017-05-29 11:17       ` Arnd Bergmann
  2017-06-06  4:56         ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-05-29 11:17 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: Linux Kernel Mailing List, Olof Johansson, albert

On Sat, May 27, 2017 at 2:57 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> On Tue, 23 May 2017 04:46:22 PDT (-0700), Arnd Bergmann wrote:
>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
>>> new file mode 100644
>>> index 000000000000..510ead1d3343
>>> --- /dev/null
>>> +++ b/arch/riscv/Kconfig
>>> @@ -0,0 +1,300 @@
>>> +#
>>> +# For a description of the syntax of this configuration file,
>>> +# see Documentation/kbuild/kconfig-language.txt.
>>> +#
>>> +
>>> +config RISCV
>>> +       def_bool y
>>> +       select OF
>>> +       select OF_EARLY_FLATTREE
>>> +       select OF_IRQ
>>> +       select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
>>> +       select ARCH_WANT_FRAME_POINTERS
>>> +       select CLONE_BACKWARDS
>>> +       select COMMON_CLK
>>> +       select GENERIC_CLOCKEVENTS
>>> +       select GENERIC_CPU_DEVICES
>>> +       select GENERIC_IRQ_SHOW
>>> +       select GENERIC_PCI_IOMAP
>>
>> You normally don't want GENERIC_PCI_IOMAP, unless your
>> inb()/outb() uses other instructions than your readl()/writel()
>
> We don't have any special instructions for inb/outb, but there is a special
> fence to ensure ordering.

Ah, interesting. What is the exact behavior of that fence? (this is slightly
unrelated but worth looking at anyway)

> It looks like most of my candidates for patterning things after use
> GENERIC_PCI_IOMAP, but I ended up setting GENERIC_IOMAP and things still at
> least build and boot without any PCI
>
>   https://github.com/riscv/riscv-linux/commit/c43c599b0dd9d15886c03a9dd179f8936b0cbb2e
>
> I'll be sure to check if I broke PCI, as I don't actually know what's going on
> here.

Actually I made a mistake: GENERIC_PCI_IOMAP is fine, please use that,
GENERIC_IOMAP is the option you don't want, and you already don't
enable that one.

>>> +# even on 32-bit, physical (and DMA) addresses are > 32-bits
>>> +config ARCH_PHYS_ADDR_T_64BIT
>>> +       def_bool y
>>> +
>>> +config ARCH_DMA_ADDR_T_64BIT
>>> +       def_bool y
>>
>> Are you required to use 64-bit addressing for RAM on 32-bit
>> architectures though? Using 32-bit dma_addr_t and phys_addr_t
>> when possible makes some code noticeably more efficient.
>>
>>> +config PGTABLE_LEVELS
>>> +       int
>>> +       default 3 if 64BIT
>>> +       default 2
>>
>> With 2-level page tables, you usually can't address much more
>> than 32-bit physical memory anyway, so I'd guess that most
>> 32-bit chips would actually put their RAM under the 4GB boundary.
>
> We can address 34 bits of physical address space on Sv32 (the 32-bit virtual
> addressing scheme in RV32).  If this is a meaningful performance constraint
> then we could always make this configurable.

I'd suggest to leave it turned off initially and only use it once someone
actually builds a system that makes use of the high address space.

Of course if there is already hardware that needs it, there has to
be a way to turn it on.

This raises a much more general question about how you want to deal
with SoC implementations in the future. The two most common ways of
doing this are:

- Every major platform gets a Kconfig option in the architecture menu,
  and that selects the essential drivers (irqchip, clocksource, pinctrl,
  clk, ...) that you need for that platform, along with architecture features
  (ISA level and optional features, ...)

- The architecture code knows nothing about the SoC and just keeps
   to the basics (CPU architecture level selection, SMP/MMU/etc enabled,
   selecting drivers that everyone needs) and leaves the rest up to be
   selected in the defconfig file.

On ARM, we have a bit of both, which is not as good as being
consistent one way or another.

>>> +config RV_ATOMIC
>>> +       bool "Use atomic memory instructions (RV32A or RV64A)"
>>> +       default y
>>> +
>>> +config RV_SYSRISCV_ATOMIC
>>> +       bool "Include support for atomic operation syscalls"
>>> +       default n
>>> +       help
>>> +         If atomic memory instructions are present, i.e.,
>>> +         CONFIG_RV_ATOMIC, this includes support for the syscall that
>>> +         provides atomic accesses.  This is only useful to run
>>> +         binaries that require atomic access but were compiled with
>>> +         -mno-atomic.
>>> +
>>> +         If CONFIG_RV_ATOMIC is unset, this option is mandatory.
>>
>> I wonder what the cost would be of always providing the syscalls
>> for compatibility. This is also something worth putting into a VDSO
>> instead of exposing the syscall:
>>
>> That way, user space that is built with -mno-atomic can call into
>> the vdso, which depending on the hardware support will perform
>> the atomic operation directly or enter the syscall.
>
> I think the theory is that most Linux-capable systems are going to have the A
> extension, and that this is really there for educational/hobbyist use.
>
> The actual implementation isn't that expensive: it's no-SMP-only, so it's just
> a disable/enable interrupts that wraps a compare+exchange.  I think putting it
> in the VDSO is actually the right thing to do: that way non-A user code will
> still have reasonable performance.
>
> I'll add it to my TODO list.

This would be another variant of the question above.

>>> +config PCI_DOMAINS
>>> +       def_bool PCI
>>> +
>>> +config PCI_DOMAINS_GENERIC
>>> +       def_bool PCI
>>> +
>>> +config PCI_SYSCALL
>>> +       def_bool PCI
>>
>> I don't think you want PCI_SYSCALL
>
> OK.  I'm a bit worried we need this to run DOOM, but I'll give everything a
> shot without it and see what happens

I'd argue that's a user space bug that we want to fix anyway to make
the code portable to other architectures.

        Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 4/7] RISC-V: arch/riscv/include
  2017-05-23 12:55   ` Arnd Bergmann
  2017-05-23 21:23     ` Benjamin Herrenschmidt
@ 2017-06-01  0:56     ` Palmer Dabbelt
  2017-06-01  9:00       ` Arnd Bergmann
  1 sibling, 1 reply; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-01  0:56 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: linux-kernel, olof, albert

On Tue, 23 May 2017 05:55:15 PDT (-0700), Arnd Bergmann wrote:
> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> +/**
>> + * atomic_read - read atomic variable
>> + * @v: pointer of type atomic_t
>> + *
>> + * Atomically reads the value of @v.
>> + */
>> +static inline int atomic_read(const atomic_t *v)
>> +{
>> +       return *((volatile int *)(&(v->counter)));
>> +}
>> +/**
>> + * atomic_set - set atomic variable
>> + * @v: pointer of type atomic_t
>> + * @i: required value
>> + *
>> + * Atomically sets the value of @v to @i.
>> + */
>> +static inline void atomic_set(atomic_t *v, int i)
>> +{
>> +       v->counter = i;
>> +}
>
> These commonly use READ_ONCE() and WRITE_ONCE,
> I'd recommend doing the same here to be on the safe side.

Makes sense.  https://github.com/riscv/riscv-linux/commit/77647f9e4dccab68c69a212a63c9efe1db2b7b1c
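
i.e. (a sketch of the intent, not the exact commit):

  static inline int atomic_read(const atomic_t *v)
  {
          return READ_ONCE(v->counter);
  }

  static inline void atomic_set(atomic_t *v, int i)
  {
          WRITE_ONCE(v->counter, i);
  }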

>> diff --git a/arch/riscv/include/asm/bug.h b/arch/riscv/include/asm/bug.h
>> new file mode 100644
>> index 000000000000..10d894ac3137
>> --- /dev/null
>> +++ b/arch/riscv/include/asm/bug.h
>> @@ -0,0 +1,81 @@
>> +/*
>>
>> +#ifndef _ASM_RISCV_BUG_H
>> +#define _ASM_RISCV_BUG_H
>
>> +#ifdef CONFIG_GENERIC_BUG
>> +#define __BUG_INSN     _AC(0x00100073, UL) /* sbreak */
>
> Please have a look at the modifications I did for !CONFIG_BUG
> on x86, arm and arm64. It's generally better to define BUG to a
> trap even when CONFIG_BUG is disabled, otherwise you run
> into undefined behavior in some code, and gcc will print annoying
> warnings about that.

OK, seems like a good thing.  https://github.com/riscv/riscv-linux/commit/67db001653614c6555424b3812d7edfba12a6d4c

>> +#ifndef _ASM_RISCV_CACHE_H
>> +#define _ASM_RISCV_CACHE_H
>> +
>> +#define L1_CACHE_SHIFT         6
>> +
>> +#define L1_CACHE_BYTES         (1 << L1_CACHE_SHIFT)
>
> Is this the only valid cache line size on riscv, or just the largest
> one that is allowed?

The RISC-V ISA manual doesn't actually mention caches anywhere, so there's no
restriction on L1 cache line size (we tried to keep microarchitecture out of
the ISA specification).  We provide the actual cache parameters as part of the
device tree, but it looks like this needs to be known statically in some places
so we can't use that everywhere.

We could always make this a Kconfig parameter.
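
If we did, the header would just forward a config symbol, e.g. (the symbol
name here is hypothetical):

  #define L1_CACHE_SHIFT          CONFIG_RISCV_L1_CACHE_SHIFT
  #define L1_CACHE_BYTES          (1 << L1_CACHE_SHIFT)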

>> +
>> +static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
>> +{
>> +       return (dma_addr_t)paddr;
>> +}
>> +
>> +static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t dev_addr)
>> +{
>> +       return (phys_addr_t)dev_addr;
>> +}
>
> What do you need these for? If possible, try to remove them.
>
>> +static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size, enum dma_data_direction dir)
>> +{
>> +       /*
>> +        * RISC-V is cache-coherent, so this is mostly a no-op.
>> +        * However, we do need to ensure that dma_cache_sync()
>> +        * enforces order, hence the mb().
>> +        */
>> +       mb();
>> +}
>
> Do you even support any drivers that use
> dma_alloc_noncoherent()/dma_cache_sync()?
>
> I would guess you can just leave this out.

These must have been vestigial code; they appear safe to remove.

  https://github.com/riscv/riscv-linux/commit/d1c88783d5ff66464a25173f7a4af139f0ebf5e2

>> diff --git a/arch/riscv/include/asm/io.h b/arch/riscv/include/asm/io.h
>> new file mode 100644
>> index 000000000000..d942555a7a08
>> --- /dev/null
>> +++ b/arch/riscv/include/asm/io.h
>> @@ -0,0 +1,36 @@
>
>> +#ifndef _ASM_RISCV_IO_H
>> +#define _ASM_RISCV_IO_H
>> +
>> +#include <asm-generic/io.h>
>
> I would recommend providing your own {read,write}{b,w,l,q}{,_relaxed}
> helpers using inline assembly, to prevent the compiler from breaking
> up accesses into byte accesses.
>
> Also, most architectures require some synchronization after a
> non-relaxed readl() to prevent prefetching of DMA buffers, and
> before a writel() to flush write buffers when a DMA gets triggered.

Makes sense.  These were all OK on existing implementations (as there are no
writable PMAs, so all MMIO regions are strictly ordered), but that's not
actually what the RISC-V ISA says.  I patterned this on arm64

  https://github.com/riscv/riscv-linux/commit/e200fa29a69451ef4d575076e4d2af6b7877b1fa

where I think the only odd thing is our definition of mmiowb

  +/* IO barriers.  These only fence on the IO bits because they're only required
  + * to order device access.  We're defining mmiowb because our AMO instructions
  + * (which are used to implement locks) don't specify ordering.  From Chapter 7
  + * of v2.2 of the user ISA:
  + * "The bits order accesses to one of the two address domains, memory or I/O,
  + * depending on which address domain the atomic instruction is accessing. No
  + * ordering constraint is implied to accesses to the other domain, and a FENCE
  + * instruction should be used to order across both domains."
  + */
  +
  +#define __iormb()               __asm__ __volatile__ ("fence i,io" : : : "memory");
  +#define __iowmb()               __asm__ __volatile__ ("fence io,o" : : : "memory");
  +
  +#define mmiowb()                __asm__ __volatile__ ("fence io,io" : : : "memory");

which I think is correct.

>> +#ifdef __KERNEL__
>> +
>> +#ifdef CONFIG_MMU
>> +
>> +extern void __iomem *ioremap(phys_addr_t offset, unsigned long size);
>> +
>> +#define ioremap_nocache(addr, size) ioremap((addr), (size))
>> +#define ioremap_wc(addr, size) ioremap((addr), (size))
>> +#define ioremap_wt(addr, size) ioremap((addr), (size))
>
> Is this a hard architecture limitation? Normally you really want
> write-combined access on frame buffer memory and a few other
> cases for performance reasons, and ioremap_wc() gets used
> for by memremap() for addressing RAM in some cases, and you
> normally don't want to have PTEs for the same memory using
> cached and uncached page flags

This is currently an architecture limitation.  In RISC-V these properties are
known as PMAs (Physical Memory Attributes).  While the supervisor spec mentions
PMAs, it doesn't provide a mechanism to read or write them so they are
essentially unspecified.  PMAs will be properly defined as part of the platform
specification, which isn't written yet.

>> diff --git a/arch/riscv/include/asm/serial.h b/arch/riscv/include/asm/serial.h
>> new file mode 100644
>> index 000000000000..d783dbe80a4b
>> --- /dev/null
>> +++ b/arch/riscv/include/asm/serial.h
>> @@ -0,0 +1,43 @@
>> +/*
>> + * Copyright (C) 2014 Regents of the University of California
>> + *
>> + *   This program is free software; you can redistribute it and/or
>> + *   modify it under the terms of the GNU General Public License
>> + *   as published by the Free Software Foundation, version 2.
>> + *
>> + *   This program is distributed in the hope that it will be useful, but
>> + *   WITHOUT ANY WARRANTY; without even the implied warranty of
>> + *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
>> + *   NON INFRINGEMENT.  See the GNU General Public License for
>> + *   more details.
>> + */
>> +
>> +#ifndef _ASM_RISCV_SERIAL_H
>> +#define _ASM_RISCV_SERIAL_H
>> +
>> +/*
>> + * FIXME: interim serial support for riscv-qemu
>> + *
>> + * Currently requires that the emulator itself create a hole at addresses
>> + * 0x3f8 - 0x3ff without looking through page tables.
>
> This sounds like something we want to fix in qemu and not have in the
> mainline kernel. In particular, something seems really wrong if your
> inb()/outb() get remapped to physical CPU address 0+offset.

Sorry, we had some hacks floating around for QEMU from before we actually had
any device interfaces working (i.e., before device tree and proper MMIO
support).  I'll go through and drop these before v2.

>> diff --git a/arch/riscv/include/asm/setup.h b/arch/riscv/include/asm/setup.h
>> new file mode 100644
>> index 000000000000..e457854e9988
>> --- /dev/null
>> +++ b/arch/riscv/include/asm/setup.h
>> @@ -0,0 +1,20 @@
>> +/*
>> + * Copyright (C) 2012 Regents of the University of California
>> + *
>> + *   This program is free software; you can redistribute it and/or
>> + *   modify it under the terms of the GNU General Public License
>> + *   as published by the Free Software Foundation, version 2.
>> + *
>> + *   This program is distributed in the hope that it will be useful, but
>> + *   WITHOUT ANY WARRANTY; without even the implied warranty of
>> + *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
>> + *   NON INFRINGEMENT.  See the GNU General Public License for
>> + *   more details.
>> + */
>> +
>> +#ifndef _ASM_RISCV_SETUP_H
>> +#define _ASM_RISCV_SETUP_H
>> +
>> +#include <asm-generic/setup.h>
>> +
>> +#endif /* _ASM_RISCV_SETUP_H */
>
> Can you remove this file and add it to asm/Kbuild as generic-y instead?

Yes.  https://github.com/riscv/riscv-linux/commit/8f35ce93bd1230ac0cb4aa92e5673c61e50dd862

>> +/*
>> + * low level task data that entry.S needs immediate access to
>> + * - this struct should fit entirely inside of one cache line
>> + * - this struct resides at the bottom of the supervisor stack
>> + * - if the members of this struct changes, the assembly constants
>> + *   in asm-offsets.c must be updated accordingly
>> + */
>> +struct thread_info {
>> +       struct task_struct      *task;          /* main task structure */
>> +       unsigned long           flags;          /* low level flags */
>> +       __u32                   cpu;            /* current CPU */
>> +       int                     preempt_count;  /* 0 => preemptable, <0 => BUG */
>> +       mm_segment_t            addr_limit;
>> +};
>
> Please see 15f4eae70d36 ("x86: Move thread_info into task_struct")
> and try to do the same.

OK, here's my attempt

  https://github.com/riscv/riscv-linux/commit/c618553e7aa65c85564a5d0a868ec7e6cf634afd

Since there's some actual meat, I left a commit message (these are more just
notes for me for my v2, I'll be squashing everything)

"
This is patterned more off the arm64 move than the x86 one, since we
still need to have at least addr_limit to emulate FS.

The patch itself changes sscratch from holding SP to holding TP, which
contains a pointer to task_struct.  thread_info must be at a 0 offset
from task_struct, but it looks like that's already enforced with a big
comment.  We now store both the user and kernel SP in task_struct, but
those are really acting more as extra scratch space than pemanent
storage.
"

>> +#else /* !CONFIG_MMU */
>> +
>> +static inline void flush_tlb_all(void)
>> +{
>> +       BUG();
>> +}
>> +
>> +static inline void flush_tlb_mm(struct mm_struct *mm)
>> +{
>> +       BUG();
>> +}
>
> The NOMMU support is rather incomplete and CONFIG_MMU is
> hard-enabled, so I'd just drop any !CONFIG_MMU #ifdefs.

OK.  I've left in the "#ifdef CONFIG_MMU" blocks as the #ifdef/#endif doesn't
really add any code, but I can go ahead and drop the #ifdef if you think that's
better.

  https://github.com/riscv/riscv-linux/commit/e98ca23adfb9422bebc87cbfb58f70d4a63cf067

>> diff --git a/arch/riscv/include/uapi/asm/Kbuild b/arch/riscv/include/uapi/asm/Kbuild
>> new file mode 100644
>> index 000000000000..276b6dae745c
>> --- /dev/null
>> +++ b/arch/riscv/include/uapi/asm/Kbuild
>> @@ -0,0 +1,10 @@
>> +# UAPI Header export list
>> +include include/uapi/asm-generic/Kbuild.asm
>> +
>> +header-y += auxvec.h
>> +header-y += bitsperlong.h
>> +header-y += byteorder.h
>> +header-y += ptrace.h
>> +header-y += sigcontext.h
>> +header-y += siginfo.h
>> +header-y += unistd.h
>
> Please see
> fcc8487d477a ("uapi: export all headers under uapi directories")
>
> and adapt the file accordingly

https://github.com/riscv/riscv-linux/commit/52c5e300b498742390434891db34f9dbacd082e9

>> +#include <asm-generic/unistd.h>
>> +
>> +#define __NR_sysriscv  __NR_arch_specific_syscall
>> +#ifndef __riscv_atomic
>> +__SYSCALL(__NR_sysriscv, sys_sysriscv)
>> +#endif
>
> Please make this a straight cmpxchg syscall and remove the multiplexer.
> Why does the definition depend on __riscv_atomic rather than the
> Kconfig symbol?

I think that was just an oversight: that's not the right switch.  Either you or
someone else pointed out some problems with this.  There's going to be an
interposer in the VDSO, and then we'll always enable the system call.

I can change this to two system calls: sysriscv_cmpxchg32 and
sysriscv_cmpxchg64.
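
In unistd.h terms that would be something like (a sketch; the numbering
relative to __NR_arch_specific_syscall is my assumption):

  #define __NR_sysriscv_cmpxchg32 (__NR_arch_specific_syscall + 0)
  #define __NR_sysriscv_cmpxchg64 (__NR_arch_specific_syscall + 1)
  __SYSCALL(__NR_sysriscv_cmpxchg32, sys_sysriscv_cmpxchg32)
  __SYSCALL(__NR_sysriscv_cmpxchg64, sys_sysriscv_cmpxchg64)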

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 4/7] RISC-V: arch/riscv/include
  2017-06-01  0:56     ` Palmer Dabbelt
@ 2017-06-01  9:00       ` Arnd Bergmann
  2017-06-06  4:56         ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-06-01  9:00 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: Linux Kernel Mailing List, Olof Johansson, albert

On Thu, Jun 1, 2017 at 2:56 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> On Tue, 23 May 2017 05:55:15 PDT (-0700), Arnd Bergmann wrote:
>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:


>>> +#ifndef _ASM_RISCV_CACHE_H
>>> +#define _ASM_RISCV_CACHE_H
>>> +
>>> +#define L1_CACHE_SHIFT         6
>>> +
>>> +#define L1_CACHE_BYTES         (1 << L1_CACHE_SHIFT)
>>
>> Is this the only valid cache line size on riscv, or just the largest
>> one that is allowed?
>
> The RISC-V ISA manual doesn't actually mention caches anywhere, so there's no
> restriction on L1 cache line size (we tried to keep microarchitecture out of
> the ISA specification).  We provide the actual cache parameters as part of the
> device tree, but it looks like this needs to be known statically in some places
> so we can't use that everywhere.
>
> We could always make this a Kconfig parameter.

The cache line size is used in a couple of places, let's go through the most
common ones to see where that abstraction might be leaky and you actually
get an architectural effect:

- On SMP machines, ____cacheline_aligned_in_smp is used to annotate
  data structures used in lockless algorithms, typically with one CPU writing
  to some members of a structure, and another CPU reading from it but
  not writing the same members. Depending on the architecture, having a
  larger actual alignment than L1_CACHE_BYTES will either lead to
  bad performance from cache line ping pong, or actual data corruption.

- On systems with DMA masters that are not fully coherent,
  ____cacheline_aligned is used to annotate data structures used
  for DMA buffers, to make sure that the cache maintenance operations
>   in dma_sync_*_for_*() helpers don't corrupt data outside of the
  DMA buffer. You don't seem to support noncoherent DMA masters
  or the cache maintenance operations required to use those, so this
  might not be a problem until someone adds an extension for those.

- Depending on the bus interconnect, a coherent DMA master might
  not be able to update partial cache lines, so you need the same
  annotation.

- The kmalloc() family of memory allocators aligns data to the cache
  line size, for both DMA and SMP synchronization above.

- Many architectures have cache line prefetch, flush, zero or copy
  instructions  that are used for important performance optimizations
  but that are typically defined on a cacheline granularity. I don't
  think you currently have any of them, but it seems likely that there
  will be demand for them later.

Having a larger than necessary alignment can waste substantial amounts
of memory for arrays of cache line aligned structures (typically
per-cpu arrays), but otherwise should not cause harm.
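
To make the first point above concrete, the usual pattern is something along
these lines (a generic example, not from the riscv port):

  struct ring {
          /* written by the producer CPU, read by the consumer */
          unsigned int head ____cacheline_aligned_in_smp;
          /* written by the consumer CPU, read by the producer */
          unsigned int tail ____cacheline_aligned_in_smp;
  };

If L1_CACHE_BYTES is smaller than the real line size, head and tail can end up
sharing a cache line and ping-ponging between the two CPUs.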

>>> diff --git a/arch/riscv/include/asm/io.h b/arch/riscv/include/asm/io.h
>>> new file mode 100644
>>> index 000000000000..d942555a7a08
>>> --- /dev/null
>>> +++ b/arch/riscv/include/asm/io.h
>>> @@ -0,0 +1,36 @@
>>
>>> +#ifndef _ASM_RISCV_IO_H
>>> +#define _ASM_RISCV_IO_H
>>> +
>>> +#include <asm-generic/io.h>
>>
>> I would recommend providing your own {read,write}{b,w,l,q}{,_relaxed}
>> helpers using inline assembly, to prevent the compiler from breaking
>> up accesses into byte accesses.
>>
>> Also, most architectures require to some synchronization after a
>> non-relaxed readl() to prevent prefetching of DMA buffers, and
>> before a writel() to flush write buffers when a DMA gets triggered.
>
> Makes sense.  These were all OK on existing implementations (as there's no
> writable PMAs, so all MMIO regions are strictly ordered), but that's not
> actually what the RISC-V ISA says.  I patterned this on arm64
>
>   https://github.com/riscv/riscv-linux/commit/e200fa29a69451ef4d575076e4d2af6b7877b1fa
>
> where I think the only odd thing is our definition of mmiowb
>
>   +/* IO barriers.  These only fence on the IO bits because they're only required
>   + * to order device access.  We're defining mmiowb because our AMO instructions
>   + * (which are used to implement locks) don't specify ordering.  From Chapter 7
>   + * of v2.2 of the user ISA:
>   + * "The bits order accesses to one of the two address domains, memory or I/O,
>   + * depending on which address domain the atomic instruction is accessing. No
>   + * ordering constraint is implied to accesses to the other domain, and a FENCE
>   + * instruction should be used to order across both domains."
>   + */
>   +
>   +#define __iormb()               __asm__ __volatile__ ("fence i,io" : : : "memory");
>   +#define __iowmb()               __asm__ __volatile__ ("fence io,o" : : : "memory");

Looks ok, yes.

>   +#define mmiowb()                __asm__ __volatile__ ("fence io,io" : : : "memory");
>
> which I think is correct.

I can never remember what exactly this one does.

>>> +#ifdef __KERNEL__
>>> +
>>> +#ifdef CONFIG_MMU
>>> +
>>> +extern void __iomem *ioremap(phys_addr_t offset, unsigned long size);
>>> +
>>> +#define ioremap_nocache(addr, size) ioremap((addr), (size))
>>> +#define ioremap_wc(addr, size) ioremap((addr), (size))
>>> +#define ioremap_wt(addr, size) ioremap((addr), (size))
>>
>> Is this a hard architecture limitation? Normally you really want
>> write-combined access on frame buffer memory and a few other
>> cases for performance reasons, and ioremap_wc() gets used
>> for by memremap() for addressing RAM in some cases, and you
>> normally don't want to have PTEs for the same memory using
>> cached and uncached page flags
>
> This is currently an architecture limitation.  In RISC-V these properties are
> known as PMAs (Physical Memory Attributes).  While the supervisor spec mentions
> PMAs, it doesn't provide a mechanism to read or write them so they are
> essentially unspecified.  PMAs will be properly defined as part of the platform
> specification, which isn't written yet.

Ok. Maybe add that as a comment above these definitions then.

>>> +/*
>>> + * low level task data that entry.S needs immediate access to
>>> + * - this struct should fit entirely inside of one cache line
>>> + * - this struct resides at the bottom of the supervisor stack
>>> + * - if the members of this struct changes, the assembly constants
>>> + *   in asm-offsets.c must be updated accordingly
>>> + */
>>> +struct thread_info {
>>> +       struct task_struct      *task;          /* main task structure */
>>> +       unsigned long           flags;          /* low level flags */
>>> +       __u32                   cpu;            /* current CPU */
>>> +       int                     preempt_count;  /* 0 => preemptable, <0 => BUG */
>>> +       mm_segment_t            addr_limit;
>>> +};
>>
>> Please see 15f4eae70d36 ("x86: Move thread_info into task_struct")
>> and try to do the same.
>
> OK, here's my attempt
>
>   https://github.com/riscv/riscv-linux/commit/c618553e7aa65c85564a5d0a868ec7e6cf634afd
>
> Since there's some actual meat, I left a commit message (these are more just
> notes for me for my v2, I'll be squashing everything)
>
> "
> This is patterned more off the arm64 move than the x86 one, since we
> still need to have at least addr_limit to emulate FS.
>
> The patch itself changes sscratch from holding SP to holding TP, which
> contains a pointer to task_struct.  thread_info must be at a 0 offset
> from task_struct, but it looks like that's already enforced with a big
> comment.  We now store both the user and kernel SP in task_struct, but
> those are really acting more as extra scratch space than permanent
> storage.
> "

I haven't looked at all the details of the x86 patch, but it seems they
decided to put the arch specific members into 'struct thread_struct'
rather than 'struct thread_info', so I'd suggest you do the same here for
consistency, unless there is a strong reason against doing it.

>>> +#else /* !CONFIG_MMU */
>>> +
>>> +static inline void flush_tlb_all(void)
>>> +{
>>> +       BUG();
>>> +}
>>> +
>>> +static inline void flush_tlb_mm(struct mm_struct *mm)
>>> +{
>>> +       BUG();
>>> +}
>>
>> The NOMMU support is rather incomplete and CONFIG_MMU is
>> hard-enabled, so I'd just drop any !CONFIG_MMU #ifdefs.
>
> OK.  I've left in the "#ifdef CONFIG_MMU" blocks as the #ifdef/#endif doesn't
> really add any code, but I can go ahead and drop the #ifdef if you think that's
> better.
>
>   https://github.com/riscv/riscv-linux/commit/e98ca23adfb9422bebc87cbfb58f70d4a63cf067

Ok.

>>> +#include <asm-generic/unistd.h>
>>> +
>>> +#define __NR_sysriscv  __NR_arch_specific_syscall
>>> +#ifndef __riscv_atomic
>>> +__SYSCALL(__NR_sysriscv, sys_sysriscv)
>>> +#endif
>>
>> Please make this a straight cmpxchg syscall and remove the multiplexer.
>> Why does the definition depend on __riscv_atomic rather than the
>> Kconfig symbol?
>
> I think that was just an oversight: that's not the right switch.  Either you or
> someone else pointed out some problems with this.  There's going to be an
> interposer in the VDSO, and then we'll always enable the system call.
>
> I can change this to two system calls: sysriscv_cmpxchg32 and
> sysriscv_cmpxchg64.

Sounds good.

        Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 6/7] RISC-V: arch/riscv/kernel
  2017-05-23 13:35   ` Arnd Bergmann
@ 2017-06-02 23:56     ` Palmer Dabbelt
  2017-06-06  9:01       ` Arnd Bergmann
  0 siblings, 1 reply; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-02 23:56 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: linux-kernel, olof, albert

On Tue, 23 May 2017 06:35:23 PDT (-0700), Arnd Bergmann wrote:
>> +IRQCHIP_DECLARE(riscv, "riscv,cpu-intc", riscv_intc_init);
>
> Please move the majority of this file into drivers/irqchip as a
> standalone driver.

OK.

  https://github.com/riscv/riscv-linux/commit/549c7f5ef63d7be04c9cac7e332ef81ec6ffe103

>> diff --git a/arch/riscv/kernel/pci.c b/arch/riscv/kernel/pci.c
>> new file mode 100644
>> index 000000000000..4191a5ffdd67
>> --- /dev/null
>> +++ b/arch/riscv/kernel/pci.c
>> @@ -0,0 +1,36 @@
>> +/*
>> + * Code borrowed from arch/arm64/kernel/pci.c
>> + *
>> + * Copyright (C) 2003 Anton Blanchard <anton@au.ibm.com>, IBM
>> + * Copyright (C) 2014 ARM Ltd.
>> + * Copyright (C) 2017 SiFive
>> + *
>> + * This program is free software; you can redistribute it and/or
>> + * modify it under the terms of the GNU General Public License
>> + * version 2 as published by the Free Software Foundation.
>> + *
>> + */
>> +
>> +#include <linux/init.h>
>> +#include <linux/io.h>
>> +#include <linux/kernel.h>
>> +#include <linux/mm.h>
>> +#include <linux/slab.h>
>> +#include <linux/pci.h>
>> +
>> +/*
>> + * Called after each bus is probed, but before its children are examined
>> + */
>> +void pcibios_fixup_bus(struct pci_bus *bus)
>> +{
>> +       /* nothing to do, expected to be removed in the future */
>> +}
>> +/*
>> + * We don't have to worry about legacy ISA devices, so nothing to do here
>> + */
>> +resource_size_t pcibios_align_resource(void *data, const struct resource *res,
>> +                               resource_size_t size, resource_size_t align)
>> +{
>> +       return res->start;
>> +}
>
> Can you add a patch to remove the need for this, and send that to the
> PCI maintainers?
>
> In the long run, I think we want both of these to be pci host bridge
> driver specific callbacks rather than per-architecture definitions, but
> for the moment, moving the empty version as a __weak copy
> into drivers/pci/ should be sufficient.
>
> [note: don't ever use __weak elsewhere, the use in PCI is
>  only done for historic reasons and we want to get rid of that
>  too, but for now it's more important to avoid adding yet another
>  pointless copy]

Sounds good

  https://github.com/riscv/riscv-linux/commit/4aa540bf849b2a190e288e7d25d262dee21306b3
  https://github.com/riscv/riscv-linux/commit/bb3b4c6ca4841538d101f2b9c437f5dccda0b3a7
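
Just so we're on the same page, the generic fallback I'm planning to send to
the PCI maintainers is roughly the following (a sketch only; the final patch
may differ):

  /* __weak defaults that would live in drivers/pci/, so architectures
   * with nothing special to do don't need their own copies */
  void __weak pcibios_fixup_bus(struct pci_bus *bus)
  {
          /* nothing to do */
  }

  resource_size_t __weak pcibios_align_resource(void *data,
                          const struct resource *res,
                          resource_size_t size, resource_size_t align)
  {
          return res->start;
  }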

> If you don't care about LPC/ISA devices, then your PCI_MIN_IO
> should also be zero instead of 0x1000

Sorry, but the only Google result for PCI_MIN_IO is this email.  There don't
appear to be any relevant references to PCI_MIN in the kernel

  $ git grep PCI_MIN_
  arch/mips/include/asm/mach-loongson64/cs5536/cs5536_pci.h:      ((PCI_MAX_LATENCY << 24) | (PCI_MIN_GRANT << 16) | \
  arch/mips/include/asm/mach-loongson64/cs5536/cs5536_pci.h:#define PCI_MIN_GRANT                 0x00
  drivers/ata/pata_hpt366.c:      pci_write_config_byte(dev, PCI_MIN_GNT, 0x08);
  drivers/ata/pata_hpt37x.c:      pci_write_config_byte(dev, PCI_MIN_GNT, 0x08);
  drivers/ata/pata_hpt3x2n.c:     pci_write_config_byte(dev, PCI_MIN_GNT, 0x08);
  drivers/ide/hpt366.c:   pci_write_config_byte(dev, PCI_MIN_GNT, 0x08);
  drivers/net/fddi/skfp/h/skfbi.h:#define PCI_MIN_GNT     0x3e    /*  8 bit       Min_Gnt */
  drivers/net/fddi/skfp/h/skfbi.h:/*      PCI_MIN_GNT     8 bit   Min_Gnt */
  include/uapi/linux/pci_regs.h:#define PCI_MIN_GNT               0x3e    /* 8 bits */

I'm afraid that I'm not sure what to do here.

>> diff --git a/arch/riscv/kernel/plic.c b/arch/riscv/kernel/plic.c
>> new file mode 100644
>> index 000000000000..5b3d4241f4e2
>> --- /dev/null
>> +++ b/arch/riscv/kernel/plic.c
>
> drivers/irqchip/riscv-plic.c
>
> The file needs some work for following coding style, once that
> is done, please submit to the irqchip maintainers.

I've addressed most of this thanks to some other code reviews, so I'm going to
drop the comments that are about things I've already fixed.

>> +// TODO: add a /sys interface to set priority + per-hart enables for steering
>
> No driver-private sysfs interfaces please for irqchips please.
> See http://elixir.free-electrons.com/linux/latest/source/Documentation/IRQ-affinity.txt
> for setting the affinity.

We'll do it the right way when we add IRQ affinity control.  Thanks for the
heads up!

>> diff --git a/arch/riscv/kernel/reset.c b/arch/riscv/kernel/reset.c
>> new file mode 100644
>> index 000000000000..58bad9598e21
>> --- /dev/null
>> +++ b/arch/riscv/kernel/reset.c
>> @@ -0,0 +1,33 @@
>> +/*
>> + * Copyright (C) 2012 Regents of the University of California
>> + *
>> + *   This program is free software; you can redistribute it and/or
>> + *   modify it under the terms of the GNU General Public License
>> + *   as published by the Free Software Foundation, version 2.
>> + *
>> + *   This program is distributed in the hope that it will be useful, but
>> + *   WITHOUT ANY WARRANTY; without even the implied warranty of
>> + *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
>> + *   NON INFRINGEMENT.  See the GNU General Public License for
>> + *   more details.
>> + */
>> +
>> +#include <linux/reboot.h>
>> +#include <linux/export.h>
>> +#include <asm/sbi.h>
>> +
>> +void (*pm_power_off)(void) = machine_power_off;
>> +EXPORT_SYMBOL(pm_power_off);
>> +
>> +void machine_restart(char *cmd)
>> +{
>> +}
>
> Call do_kernel_restart(cmd) here.
>
>> +void machine_halt(void)
>> +{
>> +}
>
> This should not return. Either make it call sbi_shutdown as well,
> or use the ARM implementation:
>
> void machine_halt(void)
> {
>         local_irq_disable();
>         smp_send_stop();
>         while (1);
> }

OK.

  https://github.com/riscv/riscv-linux/commit/5f486cb73e3a0a5a218d26781e9eae651e59203f
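
For reference, per your suggestions the new versions look roughly like this
(just a sketch; the commit above has the actual code):

  void machine_restart(char *cmd)
  {
          do_kernel_restart(cmd);
          while (1);
  }

  void machine_halt(void)
  {
          sbi_shutdown();
          while (1);
  }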

>> diff --git a/arch/riscv/kernel/sbi-con.c b/arch/riscv/kernel/sbi-con.c
>> new file mode 100644
>> index 000000000000..86baeb5ef0cd
>> --- /dev/null
>> +++ b/arch/riscv/kernel/sbi-con.c
>
> As Olof said, move it to drivers/tty/hvc/ and use those helpers.

Ah, that's great: now there's almost no code left :)

  https://github.com/riscv/riscv-linux/commit/8adad12c5525a70b8837196b8f2d4ac003a7647c

>> diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
>> new file mode 100644
>> index 000000000000..3e07308e24f5
>> --- /dev/null
>> +++ b/arch/riscv/kernel/sys_riscv.c
>> @@ -0,0 +1,85 @@
>> +/*
>> + * Copyright (C) 2012 Regents of the University of California
>> + * Copyright (C) 2014 Darius Rad <darius@bluespec.com>
>> + *
>> + *   This program is free software; you can redistribute it and/or
>> + *   modify it under the terms of the GNU General Public License
>> + *   as published by the Free Software Foundation, version 2.
>> + *
>> + *   This program is distributed in the hope that it will be useful, but
>> + *   WITHOUT ANY WARRANTY; without even the implied warranty of
>> + *   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
>> + *   NON INFRINGEMENT.  See the GNU General Public License for
>> + *   more details.
>> + */
>> +
>> +#include <linux/syscalls.h>
>> +#include <asm/unistd.h>
>> +
>> +SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
>> +       unsigned long, prot, unsigned long, flags,
>> +       unsigned long, fd, off_t, offset)
>> +{
>> +       if (unlikely(offset & (~PAGE_MASK)))
>> +               return -EINVAL;
>> +       return sys_mmap_pgoff(addr, len, prot, flags, fd, offset >> PAGE_SHIFT);
>> +}
>> +
>> +#ifndef CONFIG_64BIT
>> +SYSCALL_DEFINE6(mmap2, unsigned long, addr, unsigned long, len,
>> +       unsigned long, prot, unsigned long, flags,
>> +       unsigned long, fd, off_t, offset)
>> +{
>> +       /* Note that the shift for mmap2 is constant (12),
>> +          regardless of PAGE_SIZE */
>> +       if (unlikely(offset & (~PAGE_MASK >> 12)))
>> +               return -EINVAL;
>> +       return sys_mmap_pgoff(addr, len, prot, flags, fd,
>> +               offset >> (PAGE_SHIFT - 12));
>> +}
>> +#endif /* !CONFIG_64BIT */
>
> The first one should be CONFIG_64BIT only.

OK. https://github.com/riscv/riscv-linux/commit/ed7545c6765ba7d705e1bc6ce7b67bb7e8cb0926

>> +#ifdef CONFIG_RV_SYSRISCV_ATOMIC
>> +SYSCALL_DEFINE4(sysriscv, unsigned long, cmd, unsigned long, arg1,
>> +       unsigned long, arg2, unsigned long, arg3)
>> +{
>> +       unsigned long flags;
>> +       unsigned long prev;
>> +       unsigned int *ptr;
>> +       unsigned int err;
>> +
>> +       switch (cmd) {
>> +       case RISCV_ATOMIC_CMPXCHG:
>> +               ptr = (unsigned int *)arg1;
>> +               if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned int)))
>> +                       return -EFAULT;
>> +
>> +               preempt_disable();
>> +               raw_local_irq_save(flags);
>> +               err = __get_user(prev, ptr);
>> +               if (likely(!err && prev == arg2))
>> +                       err = __put_user(arg3, ptr);
>> +               raw_local_irq_restore(flags);
>> +               preempt_enable();
>> +
>> +               return unlikely(err) ? err : prev;
>> +
>> +       case RISCV_ATOMIC_CMPXCHG64:
>
> Make these two separate syscalls and get rid of the wrapper
> (I already mentioned it in the header file comments, but it
> fits better here).
>
> It may be good to have an optimized version in the vdso
> that does an atomic operation directly if the CPU supports
> it.

Yep.  It's on the list.
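
As a sketch (the syscall names and final ABI are still placeholders), the
32-bit half of the split would look much like the CMPXCHG case above:

  SYSCALL_DEFINE3(sysriscv_cmpxchg32, unsigned long, uaddr,
          unsigned long, old, unsigned long, new)
  {
          unsigned int *ptr = (unsigned int *)uaddr;
          unsigned long flags;
          unsigned int prev;
          unsigned int err;

          if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned int)))
                  return -EFAULT;

          /* !SMP only: masking interrupts makes the load/store pair atomic */
          preempt_disable();
          raw_local_irq_save(flags);
          err = __get_user(prev, ptr);
          if (likely(!err && prev == old))
                  err = __put_user(new, ptr);
          raw_local_irq_restore(flags);
          preempt_enable();

          return unlikely(err) ? err : prev;
  }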

>> diff --git a/arch/riscv/kernel/time.c b/arch/riscv/kernel/time.c
>> new file mode 100644
>> index 000000000000..ce8c459fadaa
>> --- /dev/null
>> +++ b/arch/riscv/kernel/time.c
>
> drivers/clocksource/riscv-timer.c, and submit it to the
> respective maintainers.

Sounds good.

  https://github.com/riscv/riscv-linux/commit/d9fcab4603c158755e19663dec0040b28ea1aad1

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 4/7] RISC-V: arch/riscv/include
  2017-05-23 21:23     ` Benjamin Herrenschmidt
@ 2017-06-03  2:00       ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-03  2:00 UTC (permalink / raw)
  To: benh; +Cc: linux-kernel, olof, albert, Arnd Bergmann

On Tue, 23 May 2017 14:23:50 PDT (-0700), benh@kernel.crashing.org wrote:
> On Tue, 2017-05-23 at 14:55 +0200, Arnd Bergmann wrote:
>> > +
>> > +#include <asm-generic/io.h>
>>
>> I would recommend providing your own {read,write}{b,w,l,q}{,_relaxed}
>> helpers using inline assembly, to prevent the compiler for breaking
>> up accesses into byte accesses.
>>
>> Also, most architectures require to some synchronization after a
>> non-relaxed readl() to prevent prefetching of DMA buffers, and
>> before a writel() to flush write buffers when a DMA gets triggered.
>
> Right, I was about to comment on that one.

Well, you're both correct: what was there just isn't correct.  Our
implementations were safe because they don't have aggressive MMIO systems, but
that won't remain true for long.

I've gone ahead and added a proper IO implementation patterned on arm64.  It'll
be part of the v2 patch set.  Here's the bulk of the patch, if you're curious

  https://github.com/riscv/riscv-linux/commit/e200fa29a69451ef4d575076e4d2af6b7877b1fa

> The question Palmer is about the ordering semantics of non-cached
> storage.
>
> What kind of ordering is provided architecturally ? Especially
> between cachable and non-cachable loads and stores ?

The memory model on RISC-V is pretty weak.  Without fences there are no
ordering constraints.  We provide 2 "ordering spaces" (an odd name I just made
up, there's one address space): the IO space and the regular memory space.  The
base RISC-V ordering primitive is a fence, which takes a predecessor set and
successor set.  Fences can look like

  fence IORW,IORW

where the left side is the predecessor set and the right side is the successor
set.  The fence enforces ordering between any operations in the two sets: all
operations in the predecessor set must be globally visible before any operation
in the successor set becomes visible anywhere.

For example, if you're emitting a DMA transaction you'd have to do something
like

  build_message_in_memory()
  fence w,o
  set_control_register()

with the fence ensuring all the memory writes are visible before the control
register write.

More information can be found in the ISA manuals

  https://github.com/riscv/riscv-isa-manual/releases/download/riscv-user-2.2/riscv-spec-v2.2.pdf
  https://github.com/riscv/riscv-isa-manual/releases/download/riscv-priv-1.10/riscv-privileged-v1.10.pdf

> Also, you have PCIe right ? What is the behaviour of MSIs ?
>
> Does your HW provide a guarantee that in the case of a series of DMA
> writes to memory by a device followed by an MSI, the CPU getting the
> MSI will only get it after all the previous DMA writes have reached
> coherency ? (Unlike LSIs where the driver is required to do an MMIO
> read from the device, MSIs are expected to be ordered with data).

We do have PCIe, but I'm not particularly familiar with it as I haven't spent
any time hacking on our PCIe hardware or driver.  My understanding here is that
PCIe defines that MSIs must not be reordered before the DMA writes, so the
implementation is required to enforce this ordering.  Thus it's a problem for
the PCIe controller implementation (which isn't covered by RISC-V) and therefore
doesn't need any ordering enforced by the driver.

If I'm correct in the assumption that the hardware is required to enforce these
ordering constraints then I think this isn't a RISC-V issue.  I'll go bug our
PCIe guys to make sure everything is kosher in that case, but it's an ordering
constraint that is possible to enforce in our coherence protocol so it's not a
fundamental problem (and just an implementation one at that).

If the hardware isn't required to enforce the ordering, then we'll need a fence
before handling the data.

> Another things with the read*() accessors. It's not uncommon for
> a driver to do:
>
> 	writel(1, reset_reg);
> 	readl(reset_reg); /* flush posted writes */
> 	udelay(10);
> 	writel(0, reset_reg);
>
> Now, in the above case, what can typically happen if you aren't careful
> is that the readl which is intended to "push" the previous writel, will
> not actually do its job because the return value hasn't been "consumed"
> by the processor. Thus, the CPU will stick that on some kind of load
> queue and won't actually wait for the return value before hitting the
> delay loop.
>
> Thus you might end up in a situation where the writel of 1 to the
> device is itself reaching the device way after you started the delay
> loop, and thus end up violating the delay requirement of the HW.
>
> On powerpc we solve that by using a special instruction construct
> inside the read* accessors that prevents the CPU from executing
> subsequent instructions until the read value has been returned.
>
> You may want to consider something similar.

Ooh, that's a fun one :).   I bugged Andrew (the ISA wizard), and this might
require some clarification in the ISA manual.  The code we emit will look
something like

  fence io,o
  st 1, RESET_REG
  ld RESET_REG
  fence o,io
loop:
  rdtime
  blt loop
  fence io,o
  st 0, RESET_REG

Since the fences just enforce ordering between the loads and stores, there's
nothing that prevents the processor from releasing the store before the delay
loop completes.  I think we might be safe here because you're not allowed to
make speculative writes visible, but arguably that's not a speculative write
because you can predict the timer will keep increasing.  The distinction is
somewhat academic, though: I'm not sure what purpose that very specific sort of
predictor would have aside from breaking existing code.

This issue of "how is time ordered" has come up a handful of times, most of
which don't have the loop.  One idea is to add the result of the rdtime
instruction to the IO ordering space.  I've opened a spec bug

  https://github.com/riscv/riscv-isa-manual/issues/78

Thanks!

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 6/7] RISC-V: arch/riscv/kernel
  2017-05-25 17:05   ` Pavel Machek
@ 2017-06-03  3:32     ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-03  3:32 UTC (permalink / raw)
  To: pavel; +Cc: linux-kernel, Arnd Bergmann, olof, albert

On Thu, 25 May 2017 10:05:05 PDT (-0700), pavel@ucw.cz wrote:
>> +static void ci_leaf_init(struct cacheinfo *this_leaf,
>> +                         struct device_node *node,
>> +                         enum cache_type type, unsigned int level)
>> +{
>> +        this_leaf->of_node = node;
>> +        this_leaf->level = level;
>> +        this_leaf->type = type;
>> +        this_leaf->physical_line_partition = 1; // not a sector cache
>> +        this_leaf->attributes = CACHE_WRITE_BACK | CACHE_READ_ALLOCATE | CACHE_WRITE_ALLOCATE; // TODO: add to DTS
>> +}
>
> You may want to run the patches through checkpatch. (Comment style,
> long lines).

Thanks.  Someone else suggested this and I've fixed most of the errors.  I'll
submit a v2 with everyone's feedback once I get through my mail.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 6/7] RISC-V: arch/riscv/kernel
  2017-05-25 19:51       ` Arnd Bergmann
@ 2017-06-06  4:56         ` Palmer Dabbelt
  2017-06-06  9:03           ` Arnd Bergmann
  0 siblings, 1 reply; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06  4:56 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: olof, linux-kernel, albert

On Thu, 25 May 2017 12:51:54 PDT (-0700), Arnd Bergmann wrote:
> On Thu, May 25, 2017 at 3:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> On Mon, 22 May 2017 19:11:35 PDT (-0700), olof@lixom.net wrote:
>
>>  * Parens around single-statement __asm__ macros.  For these I also get a
>>    message when they're wrapped in "do {} while (0)", so I'm not sure what else
>>    to do.
>
> I would generally recommend using inline functions for those, and only do
> macros when you need them.

We've tried to avoid new macros, so these are mostly from places where other
architectures use macros (like mb) or where we need to use CPP token pasting
(like __op_bit).  Should I change things like mb?

>>  * Parens around macros like "#define RISCV_PTR .dword".  These can't have
>>    parens because they go directly to the assembler, so I'm considering this a
>>    false-positive.
>
> agreed
>
>>  * Warnings about volatile in function declarations in bitops.h.  These are
>>    copied from other architectures.  There were a handful of other volatiles
>>    that I fixed,, but I think these should stay.
>
> Agreed, bitops.h is one of the few headers that should use 'volatile'.
>
>>  * Definitions like ARCH_HAS_SETUP_ADDITIONAL_PAGES, these are also present in
>>    other architectures.
>
> What is the warning here? I would assume that you should leave this
> unchanged as well.

ERROR: #define of 'ARCH_HAS_SETUP_ADDITIONAL_PAGES' is wrong - use Kconfig variables or standard guards instead
#2533: FILE: arch/riscv/include/asm/elf.h:79:
+#define ARCH_HAS_SETUP_ADDITIONAL_PAGES

>>  * We added new typedefs, I can remove these if that's a problem.  They're
>>    there to match our other code (bootloader and simulator).
>
> It depends. What typedefs are those? Removing the typedefs in both
> the kernel and the other code that uses the same types is likely the
> best option here.

OK, I'll add it to my TODO list.

>>  * A handful of lines over 80 characters that I think are onerous to break any
>>    more.
>
> Right, don't worry about it too much, and use common sense for this
> warning.
>
>>  * Some warnings about printk() not having a KERN_ prefix.  I fixed a handful
>>    of these, but the remaining ones I don't know how to fix (in show_regs, for
>>    example, where arm64 also has them).
>
> KERN_CONT

https://github.com/riscv/riscv-linux/commit/98e8fe9cb19d495180a9be03a0aa48c0183dd5be

>>  * Extern declarations in C files, all of which link to symbols in assembly or
>>    linker scripts.  These were copied from other architectures.
>
> I would try to fix those by using a header even if there is only one user.
> I'd actually like to get a compile-time warning for those in the long run,
> maybe with 'make W=1', so better don't introduce new ones.

OK, I'll fix them.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 5/7] RISC-V: arch/riscv/lib
  2017-05-26  9:06       ` Arnd Bergmann
@ 2017-06-06  4:56         ` Palmer Dabbelt
  2017-06-06  9:31           ` Arnd Bergmann
  0 siblings, 1 reply; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06  4:56 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: linux-kernel, olof, albert

On Fri, 26 May 2017 02:06:58 PDT (-0700), Arnd Bergmann wrote:
> On Thu, May 25, 2017 at 3:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> On Tue, 23 May 2017 04:19:42 PDT (-0700), Arnd Bergmann wrote:
>>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>
>>> Also, it would be good to replace the multiply+div64
>>> with a single multiplication here, see how x86 and arm do it
>>> (for the tsc/__timer_delay case).
>>
>> Makes sense.  I think this should do it
>>
>>   https://github.com/riscv/riscv-linux/commit/d397332f6ebff42f3ecb385e9cf3284fdeda6776
>>
>> but I'm finding this hard to test as this only works for 2ms sleeps.  It seems
>> at least in the right ballpark
>
> + if (usecs > MAX_UDELAY_US) {
> + __delay((u64)usecs * riscv_timebase / 1000000ULL);
> + return;
> + }
>
> You still do the 64-bit division here. What I meant is to completely
> avoid the division and use a multiply+shift.

The goal here was to avoid the error case that ARM has on overflow and instead
just delay for the requested time.  This should only divide when the delay is
>=2ms, so the division won't cost much in comparison.

The normal case should have no division in it.

I can copy ARM's error handling if you think that's better, but it seemed more
complicated than just computing the correct answer.
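
For the record, my understanding of the mult+shift trick is roughly the
following (a sketch only: the helper names here are made up, and the shift
width would need checking against realistic riscv_timebase values):

  #include <linux/math64.h>

  #define UDELAY_SHIFT    16

  /* timer ticks per microsecond, scaled by 2^UDELAY_SHIFT; set once at boot */
  static unsigned long udelay_mult;

  void __init delay_setup(void)
  {
          udelay_mult = div_u64((u64)riscv_timebase << UDELAY_SHIFT, 1000000);
  }

  void __udelay(unsigned long usecs)
  {
          /* common case: one multiply and one shift, no division */
          __delay(((u64)usecs * udelay_mult) >> UDELAY_SHIFT);
  }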

> Also, you don't need to base anything on HZ, as you do not rely
> on the delay calibration but always use a timer.

That makes sense, I just based this blindly off the ARM version.  I'll see if
that lets me avoid unnecessary overflow for ndelay.  If it doesn't then I'd
prefer to just keep exactly the same constraints ARM has to avoid unexpected
behavior.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64}
  2017-05-29 10:50       ` Arnd Bergmann
@ 2017-06-06  4:56         ` Palmer Dabbelt
  2017-06-06 17:39           ` Karsten Merker
  0 siblings, 1 reply; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06  4:56 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: linux-kernel, olof, albert

On Mon, 29 May 2017 03:50:47 PDT (-0700), Arnd Bergmann wrote:
> On Sat, May 27, 2017 at 2:57 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> On Tue, 23 May 2017 04:30:50 PDT (-0700), Arnd Bergmann wrote:
>>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>>> RISC-V has both 32-bit and 64-bit base ISAs, but they are very similar.
>>>> Like some other platforms, we'd like to share one arch directory between
>>>> the two of them.
>>>
>>> I think we mainly do the others for backwards-compatibility with ancient
>>> build scripts, and we don't need that here. Instead, you could add one more
>>> line to the 'SUBARCH:=' statement that interprets the uname output.
>>
>> I don't think that does the same thing.  The desired effect of this diff is:
>>
>>  * "uname -m" when running on a RISC-V machine returns either riscv32 or
>>    riscv64, as that's what tools like autoconf expect when trying to find
>>    tuples.
>>
>>  * I can cross compile for riscv32 and riscv64.  That's currently controlled by
>>    a Kconfig setting, but ARCH=riscv32 vs ARCH=riscv64 controlls what defconfig
>>    sets.
>>
>>  * I can natively compile for riscv32 and riscv64.  That uses the same Kconfig
>>    setting, and the same ARCH=riscv32 vs ARCH=riscv64 switch for defconfig.
>
> Right, but my point is that a new architecture should not rely on 'ARCH='
> to pick the defconfig, we only do that on a couple of architectures for
> backwards compatibility with old scripts.
>
>> Neither of the two Kconfig issues is a big deal, but we de need "uname -m" to
>> return "riscv64" or "riscv32" not "riscv".  I think the only way to do that is
>> to set SRCARCH, but I'd be happy to change it if there's a better way.  I think
>> if I just do this
>>
>> diff --git a/Makefile b/Makefile
>> index 0606f28..4adc609 100644
>> --- a/Makefile
>> +++ b/Makefile
>> @@ -232,7 +232,8 @@ SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
>>                                   -e s/arm.*/arm/ -e s/sa110/arm/ \
>>                                   -e s/s390x/s390/ -e s/parisc64/parisc/ \
>>                                   -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
>> -                                 -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ )
>> +                                 -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
>> +                                 -e s/riscv.*/riscv/ )
>>
>>  # Cross compiling and selecting different set of gcc/bin-utils
>>  # ---------------------------------------------------------------------------
>> @@ -269,14 +270,6 @@ ifeq ($(ARCH),x86_64)
>>          SRCARCH := x86
>>  endif
>>
>> -# Additional ARCH settings for RISC-V
>> -ifeq ($(ARCH),riscv32)
>> -       SRCARCH := riscv
>> -endif
>> -ifeq ($(ARCH),riscv64)
>> -       SRCARCH := riscv
>> -endif
>> -
>>  # Additional ARCH settings for sparc
>>  ifeq ($(ARCH),sparc32)
>>         SRCARCH := sparc
>>
>> then I'll end up with "uname -m" as "riscv" -- I haven't tried it, but that's
>> why we ended up with this diff in the first place.
>
> Do you mean the "uname -m" output comes from "${SRCARCH}" at
> the time of the kernel build? That would be easy enough to change
> by simply hardcoding it depending on CONFIG_64BIT.

OK, I didn't know about COMPAT_UTS_MACHINE.  That's a much better solution,
I'll use that.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
  2017-05-29 11:17       ` Arnd Bergmann
@ 2017-06-06  4:56         ` Palmer Dabbelt
  2017-06-06  9:20           ` Arnd Bergmann
  0 siblings, 1 reply; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06  4:56 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: linux-kernel, olof, albert

On Mon, 29 May 2017 04:17:40 PDT (-0700), Arnd Bergmann wrote:
> On Sat, May 27, 2017 at 2:57 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> On Tue, 23 May 2017 04:46:22 PDT (-0700), Arnd Bergmann wrote:
>>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>>> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
>>>> new file mode 100644
>>>> index 000000000000..510ead1d3343
>>>> --- /dev/null
>>>> +++ b/arch/riscv/Kconfig
>>>> @@ -0,0 +1,300 @@
>>>> +#
>>>> +# For a description of the syntax of this configuration file,
>>>> +# see Documentation/kbuild/kconfig-language.txt.
>>>> +#
>>>> +
>>>> +config RISCV
>>>> +       def_bool y
>>>> +       select OF
>>>> +       select OF_EARLY_FLATTREE
>>>> +       select OF_IRQ
>>>> +       select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
>>>> +       select ARCH_WANT_FRAME_POINTERS
>>>> +       select CLONE_BACKWARDS
>>>> +       select COMMON_CLK
>>>> +       select GENERIC_CLOCKEVENTS
>>>> +       select GENERIC_CPU_DEVICES
>>>> +       select GENERIC_IRQ_SHOW
>>>> +       select GENERIC_PCI_IOMAP
>>>
>>> You normally don't want GENERIC_PCI_IOMAP, unless your
>>> inb()/outb() uses other instructions than your readl()/writel()
>>
>> We don't have any special instructions for inb/outb, but there is a special
>> fence to ensure ordering.
>
> Ah, interesting. What is the exact behavior of that fence? (this is slightly
> unrelated but worth looking at anyway)
>
>> It looks like most of my candidates for patterning things after use
>> GENERIC_PCI_IOMAP, but I ended up setting GENERIC_IOMAP and things still at
>> least build and boot without any PCI
>>
>>   https://github.com/riscv/riscv-linux/commit/c43c599b0dd9d15886c03a9dd179f8936b0cbb2e
>>
>> I'll be sure to check if I broke PCI, as I don't actually know what's going on
>> here.
>
> Actually I made a mistake: GENERIC_PCI_IOMAP is fine, please use that,
> GENERIC_IOMAP is the option you don't want, and you already don't
> enable that one.

Great -- I actually ended up reverting the patch already as I couldn't figure
out how to make it work.  That was the one big thing on my TODO list before a
v2, so with any luck I'll be able to get a new patch set out soon.

>>>> +# even on 32-bit, physical (and DMA) addresses are > 32-bits
>>>> +config ARCH_PHYS_ADDR_T_64BIT
>>>> +       def_bool y
>>>> +
>>>> +config ARCH_DMA_ADDR_T_64BIT
>>>> +       def_bool y
>>>
>>> Are you required to use 64-bit addressing for RAM on 32-bit
>>> architectures though? Using 32-bit dma_addr_t and phys_addr_t
>>> when possible makes some code noticeably more efficient.
>>>
>>>> +config PGTABLE_LEVELS
>>>> +       int
>>>> +       default 3 if 64BIT
>>>> +       default 2
>>>
>>> With 2-level page tables, you usually can't address much more
>>> than 32-bit physical memory anyway, so I'd guess that most
>>> 32-bit chips would actually put their RAM under the 4GB boundary.
>>
>> We can address 34 bits of physical address space on Sv32 (the 32-bit virtual
>> addressing scheme in RV32).  If this is a meaningful performance constraint
>> then we could always make this configurable.
>
> I'd suggest to leave it turned off initially and only use it once someone
> actually builds a system that makes use of the high address space.
>
> Of course if there is already hardware that needs it, there has to
> be a way to turn it on.

There isn't, and I'm fine with 32-bit physical addresses on RV32I.  I doubt
anyone will be building big RV32 systems.  I'll make the change for the v2.

> This raises a much more general question about how you want to deal
> with SoC implementations in the future. The two most common ways of
> doing this are:
>
> - Every major platform gets a Kconfig option in the architecture menu,
>   and that selects the essential drivers (irqchip, clocksource, pinctrl,
>   clk, ...) that you need for that platform, along with architecture features
>   (ISA level and optional features, ...)
>
> - The architecture code knows nothing about the SoC and just keeps
>    to the basics (CPU architecture level selection, SMP/MMU/etc enabled,
>    selecting drivers that everyone needs) and leaves the rest up to be
>    selected in the defconfig file.
>
> On ARM, we have a bit of both, which is not as good as being
> consistent one way or another.

This is actually an open question in RISC-V land right now.  We should be
spinning up a platform specification working group this summer to try and work
things out.  While this will have to be ironed out, I believe the plan is to
define a small number of base platforms (maybe one for embedded systems with no
programmable PMAs, and one for larger machines with a bit more
configurability).  I'd anticipate that we'll have a platform Kconfig menu entry
for every platform that gets written down in a specification (just like we have
an entry for our base ISAs) and then defconfig entries for various
implementations that select the relevant platform in addition to the drivers
actually on board.

For now we've got a handful of defconfig entries for the various platforms we
support (the ISA simulator and our FPGA implementation), but there's no silicon
so we're not stuck with what's there.  I don't anticipate we'll add more than a
handful of these until the platform spec work is underway.

>>>> +config RV_ATOMIC
>>>> +       bool "Use atomic memory instructions (RV32A or RV64A)"
>>>> +       default y
>>>> +
>>>> +config RV_SYSRISCV_ATOMIC
>>>> +       bool "Include support for atomic operation syscalls"
>>>> +       default n
>>>> +       help
>>>> +         If atomic memory instructions are present, i.e.,
>>>> +         CONFIG_RV_ATOMIC, this includes support for the syscall that
>>>> +         provides atomic accesses.  This is only useful to run
>>>> +         binaries that require atomic access but were compiled with
>>>> +         -mno-atomic.
>>>> +
>>>> +         If CONFIG_RV_ATOMIC is unset, this option is mandatory.
>>>
>>> I wonder what the cost would be of always providing the syscalls
>>> for compatibility. This is also something worth putting into a VDSO
>>> instead of exposing the syscall:
>>>
>>> That way, user space that is built with -mno-atomic can call into
>>> the vdso, which depending on the hardware support will perform
>>> the atomic operation directly or enter the syscall.
>>
>> I think the theory is that most Linux-capable systems are going to have the A
>> extension, and that this is really there for educational/hobbyist use.
>>
>> The actual implementation isn't that expensive: it's no-SMP-only, so it's just
>> a disable/enable interrupts that wraps a compare+exchange.  I think putting it
>> in the VDSO is actually the right thing to do: that way non-A user code will
>> still have reasonable performance.
>>
>> I'll add it to my TODO list.
>
> This would be another variant of the question above.

After talking with people a bit about this, I think the sane thing to do here
is to just always provide the atomic system call (and VDSO implementation), and
to default to enabling the atomic extensions.  While I expect non-atomic
implementations to be very much the exception, there's a stronger argument for
supporting non-atomic userspace programs with reasonable performance on atomic
systems -- the first round of small Linux-capable embedded systems might not
support atomic (though ours will), but we don't want to wed software to a
syscall in this case.

Before the VDSO was suggested I was leaning towards disabling the atomic system
call by default, but since the VDSO implementation will provide very good
performance for non-atomic userspace code on atomic systems I think it's best
to just enable it everywhere.  It's only a few bytes of binary size.
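
To make that concrete, the interposer I have in mind is roughly the following
(a sketch only; the symbol name is a placeholder and the non-A fallback is
elided):

  u32 __vdso_cmpxchg32(u32 *ptr, u32 old, u32 new)
  {
          u32 prev, rc;

          /* On implementations with the A extension this runs entirely in
           * userspace; without it, the vDSO would instead issue the cmpxchg
           * syscall directly. */
          __asm__ __volatile__ (
          "0:     lr.w    %0, %2\n"
          "       bne     %0, %3, 1f\n"
          "       sc.w    %1, %4, %2\n"
          "       bnez    %1, 0b\n"
          "1:\n"
                  : "=&r" (prev), "=&r" (rc), "+A" (*ptr)
                  : "r" (old), "r" (new)
                  : "memory");
          return prev;
  }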

>>>> +config PCI_DOMAINS
>>>> +       def_bool PCI
>>>> +
>>>> +config PCI_DOMAINS_GENERIC
>>>> +       def_bool PCI
>>>> +
>>>> +config PCI_SYSCALL
>>>> +       def_bool PCI
>>>
>>> I don't think you want PCI_SYSCALL
>>
>> OK.  I'm a bit worried we need this to run DOOM, but I'll give everything a
>> shot without it and see what happens
>
> I'd argue that's a user space bug that we want to fix anyway to make
> the code portable to other architectures.

I buy it.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 4/7] RISC-V: arch/riscv/include
  2017-06-01  9:00       ` Arnd Bergmann
@ 2017-06-06  4:56         ` Palmer Dabbelt
  2017-06-06  8:54           ` Arnd Bergmann
  0 siblings, 1 reply; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06  4:56 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: linux-kernel, olof, albert

On Thu, 01 Jun 2017 02:00:22 PDT (-0700), Arnd Bergmann wrote:
> On Thu, Jun 1, 2017 at 2:56 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> On Tue, 23 May 2017 05:55:15 PDT (-0700), Arnd Bergmann wrote:
>>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>
>
>>>> +#ifndef _ASM_RISCV_CACHE_H
>>>> +#define _ASM_RISCV_CACHE_H
>>>> +
>>>> +#define L1_CACHE_SHIFT         6
>>>> +
>>>> +#define L1_CACHE_BYTES         (1 << L1_CACHE_SHIFT)
>>>
>>> Is this the only valid cache line size on riscv, or just the largest
>>> one that is allowed?
>>
>> The RISC-V ISA manual doesn't actually mention caches anywhere, so there's no
>> restriction on L1 cache line size (we tried to keep microarchitecture out of
>> the ISA specification).  We provide the actual cache parameters as part of the
>> device tree, but it looks like this needs to be known staticly in some places
>> so we can't use that everywhere.
>>
>> We could always make this a Kconfig parameter.
>
> The cache line size is used in a couple of places, let's go through the most
> common ones to see where that abstraction might be leaky and you actually
> get an architectural effect:
>
> - On SMP machines, ____cacheline_aligned_in_smp is used to annotate
>   data structures used in lockless algorithms, typically with one CPU writing
>   to some members of a structure, and another CPU reading from it but
>   not writing the same members. Depending on the architecture, having a
>   larger actual alignment than L1_CACHE_BYTES will either lead to
>   bad performance from cache line ping pong, or actual data corruption.

On RISC-V it's just a performance problem, so at least it's not catastrophic.

> - On systems with DMA masters that are not fully coherent,
>   ____cacheline_aligned is used to annotate data structures used
>   for DMA buffers, to make sure that the cache maintenance operations
>   in dma_sync_*_for_*() helpers don't corrup data outside of the
>   DMA buffer. You don't seem to support noncoherent DMA masters
>   or the cache maintenance operations required to use those, so this
>   might not be a problem until someone adds an extension for those.
>
> - Depending on the bus interconnect, a coherent DMA master might
>   not be able to update partial cache lines, so you need the same
>   annotation.

Well, our (SiFive's) bus is easy to master so hopefully we won't end up doing
that.  There is, of course, the rest of the world -- but that's just a bridge
we'll have to cross later (if such an implementation arises).

> - The kmalloc() family of memory allocators aligns data to the cache
>   line size, for both DMA and SMP synchronization above.

Ya, but luckily just a performance problem on RISC-V.

> - Many architectures have cache line prefetch, flush, zero or copy
>   instructions  that are used for important performance optimizations
>   but that are typically defined on a cacheline granularity. I don't
>   think you currently have any of them, but it seems likely that there
>   will be demand for them later.

We actually have an implicit prefetch (loads to x0, the zero register), but
it still has all the load side-effects so nothing uses it.

> Having a larger than necessary alignment can waste substantial amounts
> of memory for arrays of cache line aligned structures (typically
> per-cpu arrays), but otherwise should not cause harm.

I bugged our L1 guy and he says 64-byte lines are a bit of a magic number
because of how they line up with DIMMs.  Since there's no spec to define this,
there's no correct answer.  I'd be amenable to making this a Kconfig option,
but I think we'll leave it alone for now.  It does match the extant
implementations.

>>>> diff --git a/arch/riscv/include/asm/io.h b/arch/riscv/include/asm/io.h
>>>> new file mode 100644
>>>> index 000000000000..d942555a7a08
>>>> --- /dev/null
>>>> +++ b/arch/riscv/include/asm/io.h
>>>> @@ -0,0 +1,36 @@
>>>
>>>> +#ifndef _ASM_RISCV_IO_H
>>>> +#define _ASM_RISCV_IO_H
>>>> +
>>>> +#include <asm-generic/io.h>
>>>
>>> I would recommend providing your own {read,write}{b,w,l,q}{,_relaxed}
>>> helpers using inline assembly, to prevent the compiler for breaking
>>> up accesses into byte accesses.
>>>
>>> Also, most architectures require to some synchronization after a
>>> non-relaxed readl() to prevent prefetching of DMA buffers, and
>>> before a writel() to flush write buffers when a DMA gets triggered.
>>
>> Makes sense.  These were all OK on existing implementations (as there's no
>> writable PMAs, so all MMIO regions are strictly ordered), but that's not
>> actually what the RISC-V ISA says.  I patterned this on arm64
>>
>>   https://github.com/riscv/riscv-linux/commit/e200fa29a69451ef4d575076e4d2af6b7877b1fa
>>
>> where I think the only odd thing is our definition of mmiowb
>>
>>   +/* IO barriers.  These only fence on the IO bits because they're only required
>>   + * to order device access.  We're defining mmiowb because our AMO instructions
>>   + * (which are used to implement locks) don't specify ordering.  From Chapter 7
>>   + * of v2.2 of the user ISA:
>>   + * "The bits order accesses to one of the two address domains, memory or I/O,
>>   + * depending on which address domain the atomic instruction is accessing. No
>>   + * ordering constraint is implied to accesses to the other domain, and a FENCE
>>   + * instruction should be used to order across both domains."
>>   + */
>>   +
>>   +#define __iormb()               __asm__ __volatile__ ("fence i,io" : : : "memory");
>>   +#define __iowmb()               __asm__ __volatile__ ("fence io,o" : : : "memory");
>
> Looks ok, yes.
>
>>   +#define mmiowb()                __asm__ __volatile__ ("fence io,io" : : : "memory");
>>
>> which I think is correct.
>
> I can never remember what exactly this one does.

I can't find the reference again, but what I found said that if your atomics
(or whatever's used for locking) don't stay ordered with your MMIO accesses,
then you should define mmiowb to ensure ordering.  I managed to screw this up,
as there's no "w" in the successor set (to actually enforce the AMO ordering).
This is somewhat confirmed by

  https://lkml.org/lkml/2006/8/31/174
  Subject: Re: When to use mmiowb()?
  AFAICT, they're both right.  Generally, mmiowb() should be used prior to
  unlock in a critical section whose last PIO operation is a writeX.

Thus, I think the actual fence should be at least

  fence o,w

Documentation/memory-barries.txt says

  "
  The Linux kernel also has a special barrier for use with memory-mapped I/O
  writes:

  	mmiowb();

  This is a variation on the mandatory write barrier that causes writes to weakly
  ordered I/O regions to be partially ordered.  Its effects may go beyond the
  CPU->Hardware interface and actually affect the hardware at some level.

  See the subsection "Acquires vs I/O accesses" for more information.
  "

  "
  ACQUIRES VS I/O ACCESSES
  ------------------------

  Under certain circumstances (especially involving NUMA), I/O accesses within
  two spinlocked sections on two different CPUs may be seen as interleaved by the
  PCI bridge, because the PCI bridge does not necessarily participate in the
  cache-coherence protocol, and is therefore incapable of issuing the required
  read memory barriers.

  For example:

  	CPU 1				CPU 2
  	===============================	===============================
  	spin_lock(Q)
  	writel(0, ADDR)
  	writel(1, DATA);
  	spin_unlock(Q);
  					spin_lock(Q);
  					writel(4, ADDR);
  					writel(5, DATA);
  					spin_unlock(Q);

  may be seen by the PCI bridge as follows:

  	STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5

  which would probably cause the hardware to malfunction.


  What is necessary here is to intervene with an mmiowb() before dropping the
  spinlock, for example:

  	CPU 1				CPU 2
  	===============================	===============================
  	spin_lock(Q)
  	writel(0, ADDR)
  	writel(1, DATA);
  	mmiowb();
  	spin_unlock(Q);
  					spin_lock(Q);
  					writel(4, ADDR);
  					writel(5, DATA);
  					mmiowb();
  					spin_unlock(Q);

  this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
  before either of the stores issued on CPU 2.


  Furthermore, following a store by a load from the same device obviates the need
  for the mmiowb(), because the load forces the store to complete before the load
  is performed:

  	CPU 1				CPU 2
  	===============================	===============================
  	spin_lock(Q)
  	writel(0, ADDR)
  	a = readl(DATA);
  	spin_unlock(Q);
  					spin_lock(Q);
  					writel(4, ADDR);
  					b = readl(DATA);
  					spin_unlock(Q);


  See Documentation/driver-api/device-io.rst for more information.
  "

which matches what's above.  I think "fence o,w" is sufficient for a mmiowb on
RISC-V.  I'll make the change.
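
In other words, the new definition would be (sketch):

  /* order MMIO writes before the store that releases a spinlock */
  #define mmiowb()        __asm__ __volatile__ ("fence o,w" : : : "memory")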

>>>> +#ifdef __KERNEL__
>>>> +
>>>> +#ifdef CONFIG_MMU
>>>> +
>>>> +extern void __iomem *ioremap(phys_addr_t offset, unsigned long size);
>>>> +
>>>> +#define ioremap_nocache(addr, size) ioremap((addr), (size))
>>>> +#define ioremap_wc(addr, size) ioremap((addr), (size))
>>>> +#define ioremap_wt(addr, size) ioremap((addr), (size))
>>>
>>> Is this a hard architecture limitation? Normally you really want
>>> write-combined access on frame buffer memory and a few other
>>> cases for performance reasons, and ioremap_wc() gets used
>>> for by memremap() for addressing RAM in some cases, and you
>>> normally don't want to have PTEs for the same memory using
>>> cached and uncached page flags
>>
>> This is currently an architecture limitation.  In RISC-V these properties are
>> known as PMAs (Physical Memory Attributes).  While the supervisor spec mentions
>> PMAs, it doesn't provide a mechanism to read or write them so they are
>> essentially unspecified.  PMAs will be properly defined as part of the platform
>> specification, which isn't written yet.
>
> Ok. Maybe add that as a comment above these definitions then.

OK.

>>>> +/*
>>>> + * low level task data that entry.S needs immediate access to
>>>> + * - this struct should fit entirely inside of one cache line
>>>> + * - this struct resides at the bottom of the supervisor stack
>>>> + * - if the members of this struct changes, the assembly constants
>>>> + *   in asm-offsets.c must be updated accordingly
>>>> + */
>>>> +struct thread_info {
>>>> +       struct task_struct      *task;          /* main task structure */
>>>> +       unsigned long           flags;          /* low level flags */
>>>> +       __u32                   cpu;            /* current CPU */
>>>> +       int                     preempt_count;  /* 0 => preemptable, <0 => BUG */
>>>> +       mm_segment_t            addr_limit;
>>>> +};
>>>
>>> Please see 15f4eae70d36 ("x86: Move thread_info into task_struct")
>>> and try to do the same.
>>
>> OK, here's my attempt
>>
>>   https://github.com/riscv/riscv-linux/commit/c618553e7aa65c85564a5d0a868ec7e6cf634afd
>>
>> Since there's some actual meat, I left a commit message (these are more just
>> notes for me for my v2, I'll be squashing everything)
>>
>> "
>> This is patterned more off the arm64 move than the x86 one, since we
>> still need to have at least addr_limit to emulate FS.
>>
>> The patch itself changes sscratch from holding SP to holding TP, which
>> contains a pointer to task_struct.  thread_info must be at a 0 offset
>> from task_struct, but it looks like that's already enforced with a big
>> comment.  We now store both the user and kernel SP in task_struct, but
>> those are really acting more as extra scratch space than permanent
>> storage.
>> "
>
> I haven't looked at all the details of the x86 patch, but it seems they
> decided to put the arch specific members into 'struct thread_struct'
> rather than 'struct thread_info', so I'd suggest you do the same here for
> consistency, unless there is a strong reason against doing it.

We actually can't put them in "struct thread_struct" in a sane manner.  On
context switches on RISC-V there are no registers saved; instead we use the
instruction

  csrrw REG, sscratch, REG

which swaps some register (used to be sp, now tp) with the "sscratch" CSR (a
register only visible to the supervisor).  At this point we only have one
register to work with in order to save the user state.  Since "struct
thread_info" is 0-offset from "struct task_struct", we're guaranteed to be able
to access it using our one addressing mode (a 12-bit signed constant offset
from a register).

Unfortunately, "struct thread_struct" is at a potentially large offset from the
saved TP.  We could do something silly like

  addi tp, tp, OFFSET_1
  addi tp, tp, OFFSET_2
  addi tp, tp, OFFSET_3

but for an "allyesconfig" we end up with offsets of about 9KiB, which would
require 4 additional instructions to find "struct thread_struct".  While this
is possible, I'd prefer to avoid the extra cycles.  We usually handle long
immediates with a two-instruction sequence

  li REG, IMM31-12 (loads the high bits of REG, zeroing the low bits)
  addi REG, REG, IMM11-0 (loads the low bits of REG, leaving the high bits alone)

but there's no scratch register here so we can't use that.  We don't have a
long-immediate-add instruction.

The arguments in x86 land for moving everything to "struct thread_struct" were
that they always know it's in "struct task_struct", but since that's all we're
supporting on RISC-V I don't think it counts.  There were also discussions of
eliminating "struct thread_info", but unless there's something at the start of
"struct task_struct" then RISC-V will have to pay a bunch of cycles on a
context switch.

>>>> +#else /* !CONFIG_MMU */
>>>> +
>>>> +static inline void flush_tlb_all(void)
>>>> +{
>>>> +       BUG();
>>>> +}
>>>> +
>>>> +static inline void flush_tlb_mm(struct mm_struct *mm)
>>>> +{
>>>> +       BUG();
>>>> +}
>>>
>>> The NOMMU support is rather incomplete and CONFIG_MMU is
>>> hard-enabled, so I'd just drop any !CONFIG_MMU #ifdefs.
>>
>> OK.  I've left in the "#ifdef CONFIG_MMU" blocks as the #ifdef/#endif doesn't
>> really add any code, but I can go ahead and drop the #ifdef if you think that's
>> better.
>>
>>   https://github.com/riscv/riscv-linux/commit/e98ca23adfb9422bebc87cbfb58f70d4a63cf067
>
> Ok.
>
>>>> +#include <asm-generic/unistd.h>
>>>> +
>>>> +#define __NR_sysriscv  __NR_arch_specific_syscall
>>>> +#ifndef __riscv_atomic
>>>> +__SYSCALL(__NR_sysriscv, sys_sysriscv)
>>>> +#endif
>>>
>>> Please make this a straight cmpxchg syscall and remove the multiplexer.
>>> Why does the definition depend on __riscv_atomic rather than the
>>> Kconfig symbol?
>>
>> I think that was just an oversight: that's not the right switch.  Either you or
>> someone else pointed out some problems with this.  There's going to be an
>> interposer in the VDSO, and then we'll always enable the system call.
>>
>> I can change this to two system calls: sysriscv_cmpxchg32 and
>> sysriscv_cmpxchg64.
>
> Sounds good.
>
>         Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 4/7] RISC-V: arch/riscv/include
  2017-06-06  4:56         ` Palmer Dabbelt
@ 2017-06-06  8:54           ` Arnd Bergmann
  2017-06-06 19:07             ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-06-06  8:54 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: Linux Kernel Mailing List, Olof Johansson, albert

On Tue, Jun 6, 2017 at 6:56 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> On Thu, 01 Jun 2017 02:00:22 PDT (-0700), Arnd Bergmann wrote:
>> On Thu, Jun 1, 2017 at 2:56 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>> On Tue, 23 May 2017 05:55:15 PDT (-0700), Arnd Bergmann wrote:
>>>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> - Many architectures have cache line prefetch, flush, zero or copy
>>   instructions  that are used for important performance optimizations
>>   but that are typically defined on a cacheline granularity. I don't
>>   think you currently have any of them, but it seems likely that there
>>   will be demand for them later.
>
> We actually have an implicit prefetch (loads to x0, the zero register), but
> it still has all the load side-effects so nothing uses it.
>
>> Having a larger than necessary alignment can waste substantial amounts
>> of memory for arrays of cache line aligned structures (typically
>> per-cpu arrays), but otherwise should not cause harm.
>
> I bugged our L1 guy and he says 64-byte lines are a bit of a magic number
> because of how they line up with DIMMs.  Since there's no spec to define this,
> there's no correct answer.  I'd be amenable to making this a Kconfig option,
> but I think we'll leave it alone for now.  It does match the extant
> implementations.

Hmm, this sounds like a hole in the architecture definition: if you have an
instruction that performs a prefetch (even one that is not easily usable),
I would argue that the cache line size has become a feature of the
architecture and is no longer strictly an implementation detail of the
microarchitecture.

Regarding the memory interface, a lot of systems use two DIMMs on
each memory channel for 128-bit parallel buses, and with LP-DDRx
controllers, you might have a width as small as 16 bits.

>>>>> diff --git a/arch/riscv/include/asm/io.h b/arch/riscv/include/asm/io.h
>>>>> new file mode 100644
>>>>> index 000000000000..d942555a7a08
>>>>> --- /dev/null
>>>>> +++ b/arch/riscv/include/asm/io.h
>>>>> @@ -0,0 +1,36 @@
>>>>
>>>>> +#ifndef _ASM_RISCV_IO_H
>>>>> +#define _ASM_RISCV_IO_H
>>>>> +
>>>>> +#include <asm-generic/io.h>
>>>>
>>>> I would recommend providing your own {read,write}{b,w,l,q}{,_relaxed}
>>>> helpers using inline assembly, to prevent the compiler for breaking
>>>> up accesses into byte accesses.
>>>>
>>>> Also, most architectures require to some synchronization after a
>>>> non-relaxed readl() to prevent prefetching of DMA buffers, and
>>>> before a writel() to flush write buffers when a DMA gets triggered.
>>>
>>> Makes sense.  These were all OK on existing implementations (as there's no
>>> writable PMAs, so all MMIO regions are strictly ordered), but that's not
>>> actually what the RISC-V ISA says.  I patterned this on arm64
>>>
>>>   https://github.com/riscv/riscv-linux/commit/e200fa29a69451ef4d575076e4d2af6b7877b1fa
>>>
>>> where I think the only odd thing is our definition of mmiowb
>>>
>>>   +/* IO barriers.  These only fence on the IO bits because they're only required
>>>   + * to order device access.  We're defining mmiowb because our AMO instructions
>>>   + * (which are used to implement locks) don't specify ordering.  From Chapter 7
>>>   + * of v2.2 of the user ISA:
>>>   + * "The bits order accesses to one of the two address domains, memory or I/O,
>>>   + * depending on which address domain the atomic instruction is accessing. No
>>>   + * ordering constraint is implied to accesses to the other domain, and a FENCE
>>>   + * instruction should be used to order across both domains."
>>>   + */
>>>   +
>>>   +#define __iormb()               __asm__ __volatile__ ("fence i,io" : : : "memory");
>>>   +#define __iowmb()               __asm__ __volatile__ ("fence io,o" : : : "memory");
>>
>> Looks ok, yes.
>>
>>>   +#define mmiowb()                __asm__ __volatile__ ("fence io,io" : : : "memory");
>>>
>>> which I think is correct.
>>
>> I can never remember what exactly this one does.
>
> I can't find the reference again, but what I found said that if your atomics
> (or whatever's used for locking) don't stay ordered with your MMIO accesses,
> then you should define mmiowb to ensure ordering.  I managed to screw this up,
> as there's no "w" in the successor set (to actually enforce the AMO ordering).
> This is somewhat confirmed by
>
>   https://lkml.org/lkml/2006/8/31/174
>   Subject: Re: When to use mmiowb()?
>   AFAICT, they're both right.  Generally, mmiowb() should be used prior to
>   unlock in a critical section whose last PIO operation is a writeX.
>
> Thus, I think the actual fence should be at least
>
>   fence o,w
...
> which matches what's above.  I think "fence o,w" is sufficient for a mmiowb on
> RISC-V.  I'll make the change.

This sounds reasonable according to the documentation, but with your
longer explanation of the barriers, I think the __iormb/__iowmb definitions
above are wrong. What you actually need I think is

void writel(u32 v, volatile void __iomem *addr)
{
         asm volatile("fence w,o" : : : "memory");
         writel_relaxed(v, addr);
}

u32 readl(volatile void __iomem *addr)
{
         u32 ret = readl_relaxed(addr);
         asm volatile("fence i,r" : : : "memory");
         return ret;
}

to synchronize between DMA and I/O. The barriers you listed above
in contrast appear to be directed at synchronizing I/O with other I/O.
We normally assume that this is not required when you have
subsequent MMIO accesses on the same device (on PCI) or the
same address region (per ARM architecture and others). If you do
need to enforce ordering between MMIO, you might even need to
add those barriers in the relaxed version to be portable with drivers
written for ARM SoCs:

void writel_relaxed(u32 v, volatile void __iomem *addr)
{
>         __raw_writel((__force u32)cpu_to_le32(v), addr);
         asm volatile("fence o,io" : : : "memory");
}

u32 readl_relaxed(volatile void __iomem *addr)
{
         asm volatile("fence i,io" : : : "memory");
         return le32_to_cpu((__force __le32)__raw_readl(addr));
}

You then end up with a barrier before and after each regular
readl/writel in order to synchronize both with DMA and MMIO
instructions, and you still need the extra mmiowb() to
synchronize against the spinlock.

My memory on mmiowb is still a bit cloudy, but I think we don't
need that on ARM, and while PowerPC originally needed it, it is
now implied by the spin_unlock(). If this is actually right, you might
want to do the same here. Very few drivers actually use mmiowb(),
but there might be more drivers that would need it if your
spin_unlock() doesn't synchronize against writel() or writel_relaxed().
Maybe the mmiowb() should really be implied by writel() but not
writel_relaxed()? That might be sensible, but only if we do it
the same way on powerpc, which currently doesn't have
writel_relaxed() any more relaxed than writel().

> The arguments in x86 land for moving everything to "struct thread_struct" were
> that they always know it's in "struct task_struct", but since that's all we're
> supporting on RISC-V I don't think it counts.  There were also discussions of
> eliminating "struct thread_info", but unless there's something at the start of
> "struct task_struct" then RISC-V will have to pay a bunch of cycles on a
> context switch.

Ok. Since you have THREAD_INFO_IN_TASK, it probably doesn't matter too
much then; it's just a bit inconsistent with the other architectures, but
the effect is the same, so just do it the way that is more efficient for you.
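
For context, with THREAD_INFO_IN_TASK the layout is simply the following
(a sketch of the generic arrangement, not RISC-V specific code):

/* With CONFIG_THREAD_INFO_IN_TASK, thread_info is the first member of
 * task_struct, so a single pointer to the task reaches both. */
struct task_struct {
	struct thread_info	thread_info;
	/* ... */
};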

      Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 6/7] RISC-V: arch/riscv/kernel
  2017-06-02 23:56     ` Palmer Dabbelt
@ 2017-06-06  9:01       ` Arnd Bergmann
  2017-06-06 20:37         ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-06-06  9:01 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: Linux Kernel Mailing List, Olof Johansson, albert

On Sat, Jun 3, 2017 at 1:56 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> On Tue, 23 May 2017 06:35:23 PDT (-0700), Arnd Bergmann wrote:
>>> +IRQCHIP_DECLARE(riscv, "riscv,cpu-intc", riscv_intc_init);
>> If you don't care about LPC/ISA devices, then your PCI_MIN_IO
>> should also be zero instead of 0x1000
>
> Sorry, but the only Google result for PCI_MIN_IO is this email.  There don't
> appear to be any relevant references to PCI_MIN in the kernel
>
>   $ git grep PCI_MIN_
>   arch/mips/include/asm/mach-loongson64/cs5536/cs5536_pci.h:      ((PCI_MAX_LATENCY << 24) | (PCI_MIN_GRANT << 16) | \
>   arch/mips/include/asm/mach-loongson64/cs5536/cs5536_pci.h:#define PCI_MIN_GRANT                 0x00
>   drivers/ata/pata_hpt366.c:      pci_write_config_byte(dev, PCI_MIN_GNT, 0x08);
>   drivers/ata/pata_hpt37x.c:      pci_write_config_byte(dev, PCI_MIN_GNT, 0x08);
>   drivers/ata/pata_hpt3x2n.c:     pci_write_config_byte(dev, PCI_MIN_GNT, 0x08);
>   drivers/ide/hpt366.c:   pci_write_config_byte(dev, PCI_MIN_GNT, 0x08);
>   drivers/net/fddi/skfp/h/skfbi.h:#define PCI_MIN_GNT     0x3e    /*  8 bit       Min_Gnt */
>   drivers/net/fddi/skfp/h/skfbi.h:/*      PCI_MIN_GNT     8 bit   Min_Gnt */
>   include/uapi/linux/pci_regs.h:#define PCI_MIN_GNT               0x3e    /* 8 bits */
>
> I'm afraid that I'm not sure what to do here.

Sorry, I meant PCIBIOS_MIN_IO
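
For reference, that amounts to a one-line definition in the port's
asm/pci.h; a sketch, assuming no legacy ISA/LPC peripherals ever need the
low 0x1000 I/O ports reserved:

#define PCIBIOS_MIN_IO		0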

       Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 6/7] RISC-V: arch/riscv/kernel
  2017-06-06  4:56         ` Palmer Dabbelt
@ 2017-06-06  9:03           ` Arnd Bergmann
  2017-06-06 20:38             ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-06-06  9:03 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: Olof Johansson, Linux Kernel Mailing List, albert

On Tue, Jun 6, 2017 at 6:56 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> On Thu, 25 May 2017 12:51:54 PDT (-0700), Arnd Bergmann wrote:
>> On Thu, May 25, 2017 at 3:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>> On Mon, 22 May 2017 19:11:35 PDT (-0700), olof@lixom.net wrote:
>>
>>>  * Definitions like ARCH_HAS_SETUP_ADDITIONAL_PAGES, these are also present in
>>>    other architectures.
>>
>> What is the warning here? I would assume that you should leave this
>> unchanged as well.
>
> ERROR: #define of 'ARCH_HAS_SETUP_ADDITIONAL_PAGES' is wrong - use Kconfig variables or standard guards instead
> #2533: FILE: arch/riscv/include/asm/elf.h:79:
> +#define ARCH_HAS_SETUP_ADDITIONAL_PAGES

Ok, you can definitely ignore this one. The warning is meant to prevent adding
new macros like that, but the macro already exists in the other architectures,
and I see no point in converting them all into a CONFIG_ symbol.

       Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
  2017-06-06  4:56         ` Palmer Dabbelt
@ 2017-06-06  9:20           ` Arnd Bergmann
  2017-06-06 20:38             ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-06-06  9:20 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: Linux Kernel Mailing List, Olof Johansson, albert

On Tue, Jun 6, 2017 at 6:56 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> On Mon, 29 May 2017 04:17:40 PDT (-0700), Arnd Bergmann wrote:
>> On Sat, May 27, 2017 at 2:57 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>> On Tue, 23 May 2017 04:46:22 PDT (-0700), Arnd Bergmann wrote:
>> This raises a much more general question about how you want to deal
>> with SoC implementations in the future. The two most common ways of
>> doing this are:
>>
>> - Every major platform gets a Kconfig option in the architecture menu,
>>   and that selects the essential drivers (irqchip, clocksource, pinctrl,
>>   clk, ...) that you need for that platform, along with architecture features
>>   (ISA level and optional features, ...)
>>
>> - The architecture code knows nothing about the SoC and just keeps
>>    to the basics (CPU architecture level selection, SMP/MMU/etc enabled,
>>    selecting drivers that everyone needs) and leaves the rest up to be
>>    selected in the defconfig file.
>>
>> On ARM, we have a bit of both, which is not as good as being
>> consistent one way or another.
>
> This is actually an open question in RISC-V land right now.  We should be
> spinning up a platform specification working group this summer to try and work
> things out.  While this will have to be ironed out, I believe the plan is to
> define a small number of base platforms (maybe one for embedded systems with no
> programmable PMAs, and one for larger machines with a bit more
> configurability).  I'd anticipate that we'll have a platform Kconfig menu entry
> for every platform that gets written down in a specification (just like we have
> an entry for our base ISAs) and then defconfig entries for various
> implementations that select the relevant platform in addition to the drivers
> actually on board.
>
> For now we've got a handful of defconfig entries for the various platforms we
> support (the ISA simulator and our FPGA implementation), but there's no silicon
> so we're not stuck with what's there.  I don't anticipate we'll add more than a
> handful of these until the platform spec work is underway.

Ok. Another related point, which may or may not be obvious: when you
come up with platform definitions, they should not be mutually exclusive.

For instance, supporting both MMU/NOMMU, big/little-endian or 32/64-bit
kernels will of course require building separate binaries, but almost every
other configuration option should be backward compatible: An SMP
kernel should run on a uniprocessor machine and vice versa (using only
one CPU), and you should be able to run a kernel with support for multiple
instruction set revisions by restricting the build to the smallest subset
of instructions.

On older ARM platforms and most MIPS platforms, we are still
restricted to building a kernel binary that will only run on a particular
SoC family and not even another SoC with the same CPU core.
Fixing this for most ARM platforms required a lot of work that you
should avoid by requiring them all to work with a common kernel
from the start.

Similarly, ARM has an incompatibility that prevents us from running
on older (ARMv4/v5) along with newer (ARMv6 or higher) instruction
set versions with a single kernel. Avoiding this on RISC-V may
become challenging as one of the strengths of the architecture is
its flexibility: Someone may come up with their own architecture
extensions that they really want to support in the kernel but can't
get it to work without making the kernel binary incompatible with
other implementations. Do you already have a policy for how to
deal with this? Usually by the time someone ships hardware, it's
too late and it becomes hard to argue for their kernel port to not
get merged.

>>>>> +config RV_ATOMIC
>>>>> +       bool "Use atomic memory instructions (RV32A or RV64A)"
>>>>> +       default y
>>>>> +
>>>>> +config RV_SYSRISCV_ATOMIC
>>>>> +       bool "Include support for atomic operation syscalls"
>>>>> +       default n
>>>>> +       help
>>>>> +         If atomic memory instructions are present, i.e.,
>>>>> +         CONFIG_RV_ATOMIC, this includes support for the syscall that
>>>>> +         provides atomic accesses.  This is only useful to run
>>>>> +         binaries that require atomic access but were compiled with
>>>>> +         -mno-atomic.
>>>>> +
>>>>> +         If CONFIG_RV_ATOMIC is unset, this option is mandatory.
>>>>
>>>> I wonder what the cost would be of always providing the syscalls
>>>> for compatibility. This is also something worth putting into a VDSO
>>>> instead of exposing the syscall:
>>>>
>>>> That way, user space that is built with -mno-atomic can call into
>>>> the vdso, which depending on the hardware support will perform
>>>> the atomic operation directly or enter the syscall.
>>>
>>> I think the theory is that most Linux-capable systems are going to have the A
>>> extension, and that this is really there for educational/hobbyist use.
>>>
>>> The actual implementation isn't that expensive: it's no-SMP-only, so it's just
>>> a disable/enable interrupts that wraps a compare+exchange.  I think putting it
>>> in the VDSO is actually the right thing to do: that way non-A user code will
>>> still have reasonable performance.
>>>
>>> I'll add it to my TODO list.
>>
>> This would be another variant of the question above.
>
> After talking with people a bit about this, I think the sane thing to do here
> is to just always provide the atomic system call (and VDSO implementation), and
> to default to enabling the atomic extensions.  While I expect non-atomic
> implementations to be very much the exception, there's a stronger argument for
> supporting non-atomic userspace programs with reasonable performance on atomic
> systems -- the first round of small Linux-capable embedded systems might not
> support atomic (though ours will), but we don't want to wed software to a
> syscall in this case.
>
> Before the VDSO was suggested I was leaning towards disabling the atomic system
> call by default, but since the VDSO implementation will provide very good
> performance for non-atomic userspace code on atomic systems I think it's best
> to just enable it everywhere.  It's only a few bytes of binary size.
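
A minimal sketch of the uniprocessor fallback described above: with
interrupts masked, nothing else on the hart can interleave, so a plain
load/compare/store is atomic.  The name is illustrative, not the actual
syscall entry point:

static u32 cmpxchg32_no_amo(u32 *ptr, u32 old, u32 new)
{
	unsigned long flags;
	u32 cur;

	local_irq_save(flags);		/* UP only: mask interrupts around the update */
	cur = *ptr;
	if (cur == old)
		*ptr = new;
	local_irq_restore(flags);

	return cur;			/* caller checks cur == old to see if it won */
}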

Ok.

        Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 5/7] RISC-V: arch/riscv/lib
  2017-06-06  4:56         ` Palmer Dabbelt
@ 2017-06-06  9:31           ` Arnd Bergmann
  2017-06-06 20:53             ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-06-06  9:31 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: Linux Kernel Mailing List, Olof Johansson, albert

On Tue, Jun 6, 2017 at 6:56 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> On Fri, 26 May 2017 02:06:58 PDT (-0700), Arnd Bergmann wrote:
>> On Thu, May 25, 2017 at 3:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>> On Tue, 23 May 2017 04:19:42 PDT (-0700), Arnd Bergmann wrote:
>>>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>
>>>> Also, it would be good to replace the multiply+div64
>>>> with a single multiplication here, see how x86 and arm do it
>>>> (for the tsc/__timer_delay case).
>>>
>>> Makes sense.  I think this should do it
>>>
>>>   https://github.com/riscv/riscv-linux/commit/d397332f6ebff42f3ecb385e9cf3284fdeda6776
>>>
>>> but I'm finding this hard to test as this only works for 2ms sleeps.  It seems
>>> at least in the right ballpark
>>
>> +	if (usecs > MAX_UDELAY_US) {
>> +		__delay((u64)usecs * riscv_timebase / 1000000ULL);
>> +		return;
>> +	}
>>
>> You still do the 64-bit division here. What I meant is to completely
>> avoid the division and use a multiply+shift.
>
> The goal here was to avoid the error case that ARM has on overflow and instead
> just delay for the requested time.  This should only divide when the delay is
>>=2ms, so the division won't cost much in comparison.
>
> The normal case should have no division in it.
>
> I can copy ARM's error handling if you think that's better, but it seemed more
> complicated than just computing the correct answer.

I think the intention originally was to avoid overflowing the 32-bit
argument in

 void __delay(unsigned long cycles)

If you need to delay for more than 4 billion clocksource cycles,
your code is still broken.

>> Also, you don't need to base anything on HZ, as you do not rely
>> on the delay calibration but always use a timer.
>
> That makes sense, I just based this blindly off the ARM version.  I'll see if
> that lets me avoid unnecessary overflow for ndelay.  If it doesn't then I'd
> prefer to just keep exactly the same constraints ARM has to avoid unexpected
> behavior.

Right, I should have been more specific here, as ARM has two implementations
(loop and timer) and it gets much easier if you know you can rely on the
timer to be available.

       Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64}
  2017-06-06  4:56         ` Palmer Dabbelt
@ 2017-06-06 17:39           ` Karsten Merker
  2017-06-06 17:57             ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Karsten Merker @ 2017-06-06 17:39 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: Arnd Bergmann, linux-kernel

On Mon, Jun 05, 2017 at 09:56:33PM -0700, Palmer Dabbelt wrote:
> On Mon, 29 May 2017 03:50:47 PDT (-0700), Arnd Bergmann wrote:
> > On Sat, May 27, 2017 at 2:57 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> >> On Tue, 23 May 2017 04:30:50 PDT (-0700), Arnd Bergmann wrote:
> >>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> >>>> RISC-V has both 32-bit and 64-bit base ISAs, but they are very similar.
> >>>> Like some other platforms, we'd like to share one arch directory between
> >>>> the two of them.
> >>>
> >>> I think we mainly do the others for backwards-compatibility with ancient
> >>> build scripts, and we don't need that here. Instead, you could add one more
> >>> line to the 'SUBARCH:=' statement that interprets the uname output.
> >>
> >> I don't think that does the same thing.  The desired effect of this diff is:
> >>
> >>  * "uname -m" when running on a RISC-V machine returns either riscv32 or
> >>    riscv64, as that's what tools like autoconf expect when trying to find
> >>    tuples.
> >>
> >>  * I can cross compile for riscv32 and riscv64.  That's currently controlled by
>>    a Kconfig setting, but ARCH=riscv32 vs ARCH=riscv64 controls what defconfig
> >>    sets.
> >>
> >>  * I can natively compile for riscv32 and riscv64.  That uses the same Kconfig
> >>    setting, and the same ARCH=riscv32 vs ARCH=riscv64 switch for defconfig.
> >
> > Right, but my point is that a new architecture should not rely on 'ARCH='
> > to pick the defconfig, we only do that on a couple of architectures for
> > backwards compatibility with old scripts.
> >
>> Neither of the two Kconfig issues is a big deal, but we do need "uname -m" to
> >> return "riscv64" or "riscv32" not "riscv".  I think the only way to do that is
> >> to set SRCARCH, but I'd be happy to change it if there's a better way.  I think
> >> if I just do this
> >>
> >> diff --git a/Makefile b/Makefile
> >> index 0606f28..4adc609 100644
> >> --- a/Makefile
> >> +++ b/Makefile
> >> @@ -232,7 +232,8 @@ SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
> >>                                   -e s/arm.*/arm/ -e s/sa110/arm/ \
> >>                                   -e s/s390x/s390/ -e s/parisc64/parisc/ \
> >>                                   -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
> >> -                                 -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ )
> >> +                                 -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
> >> +                                 -e s/riscv.*/riscv/ )
> >>
> >>  # Cross compiling and selecting different set of gcc/bin-utils
> >>  # ---------------------------------------------------------------------------
> >> @@ -269,14 +270,6 @@ ifeq ($(ARCH),x86_64)
> >>          SRCARCH := x86
> >>  endif
> >>
> >> -# Additional ARCH settings for RISC-V
> >> -ifeq ($(ARCH),riscv32)
> >> -       SRCARCH := riscv
> >> -endif
> >> -ifeq ($(ARCH),riscv64)
> >> -       SRCARCH := riscv
> >> -endif
> >> -
> >>  # Additional ARCH settings for sparc
> >>  ifeq ($(ARCH),sparc32)
> >>         SRCARCH := sparc
> >>
> >> then I'll end up with "uname -m" as "riscv" -- I haven't tried it, but that's
> >> why we ended up with this diff in the first place.
> >
> > Do you mean the "uname -m" output comes from "${SRCARCH}" at
> > the time of the kernel build? That would be easy enough to change
> > by simply hardcoding it depending on CONFIG_64BIT.
> 
> OK, I didn't know about COMPAT_UTS_MACHINE.  That's a much better solution,
> I'll use that.

Hello Palmer,

I suppose the commit:

  https://github.com/riscv/riscv-linux/commit/8c826930d2a19ecd4f1036f10a380dc4fddd0da5

aims to address this, but it appears to be incomplete. It lacks
the first fragment of the patch above, i.e. the conversion from
the "uname -m" output (i.e. "riscv{64,32}") to the canonical
arch string (i.e. "riscv").  As a result, a native build (which
normally doesn't explicitly pass ARCH=riscv to make and therefore
relies on the output of "uname -m") would fail.

Regards,
Karsten
-- 
Pursuant to Par. 28 (4) of the German Federal Data Protection Act
(Bundesdatenschutzgesetz), I object to the use and disclosure of my
personal data for purposes of advertising or market and opinion research.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64}
  2017-06-06 17:39           ` Karsten Merker
@ 2017-06-06 17:57             ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 17:57 UTC (permalink / raw)
  To: merker; +Cc: Arnd Bergmann, linux-kernel

On Tue, 06 Jun 2017 10:39:04 PDT (-0700), merker@debian.org wrote:
> On Mon, Jun 05, 2017 at 09:56:33PM -0700, Palmer Dabbelt wrote:
>> On Mon, 29 May 2017 03:50:47 PDT (-0700), Arnd Bergmann wrote:
>> > On Sat, May 27, 2017 at 2:57 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> >> On Tue, 23 May 2017 04:30:50 PDT (-0700), Arnd Bergmann wrote:
>> >>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> >>>> RISC-V has both 32-bit and 64-bit base ISAs, but they are very similar.
>> >>>> Like some other platforms, we'd like to share one arch directory between
>> >>>> the two of them.
>> >>>
>> >>> I think we mainly do the others for backwards-compatibility with ancient
>> >>> build scripts, and we don't need that here. Instead, you could add one more
>> >>> line to the 'SUBARCH:=' statement that interprets the uname output.
>> >>
>> >> I don't think that does the same thing.  The desired effect of this diff is:
>> >>
>> >>  * "uname -m" when running on a RISC-V machine returns either riscv32 or
>> >>    riscv64, as that's what tools like autoconf expect when trying to find
>> >>    tuples.
>> >>
>> >>  * I can cross compile for riscv32 and riscv64.  That's currently controlled by
>> >>    a Kconfig setting, but ARCH=riscv32 vs ARCH=riscv64 controls what defconfig
>> >>    sets.
>> >>
>> >>  * I can natively compile for riscv32 and riscv64.  That uses the same Kconfig
>> >>    setting, and the same ARCH=riscv32 vs ARCH=riscv64 switch for defconfig.
>> >
>> > Right, but my point is that a new architecture should not rely on 'ARCH='
>> > to pick the defconfig, we only do that on a couple of architectures for
>> > backwards compatibility with old scripts.
>> >
>> >> Neither of the two Kconfig issues is a big deal, but we do need "uname -m" to
>> >> return "riscv64" or "riscv32" not "riscv".  I think the only way to do that is
>> >> to set SRCARCH, but I'd be happy to change it if there's a better way.  I think
>> >> if I just do this
>> >>
>> >> diff --git a/Makefile b/Makefile
>> >> index 0606f28..4adc609 100644
>> >> --- a/Makefile
>> >> +++ b/Makefile
>> >> @@ -232,7 +232,8 @@ SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
>> >>                                   -e s/arm.*/arm/ -e s/sa110/arm/ \
>> >>                                   -e s/s390x/s390/ -e s/parisc64/parisc/ \
>> >>                                   -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
>> >> -                                 -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ )
>> >> +                                 -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
>> >> +                                 -e s/riscv.*/riscv/ )
>> >>
>> >>  # Cross compiling and selecting different set of gcc/bin-utils
>> >>  # ---------------------------------------------------------------------------
>> >> @@ -269,14 +270,6 @@ ifeq ($(ARCH),x86_64)
>> >>          SRCARCH := x86
>> >>  endif
>> >>
>> >> -# Additional ARCH settings for RISC-V
>> >> -ifeq ($(ARCH),riscv32)
>> >> -       SRCARCH := riscv
>> >> -endif
>> >> -ifeq ($(ARCH),riscv64)
>> >> -       SRCARCH := riscv
>> >> -endif
>> >> -
>> >>  # Additional ARCH settings for sparc
>> >>  ifeq ($(ARCH),sparc32)
>> >>         SRCARCH := sparc
>> >>
>> >> then I'll end up with "uname -m" as "riscv" -- I haven't tried it, but that's
>> >> why we ended up with this diff in the first place.
>> >
>> > Do you mean the "uname -m" output comes from "${SRCARCH}" at
>> > the time of the kernel build? That would be easy enough to change
>> > by simply hardcoding it depending on CONFIG_64BIT.
>>
>> OK, I didn't know about COMPAT_UTS_MACHINE.  That's a much better solution,
>> I'll use that.
>
> Hello Palmer,
>
> I suppose the commit:
>
>   https://github.com/riscv/riscv-linux/commit/8c826930d2a19ecd4f1036f10a380dc4fddd0da5
>
> aims to address this, but it appears to be incomplete. It lacks
> the first fragment of the patch above, i.e. the conversion from
> the "uname -m" output (i.e. "riscv{64,32}") to the canonical
> arch string (i.e. "riscv").  As a result, a native build (which
> normally doesn't explicitly pass ARCH=riscv to make and therefore
> relies on the output of "uname -m") would fail.

Sorry about that; I dropped the commit.  I'm in the middle of bisecting to find
a regression, so the patch might not show up for a bit, but it'll be fixed.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 4/7] RISC-V: arch/riscv/include
  2017-06-06  8:54           ` Arnd Bergmann
@ 2017-06-06 19:07             ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 19:07 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: linux-kernel, olof, albert

On Tue, 06 Jun 2017 01:54:23 PDT (-0700), Arnd Bergmann wrote:
> On Tue, Jun 6, 2017 at 6:56 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> On Thu, 01 Jun 2017 02:00:22 PDT (-0700), Arnd Bergmann wrote:
>>> On Thu, Jun 1, 2017 at 2:56 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>>> On Tue, 23 May 2017 05:55:15 PDT (-0700), Arnd Bergmann wrote:
>>>>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>>>>> diff --git a/arch/riscv/include/asm/io.h b/arch/riscv/include/asm/io.h
>>>>>> new file mode 100644
>>>>>> index 000000000000..d942555a7a08
>>>>>> --- /dev/null
>>>>>> +++ b/arch/riscv/include/asm/io.h
>>>>>> @@ -0,0 +1,36 @@
>>>>>
>>>>>> +#ifndef _ASM_RISCV_IO_H
>>>>>> +#define _ASM_RISCV_IO_H
>>>>>> +
>>>>>> +#include <asm-generic/io.h>
>>>>>
>>>>> I would recommend providing your own {read,write}{b,w,l,q}{,_relaxed}
>>>>> helpers using inline assembly, to prevent the compiler for breaking
>>>>> up accesses into byte accesses.
>>>>>
>>>>> Also, most architectures require some synchronization after a
>>>>> non-relaxed readl() to prevent prefetching of DMA buffers, and
>>>>> before a writel() to flush write buffers when a DMA gets triggered.
>>>>
>>>> Makes sense.  These were all OK on existing implementations (as there's no
>>>> writable PMAs, so all MMIO regions are strictly ordered), but that's not
>>>> actually what the RISC-V ISA says.  I patterned this on arm64
>>>>
>>>>   https://github.com/riscv/riscv-linux/commit/e200fa29a69451ef4d575076e4d2af6b7877b1fa
>>>>
>>>> where I think the only odd thing is our definition of mmiowb
>>>>
>>>>   +/* IO barriers.  These only fence on the IO bits because they're only required
>>>>   + * to order device access.  We're defining mmiowb because our AMO instructions
>>>>   + * (which are used to implement locks) don't specify ordering.  From Chapter 7
>>>>   + * of v2.2 of the user ISA:
>>>>   + * "The bits order accesses to one of the two address domains, memory or I/O,
>>>>   + * depending on which address domain the atomic instruction is accessing. No
>>>>   + * ordering constraint is implied to accesses to the other domain, and a FENCE
>>>>   + * instruction should be used to order across both domains."
>>>>   + */
>>>>   +
>>>>   +#define __iormb()               __asm__ __volatile__ ("fence i,io" : : : "memory");
>>>>   +#define __iowmb()               __asm__ __volatile__ ("fence io,o" : : : "memory");
>>>
>>> Looks ok, yes.
>>>
>>>>   +#define mmiowb()                __asm__ __volatile__ ("fence io,io" : : : "memory");
>>>>
>>>> which I think is correct.
>>>
>>> I can never remember what exactly this one does.
>>
>> I can't find the reference again, but what I found said that if your atomics
>> (or whatever's used for locking) don't stay ordered with your MMIO accesses,
>> then you should define mmiowb to ensure ordering.  I managed to screw this up,
>> as there's no "w" in the successor set (to actually enforce the AMO ordering).
>> This is somewhat confirmed by
>>
>>   https://lkml.org/lkml/2006/8/31/174
>>   Subject: Re: When to use mmiowb()?
>>   AFAICT, they're both right.  Generally, mmiowb() should be used prior to
>>   unlock in a critical section whose last PIO operation is a writeX.
>>
>> Thus, I think the actual fence should be at least
>>
>>   fence o,w
> ...
>> which matches what's above.  I think "fence o,w" is sufficient for a mmiowb on
>> RISC-V.  I'll make the change.
>
> This sounds reasonable according to the documentation, but with your
> longer explanation of the barriers, I think the __iormb/__iowmb definitions
> above are wrong. What you actually need I think is
>
> void writel(u32 v, volatile void __iomem *addr)
> {
>          asm volatile("fence w,o" : : : "memory");
>          writel_relaxed(v, addr);
> }
>
> u32 readl(volatile void __iomem *addr)
> {
>          u32 ret = readl_relaxed(addr);
>          asm volatile("fence i,r" : : : "memory");
>          return ret;
> }
>
> to synchronize between DMA and I/O. The barriers you listed above
> in contrast appear to be directed at synchronizing I/O with other I/O.
> We normally assume that this is not required when you have
> subsequent MMIO accesses on the same device (on PCI) or the
> same address region (per ARM architecture and others). If you do
> need to enforce ordering between MMIO, you might even need to
> add those barriers in the relaxed version to be portable with drivers
> written for ARM SoCs:
>
> void writel_relaxed(u32 v, volatile void __iomem *addr)
> {
>         __raw_writel((__force u32)cpu_to_le32(v), addr);
>          asm volatile("fence o,io" : : : "memory");
> }
>
> u32 readl_relaxed(volatile void __iomem *addr)
> {
>          asm volatile("fence i,io" : : : "memory");
>          return le32_to_cpu((__force __le32)__raw_readl(addr));
> }
>
> You then end up with a barrier before and after each regular
> readl/writel in order to synchronize both with DMA and MMIO
> instructions, and you still need the extra mmiowb() to
> synchronize against the spinlock.

Ah, thanks.  I guess I just had those wrong.  I'll fix the non-relaxed versions
and add relaxed versions that have the fence.

> My memory on mmiowb is still a bit cloudy, but I think we don't
> need that on ARM, and while PowerPC originally needed it, it is
> now implied by the spin_unlock(). If this is actually right, you might
> want to do the same here. Very few drivers actually use mmiowb(),
> but there might be more drivers that would need it if your
> spin_unlock() doesn't synchronize against writel() or writel_relaxed().
> Maybe the mmiowb() should really be implied by writel() but not
> writel_relaxed()? That might be sensible, but only if we do it
> the same way on powerpc, which currently doesn't have
> writel_relaxed() any more relaxed than writel().

I like the idea of putting this in spin_unlock() better, particularly if
PowerPC does it that way.  I was worried that, with mmiowb being such an
esoteric thing, it would be wrong all over the place; this feels much
better.
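
A rough sketch of the powerpc-style approach, where writel() only marks
that MMIO happened and the unlock path pays for the fence when needed; the
per-cpu flag and the lock layout here are hypothetical, not from the port:

DECLARE_PER_CPU(int, riscv_io_sync);		/* hypothetical: set to 1 by writel() */

static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	if (unlikely(this_cpu_xchg(riscv_io_sync, 0)))
		__asm__ __volatile__ ("fence o,w" : : : "memory");
	smp_store_release(&lock->lock, 0);	/* hypothetical lock word layout */
}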

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 6/7] RISC-V: arch/riscv/kernel
  2017-06-06  9:01       ` Arnd Bergmann
@ 2017-06-06 20:37         ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 20:37 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: linux-kernel, olof, albert

On Tue, 06 Jun 2017 02:01:23 PDT (-0700), Arnd Bergmann wrote:
> On Sat, Jun 3, 2017 at 1:56 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> On Tue, 23 May 2017 06:35:23 PDT (-0700), Arnd Bergmann wrote:
>>>> +IRQCHIP_DECLARE(riscv, "riscv,cpu-intc", riscv_intc_init);
>>> If you don't care about LPC/ISA devices, then your PCI_MIN_IO
>>> should also be zero instead of 0x1000
>>
>> Sorry, but the only Google result for PCI_MIN_IO is this email.  There don't
>> appear to be any relevant references to PCI_MIN in the kernel
>>
>>   $ git grep PCI_MIN_
>>   arch/mips/include/asm/mach-loongson64/cs5536/cs5536_pci.h:      ((PCI_MAX_LATENCY << 24) | (PCI_MIN_GRANT << 16) | \
>>   arch/mips/include/asm/mach-loongson64/cs5536/cs5536_pci.h:#define PCI_MIN_GRANT                 0x00
>>   drivers/ata/pata_hpt366.c:      pci_write_config_byte(dev, PCI_MIN_GNT, 0x08);
>>   drivers/ata/pata_hpt37x.c:      pci_write_config_byte(dev, PCI_MIN_GNT, 0x08);
>>   drivers/ata/pata_hpt3x2n.c:     pci_write_config_byte(dev, PCI_MIN_GNT, 0x08);
>>   drivers/ide/hpt366.c:   pci_write_config_byte(dev, PCI_MIN_GNT, 0x08);
>>   drivers/net/fddi/skfp/h/skfbi.h:#define PCI_MIN_GNT     0x3e    /*  8 bit       Min_Gnt */
>>   drivers/net/fddi/skfp/h/skfbi.h:/*      PCI_MIN_GNT     8 bit   Min_Gnt */
>>   include/uapi/linux/pci_regs.h:#define PCI_MIN_GNT               0x3e    /* 8 bits */
>>
>> I'm afraid that I'm not sure what to do here.
>
> Sorry, I meant PCIBIOS_MIN_IO

OK, fixed.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 6/7] RISC-V: arch/riscv/kernel
  2017-06-06  9:03           ` Arnd Bergmann
@ 2017-06-06 20:38             ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 20:38 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: olof, linux-kernel, albert

On Tue, 06 Jun 2017 02:03:51 PDT (-0700), Arnd Bergmann wrote:
> On Tue, Jun 6, 2017 at 6:56 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> On Thu, 25 May 2017 12:51:54 PDT (-0700), Arnd Bergmann wrote:
>>> On Thu, May 25, 2017 at 3:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>>> On Mon, 22 May 2017 19:11:35 PDT (-0700), olof@lixom.net wrote:
>>>
>>>>  * Definitions like ARCH_HAS_SETUP_ADDITIONAL_PAGES, these are also present in
>>>>    other architectures.
>>>
>>> What is the warning here? I would assume that you should leave this
>>> unchanged as well.
>>
>> ERROR: #define of 'ARCH_HAS_SETUP_ADDITIONAL_PAGES' is wrong - use Kconfig variables or standard guards instead
>> #2533: FILE: arch/riscv/include/asm/elf.h:79:
>> +#define ARCH_HAS_SETUP_ADDITIONAL_PAGES
>
> Ok, you can definitely ignore this one. The warning is meant to prevent adding
> new macros like that, but the macro already exists in the other architectures,
> and I see no point in converting them all into a CONFIG_ symbol.

Sounds good.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
  2017-06-06  9:20           ` Arnd Bergmann
@ 2017-06-06 20:38             ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 20:38 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: linux-kernel, olof, albert

On Tue, 06 Jun 2017 02:20:50 PDT (-0700), Arnd Bergmann wrote:
> On Tue, Jun 6, 2017 at 6:56 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> On Mon, 29 May 2017 04:17:40 PDT (-0700), Arnd Bergmann wrote:
>>> On Sat, May 27, 2017 at 2:57 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>>> On Tue, 23 May 2017 04:46:22 PDT (-0700), Arnd Bergmann wrote:
>>> This raises a much more general question about how you want to deal
>>> with SoC implementations in the future. The two most common ways of
>>> doing this are:
>>>
>>> - Every major platform gets a Kconfig option in the architecture menu,
>>>   and that selects the essential drivers (irqchip, clocksource, pinctrl,
>>>   clk, ...) that you need for that platform, along with architecture features
>>>   (ISA level and optional features, ...)
>>>
>>> - The architecture code knows nothing about the SoC and just keeps
>>>    to the basics (CPU architecture level selection, SMP/MMU/etc enabled,
>>>    selecting drivers that everyone needs) and leaves the rest up to be
>>>    selected in the defconfig file.
>>>
>>> On ARM, we have a bit of both, which is not as good as being
>>> consistent one way or another.
>>
>> This is actually an open question in RISC-V land right now.  We should be
>> spinning up a platform specification working group this summer to try and work
>> things out.  While this will have to be ironed out, I believe the plan is to
>> define a small number of base platforms (maybe one for embedded systems with no
>> programmable PMAs, and one for larger machines with a bit more
>> configurability).  I'd anticipate that we'll have a platform Kconfig menu entry
>> for every platform that gets written down in a specification (just like we have
>> an entry for our base ISAs) and then defconfig entries for various
>> implementations that select the relevant platform in addition to the drivers
>> actually on board.
>>
>> For now we've got a handful of defconfig entries for the various platforms we
>> support (the ISA simulator and our FPGA implementation), but there's no silicon
>> so we're not stuck with what's there.  I don't anticipate we'll add more than a
>> handful of these until the platform spec work is underway.
>
> Ok. Another related point, which may or may not be obvious: when you
> come up with platform definitions, they should not be mutually exclusive.
>
> For instance, supporting both MMU/NOMMU, big/little-endian or 32/64-bit
> kernels will of course require building separate binaries, but almost every
> other configuration option should be backward compatible: An SMP
> kernel should run on a uniprocessor machine and vice versa (using only
> one CPU), and you should be able to run a kernel with support for multiple
> instruction set revisions by restricting the build to the smallest subset
> of instructions.

We're actually trying really hard to ensure a single binary can run anywhere:
for example, an XLEN (the RISC-V term for the general purpose register bit
width) agnostic code sequence can be run that determines if you're on a 32-bit
or 64-bit platform without taking any traps, then enumerate the entire set of
user and supervisor extensions available.  In theory one could build a kernel
image that supports both RV32I and RV64I, as well as optionally supporting
paging.  This won't happen for Linux, but our debugger examines the target chip
using this mechanism.
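
One such XLEN-agnostic sequence might look like this (a sketch, not the
exact code our debugger uses); the shift result only goes negative when the
registers are 32 bits wide, and nothing here can trap:

	li	t0, 1
	slli	t0, t0, 31	# identical encoding on RV32 and RV64; cannot trap
	bgez	t0, 1f		# 1 << 31 stays positive only when XLEN >= 64
	# ... 32-bit path here (ends by branching away) ...
1:
	# ... 64-bit (or wider) path here ...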

An explicit goal of the supervisor specification is to make it possible for one
Linux image to run on every platform.  Right now the only constraints we have
is that SMP kernels require atomics.  Specifically:

 * non-SMP kernels run on multiprocessor systems, picking an arbitrary hart to
   run on (or hart 0 if there are no atomics).
 * The kernel is always built with floating-point enabled; it just never sees
   the "FPU is on" bit if there's no FPU and therefore doesn't execute any
   F-extension instructions.

[Though of course after writing this I went to check our boot code and found it
using an unguarded AMO...  Luckily it was an easy fix.  Thanks!]

The big hole right now is that the SBI (our version of Alpha's PAL code) isn't
specified because it's part of the upcoming platform spec.  We'll just have to
hope this isn't a problem.

> On older ARM platforms and most MIPS platforms, we are still
> restricted to building a kernel binary that will only run on a particular
> SoC family and not even another SoC with the same CPU core.
> Fixing this for most ARM platforms required a lot of work that you
> should avoid by requiring them all to work with a common kernel
> from the start.

We're hoping that everyone is more amenable to play ball here after seeing how
much pain it was in ARM land.  We'll see how well the platform specification
goes.

> Similarly, ARM has an incompatibility that prevents us from running
> on older (ARMv4/v5) along with newer (ARMv6 or higher) instruction
> set versions with a single kernel. Avoiding this on RISC-V may
> become challenging as one of the strengths of the architecture is
> its flexibility: Someone may come up with their own architecture
> extensions that they really want to support in the kernel but can't
> get it to work without making the kernel binary incompatible with
> other implementations. Do you already have a policy for how to
> deal with this? Usually by the time someone ships hardware, it's
> too late and it becomes hard to argue for their kernel port to not
> get merged.

The RISC-V ISA defines extension mechanisms that allow new non-standard
extensions to coexist, so as long as everyone is well behaved we should be
safe.  I believe there is some mechanism to disallow bad behavior via the
RISC-V foundation and a certification process, but we'll see if that actually
has any teeth.

Of course, we're hoping to only need to support standard extensions, but
nothing ever goes that well :).

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 5/7] RISC-V: arch/riscv/lib
  2017-06-06  9:31           ` Arnd Bergmann
@ 2017-06-06 20:53             ` Palmer Dabbelt
  2017-06-07  7:35               ` Arnd Bergmann
  0 siblings, 1 reply; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 20:53 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: linux-kernel, olof, albert

On Tue, 06 Jun 2017 02:31:02 PDT (-0700), Arnd Bergmann wrote:
> On Tue, Jun 6, 2017 at 6:56 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> On Fri, 26 May 2017 02:06:58 PDT (-0700), Arnd Bergmann wrote:
>>> On Thu, May 25, 2017 at 3:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>>> On Tue, 23 May 2017 04:19:42 PDT (-0700), Arnd Bergmann wrote:
>>>>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>>
>>>>> Also, it would be good to replace the multiply+div64
>>>>> with a single multiplication here, see how x86 and arm do it
>>>>> (for the tsc/__timer_delay case).
>>>>
>>>> Makes sense.  I think this should do it
>>>>
>>>>   https://github.com/riscv/riscv-linux/commit/d397332f6ebff42f3ecb385e9cf3284fdeda6776
>>>>
>>>> but I'm finding this hard to test as this only works for 2ms sleeps.  It seems
>>>> at least in the right ballpark
>>>
>>> +	if (usecs > MAX_UDELAY_US) {
>>> +		__delay((u64)usecs * riscv_timebase / 1000000ULL);
>>> +		return;
>>> +	}
>>>
>>> You still do the 64-bit division here. What I meant is to completely
>>> avoid the division and use a multiply+shift.
>>
>> The goal here was to avoid the error case that ARM has on overflow and instead
>> just delay for the requested time.  This should only divide when the delay is
>>>=2ms, so the division won't cost much in comparison.
>>
>> The normal case should have no division in it.
>>
>> I can copy ARM's error handling if you think that's better, but it seemed more
>> complicated than just computing the correct answer.
>
> I think the intention originally was to avoid overflowing the 32-bit
> argument in
>
>  void __delay(unsigned long cycles)
>
> If you need to delay for more than 4 billion clocksource cycles,
> your code is still broken.

Maybe I'm crazy, but I thought the goal was to avoid overflowing on the
multiply.  Specifically, the code looks like

  udelay(long input) {
    long a = input * MUL_VAL;
    long b = a >> SHIFT_VAL;
    __delay(b);
  }

so the place where extra overflow can occur is in computing a, not b (the input
to __delay).  When I modified the ARM code I went and recalculated the point at
which the multiply would overflow and it matched the value from the ARM code,
which is 2000us.

While I can buy the argument that 2000us is still too long, the real reason I
wrote the code this way is that I thought it was easier than having an error
case.  If you think the error is better then I'll do it that way.
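
For reference, the multiply+shift form being discussed could look roughly
like this; UDELAY_SHIFT and the boot-time factor are illustrative, not the
values from the patch:

#define UDELAY_SHIFT	16

/* Set up once at boot so that udelay() itself never divides:
 *   cycles_per_usec_shifted = (riscv_timebase << UDELAY_SHIFT) / 1000000;
 */
static u64 cycles_per_usec_shifted;

void udelay(unsigned long usecs)
{
	__delay(((u64)usecs * cycles_per_usec_shifted) >> UDELAY_SHIFT);
}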

^ permalink raw reply	[flat|nested] 283+ messages in thread

* RISC-V Linux Port v2
  2017-05-23  0:41 RISC-V Linux Port v1 Palmer Dabbelt
@ 2017-06-06 22:59   ` Palmer Dabbelt
  2017-05-23  0:41 ` [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs Palmer Dabbelt
                     ` (10 subsequent siblings)
  11 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 22:59 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof; +Cc: albert, patches

Thanks to everyone who has participated in the review process so far.  We've
made a lot of changes since the v1 and while this isn't ready to go yet, I
finally managed to get through everything in my inbox so I thought it would be
a good time to submit a v2 so everyone is on the same page.

A highlight of the changes since the v1 patch set includes:

  * We've split out our drivers into the right places, which means there are
    now a lot more patches.  I'll be submitting these patches to various subsystem
    maintainers and including them in any future RISC-V patch sets until
    they've been merged.

  * The SBI console driver has been completely rewritten to use the HVC helpers
    and is now significantly smaller.

  * We've begun to use weaker barriers as opposed to just the big "fence".
    There's still some work to do here, specifically:
    - We need fences in the relaxed MMIO functions.
    - The non-relaxed MMIO functions are missing R/W bits on their fences.
    - Many AMOs need the aq and rl bits set.

  * We now have thread_info in task_struct.  As a result, sscratch now contains
    TP instead of SP.  This was necessary because thread_info is no longer on
    the stack.

  * A few shared routines have been added that we use instead of creating
    another arch copy.

Here's my TODO list

  * The memory model changes indicated above.

  * I need to go through checkpatch again to make sure none of the messages are
    valid problems.

  * Put an implementation of atomic compare/exchange in the VDSO when atomic
    instructions are enabled, otherwise insert stubs to the cmpxchg syscall.

  * Remove the extra multiplexing in the cmpxchg syscall.

Aside from those two relatively minor ABI issues, I think our ABI is in good
shape.  Unless there are any other issues that crop up I'd like to begin our
glibc submission early next week.

[PATCH 01/17] drivers: support PCIe in RISCV
[PATCH 02/17] pcie-xilinx: add missing 5th legacy interrupt
[PATCH 03/17] base: fix order of OF initialization
[PATCH 04/17] Documentation: atomic_ops.txt is
[PATCH 05/17] MAINTAINERS: Add RISC-V
[PATCH 06/17] pci: Add generic pcibios_{fixup_bus,align_resource}
[PATCH 07/17] lib: Add shared copies of some GCC library routines
[PATCH 08/17] dts: include documentation for the RISC-V interrupt
[PATCH 09/17] clocksource/timer-riscv: New RISC-V Clocksource
[PATCH 10/17] irqchip: New RISC-V PLIC Driver
[PATCH 11/17] irqchip: RISC-V Local Interrupt Controller Driver
[PATCH 12/17] tty: New RISC-V SBI Console Driver
[PATCH 13/17] RISC-V: Add include subdirectory
[PATCH 14/17] RISC-V: lib files
[PATCH 15/17] RISC-V: Add mm subdirectory
[PATCH 16/17] RISC-V: Add kernel subdirectory
[PATCH 17/17] RISC-V: Makefile and Kconfig

^ permalink raw reply	[flat|nested] 283+ messages in thread

* [PATCH 01/17] drivers: support PCIe in RISCV
  2017-06-06 22:59   ` Palmer Dabbelt
@ 2017-06-06 22:59     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 22:59 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Wesley W. Terpstra, Palmer Dabbelt

From: "Wesley W. Terpstra" <wesley@sifive.com>

There are RISC-V systems that have been mapped to Xilinx FPGAs that have
their PCIe controllers on chip.  These build system changes allow RISC-V
systems to enable the Xilinx PCIe controller, and to setup PCIe IRQs.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 drivers/pci/Makefile     | 1 +
 drivers/pci/host/Kconfig | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
index 462c1f5f5546..a29d9ec05d13 100644
--- a/drivers/pci/Makefile
+++ b/drivers/pci/Makefile
@@ -41,6 +41,7 @@ obj-$(CONFIG_MIPS) += setup-irq.o
 obj-$(CONFIG_TILE) += setup-irq.o
 obj-$(CONFIG_SPARC_LEON) += setup-irq.o
 obj-$(CONFIG_M68K) += setup-irq.o
+obj-$(CONFIG_RISCV) += setup-irq.o
 
 #
 # ACPI Related PCI FW Functions
diff --git a/drivers/pci/host/Kconfig b/drivers/pci/host/Kconfig
index 7f47cd5e10a5..5148f3d3cab7 100644
--- a/drivers/pci/host/Kconfig
+++ b/drivers/pci/host/Kconfig
@@ -71,7 +71,7 @@ config PCI_HOST_GENERIC
 
 config PCIE_XILINX
 	bool "Xilinx AXI PCIe host bridge support"
-	depends on ARCH_ZYNQ || MICROBLAZE
+	depends on ARCH_ZYNQ || MICROBLAZE || RISCV
 	help
 	  Say 'Y' here if you want kernel to support the Xilinx AXI PCIe
 	  Host Bridge driver.
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 02/17] pcie-xilinx: add missing 5th legacy interrupt
  2017-06-06 22:59   ` Palmer Dabbelt
@ 2017-06-06 22:59     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 22:59 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Wesley W. Terpstra, Palmer Dabbelt

From: "Wesley W. Terpstra" <wesley@sifive.com>

These are numbered from 1.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 drivers/pci/host/pcie-xilinx.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/pci/host/pcie-xilinx.c b/drivers/pci/host/pcie-xilinx.c
index 2fe2df51f9f8..8804145d399a 100644
--- a/drivers/pci/host/pcie-xilinx.c
+++ b/drivers/pci/host/pcie-xilinx.c
@@ -443,7 +443,7 @@ static irqreturn_t xilinx_pcie_intr_handler(int irq, void *data)
 			val = ((val & XILINX_PCIE_RPIFR1_INTR_MASK) >>
 				XILINX_PCIE_RPIFR1_INTR_SHIFT) + 1;
 			generic_handle_irq(irq_find_mapping(port->leg_domain,
-							    val));
+							    val + 1));
 		}
 	}
 
@@ -524,7 +524,7 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
 		return -ENODEV;
 	}
 
-	port->leg_domain = irq_domain_add_linear(pcie_intc_node, 4,
+	port->leg_domain = irq_domain_add_linear(pcie_intc_node, 5,
 						 &intx_domain_ops,
 						 port);
 	if (!port->leg_domain) {
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 03/17] base: fix order of OF initialization
  2017-06-06 22:59   ` Palmer Dabbelt
@ 2017-06-06 22:59     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 22:59 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Wesley W. Terpstra, Palmer Dabbelt

From: "Wesley W. Terpstra" <wesley@sifive.com>

This fixes: [    0.010000] cpu cpu0: Error -2 creating of_node link
... which you get for every CPU on all architectures with an OF cpu/ node.
The link fails because cpu_dev_init() registers the CPU devices before
of_core_init() has populated the of_node entries in sysfs, so of_core_init()
needs to run first.

This affects riscv, nios, etc.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 drivers/base/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/base/init.c b/drivers/base/init.c
index 48c0e220acc0..0dcd17e561d0 100644
--- a/drivers/base/init.c
+++ b/drivers/base/init.c
@@ -31,9 +31,9 @@ void __init driver_init(void)
 	/* These are also core pieces, but must come after the
 	 * core core pieces.
 	 */
+	of_core_init();
 	platform_bus_init();
 	cpu_dev_init();
 	memory_dev_init();
 	container_dev_init();
-	of_core_init();
 }
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 04/17] Documentation: atomic_ops.txt is core-api/atomic_ops.rst
  2017-06-06 22:59   ` Palmer Dabbelt
@ 2017-06-06 22:59     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 22:59 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Palmer Dabbelt

I was reading the memory barriers documentation in order to make sure the
RISC-V barriers were correct, and I found a broken link to the atomic
operations documentation.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 Documentation/memory-barriers.txt | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index 732f10ea382e..f1c9eaa45a57 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -498,11 +498,11 @@ And a couple of implicit varieties:
      This means that ACQUIRE acts as a minimal "acquire" operation and
      RELEASE acts as a minimal "release" operation.
 
-A subset of the atomic operations described in atomic_ops.txt have ACQUIRE
-and RELEASE variants in addition to fully-ordered and relaxed (no barrier
-semantics) definitions.  For compound atomics performing both a load and a
-store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
-only to the store portion of the operation.
+A subset of the atomic operations described in core-api/atomic_ops.rst have
+ACQUIRE and RELEASE variants in addition to fully-ordered and relaxed (no
+barrier semantics) definitions.  For compound atomics performing both a load
+and a store, ACQUIRE semantics apply only to the load and RELEASE semantics
+apply only to the store portion of the operation.
 
 Memory barriers are only required where there's a possibility of interaction
 between two CPUs or between a CPU and a device.  If it can be guaranteed that
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 05/17] MAINTAINERS: Add RISC-V
  2017-06-06 22:59   ` Palmer Dabbelt
@ 2017-06-06 22:59     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 22:59 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Jonathan Neuschäfer, Palmer Dabbelt

From: Jonathan Neuschäfer <j.neuschaefer@gmx.net>

RISC-V needs a MAINTAINERS entry. Let's add one.

Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net>
Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 MAINTAINERS | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 7a28acd7f525..50b18dad8c22 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -10877,6 +10877,14 @@ M:	Maxim Levitsky <maximlevitsky@gmail.com>
 S:	Maintained
 F:	drivers/memstick/host/r592.*
 
+RISC-V ARCHITECTURE
+M:	Palmer Dabbelt <palmer@sifive.com>
+M:	Albert Ou <albert@sifive.com>
+L:	patches@groups.riscv.org
+T:	git https://github.com/riscv/riscv-linux
+S:	Supported
+F:	arch/riscv/
+
 ROCCAT DRIVERS
 M:	Stefan Achatz <erazor_de@users.sourceforge.net>
 W:	http://sourceforge.net/projects/roccat/
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 06/17] pci: Add generic pcibios_{fixup_bus,align_resource}
  2017-06-06 22:59   ` Palmer Dabbelt
@ 2017-06-06 22:59     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 22:59 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Palmer Dabbelt

While upstreaming the RISC-V port, it was pointed out that multiple
architectures (arc, arm64, cris, microblaze, sh, tile) have copied the
mostly empty versions of at least one of these functions.  This defines
weakly bound versions of the common functions so other architetures can
use them.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 drivers/pci/Makefile |  2 +-
 drivers/pci/bios.c   | 42 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 43 insertions(+), 1 deletion(-)
 create mode 100644 drivers/pci/bios.c

diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
index a29d9ec05d13..fa7040915194 100644
--- a/drivers/pci/Makefile
+++ b/drivers/pci/Makefile
@@ -4,7 +4,7 @@
 
 obj-y		+= access.o bus.o probe.o host-bridge.o remove.o pci.o \
 			pci-driver.o search.o pci-sysfs.o rom.o setup-res.o \
-			irq.o vpd.o setup-bus.o vc.o mmap.o
+			irq.o vpd.o setup-bus.o vc.o mmap.o bios.o
 obj-$(CONFIG_PROC_FS) += proc.o
 obj-$(CONFIG_SYSFS) += slot.o
 
diff --git a/drivers/pci/bios.c b/drivers/pci/bios.c
new file mode 100644
index 000000000000..ffe34c024aa8
--- /dev/null
+++ b/drivers/pci/bios.c
@@ -0,0 +1,42 @@
+/*
+ * Code borrowed from arch/arm64/kernel/pci.c
+ *   which borrowed from powerpc/kernel/pci-common.c
+ *   which borrowed from arch/alpha/kernel/pci.c
+ *
+ * Extruded from code written by
+ *      Dave Rusling (david.rusling@reo.mts.dec.com)
+ *      David Mosberger (davidm@cs.arizona.edu)
+ * Copyright (C) 1999 Andrea Arcangeli <andrea@suse.de>
+ * Copyright (C) 2000 Ivan Kokshaysky <ink@jurassic.park.msu.ru>
+ * Copyright (C) 2003 Anton Blanchard <anton@au.ibm.com>, IBM
+ * Copyright (C) 2014 ARM Ltd.
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ */
+
+/* This file contains weakly bound functions that implement pcibios functions
+ * that some architectures have copied verbatim.
+ */
+
+#include <linux/pci.h>
+
+/*
+ * Called after each bus is probed, but before its children are examined
+ */
+__attribute__ ((weak))
+void pcibios_fixup_bus(struct pci_bus *bus)
+{
+       /* nothing to do, expected to be removed in the future */
+}
+/*
+ * We don't have to worry about legacy ISA devices, so nothing to do here
+ */
+__attribute__ ((weak))
+resource_size_t pcibios_align_resource(void *data, const struct resource *res,
+				       resource_size_t size, resource_size_t align)
+{
+       return res->start;
+}
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 07/17] lib: Add shared copies of some GCC library routines
  2017-06-06 22:59   ` Palmer Dabbelt
@ 2017-06-06 22:59     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 22:59 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Palmer Dabbelt

Many ports (m32r, microblaze, mips, parisc, score, and sparc) use
functionally identical copies of various GCC library routine files,
which came up as we were submitting the RISC-V port (which also uses
some of these).

This patch adds a new copy of these library routine files, which are
functionally identical to the various other copies.  These are
available via Kconfig as CONFIG_GENERIC_$ROUTINE options, which currently
aren't selected anywhere.
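
For context, a hedged illustration (hypothetical snippet, not part of this
patch) of why a port needs them: the kernel does not link against libgcc,
so when GCC lowers 64-bit arithmetic on a 32-bit target to a libgcc call,
the kernel has to supply that routine itself:

	#include <linux/types.h>

	/* on many 32-bit targets this compiles to a call to __muldi3,
	 * one of the routines added below */
	static u64 scale(u64 a, u64 b)
	{
		return a * b;
	}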

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
---
 include/lib/libgcc.h | 44 ++++++++++++++++++++++++++++++++
 lib/Kconfig          | 18 +++++++++++++
 lib/Makefile         |  8 ++++++
 lib/ashldi3.c        | 45 ++++++++++++++++++++++++++++++++
 lib/ashrdi3.c        | 46 +++++++++++++++++++++++++++++++++
 lib/cmpdi2.c         | 42 ++++++++++++++++++++++++++++++
 lib/lshrdi3.c        | 45 ++++++++++++++++++++++++++++++++
 lib/muldi3.c         | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 lib/ucmpdi2.c        | 35 +++++++++++++++++++++++++
 9 files changed, 355 insertions(+)
 create mode 100644 include/lib/libgcc.h
 create mode 100644 lib/ashldi3.c
 create mode 100644 lib/ashrdi3.c
 create mode 100644 lib/cmpdi2.c
 create mode 100644 lib/lshrdi3.c
 create mode 100644 lib/muldi3.c
 create mode 100644 lib/ucmpdi2.c

diff --git a/include/lib/libgcc.h b/include/lib/libgcc.h
new file mode 100644
index 000000000000..a5397e34e005
--- /dev/null
+++ b/include/lib/libgcc.h
@@ -0,0 +1,44 @@
+/*
+ * include/lib/libgcc.h
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+
+#ifndef __LIB_LIBGCC_H
+#define __LIB_LIBGCC_H
+
+#include <asm/byteorder.h>
+
+typedef int word_type __attribute__ ((mode (__word__)));
+
+#ifdef __BIG_ENDIAN
+struct DWstruct {
+	int high, low;
+};
+#elif defined(__LITTLE_ENDIAN)
+struct DWstruct {
+	int low, high;
+};
+#else
+#error I feel sick.
+#endif
+
+typedef union {
+	struct DWstruct s;
+	long long ll;
+} DWunion;
+
+#endif /* __LIB_LIBGCC_H */
diff --git a/lib/Kconfig b/lib/Kconfig
index 0c8b78a9ae2e..7a9c934d91fd 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -565,3 +565,21 @@ config PRIME_NUMBERS
 	tristate
 
 endmenu
+
+config GENERIC_ASHLDI3
+	def_bool n
+
+config GENERIC_ASHRDI3
+	def_bool n
+
+config GENERIC_LSHRDI3
+	def_bool n
+
+config GENERIC_MULDI3
+	def_bool n
+
+config GENERIC_CMPDI2
+	def_bool n
+
+config GENERIC_UCMPDI2
+	def_bool n
diff --git a/lib/Makefile b/lib/Makefile
index 0166fbc0fa81..5f68242f7774 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -243,3 +243,11 @@ UBSAN_SANITIZE_ubsan.o := n
 obj-$(CONFIG_SBITMAP) += sbitmap.o
 
 obj-$(CONFIG_PARMAN) += parman.o
+
+# GCC library routines
+obj-$(CONFIG_GENERIC_ASHLDI3) += ashldi3.o
+obj-$(CONFIG_GENERIC_ASHRDI3) += ashrdi3.o
+obj-$(CONFIG_GENERIC_LSHRDI3) += lshrdi3.o
+obj-$(CONFIG_GENERIC_MULDI3) += muldi3.o
+obj-$(CONFIG_GENERIC_CMPDI2) += cmpdi2.o
+obj-$(CONFIG_GENERIC_UCMPDI2) += ucmpdi2.o
diff --git a/lib/ashldi3.c b/lib/ashldi3.c
new file mode 100644
index 000000000000..ff4ec63d2ab6
--- /dev/null
+++ b/lib/ashldi3.c
@@ -0,0 +1,45 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+
+#include <linux/export.h>
+
+#include <lib/libgcc.h>
+
+long long notrace __ashldi3(long long u, word_type b)
+{
+	DWunion uu, w;
+	word_type bm;
+
+	if (b == 0)
+		return u;
+
+	uu.ll = u;
+	bm = 32 - b;
+
+	if (bm <= 0) {
+		w.s.low = 0;
+		w.s.high = (unsigned int) uu.s.low << -bm;
+	} else {
+		const unsigned int carries = (unsigned int) uu.s.low >> bm;
+
+		w.s.low = (unsigned int) uu.s.low << b;
+		w.s.high = ((unsigned int) uu.s.high << b) | carries;
+	}
+
+	return w.ll;
+}
+EXPORT_SYMBOL(__ashldi3);
diff --git a/lib/ashrdi3.c b/lib/ashrdi3.c
new file mode 100644
index 000000000000..2e67c97ac65a
--- /dev/null
+++ b/lib/ashrdi3.c
@@ -0,0 +1,46 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+#include <linux/export.h>
+
+#include <lib/libgcc.h>
+
+long long notrace __ashrdi3(long long u, word_type b)
+{
+	DWunion uu, w;
+	word_type bm;
+
+	if (b == 0)
+		return u;
+
+	uu.ll = u;
+	bm = 32 - b;
+
+	if (bm <= 0) {
+		/* w.s.high = 1..1 or 0..0 */
+		w.s.high =
+		    uu.s.high >> 31;
+		w.s.low = uu.s.high >> -bm;
+	} else {
+		const unsigned int carries = (unsigned int) uu.s.high << bm;
+
+		w.s.high = uu.s.high >> b;
+		w.s.low = ((unsigned int) uu.s.low >> b) | carries;
+	}
+
+	return w.ll;
+}
+EXPORT_SYMBOL(__ashrdi3);
diff --git a/lib/cmpdi2.c b/lib/cmpdi2.c
new file mode 100644
index 000000000000..6d7ebf6c2b86
--- /dev/null
+++ b/lib/cmpdi2.c
@@ -0,0 +1,42 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+#include <linux/export.h>
+
+#include <lib/libgcc.h>
+
+word_type notrace __cmpdi2(long long a, long long b)
+{
+	const DWunion au = {
+		.ll = a
+	};
+	const DWunion bu = {
+		.ll = b
+	};
+
+	if (au.s.high < bu.s.high)
+		return 0;
+	else if (au.s.high > bu.s.high)
+		return 2;
+
+	if ((unsigned int) au.s.low < (unsigned int) bu.s.low)
+		return 0;
+	else if ((unsigned int) au.s.low > (unsigned int) bu.s.low)
+		return 2;
+
+	return 1;
+}
+EXPORT_SYMBOL(__cmpdi2);
diff --git a/lib/lshrdi3.c b/lib/lshrdi3.c
new file mode 100644
index 000000000000..8e845f4bb65f
--- /dev/null
+++ b/lib/lshrdi3.c
@@ -0,0 +1,45 @@
+/*
+ * lib/lshrdi3.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+#include <linux/module.h>
+#include <lib/libgcc.h>
+
+long long notrace __lshrdi3(long long u, word_type b)
+{
+	DWunion uu, w;
+	word_type bm;
+
+	if (b == 0)
+		return u;
+
+	uu.ll = u;
+	bm = 32 - b;
+
+	if (bm <= 0) {
+		w.s.high = 0;
+		w.s.low = (unsigned int) uu.s.high >> -bm;
+	} else {
+		const unsigned int carries = (unsigned int) uu.s.high << bm;
+
+		w.s.high = (unsigned int) uu.s.high >> b;
+		w.s.low = ((unsigned int) uu.s.low >> b) | carries;
+	}
+
+	return w.ll;
+}
+EXPORT_SYMBOL(__lshrdi3);
diff --git a/lib/muldi3.c b/lib/muldi3.c
new file mode 100644
index 000000000000..88938543e10a
--- /dev/null
+++ b/lib/muldi3.c
@@ -0,0 +1,72 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+#include <linux/export.h>
+#include <lib/libgcc.h>
+
+#define W_TYPE_SIZE 32
+
+#define __ll_B ((unsigned long) 1 << (W_TYPE_SIZE / 2))
+#define __ll_lowpart(t) ((unsigned long) (t) & (__ll_B - 1))
+#define __ll_highpart(t) ((unsigned long) (t) >> (W_TYPE_SIZE / 2))
+
+/* If we still don't have umul_ppmm, define it using plain C.  */
+#if !defined(umul_ppmm)
+#define umul_ppmm(w1, w0, u, v)						\
+	do {								\
+		unsigned long __x0, __x1, __x2, __x3;			\
+		unsigned short __ul, __vl, __uh, __vh;			\
+									\
+		__ul = __ll_lowpart(u);					\
+		__uh = __ll_highpart(u);				\
+		__vl = __ll_lowpart(v);					\
+		__vh = __ll_highpart(v);				\
+									\
+		__x0 = (unsigned long) __ul * __vl;			\
+		__x1 = (unsigned long) __ul * __vh;			\
+		__x2 = (unsigned long) __uh * __vl;			\
+		__x3 = (unsigned long) __uh * __vh;			\
+									\
+		__x1 += __ll_highpart(__x0); /* this can't give carry */\
+		__x1 += __x2; /* but this indeed can */			\
+		if (__x1 < __x2) /* did we get it? */			\
+		__x3 += __ll_B; /* yes, add it in the proper pos */	\
+									\
+		(w1) = __x3 + __ll_highpart(__x1);			\
+		(w0) = __ll_lowpart(__x1) * __ll_B + __ll_lowpart(__x0);\
+	} while (0)
+#endif
+
+#if !defined(__umulsidi3)
+#define __umulsidi3(u, v) ({				\
+	DWunion __w;					\
+	umul_ppmm(__w.s.high, __w.s.low, u, v);		\
+	__w.ll;						\
+	})
+#endif
+
+long long notrace __muldi3(long long u, long long v)
+{
+	const DWunion uu = {.ll = u};
+	const DWunion vv = {.ll = v};
+	DWunion w = {.ll = __umulsidi3(uu.s.low, vv.s.low)};
+
+	w.s.high += ((unsigned long) uu.s.low * (unsigned long) vv.s.high
+		+ (unsigned long) uu.s.high * (unsigned long) vv.s.low);
+
+	return w.ll;
+}
+EXPORT_SYMBOL(__muldi3);
diff --git a/lib/ucmpdi2.c b/lib/ucmpdi2.c
new file mode 100644
index 000000000000..49a53505c8e3
--- /dev/null
+++ b/lib/ucmpdi2.c
@@ -0,0 +1,35 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+#include <linux/module.h>
+#include <lib/libgcc.h>
+
+word_type __ucmpdi2(unsigned long long a, unsigned long long b)
+{
+	const DWunion au = {.ll = a};
+	const DWunion bu = {.ll = b};
+
+	if ((unsigned int) au.s.high < (unsigned int) bu.s.high)
+		return 0;
+	else if ((unsigned int) au.s.high > (unsigned int) bu.s.high)
+		return 2;
+	if ((unsigned int) au.s.low < (unsigned int) bu.s.low)
+		return 0;
+	else if ((unsigned int) au.s.low > (unsigned int) bu.s.low)
+		return 2;
+	return 1;
+}
+EXPORT_SYMBOL(__ucmpdi2);
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 07/17] lib: Add shared copies of some GCC library routines
  2017-06-06 22:59   ` Palmer Dabbelt
                     ` (9 preceding siblings ...)
  (?)
@ 2017-06-06 22:59   ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 22:59 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Palmer Dabbelt

Many ports (m32r, microblaze, mips, parisc, score, and sparc) use
functionally identical copies of various GCC library routine files,
which came up as we were submitting the RISC-V port (which also uses
some of these).

This patch adds a new copy of these library routine files, which are
functionally identical to the various other copies.  These are
availiable via Kconfig as CONFIG_LIB_$ROUTINE, which currently isn't
used anywhere.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
---
 include/lib/libgcc.h | 44 ++++++++++++++++++++++++++++++++
 lib/Kconfig          | 18 +++++++++++++
 lib/Makefile         |  8 ++++++
 lib/ashldi3.c        | 45 ++++++++++++++++++++++++++++++++
 lib/ashrdi3.c        | 46 +++++++++++++++++++++++++++++++++
 lib/cmpdi2.c         | 42 ++++++++++++++++++++++++++++++
 lib/lshrdi3.c        | 45 ++++++++++++++++++++++++++++++++
 lib/muldi3.c         | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 lib/ucmpdi2.c        | 35 +++++++++++++++++++++++++
 9 files changed, 355 insertions(+)
 create mode 100644 include/lib/libgcc.h
 create mode 100644 lib/ashldi3.c
 create mode 100644 lib/ashrdi3.c
 create mode 100644 lib/cmpdi2.c
 create mode 100644 lib/lshrdi3.c
 create mode 100644 lib/muldi3.c
 create mode 100644 lib/ucmpdi2.c

diff --git a/include/lib/libgcc.h b/include/lib/libgcc.h
new file mode 100644
index 000000000000..a5397e34e005
--- /dev/null
+++ b/include/lib/libgcc.h
@@ -0,0 +1,44 @@
+/*
+ * include/lib/libgcc.h
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+
+#ifndef __LIB_LIBGCC_H
+#define __LIB_LIBGCC_H
+
+#include <asm/byteorder.h>
+
+typedef int word_type __attribute__ ((mode (__word__)));
+
+#ifdef __BIG_ENDIAN
+struct DWstruct {
+	int high, low;
+};
+#elif defined(__LITTLE_ENDIAN)
+struct DWstruct {
+	int low, high;
+};
+#else
+#error I feel sick.
+#endif
+
+typedef union {
+	struct DWstruct s;
+	long long ll;
+} DWunion;
+
+#endif /* __ASM_LIBGCC_H */
diff --git a/lib/Kconfig b/lib/Kconfig
index 0c8b78a9ae2e..7a9c934d91fd 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -565,3 +565,21 @@ config PRIME_NUMBERS
 	tristate
 
 endmenu
+
+config GENERIC_ASHLDI3
+	def_bool n
+
+config GENERIC_ASHRDI3
+	def_bool n
+
+config GENERIC_LSHRDI3
+	def_bool n
+
+config GENERIC_MULDI3
+	def_bool n
+
+config GENERIC_CMPDI2
+	def_bool n
+
+config GENERIC_UCMPDI2
+	def_bool n
diff --git a/lib/Makefile b/lib/Makefile
index 0166fbc0fa81..5f68242f7774 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -243,3 +243,11 @@ UBSAN_SANITIZE_ubsan.o := n
 obj-$(CONFIG_SBITMAP) += sbitmap.o
 
 obj-$(CONFIG_PARMAN) += parman.o
+
+# GCC library routines
+obj-$(CONFIG_GENERIC_ASHLDI3) += ashldi3.o
+obj-$(CONFIG_GENERIC_ASHRDI3) += ashrdi3.o
+obj-$(CONFIG_GENERIC_LSHRDI3) += lshrdi3.o
+obj-$(CONFIG_GENERIC_MULDI3) += muldi3.o
+obj-$(CONFIG_GENERIC_CMPDI2) += cmpdi2.o
+obj-$(CONFIG_GENERIC_UCMPDI2) += ucmpdi2.o
diff --git a/lib/ashldi3.c b/lib/ashldi3.c
new file mode 100644
index 000000000000..ff4ec63d2ab6
--- /dev/null
+++ b/lib/ashldi3.c
@@ -0,0 +1,45 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+
+#include <linux/export.h>
+
+#include <lib/libgcc.h>
+
+long long notrace __ashldi3(long long u, word_type b)
+{
+	DWunion uu, w;
+	word_type bm;
+
+	if (b == 0)
+		return u;
+
+	uu.ll = u;
+	bm = 32 - b;
+
+	if (bm <= 0) {
+		w.s.low = 0;
+		w.s.high = (unsigned int) uu.s.low << -bm;
+	} else {
+		const unsigned int carries = (unsigned int) uu.s.low >> bm;
+
+		w.s.low = (unsigned int) uu.s.low << b;
+		w.s.high = ((unsigned int) uu.s.high << b) | carries;
+	}
+
+	return w.ll;
+}
+EXPORT_SYMBOL(__ashldi3);
diff --git a/lib/ashrdi3.c b/lib/ashrdi3.c
new file mode 100644
index 000000000000..2e67c97ac65a
--- /dev/null
+++ b/lib/ashrdi3.c
@@ -0,0 +1,46 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+#include <linux/export.h>
+
+#include <lib/libgcc.h>
+
+long long notrace __ashrdi3(long long u, word_type b)
+{
+	DWunion uu, w;
+	word_type bm;
+
+	if (b == 0)
+		return u;
+
+	uu.ll = u;
+	bm = 32 - b;
+
+	if (bm <= 0) {
+		/* w.s.high = 1..1 or 0..0 */
+		w.s.high =
+		    uu.s.high >> 31;
+		w.s.low = uu.s.high >> -bm;
+	} else {
+		const unsigned int carries = (unsigned int) uu.s.high << bm;
+
+		w.s.high = uu.s.high >> b;
+		w.s.low = ((unsigned int) uu.s.low >> b) | carries;
+	}
+
+	return w.ll;
+}
+EXPORT_SYMBOL(__ashrdi3);
diff --git a/lib/cmpdi2.c b/lib/cmpdi2.c
new file mode 100644
index 000000000000..6d7ebf6c2b86
--- /dev/null
+++ b/lib/cmpdi2.c
@@ -0,0 +1,42 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+#include <linux/export.h>
+
+#include <lib/libgcc.h>
+
+word_type notrace __cmpdi2(long long a, long long b)
+{
+	const DWunion au = {
+		.ll = a
+	};
+	const DWunion bu = {
+		.ll = b
+	};
+
+	if (au.s.high < bu.s.high)
+		return 0;
+	else if (au.s.high > bu.s.high)
+		return 2;
+
+	if ((unsigned int) au.s.low < (unsigned int) bu.s.low)
+		return 0;
+	else if ((unsigned int) au.s.low > (unsigned int) bu.s.low)
+		return 2;
+
+	return 1;
+}
+EXPORT_SYMBOL(__cmpdi2);
diff --git a/lib/lshrdi3.c b/lib/lshrdi3.c
new file mode 100644
index 000000000000..8e845f4bb65f
--- /dev/null
+++ b/lib/lshrdi3.c
@@ -0,0 +1,45 @@
+/*
+ * lib/lshrdi3.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+#include <linux/module.h>
+#include <lib/libgcc.h>
+
+long long notrace __lshrdi3(long long u, word_type b)
+{
+	DWunion uu, w;
+	word_type bm;
+
+	if (b == 0)
+		return u;
+
+	uu.ll = u;
+	bm = 32 - b;
+
+	if (bm <= 0) {
+		w.s.high = 0;
+		w.s.low = (unsigned int) uu.s.high >> -bm;
+	} else {
+		const unsigned int carries = (unsigned int) uu.s.high << bm;
+
+		w.s.high = (unsigned int) uu.s.high >> b;
+		w.s.low = ((unsigned int) uu.s.low >> b) | carries;
+	}
+
+	return w.ll;
+}
+EXPORT_SYMBOL(__lshrdi3);
diff --git a/lib/muldi3.c b/lib/muldi3.c
new file mode 100644
index 000000000000..88938543e10a
--- /dev/null
+++ b/lib/muldi3.c
@@ -0,0 +1,72 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+#include <linux/export.h>
+#include <lib/libgcc.h>
+
+#define W_TYPE_SIZE 32
+
+#define __ll_B ((unsigned long) 1 << (W_TYPE_SIZE / 2))
+#define __ll_lowpart(t) ((unsigned long) (t) & (__ll_B - 1))
+#define __ll_highpart(t) ((unsigned long) (t) >> (W_TYPE_SIZE / 2))
+
+/* If we still don't have umul_ppmm, define it using plain C.  */
+#if !defined(umul_ppmm)
+#define umul_ppmm(w1, w0, u, v)						\
+	do {								\
+		unsigned long __x0, __x1, __x2, __x3;			\
+		unsigned short __ul, __vl, __uh, __vh;			\
+									\
+		__ul = __ll_lowpart(u);					\
+		__uh = __ll_highpart(u);				\
+		__vl = __ll_lowpart(v);					\
+		__vh = __ll_highpart(v);				\
+									\
+		__x0 = (unsigned long) __ul * __vl;			\
+		__x1 = (unsigned long) __ul * __vh;			\
+		__x2 = (unsigned long) __uh * __vl;			\
+		__x3 = (unsigned long) __uh * __vh;			\
+									\
+		__x1 += __ll_highpart(__x0); /* this can't give carry */\
+		__x1 += __x2; /* but this indeed can */			\
+		if (__x1 < __x2) /* did we get it? */			\
+		__x3 += __ll_B; /* yes, add it in the proper pos */	\
+									\
+		(w1) = __x3 + __ll_highpart(__x1);			\
+		(w0) = __ll_lowpart(__x1) * __ll_B + __ll_lowpart(__x0);\
+	} while (0)
+#endif
+
+#if !defined(__umulsidi3)
+#define __umulsidi3(u, v) ({				\
+	DWunion __w;					\
+	umul_ppmm(__w.s.high, __w.s.low, u, v);		\
+	__w.ll;						\
+	})
+#endif
+
+long long notrace __muldi3(long long u, long long v)
+{
+	const DWunion uu = {.ll = u};
+	const DWunion vv = {.ll = v};
+	DWunion w = {.ll = __umulsidi3(uu.s.low, vv.s.low)};
+
+	w.s.high += ((unsigned long) uu.s.low * (unsigned long) vv.s.high
+		+ (unsigned long) uu.s.high * (unsigned long) vv.s.low);
+
+	return w.ll;
+}
+EXPORT_SYMBOL(__muldi3);
diff --git a/lib/ucmpdi2.c b/lib/ucmpdi2.c
new file mode 100644
index 000000000000..49a53505c8e3
--- /dev/null
+++ b/lib/ucmpdi2.c
@@ -0,0 +1,35 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+#include <linux/module.h>
+#include <lib/libgcc.h>
+
+word_type __ucmpdi2(unsigned long long a, unsigned long long b)
+{
+	const DWunion au = {.ll = a};
+	const DWunion bu = {.ll = b};
+
+	if ((unsigned int) au.s.high < (unsigned int) bu.s.high)
+		return 0;
+	else if ((unsigned int) au.s.high > (unsigned int) bu.s.high)
+		return 2;
+	if ((unsigned int) au.s.low < (unsigned int) bu.s.low)
+		return 0;
+	else if ((unsigned int) au.s.low > (unsigned int) bu.s.low)
+		return 2;
+	return 1;
+}
+EXPORT_SYMBOL(__ucmpdi2);
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 07/17] lib: Add shared copies of some GCC library routines
@ 2017-06-06 22:59     ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 22:59 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Palmer Dabbelt

Many ports (m32r, microblaze, mips, parisc, score, and sparc) use
functionally identical copies of various GCC library routine files,
which came up as we were submitting the RISC-V port (which also uses
some of these).

This patch adds a new copy of these library routine files, which are
functionally identical to the various other copies.  These are
availiable via Kconfig as CONFIG_LIB_$ROUTINE, which currently isn't
used anywhere.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
---
 include/lib/libgcc.h | 44 ++++++++++++++++++++++++++++++++
 lib/Kconfig          | 18 +++++++++++++
 lib/Makefile         |  8 ++++++
 lib/ashldi3.c        | 45 ++++++++++++++++++++++++++++++++
 lib/ashrdi3.c        | 46 +++++++++++++++++++++++++++++++++
 lib/cmpdi2.c         | 42 ++++++++++++++++++++++++++++++
 lib/lshrdi3.c        | 45 ++++++++++++++++++++++++++++++++
 lib/muldi3.c         | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 lib/ucmpdi2.c        | 35 +++++++++++++++++++++++++
 9 files changed, 355 insertions(+)
 create mode 100644 include/lib/libgcc.h
 create mode 100644 lib/ashldi3.c
 create mode 100644 lib/ashrdi3.c
 create mode 100644 lib/cmpdi2.c
 create mode 100644 lib/lshrdi3.c
 create mode 100644 lib/muldi3.c
 create mode 100644 lib/ucmpdi2.c

diff --git a/include/lib/libgcc.h b/include/lib/libgcc.h
new file mode 100644
index 000000000000..a5397e34e005
--- /dev/null
+++ b/include/lib/libgcc.h
@@ -0,0 +1,44 @@
+/*
+ * include/lib/libgcc.h
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+
+#ifndef __LIB_LIBGCC_H
+#define __LIB_LIBGCC_H
+
+#include <asm/byteorder.h>
+
+typedef int word_type __attribute__ ((mode (__word__)));
+
+#ifdef __BIG_ENDIAN
+struct DWstruct {
+	int high, low;
+};
+#elif defined(__LITTLE_ENDIAN)
+struct DWstruct {
+	int low, high;
+};
+#else
+#error I feel sick.
+#endif
+
+typedef union {
+	struct DWstruct s;
+	long long ll;
+} DWunion;
+
+#endif /* __ASM_LIBGCC_H */
diff --git a/lib/Kconfig b/lib/Kconfig
index 0c8b78a9ae2e..7a9c934d91fd 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -565,3 +565,21 @@ config PRIME_NUMBERS
 	tristate
 
 endmenu
+
+config GENERIC_ASHLDI3
+	def_bool n
+
+config GENERIC_ASHRDI3
+	def_bool n
+
+config GENERIC_LSHRDI3
+	def_bool n
+
+config GENERIC_MULDI3
+	def_bool n
+
+config GENERIC_CMPDI2
+	def_bool n
+
+config GENERIC_UCMPDI2
+	def_bool n
diff --git a/lib/Makefile b/lib/Makefile
index 0166fbc0fa81..5f68242f7774 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -243,3 +243,11 @@ UBSAN_SANITIZE_ubsan.o := n
 obj-$(CONFIG_SBITMAP) += sbitmap.o
 
 obj-$(CONFIG_PARMAN) += parman.o
+
+# GCC library routines
+obj-$(CONFIG_GENERIC_ASHLDI3) += ashldi3.o
+obj-$(CONFIG_GENERIC_ASHRDI3) += ashrdi3.o
+obj-$(CONFIG_GENERIC_LSHRDI3) += lshrdi3.o
+obj-$(CONFIG_GENERIC_MULDI3) += muldi3.o
+obj-$(CONFIG_GENERIC_CMPDI2) += cmpdi2.o
+obj-$(CONFIG_GENERIC_UCMPDI2) += ucmpdi2.o
diff --git a/lib/ashldi3.c b/lib/ashldi3.c
new file mode 100644
index 000000000000..ff4ec63d2ab6
--- /dev/null
+++ b/lib/ashldi3.c
@@ -0,0 +1,45 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+
+#include <linux/export.h>
+
+#include <lib/libgcc.h>
+
+long long notrace __ashldi3(long long u, word_type b)
+{
+	DWunion uu, w;
+	word_type bm;
+
+	if (b == 0)
+		return u;
+
+	uu.ll = u;
+	bm = 32 - b;
+
+	if (bm <= 0) {
+		w.s.low = 0;
+		w.s.high = (unsigned int) uu.s.low << -bm;
+	} else {
+		const unsigned int carries = (unsigned int) uu.s.low >> bm;
+
+		w.s.low = (unsigned int) uu.s.low << b;
+		w.s.high = ((unsigned int) uu.s.high << b) | carries;
+	}
+
+	return w.ll;
+}
+EXPORT_SYMBOL(__ashldi3);
diff --git a/lib/ashrdi3.c b/lib/ashrdi3.c
new file mode 100644
index 000000000000..2e67c97ac65a
--- /dev/null
+++ b/lib/ashrdi3.c
@@ -0,0 +1,46 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+#include <linux/export.h>
+
+#include <lib/libgcc.h>
+
+long long notrace __ashrdi3(long long u, word_type b)
+{
+	DWunion uu, w;
+	word_type bm;
+
+	if (b == 0)
+		return u;
+
+	uu.ll = u;
+	bm = 32 - b;
+
+	if (bm <= 0) {
+		/* w.s.high = 1..1 or 0..0 */
+		w.s.high =
+		    uu.s.high >> 31;
+		w.s.low = uu.s.high >> -bm;
+	} else {
+		const unsigned int carries = (unsigned int) uu.s.high << bm;
+
+		w.s.high = uu.s.high >> b;
+		w.s.low = ((unsigned int) uu.s.low >> b) | carries;
+	}
+
+	return w.ll;
+}
+EXPORT_SYMBOL(__ashrdi3);
diff --git a/lib/cmpdi2.c b/lib/cmpdi2.c
new file mode 100644
index 000000000000..6d7ebf6c2b86
--- /dev/null
+++ b/lib/cmpdi2.c
@@ -0,0 +1,42 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+#include <linux/export.h>
+
+#include <lib/libgcc.h>
+
+word_type notrace __cmpdi2(long long a, long long b)
+{
+	const DWunion au = {
+		.ll = a
+	};
+	const DWunion bu = {
+		.ll = b
+	};
+
+	if (au.s.high < bu.s.high)
+		return 0;
+	else if (au.s.high > bu.s.high)
+		return 2;
+
+	if ((unsigned int) au.s.low < (unsigned int) bu.s.low)
+		return 0;
+	else if ((unsigned int) au.s.low > (unsigned int) bu.s.low)
+		return 2;
+
+	return 1;
+}
+EXPORT_SYMBOL(__cmpdi2);
diff --git a/lib/lshrdi3.c b/lib/lshrdi3.c
new file mode 100644
index 000000000000..8e845f4bb65f
--- /dev/null
+++ b/lib/lshrdi3.c
@@ -0,0 +1,45 @@
+/*
+ * lib/lshrdi3.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+#include <linux/module.h>
+#include <lib/libgcc.h>
+
+long long notrace __lshrdi3(long long u, word_type b)
+{
+	DWunion uu, w;
+	word_type bm;
+
+	if (b == 0)
+		return u;
+
+	uu.ll = u;
+	bm = 32 - b;
+
+	if (bm <= 0) {
+		w.s.high = 0;
+		w.s.low = (unsigned int) uu.s.high >> -bm;
+	} else {
+		const unsigned int carries = (unsigned int) uu.s.high << bm;
+
+		w.s.high = (unsigned int) uu.s.high >> b;
+		w.s.low = ((unsigned int) uu.s.low >> b) | carries;
+	}
+
+	return w.ll;
+}
+EXPORT_SYMBOL(__lshrdi3);
diff --git a/lib/muldi3.c b/lib/muldi3.c
new file mode 100644
index 000000000000..88938543e10a
--- /dev/null
+++ b/lib/muldi3.c
@@ -0,0 +1,72 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+#include <linux/export.h>
+#include <lib/libgcc.h>
+
+#define W_TYPE_SIZE 32
+
+#define __ll_B ((unsigned long) 1 << (W_TYPE_SIZE / 2))
+#define __ll_lowpart(t) ((unsigned long) (t) & (__ll_B - 1))
+#define __ll_highpart(t) ((unsigned long) (t) >> (W_TYPE_SIZE / 2))
+
+/* If we still don't have umul_ppmm, define it using plain C.  */
+#if !defined(umul_ppmm)
+#define umul_ppmm(w1, w0, u, v)						\
+	do {								\
+		unsigned long __x0, __x1, __x2, __x3;			\
+		unsigned short __ul, __vl, __uh, __vh;			\
+									\
+		__ul = __ll_lowpart(u);					\
+		__uh = __ll_highpart(u);				\
+		__vl = __ll_lowpart(v);					\
+		__vh = __ll_highpart(v);				\
+									\
+		__x0 = (unsigned long) __ul * __vl;			\
+		__x1 = (unsigned long) __ul * __vh;			\
+		__x2 = (unsigned long) __uh * __vl;			\
+		__x3 = (unsigned long) __uh * __vh;			\
+									\
+		__x1 += __ll_highpart(__x0); /* this can't give carry */\
+		__x1 += __x2; /* but this indeed can */			\
+		if (__x1 < __x2) /* did we get it? */			\
+		__x3 += __ll_B; /* yes, add it in the proper pos */	\
+									\
+		(w1) = __x3 + __ll_highpart(__x1);			\
+		(w0) = __ll_lowpart(__x1) * __ll_B + __ll_lowpart(__x0);\
+	} while (0)
+#endif
+
+#if !defined(__umulsidi3)
+#define __umulsidi3(u, v) ({				\
+	DWunion __w;					\
+	umul_ppmm(__w.s.high, __w.s.low, u, v);		\
+	__w.ll;						\
+	})
+#endif
+
+long long notrace __muldi3(long long u, long long v)
+{
+	const DWunion uu = {.ll = u};
+	const DWunion vv = {.ll = v};
+	DWunion w = {.ll = __umulsidi3(uu.s.low, vv.s.low)};
+
+	w.s.high += ((unsigned long) uu.s.low * (unsigned long) vv.s.high
+		+ (unsigned long) uu.s.high * (unsigned long) vv.s.low);
+
+	return w.ll;
+}
+EXPORT_SYMBOL(__muldi3);
diff --git a/lib/ucmpdi2.c b/lib/ucmpdi2.c
new file mode 100644
index 000000000000..49a53505c8e3
--- /dev/null
+++ b/lib/ucmpdi2.c
@@ -0,0 +1,35 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.
+ */
+
+#include <linux/module.h>
+#include <lib/libgcc.h>
+
+word_type __ucmpdi2(unsigned long long a, unsigned long long b)
+{
+	const DWunion au = {.ll = a};
+	const DWunion bu = {.ll = b};
+
+	if ((unsigned int) au.s.high < (unsigned int) bu.s.high)
+		return 0;
+	else if ((unsigned int) au.s.high > (unsigned int) bu.s.high)
+		return 2;
+	if ((unsigned int) au.s.low < (unsigned int) bu.s.low)
+		return 0;
+	else if ((unsigned int) au.s.low > (unsigned int) bu.s.low)
+		return 2;
+	return 1;
+}
+EXPORT_SYMBOL(__ucmpdi2);
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread
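
As a quick sanity check of the 32x32 -> 64 split that umul_ppmm performs above, here is a minimal standalone sketch written with a 64-bit intermediate, which avoids the explicit carry fixup the macro needs.  It is illustrative only and not part of the patch; the helper name check_umul is made up.

	#include <assert.h>
	#include <stdint.h>

	/* Recompute u * v from 16-bit halves, mirroring umul_ppmm's structure
	 * (32-bit inputs, 64-bit result). */
	static uint64_t check_umul(uint32_t u, uint32_t v)
	{
		uint32_t ul = u & 0xffff, uh = u >> 16;
		uint32_t vl = v & 0xffff, vh = v >> 16;

		uint64_t x0 = (uint64_t)ul * vl;	/* low  x low  */
		uint64_t x1 = (uint64_t)ul * vh;	/* low  x high */
		uint64_t x2 = (uint64_t)uh * vl;	/* high x low  */
		uint64_t x3 = (uint64_t)uh * vh;	/* high x high */

		/* the two middle products straddle the halfword boundary */
		return x0 + ((x1 + x2) << 16) + (x3 << 32);
	}

	int main(void)
	{
		assert(check_umul(0xffffffffu, 0xffffffffu) ==
		       (uint64_t)0xffffffffu * 0xffffffffu);
		assert(check_umul(0x12345678u, 0x9abcdef0u) ==
		       (uint64_t)0x12345678u * 0x9abcdef0u);
		return 0;
	}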

* [PATCH 08/17] dts: include documentation for the RISC-V interrupt controllers
  2017-06-06 22:59   ` Palmer Dabbelt
@ 2017-06-06 22:59     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 22:59 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Wesley W. Terpstra, Palmer Dabbelt

From: "Wesley W. Terpstra" <wesley@sifive.com>

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 .../interrupt-controller/riscv,cpu-intc.txt        | 46 ++++++++++++++++++++++
 .../bindings/interrupt-controller/riscv,plic0.txt  | 44 +++++++++++++++++++++
 2 files changed, 90 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interrupt-controller/riscv,cpu-intc.txt
 create mode 100644 Documentation/devicetree/bindings/interrupt-controller/riscv,plic0.txt

diff --git a/Documentation/devicetree/bindings/interrupt-controller/riscv,cpu-intc.txt b/Documentation/devicetree/bindings/interrupt-controller/riscv,cpu-intc.txt
new file mode 100644
index 000000000000..62f02e834ff9
--- /dev/null
+++ b/Documentation/devicetree/bindings/interrupt-controller/riscv,cpu-intc.txt
@@ -0,0 +1,46 @@
+RISC-V Hart-Level Interrupt Controller (HLIC)
+---------------------------------------------
+
+RISC-V cores include Control and Status Registers (CSRs), which are local to each
+hart and can be read or written by software. Some of these CSRs are used to
+control local interrupts connected to the core.
+
+Typical examples of local interrupts on a RISC-V core include: software IPI
+interrupts, timer interrupts, and a link to the PLIC interrupt controller.
+
+Required properties:
+- compatible : "riscv,cpu-intc"
+- #interrupt-cells : should be <1>
+- interrupt-controller : Identifies the node as an interrupt controller
+
+Furthermore, this interrupt-controller MUST be embedded inside the cpu
+definition of the hart whose CSRs control these local interrupts.
+
+Example:
+
+	cpu1: cpu@1 {
+		clock-frequency = <1600000000>;
+		compatible = "riscv";
+		d-cache-block-size = <64>;
+		d-cache-sets = <64>;
+		d-cache-size = <16384>;
+		d-tlb-sets = <1>;
+		d-tlb-size = <32>;
+		device_type = "cpu";
+		i-cache-block-size = <64>;
+		i-cache-sets = <64>;
+		i-cache-size = <16384>;
+		i-tlb-sets = <1>;
+		i-tlb-size = <32>;
+		mmu-type = "riscv,sv39";
+		next-level-cache = <&L2>;
+		reg = <1>;
+		riscv,isa = "rv64imac";
+		status = "okay";
+		tlb-split;
+		cpu1-intc: interrupt-controller {
+			#interrupt-cells = <1>;
+			compatible = "riscv,cpu-intc";
+			interrupt-controller;
+		};
+	};
diff --git a/Documentation/devicetree/bindings/interrupt-controller/riscv,plic0.txt b/Documentation/devicetree/bindings/interrupt-controller/riscv,plic0.txt
new file mode 100644
index 000000000000..c05b5806f7d2
--- /dev/null
+++ b/Documentation/devicetree/bindings/interrupt-controller/riscv,plic0.txt
@@ -0,0 +1,44 @@
+RISC-V Platform-Level Interrupt Controller (PLIC)
+-------------------------------------------------
+
+RISC-V cores typically include a PLIC, which routes interrupts from multiple
+devices to multiple hart contexts.  The PLIC is connected to the interrupt
+controller embedded in a RISC-V core via the interrupt-related CSRs.
+
+A hart context is a privilege mode in a hardware execution thread.  For
+example, in a 4-core system with 2-way SMT, you have 8 harts and probably
+at least two privilege modes per hart: machine mode and supervisor mode.
+
+Each interrupt can be enabled on a per-context basis. Any context can claim
+a pending enabled interrupt and then release it once it has been handled.
+
+Each interrupt has a configurable priority. Higher priority interrupts are
+serviced first. Each context can specify a priority threshold. Interrupts
+with priority below this threshold will not cause the PLIC to raise its
+interrupt line leading to the context.
+
+Required properties:
+- compatible : "riscv,plic0"
+- #address-cells : should be <0>
+- #interrupt-cells : should be <1>
+- interrupt-controller : Identifies the node as an interrupt controller
+- reg : Should contain 1 register range (address and length)
+- riscv,ndev : Specifies the number of interrupts attached to the PLIC
+- interrupts-extended : Specifies which contexts are connected to the PLIC
+
+Example:
+
+	plic: interrupt-controller@c000000 {
+		#address-cells = <0>;
+		#interrupt-cells = <1>;
+		compatible = "riscv,plic0";
+		interrupt-controller;
+		interrupts-extended = <
+			&cpu0-intc 11
+			&cpu1-intc 11 &cpu1-intc 9
+			&cpu2-intc 11 &cpu2-intc 9
+			&cpu3-intc 11 &cpu3-intc 9
+			&cpu4-intc 11 &cpu4-intc 9>;
+		reg = <0xc000000 0x4000000>;
+		riscv,ndev = <10>;
+	};
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread
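
As an illustration of the context layout described in riscv,plic0.txt above: entry i of interrupts-extended is PLIC context i, and its one-cell specifier is the interrupt number at that hart's local controller (9 = supervisor external, 11 = machine external in the v1.10 privileged spec).  The sketch below mirrors the of_irq_count()/of_irq_parse_one() usage from the PLIC driver later in this series; plic_dump_contexts() is a hypothetical helper, not part of the patches.

	#include <linux/of.h>
	#include <linux/of_irq.h>
	#include <linux/printk.h>

	static void plic_dump_contexts(struct device_node *plic)
	{
		int i, nr = of_irq_count(plic);

		for (i = 0; i < nr; i++) {
			struct of_phandle_args ctx;

			if (of_irq_parse_one(plic, i, &ctx))
				continue;
			if (ctx.args[0] == -1)	/* unused context ("hole") */
				continue;
			pr_info("context %d -> %s, local irq %u\n",
				i, of_node_full_name(ctx.np), ctx.args[0]);
		}
	}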

* [PATCH 09/17] clocksource/timer-riscv: New RISC-V Clocksource
  2017-06-06 22:59   ` Palmer Dabbelt
@ 2017-06-06 22:59     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 22:59 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Palmer Dabbelt

The RISC-V ISA defines a single RTC as well as an SBI oneshot timer.
This timer is present on all RISC-V systems.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 drivers/clocksource/Kconfig       |   8 +++
 drivers/clocksource/Makefile      |   1 +
 drivers/clocksource/timer-riscv.c | 118 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 127 insertions(+)
 create mode 100644 drivers/clocksource/timer-riscv.c

diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
index 545d541ae20e..1c2c6e7c7fab 100644
--- a/drivers/clocksource/Kconfig
+++ b/drivers/clocksource/Kconfig
@@ -612,4 +612,12 @@ config CLKSRC_ST_LPC
 	  Enable this option to use the Low Power controller timer
 	  as clocksource.
 
+config CLKSRC_RISCV
+	#bool "Clocksource for the RISC-V platform"
+	def_bool y if RISCV
+	depends on RISCV
+	help
+	  This enables a clocksource based on the RISC-V SBI timer, which is
+	  built into all RISC-V systems.
+
 endmenu
diff --git a/drivers/clocksource/Makefile b/drivers/clocksource/Makefile
index 2b5b56a6f00f..408ed9d314dc 100644
--- a/drivers/clocksource/Makefile
+++ b/drivers/clocksource/Makefile
@@ -73,3 +73,4 @@ obj-$(CONFIG_H8300_TMR16)		+= h8300_timer16.o
 obj-$(CONFIG_H8300_TPU)			+= h8300_tpu.o
 obj-$(CONFIG_CLKSRC_ST_LPC)		+= clksrc_st_lpc.o
 obj-$(CONFIG_X86_NUMACHIP)		+= numachip.o
+obj-$(CONFIG_CLKSRC_RISCV)		+= timer-riscv.o
diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c
new file mode 100644
index 000000000000..04ef7b9130b3
--- /dev/null
+++ b/drivers/clocksource/timer-riscv.c
@@ -0,0 +1,118 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/clocksource.h>
+#include <linux/clockchips.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/delay.h>
+#include <linux/of.h>
+
+#include <asm/irq.h>
+#include <asm/csr.h>
+#include <asm/sbi.h>
+#include <asm/delay.h>
+
+unsigned long riscv_timebase;
+
+static DEFINE_PER_CPU(struct clock_event_device, clock_event);
+
+static int riscv_timer_set_next_event(unsigned long delta,
+	struct clock_event_device *evdev)
+{
+	sbi_set_timer(get_cycles() + delta);
+	return 0;
+}
+
+static int riscv_timer_set_oneshot(struct clock_event_device *evt)
+{
+	/* no-op; only one mode */
+	return 0;
+}
+
+static int riscv_timer_set_shutdown(struct clock_event_device *evt)
+{
+	/* can't stop the clock! */
+	return 0;
+}
+
+static u64 riscv_rdtime(struct clocksource *cs)
+{
+	return get_cycles();
+}
+
+static struct clocksource riscv_clocksource = {
+	.name = "riscv_clocksource",
+	.rating = 300,
+	.read = riscv_rdtime,
+#ifdef CONFIG_64BIT
+	.mask = CLOCKSOURCE_MASK(64),
+#else
+	.mask = CLOCKSOURCE_MASK(32),
+#endif /* CONFIG_64BIT */
+	.flags = CLOCK_SOURCE_IS_CONTINUOUS,
+};
+
+void riscv_timer_interrupt(void)
+{
+	int cpu = smp_processor_id();
+	struct clock_event_device *evdev = &per_cpu(clock_event, cpu);
+
+	evdev->event_handler(evdev);
+}
+
+void __init init_clockevent(void)
+{
+	int cpu = smp_processor_id();
+	struct clock_event_device *ce = &per_cpu(clock_event, cpu);
+
+	*ce = (struct clock_event_device){
+		.name = "riscv_timer_clockevent",
+		.features = CLOCK_EVT_FEAT_ONESHOT,
+		.rating = 300,
+		.cpumask = cpumask_of(cpu),
+		.set_next_event = riscv_timer_set_next_event,
+		.set_state_oneshot  = riscv_timer_set_oneshot,
+		.set_state_shutdown = riscv_timer_set_shutdown,
+	};
+
+	/* Enable timer interrupts */
+	csr_set(sie, SIE_STIE);
+
+	clockevents_config_and_register(ce, riscv_timebase, 100, 0x7fffffff);
+}
+
+static unsigned long __init of_timebase(void)
+{
+	struct device_node *cpu;
+	const __be32 *prop;
+
+	cpu = of_find_node_by_path("/cpus");
+	if (cpu) {
+		prop = of_get_property(cpu, "timebase-frequency", NULL);
+		if (prop)
+			return be32_to_cpu(*prop);
+	}
+
+	return 10000000;
+}
+
+void __init time_init(void)
+{
+	riscv_timebase = of_timebase();
+	lpj_fine = riscv_timebase / HZ;
+
+	clocksource_register_hz(&riscv_clocksource, riscv_timebase);
+	init_clockevent();
+}
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread
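
To make the clockevent parameters above concrete, here is a standalone back-of-the-envelope sketch assuming the 10 MHz fallback timebase from of_timebase() and HZ=100; the numbers are illustrative only.

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t timebase = 10000000;	/* ticks per second (fallback) */
		uint64_t min_delta = 100;	/* from clockevents_config_and_register() */
		uint64_t max_delta = 0x7fffffff;

		/* One jiffy at HZ=100 is 10 ms, i.e. timebase / HZ ticks
		 * (the same value lpj_fine is set to above). */
		printf("one jiffy = %llu ticks\n", (unsigned long long)(timebase / 100));

		/* Shortest and longest one-shot interval the framework will
		 * program; set_next_event() turns the delta into an absolute
		 * deadline, get_cycles() + delta, handed to sbi_set_timer(). */
		printf("min delta = %.1f us\n", min_delta * 1e6 / timebase);
		printf("max delta = %.1f s\n", (double)max_delta / timebase);
		return 0;
	}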

* [PATCH 10/17] irqchip: New RISC-V PLIC Driver
  2017-06-06 22:59   ` Palmer Dabbelt
@ 2017-06-06 23:00     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 23:00 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Palmer Dabbelt

This patch adds a driver for the Platform Level Interrupt Controller
(PLIC) specified as part of the RISC-V supervisor level ISA manual.
The PLIC connects global interrupt sources to the local interrupt
controller on each hart.  A PLIC is present on all RISC-V systems.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 drivers/irqchip/Kconfig          |  12 ++
 drivers/irqchip/Makefile         |   1 +
 drivers/irqchip/irq-riscv-plic.c | 253 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 266 insertions(+)
 create mode 100644 drivers/irqchip/irq-riscv-plic.c

diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
index 478f8ace2664..2906d63934ef 100644
--- a/drivers/irqchip/Kconfig
+++ b/drivers/irqchip/Kconfig
@@ -301,3 +301,15 @@ config QCOM_IRQ_COMBINER
 	help
 	  Say yes here to add support for the IRQ combiner devices embedded
 	  in Qualcomm Technologies chips.
+
+config RISCV_PLIC
+	bool "Platform-Level Interrupt Controller"
+	depends on RISCV
+	default y
+	help
+	  This enables support for the PLIC chip found in standard RISC-V
+	  systems. The PLIC is the top-most interrupt controller found in
+	  the system, connected directly to the core complex. All other
+	  interrupt sources (MSI, GPIO, etc) are subordinate to the PLIC.
+
+	  If you don't know what to do here, say Y.
diff --git a/drivers/irqchip/Makefile b/drivers/irqchip/Makefile
index b64c59b838a0..bed94cc89146 100644
--- a/drivers/irqchip/Makefile
+++ b/drivers/irqchip/Makefile
@@ -76,3 +76,4 @@ obj-$(CONFIG_EZNPS_GIC)			+= irq-eznps.o
 obj-$(CONFIG_ARCH_ASPEED)		+= irq-aspeed-vic.o
 obj-$(CONFIG_STM32_EXTI) 		+= irq-stm32-exti.o
 obj-$(CONFIG_QCOM_IRQ_COMBINER)		+= qcom-irq-combiner.o
+obj-$(CONFIG_RISCV_PLIC)		+= irq-riscv-plic.o
diff --git a/drivers/irqchip/irq-riscv-plic.c b/drivers/irqchip/irq-riscv-plic.c
new file mode 100644
index 000000000000..906c8a62a911
--- /dev/null
+++ b/drivers/irqchip/irq-riscv-plic.c
@@ -0,0 +1,253 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/irq.h>
+#include <linux/irqchip.h>
+#include <linux/irqchip/chained_irq.h>
+#include <linux/irqdomain.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+#include <linux/platform_device.h>
+
+/* From the RISC-V Privileged Spec v1.10:
+ *
+ * Global interrupt sources are assigned small unsigned integer identifiers,
+ * beginning at the value 1.  An interrupt ID of 0 is reserved to mean “no
+ * interrupt”.  Interrupt identifiers are also used to break ties when two or
+ * more interrupt sources have the same assigned priority. Smaller values of
+ * interrupt ID take precedence over larger values of interrupt ID.
+ *
+ * It's not defined what the largest device ID is, so we're just fixing
+ * MAX_DEVICES right here (which is named oddly, as there will never be a
+ * device 0).
+ */
+#define MAX_DEVICES	1024
+#define MAX_CONTEXTS	15872
+
+#define PRIORITY_BASE	0
+#define ENABLE_BASE	0x2000
+#define ENABLE_SIZE	0x80
+#define HART_BASE	0x200000
+#define HART_SIZE	0x1000
+
+struct plic_hart_context {
+	u32 threshold;
+	u32 claim;
+};
+
+struct plic_enable_context {
+	atomic_t mask[32]; // 32-bit * 32-entry
+};
+
+struct plic_priority {
+	u32 prio[MAX_DEVICES];
+};
+
+struct plic_data {
+	struct irq_chip		chip;
+	struct irq_domain	*domain;
+	u32			ndev;
+	void __iomem		*reg;
+	int			handlers;
+	struct plic_handler	*handler;
+	char			name[30];
+};
+
+struct plic_handler {
+	struct plic_hart_context	*context;
+	struct plic_data		*data;
+};
+
+static inline
+struct plic_hart_context *plic_hart_context(struct plic_data *data, size_t i)
+{
+	return (struct plic_hart_context *)((char *)data->reg + HART_BASE + HART_SIZE*i);
+}
+
+static inline
+struct plic_enable_context *plic_enable_context(struct plic_data *data, size_t i)
+{
+	return (struct plic_enable_context *)((char *)data->reg + ENABLE_BASE + ENABLE_SIZE*i);
+}
+
+static inline
+struct plic_priority *plic_priority(struct plic_data *data)
+{
+	return (struct plic_priority *)((char *)data->reg + PRIORITY_BASE);
+}
+
+static void plic_disable(struct plic_data *data, int i, int hwirq)
+{
+	struct plic_enable_context *enable = plic_enable_context(data, i);
+
+	atomic_and(~(1 << (hwirq % 32)), &enable->mask[hwirq / 32]);
+}
+
+static void plic_enable(struct plic_data *data, int i, int hwirq)
+{
+	struct plic_enable_context *enable = plic_enable_context(data, i);
+
+	atomic_or((1 << (hwirq % 32)), &enable->mask[hwirq / 32]);
+}
+
+// There is no need to mask/unmask PLIC interrupts
+// They are "masked" by reading claim and "unmasked" when writing it back.
+static void plic_irq_mask(struct irq_data *d) { }
+static void plic_irq_unmask(struct irq_data *d) { }
+
+static void plic_irq_enable(struct irq_data *d)
+{
+	struct plic_data *data = irq_data_get_irq_chip_data(d);
+	struct plic_priority *priority = plic_priority(data);
+	int i;
+
+	iowrite32(1, &priority->prio[d->hwirq]);
+	for (i = 0; i < data->handlers; ++i)
+		if (data->handler[i].context)
+			plic_enable(data, i, d->hwirq);
+}
+
+static void plic_irq_disable(struct irq_data *d)
+{
+	struct plic_data *data = irq_data_get_irq_chip_data(d);
+	struct plic_priority *priority = plic_priority(data);
+	int i;
+
+	iowrite32(0, &priority->prio[d->hwirq]);
+	for (i = 0; i < data->handlers; ++i)
+		if (data->handler[i].context)
+			plic_disable(data, i, d->hwirq);
+}
+
+static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq,
+			      irq_hw_number_t hwirq)
+{
+	struct plic_data *data = d->host_data;
+
+	irq_set_chip_and_handler(irq, &data->chip, handle_simple_irq);
+	irq_set_chip_data(irq, data);
+	irq_set_noprobe(irq);
+
+	return 0;
+}
+
+static const struct irq_domain_ops plic_irqdomain_ops = {
+	.map	= plic_irqdomain_map,
+	.xlate	= irq_domain_xlate_onecell,
+};
+
+static void plic_chained_handle_irq(struct irq_desc *desc)
+{
+	struct plic_handler *handler = irq_desc_get_handler_data(desc);
+	struct irq_chip *chip = irq_desc_get_chip(desc);
+	struct irq_domain *domain = handler->data->domain;
+	u32 what;
+
+	chained_irq_enter(chip, desc);
+
+	while ((what = ioread32(&handler->context->claim))) {
+		int irq = irq_find_mapping(domain, what);
+
+		if (irq > 0)
+			generic_handle_irq(irq);
+		else
+			handle_bad_irq(desc);
+		iowrite32(what, &handler->context->claim);
+	}
+
+	chained_irq_exit(chip, desc);
+}
+
+static int plic_init(struct device_node *node, struct device_node *parent)
+{
+	struct plic_data *data;
+	struct resource resource;
+	int i, ok = 0;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (WARN_ON(!data))
+		return -ENOMEM;
+
+	data->reg = of_iomap(node, 0);
+	if (WARN_ON(!data->reg))
+		return -EIO;
+
+	of_property_read_u32(node, "riscv,ndev", &data->ndev);
+	if (WARN_ON(!data->ndev))
+		return -EINVAL;
+
+	data->handlers = of_irq_count(node);
+	if (WARN_ON(!data->handlers))
+		return -EINVAL;
+
+	data->handler =
+		kcalloc(data->handlers, sizeof(*data->handler), GFP_KERNEL);
+	if (WARN_ON(!data->handler))
+		return -ENOMEM;
+
+	data->domain = irq_domain_add_linear(node, data->ndev+1, &plic_irqdomain_ops, data);
+	if (WARN_ON(!data->domain))
+		return -ENOMEM;
+
+	of_address_to_resource(node, 0, &resource);
+	snprintf(data->name, sizeof(data->name),
+		 "riscv,plic0,%llx", resource.start);
+	data->chip.name = data->name;
+	data->chip.irq_mask = plic_irq_mask;
+	data->chip.irq_unmask = plic_irq_unmask;
+	data->chip.irq_enable = plic_irq_enable;
+	data->chip.irq_disable = plic_irq_disable;
+
+	for (i = 0; i < data->handlers; ++i) {
+		struct plic_handler *handler = &data->handler[i];
+		struct of_phandle_args parent;
+		int parent_irq, hwirq;
+
+		if (of_irq_parse_one(node, i, &parent))
+			continue;
+		// skip context holes
+		if (parent.args[0] == -1)
+			continue;
+
+		// skip any contexts that lead to inactive harts
+		if (of_device_is_compatible(parent.np, "riscv,cpu-intc") &&
+		    parent.np->parent &&
+		    riscv_of_processor_hart(parent.np->parent) < 0)
+			continue;
+
+		parent_irq = irq_create_of_mapping(&parent);
+		if (!parent_irq)
+			continue;
+
+		handler->context = plic_hart_context(data, i);
+		handler->data = data;
+		// hwirq prio must be > this to trigger an interrupt
+		iowrite32(0, &handler->context->threshold);
+
+		for (hwirq = 1; hwirq <= data->ndev; ++hwirq)
+			plic_disable(data, i, hwirq);
+		irq_set_chained_handler_and_data(parent_irq, plic_chained_handle_irq, handler);
+		++ok;
+	}
+
+	printk(KERN_INFO "%s: mapped %d interrupts to %d/%d handlers\n",
+	       data->name, data->ndev, ok, data->handlers);
+	WARN_ON(!ok);
+	return 0;
+}
+
+IRQCHIP_DECLARE(plic0, "riscv,plic0", plic_init);
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread
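
To make the register addressing in irq-riscv-plic.c above concrete, here is a standalone sketch of the byte offsets implied by PRIORITY_BASE, ENABLE_BASE/ENABLE_SIZE, HART_BASE/HART_SIZE and the plic_priority, plic_enable_context and plic_hart_context layouts.  It is a reading aid only, not part of the driver.

	#include <stdint.h>
	#include <stdio.h>

	#define PRIORITY_BASE	0x0
	#define ENABLE_BASE	0x2000
	#define ENABLE_SIZE	0x80
	#define HART_BASE	0x200000
	#define HART_SIZE	0x1000

	/* Per-source priority register (u32 prio[MAX_DEVICES]). */
	static uint32_t prio_off(uint32_t hwirq)
	{
		return PRIORITY_BASE + 4 * hwirq;
	}

	/* Enable word for (context, hwirq): one 32-word bitmap per context;
	 * the enable bit itself is bit (hwirq % 32) of that word. */
	static uint32_t enable_off(uint32_t ctx, uint32_t hwirq)
	{
		return ENABLE_BASE + ENABLE_SIZE * ctx + 4 * (hwirq / 32);
	}

	/* plic_hart_context: threshold at +0, claim/complete at +4. */
	static uint32_t threshold_off(uint32_t ctx)
	{
		return HART_BASE + HART_SIZE * ctx;
	}

	static uint32_t claim_off(uint32_t ctx)
	{
		return threshold_off(ctx) + 4;
	}

	int main(void)
	{
		printf("irq 5 priority        @ 0x%x\n", prio_off(5));
		printf("ctx 1 enable of irq 5 @ 0x%x\n", enable_off(1, 5));
		printf("ctx 1 threshold       @ 0x%x\n", threshold_off(1));
		printf("ctx 1 claim/complete  @ 0x%x\n", claim_off(1));
		return 0;
	}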

* [PATCH 11/17] irqchip: RISC-V Local Interrupt Controller Driver
  2017-06-06 22:59   ` Palmer Dabbelt
@ 2017-06-06 23:00     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 23:00 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Palmer Dabbelt

This patch adds a driver that manages the local interrupts on each
RISC-V hart, as specified by the RISC-V supervisor-level ISA manual.
The local interrupt controller manages software interrupts, timer
interrupts, and hardware interrupts (which are routed via the
platform level interrupt controller).  Per-hart local interrupt
controllers are found on all RISC-V systems.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 drivers/irqchip/Kconfig          |  14 +++
 drivers/irqchip/Makefile         |   1 +
 drivers/irqchip/irq-riscv-intc.c | 239 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 254 insertions(+)
 create mode 100644 drivers/irqchip/irq-riscv-intc.c

diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
index 2906d63934ef..dfde170e5886 100644
--- a/drivers/irqchip/Kconfig
+++ b/drivers/irqchip/Kconfig
@@ -313,3 +313,17 @@ config RISCV_PLIC
 	   interrupt sources (MSI, GPIO, etc) are subordinate to the PLIC.
 
 	   If you don't know what to do here, say Y.
+
+config RISCV_INTC
+	def_bool y if RISCV
+	#bool "RISC-V Interrupt Controller"
+	depends on RISCV
+	default y
+	help
+	   This enables support for the local interrupt controller found in
+	   standard RISC-V systems.  The local interrupt controller handles
+	   timer interrupts, software interrupts, and hardware interrupts.
+	   Without a local interrupt controller the system will be unable to
+	   handle any interrupts, including those passed via the PLIC.
+
+	   If you don't know what to do here, say Y.
diff --git a/drivers/irqchip/Makefile b/drivers/irqchip/Makefile
index bed94cc89146..bc9a6b45903b 100644
--- a/drivers/irqchip/Makefile
+++ b/drivers/irqchip/Makefile
@@ -77,3 +77,4 @@ obj-$(CONFIG_ARCH_ASPEED)		+= irq-aspeed-vic.o
 obj-$(CONFIG_STM32_EXTI) 		+= irq-stm32-exti.o
 obj-$(CONFIG_QCOM_IRQ_COMBINER)		+= qcom-irq-combiner.o
 obj-$(CONFIG_RISCV_PLIC)		+= irq-riscv-plic.o
+obj-$(CONFIG_RISCV_INTC)		+= irq-riscv-intc.o
diff --git a/drivers/irqchip/irq-riscv-intc.c b/drivers/irqchip/irq-riscv-intc.c
new file mode 100644
index 000000000000..8150a035aada
--- /dev/null
+++ b/drivers/irqchip/irq-riscv-intc.c
@@ -0,0 +1,239 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/irq.h>
+#include <linux/irqchip.h>
+#include <linux/irqdomain.h>
+#include <linux/interrupt.h>
+#include <linux/ftrace.h>
+#include <linux/of.h>
+#include <linux/seq_file.h>
+
+#include <asm/ptrace.h>
+#include <asm/sbi.h>
+#include <asm/smp.h>
+
+struct riscv_irq_data {
+	struct irq_chip		chip;
+	struct irq_domain	*domain;
+	int			hart;
+	char			name[20];
+};
+DEFINE_PER_CPU(struct riscv_irq_data, riscv_irq_data);
+DEFINE_PER_CPU(atomic_long_t, riscv_early_sie);
+
+static void riscv_software_interrupt(void)
+{
+#ifdef CONFIG_SMP
+	irqreturn_t ret;
+
+	ret = handle_ipi();
+	if (ret != IRQ_NONE)
+		return;
+#endif
+
+	BUG();
+}
+
+asmlinkage void __irq_entry do_IRQ(unsigned int cause, struct pt_regs *regs)
+{
+	struct pt_regs *old_regs = set_irq_regs(regs);
+	struct irq_domain *domain;
+
+	irq_enter();
+
+	/* There are three classes of interrupt: timer, software, and
+	 * external devices.  We dispatch between them here.  External
+	 * device interrupts use the generic IRQ mechanisms.
+	 */
+	switch (cause) {
+	case INTERRUPT_CAUSE_TIMER:
+		riscv_timer_interrupt();
+		break;
+	case INTERRUPT_CAUSE_SOFTWARE:
+		riscv_software_interrupt();
+		break;
+	default:
+		domain = per_cpu(riscv_irq_data, smp_processor_id()).domain;
+		generic_handle_irq(irq_find_mapping(domain, cause));
+		break;
+	}
+
+	irq_exit();
+	set_irq_regs(old_regs);
+}
+
+static int riscv_irqdomain_map(struct irq_domain *d, unsigned int irq,
+			       irq_hw_number_t hwirq)
+{
+	struct riscv_irq_data *data = d->host_data;
+
+	irq_set_chip_and_handler(irq, &data->chip, handle_simple_irq);
+	irq_set_chip_data(irq, data);
+	irq_set_noprobe(irq);
+
+	return 0;
+}
+
+static const struct irq_domain_ops riscv_irqdomain_ops = {
+	.map	= riscv_irqdomain_map,
+	.xlate	= irq_domain_xlate_onecell,
+};
+
+static void riscv_irq_mask(struct irq_data *d)
+{
+	struct riscv_irq_data *data = irq_data_get_irq_chip_data(d);
+
+	BUG_ON(smp_processor_id() != data->hart);
+	csr_clear(sie, 1 << (long)d->hwirq);
+}
+
+static void riscv_irq_unmask(struct irq_data *d)
+{
+	struct riscv_irq_data *data = irq_data_get_irq_chip_data(d);
+
+	BUG_ON(smp_processor_id() != data->hart);
+	csr_set(sie, 1 << (long)d->hwirq);
+}
+
+static void riscv_irq_enable_helper(void *d)
+{
+	riscv_irq_unmask(d);
+}
+
+static void riscv_irq_enable(struct irq_data *d)
+{
+	struct riscv_irq_data *data = irq_data_get_irq_chip_data(d);
+
+	/* There are two phases to setting up an interrupt: first we set a bit
+	 * in this bookkeeping structure, which is used by trap_init to
+	 * initialize SIE for each hart as it comes up.
+	 */
+	atomic_long_or((1 << (long)d->hwirq),
+		       &per_cpu(riscv_early_sie, data->hart));
+
+	/* The CPU is usually online, so here we just attempt to enable the
+	 * interrupt by writing SIE directly.  We need to write SIE on the
+	 * correct hart, which might be another hart.
+	 */
+	if (data->hart == smp_processor_id())
+		riscv_irq_unmask(d);
+	else if (cpu_online(data->hart))
+		smp_call_function_single(data->hart,
+					 riscv_irq_enable_helper,
+					 d,
+					 true);
+}
+
+static void riscv_irq_disable_helper(void *d)
+{
+	riscv_irq_mask(d);
+}
+
+static void riscv_irq_disable(struct irq_data *d)
+{
+	struct riscv_irq_data *data = irq_data_get_irq_chip_data(d);
+
+	/* This is the mirror of riscv_irq_enable. */
+	atomic_long_and(~(1 << (long)d->hwirq),
+			&per_cpu(riscv_early_sie, data->hart));
+	if (data->hart == smp_processor_id())
+		riscv_irq_mask(d);
+	else if (cpu_online(data->hart))
+		smp_call_function_single(data->hart,
+					 riscv_irq_disable_helper,
+					 d,
+					 true);
+}
+
+static void riscv_irq_mask_noop(struct irq_data *d) { }
+
+static void riscv_irq_unmask_noop(struct irq_data *d) { }
+
+static void riscv_irq_enable_noop(struct irq_data *d)
+{
+	struct device_node *data = irq_data_get_irq_chip_data(d);
+	u32 hart;
+
+	if (!of_property_read_u32(data, "reg", &hart))
+		printk(
+		  KERN_WARNING "enabled interrupt %d for missing hart %d (this interrupt has no handler)\n",
+		  (int)d->hwirq, hart);
+}
+
+static struct irq_chip riscv_noop_chip = {
+	.name = "riscv,cpu-intc,noop",
+	.irq_mask = riscv_irq_mask_noop,
+	.irq_unmask = riscv_irq_unmask_noop,
+	.irq_enable = riscv_irq_enable_noop,
+};
+
+static int riscv_irqdomain_map_noop(struct irq_domain *d, unsigned int irq,
+				    irq_hw_number_t hwirq)
+{
+	struct device_node *data = d->host_data;
+
+	irq_set_chip_and_handler(irq, &riscv_noop_chip, handle_simple_irq);
+	irq_set_chip_data(irq, data);
+	return 0;
+}
+
+static const struct irq_domain_ops riscv_irqdomain_ops_noop = {
+	.map    = riscv_irqdomain_map_noop,
+	.xlate  = irq_domain_xlate_onecell,
+};
+
+static int riscv_intc_init(struct device_node *node, struct device_node *parent)
+{
+	int hart;
+	struct riscv_irq_data *data;
+
+	if (parent)
+		return 0;
+
+	hart = riscv_of_processor_hart(node->parent);
+	if (hart < 0) {
+		/* If a hart is disabled, create a no-op irq domain.  Devices
+		 * may still have interrupts connected to those harts.  This is
+		 * not wrong... unless they actually load a driver that needs
+		 * it!
+		 */
+		irq_domain_add_linear(
+			node,
+			8*sizeof(uintptr_t),
+			&riscv_irqdomain_ops_noop,
+			node->parent);
+		return 0;
+	}
+
+	data = &per_cpu(riscv_irq_data, hart);
+	snprintf(data->name, sizeof(data->name), "riscv,cpu_intc,%d", hart);
+	data->hart = hart;
+	data->chip.name = data->name;
+	data->chip.irq_mask = riscv_irq_mask;
+	data->chip.irq_unmask = riscv_irq_unmask;
+	data->chip.irq_enable = riscv_irq_enable;
+	data->chip.irq_disable = riscv_irq_disable;
+	data->domain = irq_domain_add_linear(
+		node,
+		8*sizeof(uintptr_t),
+		&riscv_irqdomain_ops,
+		data);
+	WARN_ON(!data->domain);
+	printk(KERN_INFO "%s: %d local interrupts mapped\n",
+	       data->name, 8*(int)sizeof(uintptr_t));
+	return 0;
+}
+
+IRQCHIP_DECLARE(riscv, "riscv,cpu-intc", riscv_intc_init);
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 12/17] tty: New RISC-V SBI Console Driver
  2017-06-06 22:59   ` Palmer Dabbelt
@ 2017-06-06 23:00     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 23:00 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Palmer Dabbelt

This patch adds a new driver for the console available via the RISC-V
SBI.  This console is specified to be used for early boot messages, and
is designed to be a very simple (albeit somewhat slow) console that is
always available.  All RISC-V systems have an SBI console.

The SBI console is made available for early printk messages and is also
available as a regular console.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
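For context on why this console is simple but slow: sbi_console_putchar()
(declared in asm/sbi.h) traps into the machine-mode SBI implementation for
every character, so each byte of output costs a full environment call.  A
rough sketch of what such a wrapper can look like, where the call number and
register convention are assumptions for illustration rather than definitions
from this series:

	/* Illustrative sketch only; see asm/sbi.h for the real interface. */
	static inline void example_sbi_putchar(int ch)
	{
		register unsigned long a0 asm("a0") = (unsigned long)ch;
		register unsigned long a7 asm("a7") = 1; /* assumed SBI_CONSOLE_PUTCHAR */

		asm volatile ("ecall" : "+r" (a0) : "r" (a7) : "memory");
	}

The hvc layer then only needs the get_chars/put_chars hooks below, which
loop over sbi_console_getchar() and sbi_console_putchar().
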
 drivers/tty/hvc/Kconfig   |  11 +++++
 drivers/tty/hvc/Makefile  |   1 +
 drivers/tty/hvc/hvc_sbi.c | 102 ++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 114 insertions(+)
 create mode 100644 drivers/tty/hvc/hvc_sbi.c

diff --git a/drivers/tty/hvc/Kconfig b/drivers/tty/hvc/Kconfig
index 574da15fe618..f3774adab240 100644
--- a/drivers/tty/hvc/Kconfig
+++ b/drivers/tty/hvc/Kconfig
@@ -114,4 +114,15 @@ config HVCS
 	  which will also be compiled when this driver is built as a
 	  module.
 
+config HVC_SBI
+	bool "SBI console support"
+	depends on RISCV
+	select HVC_DRIVER
+	default y
+	help
+	  This enables support for console output via RISC-V SBI calls, which
+	  is normally used only during boot to output printk.
+
+	  If you don't know what to do here, say Y.
+
 endif # TTY
diff --git a/drivers/tty/hvc/Makefile b/drivers/tty/hvc/Makefile
index 6a2702be76d1..2d63bfe4a96b 100644
--- a/drivers/tty/hvc/Makefile
+++ b/drivers/tty/hvc/Makefile
@@ -11,3 +11,4 @@ obj-$(CONFIG_HVC_IUCV)		+= hvc_iucv.o
 obj-$(CONFIG_HVC_UDBG)		+= hvc_udbg.o
 obj-$(CONFIG_HVC_BFIN_JTAG)	+= hvc_bfin_jtag.o
 obj-$(CONFIG_HVCS)		+= hvcs.o
+obj-$(CONFIG_HVC_SBI)		+= hvc_sbi.o
diff --git a/drivers/tty/hvc/hvc_sbi.c b/drivers/tty/hvc/hvc_sbi.c
new file mode 100644
index 000000000000..e70293fb7b35
--- /dev/null
+++ b/drivers/tty/hvc/hvc_sbi.c
@@ -0,0 +1,102 @@
+/*
+ * RISC-V SBI interface to hvc_console.c
+ *  based on drivers/tty/hvc/hvc_udbg.c
+ *
+ * Copyright (C) 2008 David Gibson, IBM Corporation
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/console.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/irq.h>
+
+#include <asm/sbi.h>
+
+#include "hvc_console.h"
+
+static int hvc_sbi_tty_put(uint32_t vtermno, const char *buf, int count)
+{
+	int i;
+
+	for (i = 0; i < count; i++)
+		sbi_console_putchar(buf[i]);
+
+	return i;
+}
+
+static int hvc_sbi_tty_get(uint32_t vtermno, char *buf, int count)
+{
+	int i, c;
+
+	for (i = 0; i < count; i++) {
+		if ((c = sbi_console_getchar()) < 0)
+			break;
+		buf[i] = c;
+	}
+
+	return i;
+}
+
+static const struct hv_ops hvc_sbi_ops = {
+	.get_chars = hvc_sbi_tty_get,
+	.put_chars = hvc_sbi_tty_put,
+};
+
+static int __init hvc_sbi_init(void)
+{
+	return PTR_ERR_OR_ZERO(hvc_alloc(0, 0, &hvc_sbi_ops, 16));
+}
+device_initcall(hvc_sbi_init);
+
+static int __init hvc_sbi_console_init(void)
+{
+	hvc_instantiate(0, 0, &hvc_sbi_ops);
+	add_preferred_console("hvc", 0, NULL);
+
+	return 0;
+}
+console_initcall(hvc_sbi_console_init);
+
+#ifdef CONFIG_EARLY_PRINTK
+static void sbi_console_write(struct console *co, const char *buf,
+			      unsigned int n)
+{
+	int i;
+
+	for (i = 0; i < n; ++i) {
+		if (buf[i] == '\n')
+			sbi_console_putchar('\r');
+		sbi_console_putchar(buf[i]);
+	}
+}
+
+static struct console early_console_dev __initdata = {
+	.name	= "early",
+	.write	= sbi_console_write,
+	.flags	= CON_PRINTBUFFER | CON_BOOT,
+	.index	= -1
+};
+
+static int __init setup_early_printk(char *str)
+{
+	if (early_console == NULL) {
+		early_console = &early_console_dev;
+		register_console(early_console);
+	}
+	return 0;
+}
+early_param("earlyprintk", setup_early_printk);
+#endif
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-06 22:59   ` Palmer Dabbelt
@ 2017-06-06 23:00     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 23:00 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Palmer Dabbelt

This patch adds the include files for the RISC-V port.  These are mostly
based on the score port, but there are a lot of arm64-based files as
well.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
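One idiom worth noting before diving into asm/atomic.h and asm/atomic64.h
below: the "A" extension's AMO instructions both update memory and return the
previous value in a destination register.  Helpers that only need the side
effect discard the old value by naming the hard-wired zero register as the
destination, while the fetch_* variants capture it.  A simplified sketch of
the two shapes (function names here are illustrative only, not part of the
patch):

	/* Add, discard the old value: destination register is zero. */
	static inline void example_add(int i, atomic_t *v)
	{
		__asm__ __volatile__ ("amoadd.w zero, %1, %0"
				      : "+A" (v->counter) : "r" (i));
	}

	/* Add, return the old value: destination is a real register. */
	static inline int example_fetch_add(int i, atomic_t *v)
	{
		int old;

		__asm__ __volatile__ ("amoadd.w %1, %2, %0"
				      : "+A" (v->counter), "=r" (old) : "r" (i));
		return old;
	}

Operations with no single AMO equivalent, such as __atomic_add_unless() and
atomic64_dec_if_positive(), fall back to an LR/SC retry loop instead.
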
 arch/riscv/include/asm/Kbuild             |  61 +++++
 arch/riscv/include/asm/asm-offsets.h      |   1 +
 arch/riscv/include/asm/asm.h              |  76 ++++++
 arch/riscv/include/asm/atomic.h           | 349 ++++++++++++++++++++++++
 arch/riscv/include/asm/atomic64.h         | 355 ++++++++++++++++++++++++
 arch/riscv/include/asm/barrier.h          |  41 +++
 arch/riscv/include/asm/bitops.h           | 228 ++++++++++++++++
 arch/riscv/include/asm/bug.h              |  88 ++++++
 arch/riscv/include/asm/cache.h            |  22 ++
 arch/riscv/include/asm/cacheflush.h       |  39 +++
 arch/riscv/include/asm/cmpxchg.h          | 124 +++++++++
 arch/riscv/include/asm/compat.h           |  31 +++
 arch/riscv/include/asm/csr.h              | 125 +++++++++
 arch/riscv/include/asm/current.h          |  42 +++
 arch/riscv/include/asm/delay.h            |  28 ++
 arch/riscv/include/asm/device.h           |  27 ++
 arch/riscv/include/asm/dma-mapping.h      |  41 +++
 arch/riscv/include/asm/elf.h              |  85 ++++++
 arch/riscv/include/asm/io.h               | 152 +++++++++++
 arch/riscv/include/asm/irq.h              |  31 +++
 arch/riscv/include/asm/irqflags.h         |  63 +++++
 arch/riscv/include/asm/kprobes.h          |  22 ++
 arch/riscv/include/asm/linkage.h          |  20 ++
 arch/riscv/include/asm/mmu.h              |  26 ++
 arch/riscv/include/asm/mmu_context.h      |  69 +++++
 arch/riscv/include/asm/page.h             | 138 ++++++++++
 arch/riscv/include/asm/pci.h              |  50 ++++
 arch/riscv/include/asm/pgalloc.h          | 124 +++++++++
 arch/riscv/include/asm/pgtable-32.h       |  25 ++
 arch/riscv/include/asm/pgtable-64.h       |  84 ++++++
 arch/riscv/include/asm/pgtable-bits.h     |  48 ++++
 arch/riscv/include/asm/pgtable.h          | 427 +++++++++++++++++++++++++++++
 arch/riscv/include/asm/processor.h        | 102 +++++++
 arch/riscv/include/asm/ptrace.h           | 116 ++++++++
 arch/riscv/include/asm/sbi.h              | 100 +++++++
 arch/riscv/include/asm/smp.h              |  41 +++
 arch/riscv/include/asm/spinlock.h         | 155 +++++++++++
 arch/riscv/include/asm/spinlock_types.h   |  33 +++
 arch/riscv/include/asm/string.h           |  30 ++
 arch/riscv/include/asm/switch_to.h        |  69 +++++
 arch/riscv/include/asm/syscall.h          |  90 ++++++
 arch/riscv/include/asm/syscalls.h         |  25 ++
 arch/riscv/include/asm/thread_info.h      |  96 +++++++
 arch/riscv/include/asm/timex.h            |  43 +++
 arch/riscv/include/asm/tlb.h              |  24 ++
 arch/riscv/include/asm/tlbflush.h         |  64 +++++
 arch/riscv/include/asm/uaccess.h          | 440 ++++++++++++++++++++++++++++++
 arch/riscv/include/asm/unistd.h           |  16 ++
 arch/riscv/include/asm/vdso.h             |  32 +++
 arch/riscv/include/asm/word-at-a-time.h   |  55 ++++
 arch/riscv/include/uapi/asm/Kbuild        |   2 +
 arch/riscv/include/uapi/asm/auxvec.h      |  24 ++
 arch/riscv/include/uapi/asm/bitsperlong.h |  25 ++
 arch/riscv/include/uapi/asm/byteorder.h   |  23 ++
 arch/riscv/include/uapi/asm/elf.h         |  83 ++++++
 arch/riscv/include/uapi/asm/ptrace.h      |  68 +++++
 arch/riscv/include/uapi/asm/sigcontext.h  |  29 ++
 arch/riscv/include/uapi/asm/siginfo.h     |  24 ++
 arch/riscv/include/uapi/asm/unistd.h      |  22 ++
 59 files changed, 4873 insertions(+)
 create mode 100644 arch/riscv/include/asm/Kbuild
 create mode 100644 arch/riscv/include/asm/asm-offsets.h
 create mode 100644 arch/riscv/include/asm/asm.h
 create mode 100644 arch/riscv/include/asm/atomic.h
 create mode 100644 arch/riscv/include/asm/atomic64.h
 create mode 100644 arch/riscv/include/asm/barrier.h
 create mode 100644 arch/riscv/include/asm/bitops.h
 create mode 100644 arch/riscv/include/asm/bug.h
 create mode 100644 arch/riscv/include/asm/cache.h
 create mode 100644 arch/riscv/include/asm/cacheflush.h
 create mode 100644 arch/riscv/include/asm/cmpxchg.h
 create mode 100644 arch/riscv/include/asm/compat.h
 create mode 100644 arch/riscv/include/asm/csr.h
 create mode 100644 arch/riscv/include/asm/current.h
 create mode 100644 arch/riscv/include/asm/delay.h
 create mode 100644 arch/riscv/include/asm/device.h
 create mode 100644 arch/riscv/include/asm/dma-mapping.h
 create mode 100644 arch/riscv/include/asm/elf.h
 create mode 100644 arch/riscv/include/asm/io.h
 create mode 100644 arch/riscv/include/asm/irq.h
 create mode 100644 arch/riscv/include/asm/irqflags.h
 create mode 100644 arch/riscv/include/asm/kprobes.h
 create mode 100644 arch/riscv/include/asm/linkage.h
 create mode 100644 arch/riscv/include/asm/mmu.h
 create mode 100644 arch/riscv/include/asm/mmu_context.h
 create mode 100644 arch/riscv/include/asm/page.h
 create mode 100644 arch/riscv/include/asm/pci.h
 create mode 100644 arch/riscv/include/asm/pgalloc.h
 create mode 100644 arch/riscv/include/asm/pgtable-32.h
 create mode 100644 arch/riscv/include/asm/pgtable-64.h
 create mode 100644 arch/riscv/include/asm/pgtable-bits.h
 create mode 100644 arch/riscv/include/asm/pgtable.h
 create mode 100644 arch/riscv/include/asm/processor.h
 create mode 100644 arch/riscv/include/asm/ptrace.h
 create mode 100644 arch/riscv/include/asm/sbi.h
 create mode 100644 arch/riscv/include/asm/smp.h
 create mode 100644 arch/riscv/include/asm/spinlock.h
 create mode 100644 arch/riscv/include/asm/spinlock_types.h
 create mode 100644 arch/riscv/include/asm/string.h
 create mode 100644 arch/riscv/include/asm/switch_to.h
 create mode 100644 arch/riscv/include/asm/syscall.h
 create mode 100644 arch/riscv/include/asm/syscalls.h
 create mode 100644 arch/riscv/include/asm/thread_info.h
 create mode 100644 arch/riscv/include/asm/timex.h
 create mode 100644 arch/riscv/include/asm/tlb.h
 create mode 100644 arch/riscv/include/asm/tlbflush.h
 create mode 100644 arch/riscv/include/asm/uaccess.h
 create mode 100644 arch/riscv/include/asm/unistd.h
 create mode 100644 arch/riscv/include/asm/vdso.h
 create mode 100644 arch/riscv/include/asm/word-at-a-time.h
 create mode 100644 arch/riscv/include/uapi/asm/Kbuild
 create mode 100644 arch/riscv/include/uapi/asm/auxvec.h
 create mode 100644 arch/riscv/include/uapi/asm/bitsperlong.h
 create mode 100644 arch/riscv/include/uapi/asm/byteorder.h
 create mode 100644 arch/riscv/include/uapi/asm/elf.h
 create mode 100644 arch/riscv/include/uapi/asm/ptrace.h
 create mode 100644 arch/riscv/include/uapi/asm/sigcontext.h
 create mode 100644 arch/riscv/include/uapi/asm/siginfo.h
 create mode 100644 arch/riscv/include/uapi/asm/unistd.h

diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
new file mode 100644
index 000000000000..710397395981
--- /dev/null
+++ b/arch/riscv/include/asm/Kbuild
@@ -0,0 +1,61 @@
+generic-y += bugs.h
+generic-y += cacheflush.h
+generic-y += checksum.h
+generic-y += clkdev.h
+generic-y += cputime.h
+generic-y += div64.h
+generic-y += dma.h
+generic-y += dma-contiguous.h
+generic-y += emergency-restart.h
+generic-y += errno.h
+generic-y += exec.h
+generic-y += fb.h
+generic-y += fcntl.h
+generic-y += ftrace.h
+generic-y += futex.h
+generic-y += hardirq.h
+generic-y += hash.h
+generic-y += hw_irq.h
+generic-y += ioctl.h
+generic-y += ioctls.h
+generic-y += ipcbuf.h
+generic-y += irq_regs.h
+generic-y += irq_work.h
+generic-y += kdebug.h
+generic-y += kmap_types.h
+generic-y += kvm_para.h
+generic-y += local.h
+generic-y += mm-arch-hooks.h
+generic-y += mman.h
+generic-y += module.h
+generic-y += msgbuf.h
+generic-y += mutex.h
+generic-y += param.h
+generic-y += percpu.h
+generic-y += poll.h
+generic-y += posix_types.h
+generic-y += preempt.h
+generic-y += resource.h
+generic-y += scatterlist.h
+generic-y += sections.h
+generic-y += sembuf.h
+generic-y += setup.h
+generic-y += shmbuf.h
+generic-y += shmparam.h
+generic-y += signal.h
+generic-y += socket.h
+generic-y += sockios.h
+generic-y += stat.h
+generic-y += statfs.h
+generic-y += swab.h
+generic-y += termbits.h
+generic-y += termios.h
+generic-y += topology.h
+generic-y += trace_clock.h
+generic-y += types.h
+generic-y += ucontext.h
+generic-y += unaligned.h
+generic-y += user.h
+generic-y += vga.h
+generic-y += vmlinux.lds.h
+generic-y += xor.h
diff --git a/arch/riscv/include/asm/asm-offsets.h b/arch/riscv/include/asm/asm-offsets.h
new file mode 100644
index 000000000000..d370ee36a182
--- /dev/null
+++ b/arch/riscv/include/asm/asm-offsets.h
@@ -0,0 +1 @@
+#include <generated/asm-offsets.h>
diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
new file mode 100644
index 000000000000..6cbbb6a68d76
--- /dev/null
+++ b/arch/riscv/include/asm/asm.h
@@ -0,0 +1,76 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_ASM_H
+#define _ASM_RISCV_ASM_H
+
+#ifdef __ASSEMBLY__
+#define __ASM_STR(x)	x
+#else
+#define __ASM_STR(x)	#x
+#endif
+
+#if __riscv_xlen == 64
+#define __REG_SEL(a, b)	__ASM_STR(a)
+#elif __riscv_xlen == 32
+#define __REG_SEL(a, b)	__ASM_STR(b)
+#else
+#error "Unexpected __riscv_xlen"
+#endif
+
+#define REG_L		__REG_SEL(ld, lw)
+#define REG_S		__REG_SEL(sd, sw)
+#define SZREG		__REG_SEL(8, 4)
+#define LGREG		__REG_SEL(3, 2)
+
+#if __SIZEOF_POINTER__ == 8
+#ifdef __ASSEMBLY__
+#define RISCV_PTR		.dword
+#define RISCV_SZPTR		8
+#define RISCV_LGPTR		3
+#else
+#define RISCV_PTR		".dword"
+#define RISCV_SZPTR		"8"
+#define RISCV_LGPTR		"3"
+#endif
+#elif __SIZEOF_POINTER__ == 4
+#ifdef __ASSEMBLY__
+#define RISCV_PTR		.word
+#define RISCV_SZPTR		4
+#define RISCV_LGPTR		2
+#else
+#define RISCV_PTR		".word"
+#define RISCV_SZPTR		"4"
+#define RISCV_LGPTR		"2"
+#endif
+#else
+#error "Unexpected __SIZEOF_POINTER__"
+#endif
+
+#if (__SIZEOF_INT__ == 4)
+#define INT		__ASM_STR(.word)
+#define SZINT		__ASM_STR(4)
+#define LGINT		__ASM_STR(2)
+#else
+#error "Unexpected __SIZEOF_INT__"
+#endif
+
+#if (__SIZEOF_SHORT__ == 2)
+#define SHORT		__ASM_STR(.half)
+#define SZSHORT		__ASM_STR(2)
+#define LGSHORT		__ASM_STR(1)
+#else
+#error "Unexpected __SIZEOF_SHORT__"
+#endif
+
+#endif /* _ASM_RISCV_ASM_H */
diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
new file mode 100644
index 000000000000..17ab22036e1a
--- /dev/null
+++ b/arch/riscv/include/asm/atomic.h
@@ -0,0 +1,349 @@
+/*
+ * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+
+#ifndef _ASM_RISCV_ATOMIC_H
+#define _ASM_RISCV_ATOMIC_H
+
+#ifdef CONFIG_ISA_A
+
+#include <asm/cmpxchg.h>
+#include <asm/barrier.h>
+
+#define ATOMIC_INIT(i)	{ (i) }
+
+/**
+ * atomic_read - read atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically reads the value of @v.
+ */
+static inline int atomic_read(const atomic_t *v)
+{
+	return READ_ONCE(v->counter);
+}
+
+/**
+ * atomic_set - set atomic variable
+ * @v: pointer of type atomic_t
+ * @i: required value
+ *
+ * Atomically sets the value of @v to @i.
+ */
+static inline void atomic_set(atomic_t *v, int i)
+{
+	WRITE_ONCE(v->counter, i);
+}
+
+/**
+ * atomic_add - add integer to atomic variable
+ * @i: integer value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically adds @i to @v.
+ */
+static inline void atomic_add(int i, atomic_t *v)
+{
+	__asm__ __volatile__ (
+		"amoadd.w zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (i));
+}
+
+#define atomic_fetch_add atomic_fetch_add
+static inline int atomic_fetch_add(unsigned int mask, atomic_t *v)
+{
+	int out;
+
+	__asm__ __volatile__ (
+		"amoadd.w %1, %2, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/**
+ * atomic_sub - subtract integer from atomic variable
+ * @i: integer value to subtract
+ * @v: pointer of type atomic_t
+ *
+ * Atomically subtracts @i from @v.
+ */
+static inline void atomic_sub(int i, atomic_t *v)
+{
+	atomic_add(-i, v);
+}
+
+#define atomic_fetch_sub atomic_fetch_sub
+static inline int atomic_fetch_sub(unsigned int mask, atomic_t *v)
+{
+	int out;
+
+	__asm__ __volatile__ (
+		"amoadd.w %1, %2, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (-mask));
+	return out;
+}
+
+/**
+ * atomic_add_return - add integer to atomic variable
+ * @i: integer value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically adds @i to @v and returns the result
+ */
+static inline int atomic_add_return(int i, atomic_t *v)
+{
+	register int c;
+
+	__asm__ __volatile__ (
+		"amoadd.w %0, %2, %1"
+		: "=r" (c), "+A" (v->counter)
+		: "r" (i));
+	return (c + i);
+}
+
+/**
+ * atomic_sub_return - subtract integer from atomic variable
+ * @i: integer value to subtract
+ * @v: pointer of type atomic_t
+ *
+ * Atomically subtracts @i from @v and returns the result
+ */
+static inline int atomic_sub_return(int i, atomic_t *v)
+{
+	return atomic_add_return(-i, v);
+}
+
+/**
+ * atomic_inc - increment atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1.
+ */
+static inline void atomic_inc(atomic_t *v)
+{
+	atomic_add(1, v);
+}
+
+/**
+ * atomic_dec - decrement atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1.
+ */
+static inline void atomic_dec(atomic_t *v)
+{
+	atomic_add(-1, v);
+}
+
+static inline int atomic_inc_return(atomic_t *v)
+{
+	return atomic_add_return(1, v);
+}
+
+static inline int atomic_dec_return(atomic_t *v)
+{
+	return atomic_sub_return(1, v);
+}
+
+/**
+ * atomic_sub_and_test - subtract value from variable and test result
+ * @i: integer value to subtract
+ * @v: pointer of type atomic_t
+ *
+ * Atomically subtracts @i from @v and returns
+ * true if the result is zero, or false for all
+ * other cases.
+ */
+static inline int atomic_sub_and_test(int i, atomic_t *v)
+{
+	return (atomic_sub_return(i, v) == 0);
+}
+
+/**
+ * atomic_inc_and_test - increment and test
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1
+ * and returns true if the result is zero, or false for all
+ * other cases.
+ */
+static inline int atomic_inc_and_test(atomic_t *v)
+{
+	return (atomic_inc_return(v) == 0);
+}
+
+/**
+ * atomic_dec_and_test - decrement and test
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1 and
+ * returns true if the result is 0, or false for all other
+ * cases.
+ */
+static inline int atomic_dec_and_test(atomic_t *v)
+{
+	return (atomic_dec_return(v) == 0);
+}
+
+/**
+ * atomic_add_negative - add and test if negative
+ * @i: integer value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically adds @i to @v and returns true
+ * if the result is negative, or false when
+ * result is greater than or equal to zero.
+ */
+static inline int atomic_add_negative(int i, atomic_t *v)
+{
+	return (atomic_add_return(i, v) < 0);
+}
+
+
+static inline int atomic_xchg(atomic_t *v, int n)
+{
+	register int c;
+
+	__asm__ __volatile__ (
+		"amoswap.w %0, %2, %1"
+		: "=r" (c), "+A" (v->counter)
+		: "r" (n));
+	return c;
+}
+
+static inline int atomic_cmpxchg(atomic_t *v, int o, int n)
+{
+	return cmpxchg(&(v->counter), o, n);
+}
+
+/**
+ * __atomic_add_unless - add unless the number is already a given value
+ * @v: pointer of type atomic_t
+ * @a: the amount to add to v...
+ * @u: ...unless v is equal to u.
+ *
+ * Atomically adds @a to @v, so long as @v was not already @u.
+ * Returns the old value of @v.
+ */
+static inline int __atomic_add_unless(atomic_t *v, int a, int u)
+{
+	register int prev, rc;
+
+	__asm__ __volatile__ (
+	"0:\n"
+		"lr.w %0, %2\n"
+		"beq  %0, %4, 1f\n"
+		"add  %1, %0, %3\n"
+		"sc.w %1, %1, %2\n"
+		"bnez %1, 0b\n"
+	"1:"
+		: "=&r" (prev), "=&r" (rc), "+A" (v->counter)
+		: "r" (a), "r" (u));
+	return prev;
+}
+
+/**
+ * atomic_and - Atomically clear bits in atomic variable
+ * @mask: Mask of the bits to be retained
+ * @v: pointer of type atomic_t
+ *
+ * Atomically retains the bits set in @mask from @v
+ */
+static inline void atomic_and(unsigned int mask, atomic_t *v)
+{
+	__asm__ __volatile__ (
+		"amoand.w zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (mask));
+}
+
+#define atomic_fetch_and atomic_fetch_and
+static inline int atomic_fetch_and(unsigned int mask, atomic_t *v)
+{
+	int out;
+
+	__asm__ __volatile__ (
+		"amoand.w %1, %2, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/**
+ * atomic_or - Atomically set bits in atomic variable
+ * @mask: Mask of the bits to be set
+ * @v: pointer of type atomic_t
+ *
+ * Atomically sets the bits set in @mask in @v
+ */
+static inline void atomic_or(unsigned int mask, atomic_t *v)
+{
+	__asm__ __volatile__ (
+		"amoor.w zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (mask));
+}
+
+#define atomic_fetch_or atomic_fetch_or
+static inline int atomic_fetch_or(unsigned int mask, atomic_t *v)
+{
+	int out;
+
+	__asm__ __volatile__ (
+		"amoor.w %1, %2, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/**
+ * atomic_xor - Atomically flips bits in atomic variable
+ * @mask: Mask of the bits to be flipped
+ * @v: pointer of type atomic_t
+ *
+ * Atomically flips the bits set in @mask in @v
+ */
+static inline void atomic_xor(unsigned int mask, atomic_t *v)
+{
+	__asm__ __volatile__ (
+		"amoxor.w zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (mask));
+}
+
+#define atomic_fetch_xor atomic_fetch_xor
+static inline int atomic_fetch_xor(unsigned int mask, atomic_t *v)
+{
+	int out;
+
+	__asm__ __volatile__ (
+		"amoxor.w %1, %2, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/* Assume that atomic operations are already serializing */
+#define smp_mb__before_atomic_dec()	barrier()
+#define smp_mb__after_atomic_dec()	barrier()
+#define smp_mb__before_atomic_inc()	barrier()
+#define smp_mb__after_atomic_inc()	barrier()
+
+#else /* !CONFIG_ISA_A */
+
+#include <asm-generic/atomic.h>
+
+#endif /* CONFIG_ISA_A */
+
+#include <asm/atomic64.h>
+
+#endif /* _ASM_RISCV_ATOMIC_H */
diff --git a/arch/riscv/include/asm/atomic64.h b/arch/riscv/include/asm/atomic64.h
new file mode 100644
index 000000000000..5822c4755f06
--- /dev/null
+++ b/arch/riscv/include/asm/atomic64.h
@@ -0,0 +1,355 @@
+/*
+ * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+
+#ifndef _ASM_RISCV_ATOMIC64_H
+#define _ASM_RISCV_ATOMIC64_H
+
+#ifdef CONFIG_GENERIC_ATOMIC64
+#include <asm-generic/atomic64.h>
+#else /* !CONFIG_GENERIC_ATOMIC64 */
+
+#include <linux/types.h>
+
+#define ATOMIC64_INIT(i)	{ (i) }
+
+/**
+ * atomic64_read - read atomic64 variable
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically reads the value of @v.
+ */
+static inline s64 atomic64_read(const atomic64_t *v)
+{
+	return READ_ONCE(v->counter);
+}
+
+/**
+ * atomic64_set - set atomic64 variable
+ * @v: pointer to type atomic64_t
+ * @i: required value
+ *
+ * Atomically sets the value of @v to @i.
+ */
+static inline void atomic64_set(atomic64_t *v, s64 i)
+{
+	WRITE_ONCE(v->counter, i);
+}
+
+/**
+ * atomic64_add - add integer to atomic64 variable
+ * @a: integer value to add
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically adds @a to @v.
+ */
+static inline void atomic64_add(s64 a, atomic64_t *v)
+{
+	__asm__ __volatile__ (
+		"amoadd.d zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (a));
+}
+
+static inline long atomic64_fetch_add(unsigned long mask, atomic64_t *v)
+{
+	long out;
+
+	 __asm__ __volatile__ (
+		"amoadd.d %1, %2, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/**
+ * atomic64_sub - subtract integer from atomic64 variable
+ * @a: integer value to subtract
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically subtracts @a from @v.
+ */
+static inline void atomic64_sub(s64 a, atomic64_t *v)
+{
+	atomic64_add(-a, v);
+}
+
+static inline long atomic64_fetch_sub(unsigned long mask, atomic64_t *v)
+{
+	long out;
+
+	__asm__ __volatile__ (
+		"amoadd.d %1, %2, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (-mask));
+	return out;
+}
+
+/**
+ * atomic64_add_return - add and return
+ * @a: integer value to add
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically adds @a to @v and returns @a + @v
+ */
+static inline s64 atomic64_add_return(s64 a, atomic64_t *v)
+{
+	register s64 c;
+
+	__asm__ __volatile__ (
+		"amoadd.d %0, %2, %1"
+		: "=r" (c), "+A" (v->counter)
+		: "r" (a));
+	return (c + a);
+}
+
+static inline s64 atomic64_sub_return(s64 a, atomic64_t *v)
+{
+	return atomic64_add_return(-a, v);
+}
+
+/**
+ * atomic64_inc - increment atomic64 variable
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically increments @v by 1.
+ */
+static inline void atomic64_inc(atomic64_t *v)
+{
+	atomic64_add(1L, v);
+}
+
+/**
+ * atomic64_dec - decrement atomic64 variable
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically decrements @v by 1.
+ */
+static inline void atomic64_dec(atomic64_t *v)
+{
+	atomic64_add(-1L, v);
+}
+
+static inline s64 atomic64_inc_return(atomic64_t *v)
+{
+	return atomic64_add_return(1L, v);
+}
+
+static inline s64 atomic64_dec_return(atomic64_t *v)
+{
+	return atomic64_add_return(-1L, v);
+}
+
+/**
+ * atomic64_inc_and_test - increment and test
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically increments @v by 1
+ * and returns true if the result is zero, or false for all
+ * other cases.
+ */
+static inline int atomic64_inc_and_test(atomic64_t *v)
+{
+	return (atomic64_inc_return(v) == 0);
+}
+
+/**
+ * atomic64_dec_and_test - decrement and test
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically decrements @v by 1 and
+ * returns true if the result is 0, or false for all other
+ * cases.
+ */
+static inline int atomic64_dec_and_test(atomic64_t *v)
+{
+	return (atomic64_dec_return(v) == 0);
+}
+
+/**
+ * atomic64_sub_and_test - subtract value from variable and test result
+ * @a: integer value to subtract
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically subtracts @a from @v and returns
+ * true if the result is zero, or false for all
+ * other cases.
+ */
+static inline int atomic64_sub_and_test(s64 a, atomic64_t *v)
+{
+	return (atomic64_sub_return(a, v) == 0);
+}
+
+/**
+ * atomic64_add_negative - add and test if negative
+ * @a: integer value to add
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically adds @a to @v and returns true
+ * if the result is negative, or false when
+ * result is greater than or equal to zero.
+ */
+static inline int atomic64_add_negative(s64 a, atomic64_t *v)
+{
+	return (atomic64_add_return(a, v) < 0);
+}
+
+
+static inline s64 atomic64_xchg(atomic64_t *v, s64 n)
+{
+	register s64 c;
+
+	__asm__ __volatile__ (
+		"amoswap.d %0, %2, %1"
+		: "=r" (c), "+A" (v->counter)
+		: "r" (n));
+	return c;
+}
+
+static inline s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
+{
+	return cmpxchg(&(v->counter), o, n);
+}
+
+/*
+ * atomic64_dec_if_positive - decrement by 1 if old value positive
+ * @v: pointer of type atomic64_t
+ *
+ * The function returns the old value of *v minus 1, even if
+ * the atomic variable, v, was not decremented.
+ */
+static inline s64 atomic64_dec_if_positive(atomic64_t *v)
+{
+	register s64 prev, rc;
+
+	__asm__ __volatile__ (
+	"0:\n"
+		"lr.d %0, %2\n"
+		"add  %0, %0, -1\n"
+		"bltz %0, 1f\n"
+		"sc.d %1, %0, %2\n"
+		"bnez %1, 0b\n"
+	"1:\n"
+		: "=&r" (prev), "=r" (rc), "+A" (v->counter));
+	return prev;
+}
+
+/**
+ * atomic64_add_unless - add unless the number is a given value
+ * @v: pointer of type atomic64_t
+ * @a: the amount to add to v...
+ * @u: ...unless v is equal to u.
+ *
+ * Atomically adds @a to @v, so long as it was not @u.
+ * Returns true if the addition occurred and false otherwise.
+ */
+static inline int atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+	register s64 tmp;
+	register int rc = 1;
+
+	__asm__ __volatile__ (
+	"0:\n"
+		"lr.d %0, %2\n"
+		"beq  %0, %z4, 1f\n"
+		"add  %0, %0, %3\n"
+		"sc.d %1, %0, %2\n"
+		"bnez %1, 0b\n"
+	"1:"
+		: "=&r" (tmp), "=&r" (rc), "+A" (v->counter)
+		: "rI" (a), "rJ" (u));
+	return !rc;
+}
+
+static inline int atomic64_inc_not_zero(atomic64_t *v)
+{
+	return atomic64_add_unless(v, 1, 0);
+}
+
+/**
+ * atomic64_and - Atomically clear bits in atomic variable
+ * @mask: Mask of the bits to be retained
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically retains the bits set in @mask from @v
+ */
+static inline void atomic64_and(s64 mask, atomic64_t *v)
+{
+	__asm__ __volatile__ (
+		"amoand.d zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (mask));
+}
+
+static inline long atomic64_fetch_and(unsigned long mask, atomic64_t *v)
+{
+	long out;
+
+	__asm__ __volatile__ (
+		"amoand.d %1, %2, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/**
+ * atomic64_or - Atomically set bits in atomic variable
+ * @mask: Mask of the bits to be set
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically sets the bits set in @mask in @v
+ */
+static inline void atomic64_or(s64 mask, atomic64_t *v)
+{
+	__asm__ __volatile__ (
+		"amoor.d zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (mask));
+}
+
+static inline long atomic64_fetch_or(unsigned long mask, atomic64_t *v)
+{
+	long out;
+
+	__asm__ __volatile__ (
+		"amoor.d %1, %2, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/**
+ * atomic64_xor - Atomically flips bits in atomic variable
+ * @mask: Mask of the bits to be flipped
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically flips the bits set in @mask in @v
+ */
+static inline void atomic64_xor(s64 mask, atomic64_t *v)
+{
+	__asm__ __volatile__ (
+		"amoxor.d zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (mask));
+}
+
+static inline long atomic64_fetch_xor(unsigned long mask, atomic64_t *v)
+{
+	long out;
+
+	__asm__ __volatile__ (
+		"amoxor.d %1, %2, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+#endif /* CONFIG_GENERIC_ATOMIC64 */
+
+#endif /* _ASM_RISCV_ATOMIC64_H */
diff --git a/arch/riscv/include/asm/barrier.h b/arch/riscv/include/asm/barrier.h
new file mode 100644
index 000000000000..5eb0e501e618
--- /dev/null
+++ b/arch/riscv/include/asm/barrier.h
@@ -0,0 +1,41 @@
+/*
+ * Based on arch/arm/include/asm/barrier.h
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _ASM_RISCV_BARRIER_H
+#define _ASM_RISCV_BARRIER_H
+
+#ifndef __ASSEMBLY__
+
+#define nop()		__asm__ __volatile__ ("nop")
+
+/* These barriers need to enforce ordering on both devices and memory. */
+#define mb()		__asm__ __volatile__ ("fence iorw,iorw" : : : "memory")
+#define rmb()		__asm__ __volatile__ ("fence ir,ir"     : : : "memory")
+#define wmb()		__asm__ __volatile__ ("fence ow,ow"     : : : "memory")
+
+/* These barriers do not need to enforce ordering on devices, just memory. */
+#define smp_mb()	__asm__ __volatile__ ("fence rw,rw" : : : "memory")
+#define smp_rmb()	__asm__ __volatile__ ("fence r,r"   : : : "memory")
+#define smp_wmb()	__asm__ __volatile__ ("fence w,w"   : : : "memory")
+
+#include <asm-generic/barrier.h>
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_BARRIER_H */
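
The smp_wmb()/smp_rmb() pair maps onto the usual message-passing idiom; a
minimal sketch, not part of the patch (the names are hypothetical,
READ_ONCE() and WRITE_ONCE() come from <linux/compiler.h>):

static int payload;
static int ready;

static void producer(void)
{
	payload = 42;
	smp_wmb();		/* order the data before the flag */
	WRITE_ONCE(ready, 1);
}

static int consumer(void)
{
	if (!READ_ONCE(ready))
		return -1;
	smp_rmb();		/* order the flag before the data */
	return payload;
}
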
diff --git a/arch/riscv/include/asm/bitops.h b/arch/riscv/include/asm/bitops.h
new file mode 100644
index 000000000000..27e47858c6b1
--- /dev/null
+++ b/arch/riscv/include/asm/bitops.h
@@ -0,0 +1,228 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_BITOPS_H
+#define _ASM_RISCV_BITOPS_H
+
+#ifndef _LINUX_BITOPS_H
+#error "Only <linux/bitops.h> can be included directly"
+#endif /* _LINUX_BITOPS_H */
+
+#ifdef __KERNEL__
+
+#include <linux/compiler.h>
+#include <linux/irqflags.h>
+#include <asm/barrier.h>
+#include <asm/bitsperlong.h>
+
+#ifdef CONFIG_ISA_A
+
+#ifndef smp_mb__before_clear_bit
+#define smp_mb__before_clear_bit()  smp_mb()
+#define smp_mb__after_clear_bit()   smp_mb()
+#endif /* smp_mb__before_clear_bit */
+
+#include <asm-generic/bitops/__ffs.h>
+#include <asm-generic/bitops/ffz.h>
+#include <asm-generic/bitops/fls.h>
+#include <asm-generic/bitops/__fls.h>
+#include <asm-generic/bitops/fls64.h>
+#include <asm-generic/bitops/find.h>
+#include <asm-generic/bitops/sched.h>
+#include <asm-generic/bitops/ffs.h>
+
+#include <asm-generic/bitops/hweight.h>
+
+#if (BITS_PER_LONG == 64)
+#define __AMO(op)	"amo" #op ".d"
+#elif (BITS_PER_LONG == 32)
+#define __AMO(op)	"amo" #op ".w"
+#else
+#error "Unexpected BITS_PER_LONG"
+#endif
+
+#define __test_and_op_bit(op, mod, nr, addr)			\
+({								\
+	unsigned long __res, __mask;				\
+	__mask = BIT_MASK(nr);					\
+	__asm__ __volatile__ (					\
+		__AMO(op) " %0, %2, %1"				\
+		: "=r" (__res), "+A" (addr[BIT_WORD(nr)])	\
+		: "r" (mod(__mask)));				\
+	((__res & __mask) != 0);				\
+})
+
+#define __op_bit(op, mod, nr, addr)				\
+	__asm__ __volatile__ (					\
+		__AMO(op) " zero, %1, %0"			\
+		: "+A" (addr[BIT_WORD(nr)])			\
+		: "r" (mod(BIT_MASK(nr))))
+
+/* Bitmask modifiers */
+#define __NOP(x)	(x)
+#define __NOT(x)	(~(x))
+
+/**
+ * test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It may be reordered on architectures other than x86.
+ * It also implies a memory barrier.
+ */
+static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
+{
+	return __test_and_op_bit(or, __NOP, nr, addr);
+}
+
+/**
+ * test_and_clear_bit - Clear a bit and return its old value
+ * @nr: Bit to clear
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It can be reordered on architectures other than x86.
+ * It also implies a memory barrier.
+ */
+static inline int test_and_clear_bit(int nr, volatile unsigned long *addr)
+{
+	return __test_and_op_bit(and, __NOT, nr, addr);
+}
+
+/**
+ * test_and_change_bit - Change a bit and return its old value
+ * @nr: Bit to change
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It also implies a memory barrier.
+ */
+static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
+{
+	return __test_and_op_bit(xor, __NOP, nr, addr);
+}
+
+/**
+ * set_bit - Atomically set a bit in memory
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This function is atomic and may not be reordered.  See __set_bit()
+ * if you do not require the atomic guarantees.
+ *
+ * Note: there are no guarantees that this function will not be reordered
+ * on non-x86 architectures, so if you are writing portable code,
+ * make sure not to rely on its reordering guarantees.
+ *
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ */
+static inline void set_bit(int nr, volatile unsigned long *addr)
+{
+	__op_bit(or, __NOP, nr, addr);
+}
+
+/**
+ * clear_bit - Clears a bit in memory
+ * @nr: Bit to clear
+ * @addr: Address to start counting from
+ *
+ * clear_bit() is atomic and may not be reordered.  However, it does
+ * not contain a memory barrier, so if it is used for locking purposes,
+ * you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
+ * in order to ensure changes are visible on other processors.
+ */
+static inline void clear_bit(int nr, volatile unsigned long *addr)
+{
+	__op_bit(and, __NOT, nr, addr);
+}
+
+/**
+ * change_bit - Toggle a bit in memory
+ * @nr: Bit to change
+ * @addr: Address to start counting from
+ *
+ * change_bit() is atomic and may not be reordered. It may be
+ * reordered on architectures other than x86.
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ */
+static inline void change_bit(int nr, volatile unsigned long *addr)
+{
+	__op_bit(xor, __NOP, nr, addr);
+}
+
+/**
+ * test_and_set_bit_lock - Set a bit and return its old value, for lock
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is atomic and provides acquire barrier semantics.
+ * It can be used to implement bit locks.
+ */
+static inline int test_and_set_bit_lock(
+	unsigned long nr, volatile unsigned long *addr)
+{
+	return test_and_set_bit(nr, addr);
+}
+
+/**
+ * clear_bit_unlock - Clear a bit in memory, for unlock
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This operation is atomic and provides release barrier semantics.
+ */
+static inline void clear_bit_unlock(
+	unsigned long nr, volatile unsigned long *addr)
+{
+	clear_bit(nr, addr);
+}
+
+/**
+ * __clear_bit_unlock - Clear a bit in memory, for unlock
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This operation is like clear_bit_unlock(), but is not required to be atomic.
+ * It does provide release barrier semantics so it can be used to unlock
+ * a bit lock, however it would only be used if no other CPU can modify
+ * any bits in the memory until the lock is released (a good example is
+ * if the bit lock itself protects access to the other bits in the word).
+ */
+static inline void __clear_bit_unlock(
+	unsigned long nr, volatile unsigned long *addr)
+{
+	clear_bit(nr, addr);
+}
+
+#undef __test_and_op_bit
+#undef __op_bit
+#undef __NOP
+#undef __NOT
+#undef __AMO
+
+#include <asm-generic/bitops/non-atomic.h>
+#include <asm-generic/bitops/le.h>
+#include <asm-generic/bitops/ext2-atomic.h>
+
+#else /* !CONFIG_ISA_A */
+
+#include <asm-generic/bitops.h>
+
+#endif /* CONFIG_ISA_A */
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_BITOPS_H */
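
test_and_set_bit_lock() and clear_bit_unlock() are enough to build a simple
bit lock; a sketch only, not part of the patch (the names are made up,
cpu_relax() is the generic busy-wait hint):

#define MY_LOCK_BIT	0
static unsigned long my_lock_word;

static void my_lock(void)
{
	/* Spin until the previous value of the bit was 0. */
	while (test_and_set_bit_lock(MY_LOCK_BIT, &my_lock_word))
		cpu_relax();
}

static void my_unlock(void)
{
	clear_bit_unlock(MY_LOCK_BIT, &my_lock_word);
}
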
diff --git a/arch/riscv/include/asm/bug.h b/arch/riscv/include/asm/bug.h
new file mode 100644
index 000000000000..e2f690c20729
--- /dev/null
+++ b/arch/riscv/include/asm/bug.h
@@ -0,0 +1,88 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_BUG_H
+#define _ASM_RISCV_BUG_H
+
+#include <linux/compiler.h>
+#include <linux/const.h>
+#include <linux/types.h>
+
+#include <asm/asm.h>
+
+#ifdef CONFIG_GENERIC_BUG
+#define __BUG_INSN	_AC(0x00100073, UL) /* sbreak */
+
+#ifndef __ASSEMBLY__
+typedef u32 bug_insn_t;
+
+#ifdef CONFIG_GENERIC_BUG_RELATIVE_POINTERS
+#define __BUG_ENTRY_ADDR	INT " 1b - 2b"
+#define __BUG_ENTRY_FILE	INT " %0 - 2b"
+#else
+#define __BUG_ENTRY_ADDR	RISCV_PTR " 1b"
+#define __BUG_ENTRY_FILE	RISCV_PTR " %0"
+#endif
+
+#ifdef CONFIG_DEBUG_BUGVERBOSE
+#define __BUG_ENTRY			\
+	__BUG_ENTRY_ADDR "\n\t"		\
+	__BUG_ENTRY_FILE "\n\t"		\
+	SHORT " %1"
+#else
+#define __BUG_ENTRY			\
+	__BUG_ENTRY_ADDR
+#endif
+
+#define BUG()							\
+do {								\
+	__asm__ __volatile__ (					\
+		"1:\n\t"					\
+			"sbreak\n"				\
+			".pushsection __bug_table,\"a\"\n\t"	\
+		"2:\n\t"					\
+			__BUG_ENTRY "\n\t"			\
+			".org 2b + %2\n\t"			\
+			".popsection"				\
+		:						\
+		: "i" (__FILE__), "i" (__LINE__),		\
+		  "i" (sizeof(struct bug_entry)));		\
+	unreachable();						\
+} while (0)
+#endif /* !__ASSEMBLY__ */
+#else /* CONFIG_GENERIC_BUG */
+#ifndef __ASSEMBLY__
+#define BUG()							\
+do {								\
+	__asm__ __volatile__ ("sbreak\n");			\
+	unreachable();						\
+} while (0)
+#endif /* !__ASSEMBLY__ */
+#endif /* CONFIG_GENERIC_BUG */
+
+#define HAVE_ARCH_BUG
+
+#include <asm-generic/bug.h>
+
+#ifndef __ASSEMBLY__
+
+struct pt_regs;
+struct task_struct;
+
+extern void die(struct pt_regs *regs, const char *str);
+extern void do_trap(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, struct task_struct *tsk);
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_BUG_H */
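
BUG() is meant for states that must never be reached: it traps via sbreak
and the handler locates the file/line through the __bug_table entry emitted
above.  A sketch only (the check and the function name are hypothetical):

static void check_index(unsigned int idx, unsigned int max)
{
	if (idx >= max)
		BUG();	/* unreachable by construction */
}
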
diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h
new file mode 100644
index 000000000000..e8f0d1110d74
--- /dev/null
+++ b/arch/riscv/include/asm/cache.h
@@ -0,0 +1,22 @@
+/*
+ * Copyright (C) 2017 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CACHE_H
+#define _ASM_RISCV_CACHE_H
+
+#define L1_CACHE_SHIFT		6
+
+#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)
+
+#endif /* _ASM_RISCV_CACHE_H */
diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
new file mode 100644
index 000000000000..0595585013b0
--- /dev/null
+++ b/arch/riscv/include/asm/cacheflush.h
@@ -0,0 +1,39 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CACHEFLUSH_H
+#define _ASM_RISCV_CACHEFLUSH_H
+
+#include <asm-generic/cacheflush.h>
+
+#undef flush_icache_range
+#undef flush_icache_user_range
+
+static inline void local_flush_icache_all(void)
+{
+	asm volatile ("fence.i" ::: "memory");
+}
+
+#ifndef CONFIG_SMP
+
+#define flush_icache_range(start, end) local_flush_icache_all()
+#define flush_icache_user_range(vma, pg, addr, len) local_flush_icache_all()
+
+#else /* CONFIG_SMP */
+
+#define flush_icache_range(start, end) sbi_remote_fence_i(0)
+#define flush_icache_user_range(vma, pg, addr, len) sbi_remote_fence_i(0)
+
+#endif /* CONFIG_SMP */
+
+#endif /* _ASM_RISCV_CACHEFLUSH_H */
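
Anything that writes instructions (modules, trampolines) has to call
flush_icache_range() before they are executed, since fence.i is the only
way to synchronize the instruction stream.  A sketch only (names are made
up; memcpy() is from <linux/string.h>):

static void install_insns(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);
	flush_icache_range((unsigned long)dst, (unsigned long)dst + len);
}
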
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
new file mode 100644
index 000000000000..c7ee1321ac18
--- /dev/null
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -0,0 +1,124 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CMPXCHG_H
+#define _ASM_RISCV_CMPXCHG_H
+
+#include <linux/bug.h>
+
+#ifdef CONFIG_ISA_A
+
+#include <asm/barrier.h>
+
+#define __xchg(new, ptr, size)					\
+({								\
+	__typeof__(ptr) __ptr = (ptr);				\
+	__typeof__(new) __new = (new);				\
+	__typeof__(*(ptr)) __ret;				\
+	switch (size) {						\
+	case 4:							\
+		__asm__ __volatile__ (				\
+			"amoswap.w %0, %2, %1"			\
+			: "=r" (__ret), "+A" (*__ptr)		\
+			: "r" (__new));				\
+		break;						\
+	case 8:							\
+		__asm__ __volatile__ (				\
+			"amoswap.d %0, %2, %1"			\
+			: "=r" (__ret), "+A" (*__ptr)		\
+			: "r" (__new));				\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+	}							\
+	__ret;							\
+})
+
+#define xchg(ptr, x)    (__xchg((x), (ptr), sizeof(*(ptr))))
+
+
+/*
+ * Atomic compare and exchange.  Compare OLD with MEM, if identical,
+ * store NEW in MEM.  Return the initial value in MEM.  Success is
+ * indicated by comparing RETURN with OLD.
+ */
+#define __cmpxchg(ptr, old, new, size)					\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(old) __old = (old);					\
+	__typeof__(new) __new = (new);					\
+	__typeof__(*(ptr)) __ret;					\
+	register unsigned int __rc;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+		"0:"							\
+			"lr.w %0, %2\n"					\
+			"bne  %0, %z3, 1f\n"				\
+			"sc.w %1, %z4, %2\n"				\
+			"bnez %1, 0b\n"					\
+		"1:"							\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new));			\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+		"0:"							\
+			"lr.d %0, %2\n"					\
+			"bne  %0, %z3, 1f\n"				\
+			"sc.d %1, %z4, %2\n"				\
+			"bnez %1, 0b\n"					\
+		"1:"							\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new));			\
+		break;							\
+	default:							\
+		BUILD_BUG();						\
+	}								\
+	__ret;								\
+})
+
+#define __cmpxchg_mb(ptr, old, new, size)			\
+({								\
+	__typeof__(*(ptr)) __ret;				\
+	smp_mb();						\
+	__ret = __cmpxchg((ptr), (old), (new), (size));		\
+	smp_mb();						\
+	__ret;							\
+})
+
+#define cmpxchg(ptr, o, n) \
+	(__cmpxchg_mb((ptr), (o), (n), sizeof(*(ptr))))
+
+#define cmpxchg_local(ptr, o, n) \
+	(__cmpxchg((ptr), (o), (n), sizeof(*(ptr))))
+
+#define cmpxchg64(ptr, o, n)			\
+({						\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);	\
+	cmpxchg((ptr), (o), (n));		\
+})
+
+#define cmpxchg64_local(ptr, o, n)		\
+({						\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);	\
+	cmpxchg_local((ptr), (o), (n));		\
+})
+
+#else /* !CONFIG_ISA_A */
+
+#include <asm-generic/cmpxchg.h>
+
+#endif /* CONFIG_ISA_A */
+
+#endif /* _ASM_RISCV_CMPXCHG_H */
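
cmpxchg() is used in the usual read/compute/compare-and-swap retry loop; a
sketch only, not part of the patch (the function name is made up):

static void record_max(unsigned long *max, unsigned long val)
{
	unsigned long old = READ_ONCE(*max);

	while (old < val) {
		unsigned long prev = cmpxchg(max, old, val);

		if (prev == old)
			break;		/* our store won the race */
		old = prev;		/* someone else updated it; retry */
	}
}
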
diff --git a/arch/riscv/include/asm/compat.h b/arch/riscv/include/asm/compat.h
new file mode 100644
index 000000000000..b42bf54f42e4
--- /dev/null
+++ b/arch/riscv/include/asm/compat.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_COMPAT_H
+#define __ASM_COMPAT_H
+#ifdef __KERNEL__
+#ifdef CONFIG_COMPAT
+
+#if defined(CONFIG_64BIT)
+#define COMPAT_UTS_MACHINE "riscv64\0\0"
+#elif defined(CONFIG_32BIT)
+#define COMPAT_UTS_MACHINE "riscv32\0\0"
+#else
+#error "Unknown RISC-V base ISA"
+#endif
+
+#endif /*CONFIG_COMPAT*/
+#endif /*__KERNEL__*/
+#endif /*__ASM_COMPAT_H*/
diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
new file mode 100644
index 000000000000..387d0dbf0073
--- /dev/null
+++ b/arch/riscv/include/asm/csr.h
@@ -0,0 +1,125 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CSR_H
+#define _ASM_RISCV_CSR_H
+
+#include <linux/const.h>
+
+/* Status register flags */
+#define SR_IE   _AC(0x00000002, UL) /* Interrupt Enable */
+#define SR_PIE  _AC(0x00000020, UL) /* Previous IE */
+#define SR_PS   _AC(0x00000100, UL) /* Previously Supervisor */
+#define SR_SUM  _AC(0x00040000, UL) /* Supervisor may access User Memory */
+
+#define SR_FS           _AC(0x00006000, UL) /* Floating-point Status */
+#define SR_FS_OFF       _AC(0x00000000, UL)
+#define SR_FS_INITIAL   _AC(0x00002000, UL)
+#define SR_FS_CLEAN     _AC(0x00004000, UL)
+#define SR_FS_DIRTY     _AC(0x00006000, UL)
+
+#define SR_XS           _AC(0x00018000, UL) /* Extension Status */
+#define SR_XS_OFF       _AC(0x00000000, UL)
+#define SR_XS_INITIAL   _AC(0x00008000, UL)
+#define SR_XS_CLEAN     _AC(0x00010000, UL)
+#define SR_XS_DIRTY     _AC(0x00018000, UL)
+
+#ifndef CONFIG_64BIT
+#define SR_SD   _AC(0x80000000, UL) /* FS/XS dirty */
+#else
+#define SR_SD   _AC(0x8000000000000000, UL) /* FS/XS dirty */
+#endif
+
+/* SPTBR flags */
+#if __riscv_xlen == 32
+#define SPTBR_PPN     _AC(0x003FFFFF, UL)
+#define SPTBR_MODE_32 _AC(0x80000000, UL)
+#define SPTBR_MODE    SPTBR_MODE_32
+#else
+#define SPTBR_PPN     _AC(0x00000FFFFFFFFFFF, UL)
+#define SPTBR_MODE_39 _AC(0x8000000000000000, UL)
+#define SPTBR_MODE    SPTBR_MODE_39
+#endif
+
+/* Interrupt Enable and Interrupt Pending flags */
+#define SIE_SSIE _AC(0x00000002, UL) /* Software Interrupt Enable */
+#define SIE_STIE _AC(0x00000020, UL) /* Timer Interrupt Enable */
+
+#define EXC_INST_MISALIGNED     0
+#define EXC_INST_ACCESS         1
+#define EXC_BREAKPOINT          3
+#define EXC_LOAD_ACCESS         5
+#define EXC_STORE_ACCESS        7
+#define EXC_SYSCALL             8
+#define EXC_INST_PAGE_FAULT     12
+#define EXC_LOAD_PAGE_FAULT     13
+#define EXC_STORE_PAGE_FAULT    15
+
+#ifndef __ASSEMBLY__
+
+#define csr_swap(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrw %0, " #csr ", %1"		\
+			      : "=r" (__v) : "rK" (__v));	\
+	__v;							\
+})
+
+#define csr_read(csr)						\
+({								\
+	register unsigned long __v;				\
+	__asm__ __volatile__ ("csrr %0, " #csr			\
+			      : "=r" (__v));			\
+	__v;							\
+})
+
+#define csr_write(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrw " #csr ", %0"		\
+			      : : "rK" (__v));			\
+})
+
+#define csr_read_set(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrs %0, " #csr ", %1"		\
+			      : "=r" (__v) : "rK" (__v));	\
+	__v;							\
+})
+
+#define csr_set(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrs " #csr ", %0"		\
+			      : : "rK" (__v));			\
+})
+
+#define csr_read_clear(csr, val)				\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrc %0, " #csr ", %1"		\
+			      : "=r" (__v) : "rK" (__v));	\
+	__v;							\
+})
+
+#define csr_clear(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrc " #csr ", %0"		\
+			      : : "rK" (__v));			\
+})
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_CSR_H */
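
The csr_*() macros take the CSR name as a bare token; for instance, a trap
handler could snapshot its state like this (a sketch only, the function
name is made up; pr_info() is from <linux/printk.h>):

static void report_trap(void)
{
	unsigned long cause = csr_read(scause);	/* why we trapped */
	unsigned long epc = csr_read(sepc);	/* faulting PC */

	pr_info("trap: scause=%lx sepc=%lx\n", cause, epc);
}
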
diff --git a/arch/riscv/include/asm/current.h b/arch/riscv/include/asm/current.h
new file mode 100644
index 000000000000..44612c32fc4a
--- /dev/null
+++ b/arch/riscv/include/asm/current.h
@@ -0,0 +1,42 @@
+/*
+ * Based on arm/arm64/include/asm/current.h
+ *
+ * Copyright (C) 2016 ARM
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef __ASM_CURRENT_H
+#define __ASM_CURRENT_H
+
+#include <linux/compiler.h>
+
+#ifndef __ASSEMBLY__
+
+struct task_struct;
+
+/* This only works because "struct thread_info" is at offset 0 from "struct
+ * task_struct".  This constraint seems to be necessary on other architectures
+ * as well, but __switch_to enforces it.  We can't check TASK_TI here because
+ * <asm/asm-offsets.h> includes this, and I can't get the definition of "struct
+ * task_struct" here due to some header ordering problems.  */
+static __always_inline struct task_struct *get_current(void)
+{
+	register struct task_struct *tp __asm__("tp");
+	return tp;
+}
+
+#define current get_current()
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __ASM_CURRENT_H */
diff --git a/arch/riscv/include/asm/delay.h b/arch/riscv/include/asm/delay.h
new file mode 100644
index 000000000000..cbb0c9eb96cb
--- /dev/null
+++ b/arch/riscv/include/asm/delay.h
@@ -0,0 +1,28 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2016 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_DELAY_H
+#define _ASM_RISCV_DELAY_H
+
+extern unsigned long riscv_timebase;
+
+#define udelay udelay
+extern void udelay(unsigned long usecs);
+
+#define ndelay ndelay
+extern void ndelay(unsigned long nsecs);
+
+extern void __delay(unsigned long cycles);
+
+#endif /* _ASM_RISCV_DELAY_H */
diff --git a/arch/riscv/include/asm/device.h b/arch/riscv/include/asm/device.h
new file mode 100644
index 000000000000..28975e528d2f
--- /dev/null
+++ b/arch/riscv/include/asm/device.h
@@ -0,0 +1,27 @@
+/*
+ * Copyright (C) 2016 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef _ASM_RISCV_DEVICE_H
+#define _ASM_RISCV_DEVICE_H
+
+#include <linux/sysfs.h>
+
+struct dev_archdata {
+	struct dma_map_ops *dma_ops;
+};
+
+struct pdev_archdata {
+};
+
+#endif /* _ASM_RISCV_DEVICE_H */
diff --git a/arch/riscv/include/asm/dma-mapping.h b/arch/riscv/include/asm/dma-mapping.h
new file mode 100644
index 000000000000..9485a58e839e
--- /dev/null
+++ b/arch/riscv/include/asm/dma-mapping.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright (C) 2003-2004 Hewlett-Packard Co
+ *	David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2016 SiFive, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_RISCV_DMA_MAPPING_H
+#define __ASM_RISCV_DMA_MAPPING_H
+
+#ifdef __KERNEL__
+
+/* Use ops->dma_mapping_error (if it exists) or assume success */
+// #undef DMA_ERROR_CODE
+
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
+{
+	return &dma_noop_ops;
+}
+
+static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
+{
+	if (!dev->dma_mask)
+		return false;
+
+	return addr + size - 1 <= *dev->dma_mask;
+}
+
+#endif	/* __KERNEL__ */
+#endif	/* __ASM_RISCV_DMA_MAPPING_H */
diff --git a/arch/riscv/include/asm/elf.h b/arch/riscv/include/asm/elf.h
new file mode 100644
index 000000000000..add1245690f6
--- /dev/null
+++ b/arch/riscv/include/asm/elf.h
@@ -0,0 +1,85 @@
+/*
+ * Copyright (C) 2003 Matjaz Breskvar <phoenix@bsemi.com>
+ * Copyright (C) 2010-2011 Jonas Bonn <jonas@southpole.se>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef _ASM_RISCV_ELF_H
+#define _ASM_RISCV_ELF_H
+
+#include <uapi/asm/elf.h>
+#include <asm/auxvec.h>
+#include <asm/byteorder.h>
+
+/* TODO: Move definition into include/uapi/linux/elf-em.h */
+#define EM_RISCV	0xF3
+
+/*
+ * These are used to set parameters in the core dumps.
+ */
+#define ELF_ARCH	EM_RISCV
+
+#ifdef CONFIG_64BIT
+#define ELF_CLASS	ELFCLASS64
+#else
+#define ELF_CLASS	ELFCLASS32
+#endif
+
+#if defined(__LITTLE_ENDIAN)
+#define ELF_DATA	ELFDATA2LSB
+#elif defined(__BIG_ENDIAN)
+#define ELF_DATA	ELFDATA2MSB
+#else
+#error "Unknown endianness"
+#endif
+
+/*
+ * This is used to ensure we don't load something for the wrong architecture.
+ */
+#define elf_check_arch(x) ((x)->e_machine == EM_RISCV)
+
+#define CORE_DUMP_USE_REGSET
+#define ELF_EXEC_PAGESIZE	(PAGE_SIZE)
+
+/*
+ * This is the location that an ET_DYN program is loaded if exec'ed.  Typical
+ * use of this is to invoke "./ld.so someprog" to test out a new version of
+ * the loader.  We need to make sure that it is out of the way of the program
+ * that it will "exec", and that there is sufficient room for the brk.
+ */
+#define ELF_ET_DYN_BASE		((TASK_SIZE / 3) * 2)
+
+/*
+ * This yields a mask that user programs can use to figure out what
+ * instruction set this CPU supports.  This could be done in user space,
+ * but it's not easy, and we've already done it here.
+ */
+#define ELF_HWCAP	(0)
+
+/*
+ * This yields a string that ld.so will use to load implementation
+ * specific libraries for optimization.  This is more specific in
+ * intent than poking at uname or /proc/cpuinfo.
+ */
+#define ELF_PLATFORM	(NULL)
+
+#define ARCH_DLINFO						\
+do {								\
+	NEW_AUX_ENT(AT_SYSINFO_EHDR,				\
+		(elf_addr_t)current->mm->context.vdso);		\
+} while (0)
+
+
+#ifdef __KERNEL__
+#define ARCH_HAS_SETUP_ADDITIONAL_PAGES
+struct linux_binprm;
+extern int arch_setup_additional_pages(struct linux_binprm *bprm,
+	int uses_interp);
+#endif
+
+#endif /* _ASM_RISCV_ELF_H */
diff --git a/arch/riscv/include/asm/io.h b/arch/riscv/include/asm/io.h
new file mode 100644
index 000000000000..a3ee80082b88
--- /dev/null
+++ b/arch/riscv/include/asm/io.h
@@ -0,0 +1,152 @@
+/*
+ * {read,write}{b,w,l,q} based on arch/arm64/include/asm/io.h
+ *   which was based on arch/arm/include/io.h
+ *
+ * Copyright (C) 1996-2000 Russell King
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_IO_H
+#define _ASM_RISCV_IO_H
+
+#ifdef __KERNEL__
+
+#ifdef CONFIG_MMU
+
+extern void __iomem *ioremap(phys_addr_t offset, unsigned long size);
+
+#define ioremap_nocache(addr, size) ioremap((addr), (size))
+#define ioremap_wc(addr, size) ioremap((addr), (size))
+#define ioremap_wt(addr, size) ioremap((addr), (size))
+
+extern void iounmap(void __iomem *addr);
+
+#endif /* CONFIG_MMU */
+
+/*
+ * Generic IO read/write.  These perform native-endian accesses.
+ */
+#define __raw_writeb __raw_writeb
+static inline void __raw_writeb(u8 val, volatile void __iomem *addr)
+{
+	asm volatile("sb %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+
+#define __raw_writew __raw_writew
+static inline void __raw_writew(u16 val, volatile void __iomem *addr)
+{
+	asm volatile("sh %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+
+#define __raw_writel __raw_writel
+static inline void __raw_writel(u32 val, volatile void __iomem *addr)
+{
+	asm volatile("sw %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+
+#ifdef __riscv64
+#define __raw_writeq __raw_writeq
+static inline void __raw_writeq(u64 val, volatile void __iomem *addr)
+{
+	asm volatile("sd %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+#endif
+
+#define __raw_readb __raw_readb
+static inline u8 __raw_readb(const volatile void __iomem *addr)
+{
+	u8 val;
+
+	asm volatile("lb %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+
+#define __raw_readw __raw_readw
+static inline u16 __raw_readw(const volatile void __iomem *addr)
+{
+	u16 val;
+
+	asm volatile("lh %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+
+#define __raw_readl __raw_readl
+static inline u32 __raw_readl(const volatile void __iomem *addr)
+{
+	u32 val;
+
+	asm volatile("lw %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+
+#ifdef __riscv64
+#define __raw_readq __raw_readq
+static inline u64 __raw_readq(const volatile void __iomem *addr)
+{
+	u64 val;
+
+	asm volatile("ld %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+#endif
+
+/* IO barriers.  These only fence on the IO bits because they're only required
+ * to order device access.  We're defining mmiowb because our AMO instructions
+ * (which are used to implement locks) don't specify ordering.  From Chapter 7
+ * of v2.2 of the user ISA:
+ * "The bits order accesses to one of the two address domains, memory or I/O,
+ * depending on which address domain the atomic instruction is accessing. No
+ * ordering constraint is implied to accesses to the other domain, and a FENCE
+ * instruction should be used to order across both domains."
+ */
+
+#define __iormb()	__asm__ __volatile__ ("fence i,io" : : : "memory")
+#define __iowmb()	__asm__ __volatile__ ("fence io,o" : : : "memory")
+
+#define mmiowb()	__asm__ __volatile__ ("fence io,io" : : : "memory")
+
+/*
+ * Relaxed I/O memory access primitives. These follow the Device memory
+ * ordering rules but do not guarantee any ordering relative to Normal memory
+ * accesses.
+ */
+#define readb_relaxed(c)	({ u8  __r = __raw_readb(c); __r; })
+#define readw_relaxed(c)	({ u16 __r = le16_to_cpu((__force __le16)__raw_readw(c)); __r; })
+#define readl_relaxed(c)	({ u32 __r = le32_to_cpu((__force __le32)__raw_readl(c)); __r; })
+#define readq_relaxed(c)	({ u64 __r = le64_to_cpu((__force __le64)__raw_readq(c)); __r; })
+
+#define writeb_relaxed(v,c)	((void)__raw_writeb((v),(c)))
+#define writew_relaxed(v,c)	((void)__raw_writew((__force u16)cpu_to_le16(v),(c)))
+#define writel_relaxed(v,c)	((void)__raw_writel((__force u32)cpu_to_le32(v),(c)))
+#define writeq_relaxed(v,c)	((void)__raw_writeq((__force u64)cpu_to_le64(v),(c)))
+
+/*
+ * I/O memory access primitives. Reads are ordered relative to any
+ * following Normal memory access. Writes are ordered relative to any prior
+ * Normal memory access.
+ */
+#define readb(c)		({ u8  __v = readb_relaxed(c); __iormb(); __v; })
+#define readw(c)		({ u16 __v = readw_relaxed(c); __iormb(); __v; })
+#define readl(c)		({ u32 __v = readl_relaxed(c); __iormb(); __v; })
+#define readq(c)		({ u64 __v = readq_relaxed(c); __iormb(); __v; })
+
+#define writeb(v,c)		({ __iowmb(); writeb_relaxed((v),(c)); })
+#define writew(v,c)		({ __iowmb(); writew_relaxed((v),(c)); })
+#define writel(v,c)		({ __iowmb(); writel_relaxed((v),(c)); })
+#define writeq(v,c)		({ __iowmb(); writeq_relaxed((v),(c)); })
+
+#include <asm-generic/io.h>
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_IO_H */
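
readl()/writel() pair the raw access with the fence-based I/O barriers
above; a typical poll loop would look like this (a sketch only; the
register offset, bit and names are hypothetical):

#define DEV_STATUS	0x04
#define DEV_READY	BIT(0)

static int wait_until_ready(void __iomem *base)
{
	int timeout = 1000;

	while (!(readl(base + DEV_STATUS) & DEV_READY)) {
		if (--timeout == 0)
			return -ETIMEDOUT;
		udelay(10);
	}
	return 0;
}
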
diff --git a/arch/riscv/include/asm/irq.h b/arch/riscv/include/asm/irq.h
new file mode 100644
index 000000000000..e64c61e89f76
--- /dev/null
+++ b/arch/riscv/include/asm/irq.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_IRQ_H
+#define _ASM_RISCV_IRQ_H
+
+#define NR_IRQS         0
+
+#define INTERRUPT_CAUSE_SOFTWARE    1
+#define INTERRUPT_CAUSE_TIMER       5
+#define INTERRUPT_CAUSE_EXTERNAL    9
+
+void riscv_timer_interrupt(void);
+
+#include <asm-generic/irq.h>
+
+/* The value of csr sie before init_traps runs (core is up) */
+DECLARE_PER_CPU(atomic_long_t, riscv_early_sie);
+
+#endif /* _ASM_RISCV_IRQ_H */
diff --git a/arch/riscv/include/asm/irqflags.h b/arch/riscv/include/asm/irqflags.h
new file mode 100644
index 000000000000..6fdc860d7f84
--- /dev/null
+++ b/arch/riscv/include/asm/irqflags.h
@@ -0,0 +1,63 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef _ASM_RISCV_IRQFLAGS_H
+#define _ASM_RISCV_IRQFLAGS_H
+
+#include <asm/processor.h>
+#include <asm/csr.h>
+
+/* read interrupt enabled status */
+static inline unsigned long arch_local_save_flags(void)
+{
+	return csr_read(sstatus);
+}
+
+/* unconditionally enable interrupts */
+static inline void arch_local_irq_enable(void)
+{
+	csr_set(sstatus, SR_IE);
+}
+
+/* unconditionally disable interrupts */
+static inline void arch_local_irq_disable(void)
+{
+	csr_clear(sstatus, SR_IE);
+}
+
+/* get status and disable interrupts */
+static inline unsigned long arch_local_irq_save(void)
+{
+	return csr_read_clear(sstatus, SR_IE);
+}
+
+/* test flags */
+static inline int arch_irqs_disabled_flags(unsigned long flags)
+{
+	return !(flags & SR_IE);
+}
+
+/* test hardware interrupt enable bit */
+static inline int arch_irqs_disabled(void)
+{
+	return arch_irqs_disabled_flags(arch_local_save_flags());
+}
+
+/* set interrupt enabled status */
+static inline void arch_local_irq_restore(unsigned long flags)
+{
+	csr_set(sstatus, flags & SR_IE);
+}
+
+#endif /* _ASM_RISCV_IRQFLAGS_H */
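
These are the primitives behind local_irq_save()/local_irq_restore(); the
standard pattern is shown below as a sketch only (callers would normally
use the generic wrappers rather than the arch_ functions directly):

static void touch_percpu_state(void)
{
	unsigned long flags;

	flags = arch_local_irq_save();	/* SR_IE cleared, old sstatus saved */
	/* ... per-CPU work that must not be interrupted ... */
	arch_local_irq_restore(flags);	/* SR_IE put back as it was */
}
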
diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
new file mode 100644
index 000000000000..1190de7a0f74
--- /dev/null
+++ b/arch/riscv/include/asm/kprobes.h
@@ -0,0 +1,22 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef ASM_RISCV_KPROBES_H
+#define ASM_RISCV_KPROBES_H
+
+#ifdef CONFIG_KPROBES
+#error "RISC-V doesn't support CONFIG_KPROBES"
+#endif
+
+#endif
diff --git a/arch/riscv/include/asm/linkage.h b/arch/riscv/include/asm/linkage.h
new file mode 100644
index 000000000000..b7b304ca89c4
--- /dev/null
+++ b/arch/riscv/include/asm/linkage.h
@@ -0,0 +1,20 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_LINKAGE_H
+#define _ASM_RISCV_LINKAGE_H
+
+#define __ALIGN		.balign 4
+#define __ALIGN_STR	".balign 4"
+
+#endif /* _ASM_RISCV_LINKAGE_H */
diff --git a/arch/riscv/include/asm/mmu.h b/arch/riscv/include/asm/mmu.h
new file mode 100644
index 000000000000..66805cba9a27
--- /dev/null
+++ b/arch/riscv/include/asm/mmu.h
@@ -0,0 +1,26 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef _ASM_RISCV_MMU_H
+#define _ASM_RISCV_MMU_H
+
+#ifndef __ASSEMBLY__
+
+typedef struct {
+	void *vdso;
+} mm_context_t;
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_MMU_H */
diff --git a/arch/riscv/include/asm/mmu_context.h b/arch/riscv/include/asm/mmu_context.h
new file mode 100644
index 000000000000..de1fc1631fc4
--- /dev/null
+++ b/arch/riscv/include/asm/mmu_context.h
@@ -0,0 +1,69 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_MMU_CONTEXT_H
+#define _ASM_RISCV_MMU_CONTEXT_H
+
+#include <asm-generic/mm_hooks.h>
+
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <asm/tlbflush.h>
+
+static inline void enter_lazy_tlb(struct mm_struct *mm,
+	struct task_struct *task)
+{
+}
+
+/* Initialize context-related info for a new mm_struct */
+static inline int init_new_context(struct task_struct *task,
+	struct mm_struct *mm)
+{
+	return 0;
+}
+
+static inline void destroy_context(struct mm_struct *mm)
+{
+}
+
+static inline pgd_t *current_pgdir(void)
+{
+	return pfn_to_virt(csr_read(sptbr) & SPTBR_PPN);
+}
+
+static inline void set_pgdir(pgd_t *pgd)
+{
+	csr_write(sptbr, virt_to_pfn(pgd) | SPTBR_MODE);
+}
+
+static inline void switch_mm(struct mm_struct *prev,
+	struct mm_struct *next, struct task_struct *task)
+{
+	if (likely(prev != next)) {
+		set_pgdir(next->pgd);
+		local_flush_tlb_all();
+	}
+}
+
+static inline void activate_mm(struct mm_struct *prev,
+			       struct mm_struct *next)
+{
+	switch_mm(prev, next, NULL);
+}
+
+static inline void deactivate_mm(struct task_struct *task,
+	struct mm_struct *mm)
+{
+}
+
+#endif /* _ASM_RISCV_MMU_CONTEXT_H */
diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
new file mode 100644
index 000000000000..e1491c20d6fd
--- /dev/null
+++ b/arch/riscv/include/asm/page.h
@@ -0,0 +1,138 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ * Copyright (C) 2017 XiaojingZhu <zhuxiaoj@ict.ac.cn>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PAGE_H
+#define _ASM_RISCV_PAGE_H
+
+#include <linux/pfn.h>
+#include <linux/const.h>
+
+#define PAGE_SHIFT	(12)
+#define PAGE_SIZE	(_AC(1, UL) << PAGE_SHIFT)
+#define PAGE_MASK	(~(PAGE_SIZE - 1))
+
+#ifdef __KERNEL__
+
+/*
+ * PAGE_OFFSET -- the first address of the first page of memory.
+ * When not using MMU this corresponds to the first free page in
+ * physical memory (aligned on a page boundary).
+ */
+#ifdef CONFIG_64BIT
+#define PAGE_OFFSET		_AC(0xffffffff80000000, UL)
+#else
+#define PAGE_OFFSET		_AC(0xc0000000, UL)
+#endif
+
+#define KERN_VIRT_SIZE (-PAGE_OFFSET)
+
+#ifndef __ASSEMBLY__
+
+#define PAGE_UP(addr)	(((addr)+((PAGE_SIZE)-1))&(~((PAGE_SIZE)-1)))
+#define PAGE_DOWN(addr)	((addr)&(~((PAGE_SIZE)-1)))
+
+/* align addr on a size boundary - adjust address up/down if needed */
+#define _ALIGN_UP(addr, size)	(((addr)+((size)-1))&(~((size)-1)))
+#define _ALIGN_DOWN(addr, size)	((addr)&(~((size)-1)))
+
+/* align addr on a size boundary - adjust address up if needed */
+#define _ALIGN(addr, size)	_ALIGN_UP(addr, size)
+
+#define clear_page(pgaddr)			memset((pgaddr), 0, PAGE_SIZE)
+#define copy_page(to, from)			memcpy((to), (from), PAGE_SIZE)
+
+#define clear_user_page(pgaddr, vaddr, page)	memset((pgaddr), 0, PAGE_SIZE)
+#define copy_user_page(vto, vfrom, vaddr, topg) \
+			memcpy((vto), (vfrom), PAGE_SIZE)
+
+/*
+ * Use struct definitions to apply C type checking
+ */
+
+/* Page Global Directory entry */
+typedef struct {
+	unsigned long pgd;
+} pgd_t;
+
+/* Page Table entry */
+typedef struct {
+	unsigned long pte;
+} pte_t;
+
+typedef struct {
+	unsigned long pgprot;
+} pgprot_t;
+
+typedef struct page *pgtable_t;
+
+#define pte_val(x)	((x).pte)
+#define pgd_val(x)	((x).pgd)
+#define pgprot_val(x)	((x).pgprot)
+
+#define __pte(x)	((pte_t) { (x) })
+#define __pgd(x)	((pgd_t) { (x) })
+#define __pgprot(x)	((pgprot_t) { (x) })
+
+#ifdef CONFIG_64BIT
+#define PTE_FMT "%016lx"
+#else
+#define PTE_FMT "%08lx"
+#endif
+
+extern unsigned long va_pa_offset;
+extern unsigned long pfn_base;
+
+extern unsigned long max_low_pfn;
+extern unsigned long min_low_pfn;
+
+#define __pa(x)		((unsigned long)(x) - va_pa_offset)
+#define __va(x)		((void *)((unsigned long) (x) + va_pa_offset))
+
+#define phys_to_pfn(phys)	(PFN_DOWN(phys))
+#define pfn_to_phys(pfn)	(PFN_PHYS(pfn))
+
+#define virt_to_pfn(vaddr)	(phys_to_pfn(__pa(vaddr)))
+#define pfn_to_virt(pfn)	(__va(pfn_to_phys(pfn)))
+
+#define virt_to_page(vaddr)	(pfn_to_page(virt_to_pfn(vaddr)))
+#define page_to_virt(page)	(pfn_to_virt(page_to_pfn(page)))
+
+#define page_to_phys(page)	(pfn_to_phys(page_to_pfn(page)))
+#define page_to_bus(page)	(page_to_phys(page))
+#define phys_to_page(paddr)	(pfn_to_page(phys_to_pfn(paddr)))
+
+#define pfn_valid(pfn) \
+	(((pfn) >= pfn_base) && (((pfn)-pfn_base) < max_mapnr))
+
+#define ARCH_PFN_OFFSET		(pfn_base)
+
+#endif /* __ASSEMBLY__ */
+
+#define virt_addr_valid(vaddr)	(pfn_valid(virt_to_pfn(vaddr)))
+
+#endif /* __KERNEL__ */
+
+#define VM_DATA_DEFAULT_FLAGS	(VM_READ | VM_WRITE | \
+				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
+
+#include <asm-generic/memory_model.h>
+#include <asm-generic/getorder.h>
+
+/* vDSO support */
+/* We do define AT_SYSINFO_EHDR but don't use the gate mechanism */
+#define __HAVE_ARCH_GATE_AREA
+
+#endif /* _ASM_RISCV_PAGE_H */
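
With the kernel mapped linearly at PAGE_OFFSET, __pa()/__va() and the pfn
helpers are plain offset arithmetic; a round-trip sketch, not part of the
patch (names are made up; pr_debug() is from <linux/printk.h>):

static void show_translation(void *kvaddr)
{
	unsigned long pa = __pa(kvaddr);
	struct page *page = virt_to_page(kvaddr);

	pr_debug("va=%p pa=%lx pfn=%lx\n", kvaddr, pa, page_to_pfn(page));
}
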
diff --git a/arch/riscv/include/asm/pci.h b/arch/riscv/include/asm/pci.h
new file mode 100644
index 000000000000..c829a4ce8d25
--- /dev/null
+++ b/arch/riscv/include/asm/pci.h
@@ -0,0 +1,50 @@
+/*
+ * Copyright (C) 2016 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef __ASM_RISCV_PCI_H
+#define __ASM_RISCV_PCI_H
+#ifdef __KERNEL__
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/io.h>
+
+#define PCIBIOS_MIN_IO		0
+#define PCIBIOS_MIN_MEM		0
+
+/* RISC-V shim does not initialize PCI bus */
+#define pcibios_assign_all_busses() 1
+
+/* RISC-V TileLink and PCIe share the same address space */
+#define PCI_DMA_BUS_IS_PHYS 1
+
+extern int isa_dma_bridge_buggy;
+
+#ifdef CONFIG_PCI
+static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
+{
+	/* no legacy IRQ on risc-v */
+	return -ENODEV;
+}
+
+static inline int pci_proc_domain(struct pci_bus *bus)
+{
+	/* always show the domain in /proc */
+	return 1;
+}
+#endif  /* CONFIG_PCI */
+
+#endif  /* __KERNEL__ */
+#endif  /* __ASM_RISCV_PCI_H */
diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
new file mode 100644
index 000000000000..b40074bcb164
--- /dev/null
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -0,0 +1,124 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGALLOC_H
+#define _ASM_RISCV_PGALLOC_H
+
+#include <linux/mm.h>
+#include <asm/tlb.h>
+
+static inline void pmd_populate_kernel(struct mm_struct *mm,
+	pmd_t *pmd, pte_t *pte)
+{
+	unsigned long pfn = virt_to_pfn(pte);
+
+	set_pmd(pmd, __pmd((pfn << _PAGE_PFN_SHIFT) | _PAGE_TABLE));
+}
+
+static inline void pmd_populate(struct mm_struct *mm,
+	pmd_t *pmd, pgtable_t pte)
+{
+	unsigned long pfn = virt_to_pfn(page_address(pte));
+
+	set_pmd(pmd, __pmd((pfn << _PAGE_PFN_SHIFT) | _PAGE_TABLE));
+}
+
+#ifndef __PAGETABLE_PMD_FOLDED
+static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
+{
+	unsigned long pfn = virt_to_pfn(pmd);
+
+	set_pud(pud, __pud((pfn << _PAGE_PFN_SHIFT) | _PAGE_TABLE));
+}
+#endif /* __PAGETABLE_PMD_FOLDED */
+
+#define pmd_pgtable(pmd)	pmd_page(pmd)
+
+static inline pgd_t *pgd_alloc(struct mm_struct *mm)
+{
+	pgd_t *pgd;
+
+	pgd = (pgd_t *)__get_free_page(GFP_KERNEL);
+	if (likely(pgd != NULL)) {
+		memset(pgd, 0, USER_PTRS_PER_PGD * sizeof(pgd_t));
+		/* Copy kernel mappings */
+		memcpy(pgd + USER_PTRS_PER_PGD,
+			init_mm.pgd + USER_PTRS_PER_PGD,
+			(PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
+	}
+	return pgd;
+}
+
+static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
+{
+	free_page((unsigned long)pgd);
+}
+
+#ifndef __PAGETABLE_PMD_FOLDED
+
+static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
+{
+	return (pmd_t *)__get_free_page(
+		GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+}
+
+static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
+{
+	free_page((unsigned long)pmd);
+}
+
+#define __pmd_free_tlb(tlb, pmd, addr)  pmd_free((tlb)->mm, pmd)
+
+#endif /* __PAGETABLE_PMD_FOLDED */
+
+static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
+	unsigned long address)
+{
+	return (pte_t *)__get_free_page(
+		GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+}
+
+static inline struct page *pte_alloc_one(struct mm_struct *mm,
+	unsigned long address)
+{
+	struct page *pte;
+
+	pte = alloc_page(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+	if (likely(pte != NULL))
+		pgtable_page_ctor(pte);
+	return pte;
+}
+
+static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
+{
+	free_page((unsigned long)pte);
+}
+
+static inline void pte_free(struct mm_struct *mm, pgtable_t pte)
+{
+	pgtable_page_dtor(pte);
+	__free_page(pte);
+}
+
+#define __pte_free_tlb(tlb, pte, buf)   \
+do {                                    \
+	pgtable_page_dtor(pte);         \
+	tlb_remove_page((tlb), pte);    \
+} while (0)
+
+static inline void check_pgt_cache(void)
+{
+}
+
+#endif /* _ASM_RISCV_PGALLOC_H */
diff --git a/arch/riscv/include/asm/pgtable-32.h b/arch/riscv/include/asm/pgtable-32.h
new file mode 100644
index 000000000000..d61974b74182
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable-32.h
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_32_H
+#define _ASM_RISCV_PGTABLE_32_H
+
+#include <asm-generic/pgtable-nopmd.h>
+#include <linux/const.h>
+
+/* Size of region mapped by a page global directory */
+#define PGDIR_SHIFT     22
+#define PGDIR_SIZE      (_AC(1, UL) << PGDIR_SHIFT)
+#define PGDIR_MASK      (~(PGDIR_SIZE - 1))
+
+#endif /* _ASM_RISCV_PGTABLE_32_H */
diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
new file mode 100644
index 000000000000..7aa0ea9bd8bb
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -0,0 +1,84 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_64_H
+#define _ASM_RISCV_PGTABLE_64_H
+
+#include <linux/const.h>
+
+#define PGDIR_SHIFT     30
+/* Size of region mapped by a page global directory */
+#define PGDIR_SIZE      (_AC(1, UL) << PGDIR_SHIFT)
+#define PGDIR_MASK      (~(PGDIR_SIZE - 1))
+
+#define PMD_SHIFT       21
+/* Size of region mapped by a page middle directory */
+#define PMD_SIZE        (_AC(1, UL) << PMD_SHIFT)
+#define PMD_MASK        (~(PMD_SIZE - 1))
+
+/* Page Middle Directory entry */
+typedef struct {
+	unsigned long pmd;
+} pmd_t;
+
+#define pmd_val(x)      ((x).pmd)
+#define __pmd(x)        ((pmd_t) { (x) })
+
+#define PTRS_PER_PMD    (PAGE_SIZE / sizeof(pmd_t))
+
+static inline int pud_present(pud_t pud)
+{
+	return (pud_val(pud) & _PAGE_PRESENT);
+}
+
+static inline int pud_none(pud_t pud)
+{
+	return (pud_val(pud) == 0);
+}
+
+static inline int pud_bad(pud_t pud)
+{
+	return !pud_present(pud);
+}
+
+static inline void set_pud(pud_t *pudp, pud_t pud)
+{
+	*pudp = pud;
+}
+
+static inline void pud_clear(pud_t *pudp)
+{
+	set_pud(pudp, __pud(0));
+}
+
+static inline unsigned long pud_page_vaddr(pud_t pud)
+{
+	return (unsigned long)pfn_to_virt(pud_val(pud) >> _PAGE_PFN_SHIFT);
+}
+
+#define pmd_index(addr) (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
+
+static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
+{
+	return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(addr);
+}
+
+static inline pmd_t pfn_pmd(unsigned long pfn, pgprot_t prot)
+{
+	return __pmd((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
+}
+
+#define pmd_ERROR(e) \
+	pr_err("%s:%d: bad pmd %016lx.\n", __FILE__, __LINE__, pmd_val(e))
+
+#endif /* _ASM_RISCV_PGTABLE_64_H */
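
With the PUD folded into the PGD, a software walk from a kernel virtual
address down to its PMD combines pgd_offset_k() (defined in pgtable.h
below) with the helpers here.  A sketch only, assuming the usual folded-pud
generic code (the function name is made up):

static pmd_t *walk_to_pmd(unsigned long addr)
{
	pgd_t *pgd = pgd_offset_k(addr);
	pud_t *pud = pud_offset(pgd, addr);	/* folded: same entry as the pgd */
	pmd_t *pmd;

	if (pud_none(*pud) || pud_bad(*pud))
		return NULL;

	pmd = pmd_offset(pud, addr);
	return pmd_none(*pmd) ? NULL : pmd;
}
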
diff --git a/arch/riscv/include/asm/pgtable-bits.h b/arch/riscv/include/asm/pgtable-bits.h
new file mode 100644
index 000000000000..997ddbb1d370
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable-bits.h
@@ -0,0 +1,48 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_BITS_H
+#define _ASM_RISCV_PGTABLE_BITS_H
+
+/*
+ * PTE format:
+ * | XLEN-1  10 | 9             8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
+ *       PFN      reserved for SW   D   A   G   U   X   W   R   V
+ */
+
+#define _PAGE_ACCESSED_OFFSET 6
+
+#define _PAGE_PRESENT   (1 << 0)
+#define _PAGE_READ      (1 << 1)    /* Readable */
+#define _PAGE_WRITE     (1 << 2)    /* Writable */
+#define _PAGE_EXEC      (1 << 3)    /* Executable */
+#define _PAGE_USER      (1 << 4)    /* User */
+#define _PAGE_GLOBAL    (1 << 5)    /* Global */
+#define _PAGE_ACCESSED  (1 << 6)    /* Set by hardware on any access */
+#define _PAGE_DIRTY     (1 << 7)    /* Set by hardware on any write */
+#define _PAGE_SOFT      (1 << 8)    /* Reserved for software */
+
+#define _PAGE_SPECIAL   _PAGE_SOFT
+#define _PAGE_TABLE     _PAGE_PRESENT
+
+#define _PAGE_PFN_SHIFT 10
+
+/* Set of bits to preserve across pte_modify() */
+#define _PAGE_CHG_MASK  (~(unsigned long)(_PAGE_PRESENT | _PAGE_READ |	\
+					  _PAGE_WRITE | _PAGE_EXEC |	\
+					  _PAGE_USER | _PAGE_GLOBAL))
+
+/* Advertise support for _PAGE_SPECIAL */
+#define __HAVE_ARCH_PTE_SPECIAL
+
+#endif /* _ASM_RISCV_PGTABLE_BITS_H */
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
new file mode 100644
index 000000000000..8bb44014f5c3
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable.h
@@ -0,0 +1,427 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_H
+#define _ASM_RISCV_PGTABLE_H
+
+#include <linux/mmzone.h>
+
+#include <asm/pgtable-bits.h>
+
+#ifndef __ASSEMBLY__
+
+#ifdef CONFIG_MMU
+
+/* Page Upper Directory not used in RISC-V */
+#include <asm-generic/pgtable-nopud.h>
+#include <asm/page.h>
+#include <asm/tlbflush.h>
+#include <linux/mm_types.h>
+
+#ifdef CONFIG_64BIT
+#include <asm/pgtable-64.h>
+#else
+#include <asm/pgtable-32.h>
+#endif /* CONFIG_64BIT */
+
+/* Number of entries in the page global directory */
+#define PTRS_PER_PGD    (PAGE_SIZE / sizeof(pgd_t))
+/* Number of entries in the page table */
+#define PTRS_PER_PTE    (PAGE_SIZE / sizeof(pte_t))
+
+/* Number of PGD entries that a user-mode program can use */
+#define USER_PTRS_PER_PGD   (TASK_SIZE / PGDIR_SIZE)
+#define FIRST_USER_ADDRESS  0
+
+/* Page protection bits */
+#define _PAGE_BASE	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_USER)
+
+#define PAGE_NONE		__pgprot(0)
+#define PAGE_READ		__pgprot(_PAGE_BASE | _PAGE_READ)
+#define PAGE_WRITE		__pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_WRITE)
+#define PAGE_EXEC		__pgprot(_PAGE_BASE | _PAGE_EXEC)
+#define PAGE_READ_EXEC		__pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_EXEC)
+#define PAGE_WRITE_EXEC		__pgprot(_PAGE_BASE | _PAGE_READ |	\
+					 _PAGE_EXEC | _PAGE_WRITE)
+
+#define PAGE_COPY		PAGE_READ
+#define PAGE_COPY_EXEC		PAGE_EXEC
+#define PAGE_COPY_READ_EXEC	PAGE_READ_EXEC
+#define PAGE_SHARED		PAGE_WRITE
+#define PAGE_SHARED_EXEC	PAGE_WRITE_EXEC
+
+#define _PAGE_KERNEL		(_PAGE_READ \
+				| _PAGE_WRITE \
+				| _PAGE_PRESENT \
+				| _PAGE_ACCESSED \
+				| _PAGE_DIRTY)
+
+#define PAGE_KERNEL		__pgprot(_PAGE_KERNEL)
+#define PAGE_KERNEL_EXEC	__pgprot(_PAGE_KERNEL | _PAGE_EXEC)
+
+extern pgd_t swapper_pg_dir[];
+
+/* MAP_PRIVATE permissions: xwr (copy-on-write) */
+#define __P000	PAGE_NONE
+#define __P001	PAGE_READ
+#define __P010	PAGE_COPY
+#define __P011	PAGE_COPY
+#define __P100	PAGE_EXEC
+#define __P101	PAGE_READ_EXEC
+#define __P110	PAGE_COPY_EXEC
+#define __P111	PAGE_COPY_READ_EXEC
+
+/* MAP_SHARED permissions: xwr */
+#define __S000	PAGE_NONE
+#define __S001	PAGE_READ
+#define __S010	PAGE_SHARED
+#define __S011	PAGE_SHARED
+#define __S100	PAGE_EXEC
+#define __S101	PAGE_READ_EXEC
+#define __S110	PAGE_SHARED_EXEC
+#define __S111	PAGE_SHARED_EXEC
+
+/*
+ * ZERO_PAGE is a global shared page that is always zero,
+ * used for zero-mapped memory areas, etc.
+ */
+extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
+#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
+
+static inline int pmd_present(pmd_t pmd)
+{
+	return (pmd_val(pmd) & _PAGE_PRESENT);
+}
+
+static inline int pmd_none(pmd_t pmd)
+{
+	return (pmd_val(pmd) == 0);
+}
+
+static inline int pmd_bad(pmd_t pmd)
+{
+	return !pmd_present(pmd);
+}
+
+static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
+{
+	*pmdp = pmd;
+}
+
+static inline void pmd_clear(pmd_t *pmdp)
+{
+	set_pmd(pmdp, __pmd(0));
+}
+
+
+static inline pgd_t pfn_pgd(unsigned long pfn, pgprot_t prot)
+{
+	return __pgd((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
+}
+
+#define pgd_index(addr) (((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
+
+/* Locate an entry in the page global directory */
+static inline pgd_t *pgd_offset(const struct mm_struct *mm, unsigned long addr)
+{
+	return mm->pgd + pgd_index(addr);
+}
+/* Locate an entry in the kernel page global directory */
+#define pgd_offset_k(addr)      pgd_offset(&init_mm, (addr))
+
+static inline struct page *pmd_page(pmd_t pmd)
+{
+	return pfn_to_page(pmd_val(pmd) >> _PAGE_PFN_SHIFT);
+}
+
+static inline unsigned long pmd_page_vaddr(pmd_t pmd)
+{
+	return (unsigned long)pfn_to_virt(pmd_val(pmd) >> _PAGE_PFN_SHIFT);
+}
+
+/* Yields the page frame number (PFN) of a page table entry */
+static inline unsigned long pte_pfn(pte_t pte)
+{
+	return (pte_val(pte) >> _PAGE_PFN_SHIFT);
+}
+
+#define pte_page(x)     pfn_to_page(pte_pfn(x))
+
+/* Constructs a page table entry */
+static inline pte_t pfn_pte(unsigned long pfn, pgprot_t prot)
+{
+	return __pte((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
+}
+
+static inline pte_t mk_pte(struct page *page, pgprot_t prot)
+{
+	return pfn_pte(page_to_pfn(page), prot);
+}
+
+#define pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+
+static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long addr)
+{
+	return (pte_t *)pmd_page_vaddr(*pmd) + pte_index(addr);
+}
+
+#define pte_offset_map(dir, addr)	pte_offset_kernel((dir), (addr))
+#define pte_unmap(pte)			((void)(pte))
+
+/*
+ * Certain architectures need to do special things when PTEs within
+ * a page table are directly modified.  Thus, the following hook is
+ * made available.
+ */
+static inline void set_pte(pte_t *ptep, pte_t pteval)
+{
+	*ptep = pteval;
+}
+
+static inline void set_pte_at(struct mm_struct *mm,
+	unsigned long addr, pte_t *ptep, pte_t pteval)
+{
+	set_pte(ptep, pteval);
+}
+
+static inline void pte_clear(struct mm_struct *mm,
+	unsigned long addr, pte_t *ptep)
+{
+	set_pte_at(mm, addr, ptep, __pte(0));
+}
+
+static inline int pte_present(pte_t pte)
+{
+	return (pte_val(pte) & _PAGE_PRESENT);
+}
+
+static inline int pte_none(pte_t pte)
+{
+	return (pte_val(pte) == 0);
+}
+
+/* static inline int pte_read(pte_t pte) */
+
+static inline int pte_write(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_WRITE;
+}
+
+static inline int pte_huge(pte_t pte)
+{
+	return pte_present(pte)
+		&& (pte_val(pte) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC));
+}
+
+/* static inline int pte_exec(pte_t pte) */
+
+static inline int pte_dirty(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_DIRTY;
+}
+
+static inline int pte_young(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_ACCESSED;
+}
+
+static inline int pte_special(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_SPECIAL;
+}
+
+/* static inline pte_t pte_rdprotect(pte_t pte) */
+
+static inline pte_t pte_wrprotect(pte_t pte)
+{
+	return __pte(pte_val(pte) & ~(_PAGE_WRITE));
+}
+
+/* static inline pte_t pte_mkread(pte_t pte) */
+
+static inline pte_t pte_mkwrite(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_WRITE);
+}
+
+/* static inline pte_t pte_mkexec(pte_t pte) */
+
+static inline pte_t pte_mkdirty(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_DIRTY);
+}
+
+static inline pte_t pte_mkclean(pte_t pte)
+{
+	return __pte(pte_val(pte) & ~(_PAGE_DIRTY));
+}
+
+static inline pte_t pte_mkyoung(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_ACCESSED);
+}
+
+static inline pte_t pte_mkold(pte_t pte)
+{
+	return __pte(pte_val(pte) & ~(_PAGE_ACCESSED));
+}
+
+static inline pte_t pte_mkspecial(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_SPECIAL);
+}
+
+/* Modify page protection bits */
+static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+{
+	return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
+}
+
+#define pgd_ERROR(e) \
+	pr_err("%s:%d: bad pgd " PTE_FMT ".\n", __FILE__, __LINE__, pgd_val(e))
+
+
+/* Commit new configuration to MMU hardware */
+static inline void update_mmu_cache(struct vm_area_struct *vma,
+	unsigned long address, pte_t *ptep)
+{
+	/* The kernel assumes that TLBs don't cache invalid entries, but
+	 * in RISC-V, SFENCE.VMA specifies an ordering constraint, not a
+	 * cache flush; it is necessary even after writing invalid entries.
+	 * Relying on flush_tlb_fix_spurious_fault would suffice, but
+	 * the extra traps reduce performance.  So, eagerly SFENCE.VMA.
+	 */
+	local_flush_tlb_page(address);
+}
+
+#define __HAVE_ARCH_PTE_SAME
+static inline int pte_same(pte_t pte_a, pte_t pte_b)
+{
+	return pte_val(pte_a) == pte_val(pte_b);
+}
+
+#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
+static inline int ptep_set_access_flags(struct vm_area_struct *vma,
+					unsigned long address, pte_t *ptep,
+					pte_t entry, int dirty)
+{
+	if (!pte_same(*ptep, entry))
+		set_pte_at(vma->vm_mm, address, ptep, entry);
+	/* update_mmu_cache will unconditionally execute, handling both
+	 * the case that the PTE changed and the spurious fault case.
+	 */
+	return true;
+}
+
+#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
+static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
+				       unsigned long address, pte_t *ptep)
+{
+	return __pte(atomic_long_xchg((atomic_long_t *)ptep, 0));
+}
+
+#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
+static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
+					    unsigned long address,
+					    pte_t *ptep)
+{
+	if (!pte_young(*ptep))
+		return 0;
+	return test_and_clear_bit(_PAGE_ACCESSED_OFFSET, &pte_val(*ptep));
+}
+
+#define __HAVE_ARCH_PTEP_SET_WRPROTECT
+static inline void ptep_set_wrprotect(struct mm_struct *mm,
+				      unsigned long address, pte_t *ptep)
+{
+	atomic_long_and(~(unsigned long)_PAGE_WRITE, (atomic_long_t *)ptep);
+}
+
+#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
+static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
+					 unsigned long address, pte_t *ptep)
+{
+	/*
+	 * This comment is borrowed from x86, but applies equally to RISC-V:
+	 *
+	 * Clearing the accessed bit without a TLB flush
+	 * doesn't cause data corruption. [ It could cause incorrect
+	 * page aging and the (mistaken) reclaim of hot pages, but the
+	 * chance of that should be relatively low. ]
+	 *
+	 * So as a performance optimization don't flush the TLB when
+	 * clearing the accessed bit, it will eventually be flushed by
+	 * a context switch or a VM operation anyway. [ In the rare
+	 * event of it not getting flushed for a long time the delay
+	 * shouldn't really matter because there's no real memory
+	 * pressure for swapout to react to. ]
+	 */
+	return ptep_test_and_clear_young(vma, address, ptep);
+}
+
+/*
+ * Encode and decode a swap entry
+ *
+ * Format of swap PTE:
+ *	bit            0:	_PAGE_PRESENT (zero)
+ *	bit            1:	reserved for future use (zero)
+ *	bits      2 to 6:	swap type
+ *	bits 7 to XLEN-1:	swap offset
+ */
+#define __SWP_TYPE_SHIFT	2
+#define __SWP_TYPE_BITS		5
+#define __SWP_TYPE_MASK		((1UL << __SWP_TYPE_BITS) - 1)
+#define __SWP_OFFSET_SHIFT	(__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)
+
+#define MAX_SWAPFILES_CHECK()	\
+	BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > __SWP_TYPE_BITS)
+
+#define __swp_type(x)	(((x).val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK)
+#define __swp_offset(x)	((x).val >> __SWP_OFFSET_SHIFT)
+#define __swp_entry(type, offset) ((swp_entry_t) \
+	{ ((type) << __SWP_TYPE_SHIFT) | ((offset) << __SWP_OFFSET_SHIFT) })
+
+#define __pte_to_swp_entry(pte)	((swp_entry_t) { pte_val(pte) })
+#define __swp_entry_to_pte(x)	((pte_t) { (x).val })
+
+#ifdef CONFIG_FLATMEM
+#define kern_addr_valid(addr)   (1) /* FIXME */
+#endif
+
+extern void paging_init(void);
+
+static inline void pgtable_cache_init(void)
+{
+	/* No page table caches to initialize */
+}
+
+#endif /* CONFIG_MMU */
+
+#define VMALLOC_SIZE     (KERN_VIRT_SIZE >> 1)
+#define VMALLOC_END      (PAGE_OFFSET - 1)
+#define VMALLOC_START    (PAGE_OFFSET - VMALLOC_SIZE)
+
+/* Task size is 0x40000000000 for RV64 or 0xb800000 for RV32.
+ * Note that PGDIR_SIZE must evenly divide TASK_SIZE.
+ */
+#ifdef CONFIG_64BIT
+#define TASK_SIZE (PGDIR_SIZE * PTRS_PER_PGD / 2)
+#else
+#define TASK_SIZE VMALLOC_START
+#endif
+
+#include <asm-generic/pgtable.h>
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PGTABLE_H */
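(The swap-PTE layout documented above -- type in bits 2-6, offset from bit 7 up, bit 0 kept clear so the entry never looks present -- can be exercised in isolation.  This stand-alone sketch repeats the shift/mask constants by hand for illustration; it is not part of the patch:

  #include <stdio.h>

  #define __SWP_TYPE_SHIFT        2
  #define __SWP_TYPE_BITS         5
  #define __SWP_TYPE_MASK         ((1UL << __SWP_TYPE_BITS) - 1)
  #define __SWP_OFFSET_SHIFT      (__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)

  int main(void)
  {
          unsigned long type = 3, offset = 0x12345;

          /* __swp_entry(): pack type and offset, leaving bit 0 clear */
          unsigned long val = (type << __SWP_TYPE_SHIFT) |
                              (offset << __SWP_OFFSET_SHIFT);

          printf("entry = 0x%lx, present bit = %lu\n", val, val & 1);
          /* __swp_type() / __swp_offset(): the round trip is lossless */
          printf("type = %lu, offset = 0x%lx\n",
                 (val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK,
                 val >> __SWP_OFFSET_SHIFT);
          return 0;
  }
)
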
diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
new file mode 100644
index 000000000000..0e523a75a8a0
--- /dev/null
+++ b/arch/riscv/include/asm/processor.h
@@ -0,0 +1,102 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PROCESSOR_H
+#define _ASM_RISCV_PROCESSOR_H
+
+#include <linux/const.h>
+
+#include <asm/ptrace.h>
+
+/*
+ * This decides where the kernel will search for a free chunk of vm
+ * space during mmap's.
+ */
+#define TASK_UNMAPPED_BASE	PAGE_ALIGN(TASK_SIZE >> 1)
+
+#ifdef __KERNEL__
+#define STACK_TOP		TASK_SIZE
+#define STACK_TOP_MAX		STACK_TOP
+#define STACK_ALIGN		16
+#endif /* __KERNEL__ */
+
+#ifndef __ASSEMBLY__
+
+struct task_struct;
+struct pt_regs;
+
+/*
+ * Default implementation of macro that returns current
+ * instruction pointer ("program counter").
+ */
+#define current_text_addr()	({ __label__ _l; _l: &&_l; })
+
+/* CPU-specific state of a task */
+struct thread_struct {
+	/* Callee-saved registers */
+	unsigned long ra;
+	unsigned long sp;	/* Kernel mode stack */
+	unsigned long s[12];	/* s[0]: frame pointer */
+	struct user_fpregs_struct fstate;
+};
+
+#define INIT_THREAD {					\
+	.sp = sizeof(init_stack) + (long)&init_stack,	\
+}
+
+/* Return saved (kernel) PC of a blocked thread. */
+#define thread_saved_pc(t)	((t)->thread.ra)
+#define thread_saved_sp(t)	((t)->thread.sp)
+#define thread_saved_fp(t)	((t)->thread.s[0])
+
+#define task_pt_regs(tsk)						\
+	((struct pt_regs *)(task_stack_page(tsk) + THREAD_SIZE		\
+			    - ALIGN(sizeof(struct pt_regs), STACK_ALIGN)))
+
+#define KSTK_EIP(tsk)		(task_pt_regs(tsk)->sepc)
+#define KSTK_ESP(tsk)		(task_pt_regs(tsk)->sp)
+
+
+/* Do necessary setup to start up a newly executed thread. */
+extern void start_thread(struct pt_regs *regs,
+			unsigned long pc, unsigned long sp);
+
+/* Free all resources held by a thread. */
+static inline void release_thread(struct task_struct *dead_task)
+{
+}
+
+extern unsigned long get_wchan(struct task_struct *p);
+
+
+static inline void cpu_relax(void)
+{
+#ifdef __riscv_muldiv
+	int dummy;
+	/* In lieu of a halt instruction, induce a long-latency stall. */
+	__asm__ __volatile__ ("div %0, %0, zero" : "=r" (dummy));
+#endif
+	barrier();
+}
+
+static inline void wait_for_interrupt(void)
+{
+	__asm__ __volatile__ ("wfi");
+}
+
+struct device_node;
+extern int riscv_of_processor_hart(struct device_node *node);
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PROCESSOR_H */
diff --git a/arch/riscv/include/asm/ptrace.h b/arch/riscv/include/asm/ptrace.h
new file mode 100644
index 000000000000..340201868842
--- /dev/null
+++ b/arch/riscv/include/asm/ptrace.h
@@ -0,0 +1,116 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PTRACE_H
+#define _ASM_RISCV_PTRACE_H
+
+#include <uapi/asm/ptrace.h>
+#include <asm/csr.h>
+
+#ifndef __ASSEMBLY__
+
+struct pt_regs {
+	unsigned long sepc;
+	unsigned long ra;
+	unsigned long sp;
+	unsigned long gp;
+	unsigned long tp;
+	unsigned long t0;
+	unsigned long t1;
+	unsigned long t2;
+	unsigned long s0;
+	unsigned long s1;
+	unsigned long a0;
+	unsigned long a1;
+	unsigned long a2;
+	unsigned long a3;
+	unsigned long a4;
+	unsigned long a5;
+	unsigned long a6;
+	unsigned long a7;
+	unsigned long s2;
+	unsigned long s3;
+	unsigned long s4;
+	unsigned long s5;
+	unsigned long s6;
+	unsigned long s7;
+	unsigned long s8;
+	unsigned long s9;
+	unsigned long s10;
+	unsigned long s11;
+	unsigned long t3;
+	unsigned long t4;
+	unsigned long t5;
+	unsigned long t6;
+	/* Supervisor CSRs */
+	unsigned long sstatus;
+	unsigned long sbadaddr;
+	unsigned long scause;
+};
+
+#ifdef CONFIG_64BIT
+#define REG_FMT "%016lx"
+#else
+#define REG_FMT "%08lx"
+#endif
+
+#define user_mode(regs) (((regs)->sstatus & SR_PS) == 0)
+
+
+/* Helpers for working with the instruction pointer */
+#define GET_IP(regs) ((regs)->sepc)
+#define SET_IP(regs, val) (GET_IP(regs) = (val))
+
+static inline unsigned long instruction_pointer(struct pt_regs *regs)
+{
+	return GET_IP(regs);
+}
+static inline void instruction_pointer_set(struct pt_regs *regs,
+					   unsigned long val)
+{
+	SET_IP(regs, val);
+}
+
+#define profile_pc(regs) instruction_pointer(regs)
+
+/* Helpers for working with the user stack pointer */
+#define GET_USP(regs) ((regs)->sp)
+#define SET_USP(regs, val) (GET_USP(regs) = (val))
+
+static inline unsigned long user_stack_pointer(struct pt_regs *regs)
+{
+	return GET_USP(regs);
+}
+static inline void user_stack_pointer_set(struct pt_regs *regs,
+					  unsigned long val)
+{
+	SET_USP(regs, val);
+}
+
+/* Helpers for working with the frame pointer */
+#define GET_FP(regs) ((regs)->s0)
+#define SET_FP(regs, val) (GET_FP(regs) = (val))
+
+static inline unsigned long frame_pointer(struct pt_regs *regs)
+{
+	return GET_FP(regs);
+}
+static inline void frame_pointer_set(struct pt_regs *regs,
+				     unsigned long val)
+{
+	SET_FP(regs, val);
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PTRACE_H */
diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
new file mode 100644
index 000000000000..b6bb10b92fe2
--- /dev/null
+++ b/arch/riscv/include/asm/sbi.h
@@ -0,0 +1,100 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SBI_H
+#define _ASM_RISCV_SBI_H
+
+#include <linux/types.h>
+
+#define SBI_SET_TIMER 0
+#define SBI_CONSOLE_PUTCHAR 1
+#define SBI_CONSOLE_GETCHAR 2
+#define SBI_CLEAR_IPI 3
+#define SBI_SEND_IPI 4
+#define SBI_REMOTE_FENCE_I 5
+#define SBI_REMOTE_SFENCE_VMA 6
+#define SBI_REMOTE_SFENCE_VMA_ASID 7
+#define SBI_SHUTDOWN 8
+
+#define SBI_CALL(which, arg0, arg1, arg2) ({			\
+	register uintptr_t a0 asm ("a0") = (uintptr_t)(arg0);	\
+	register uintptr_t a1 asm ("a1") = (uintptr_t)(arg1);	\
+	register uintptr_t a2 asm ("a2") = (uintptr_t)(arg2);	\
+	register uintptr_t a7 asm ("a7") = (uintptr_t)(which);	\
+	asm volatile ("ecall"					\
+		      : "+r" (a0)				\
+		      : "r" (a1), "r" (a2), "r" (a7)		\
+		      : "memory");				\
+	a0;							\
+})
+
+/* Lazy implementations until SBI is finalized */
+#define SBI_CALL_0(which) SBI_CALL(which, 0, 0, 0)
+#define SBI_CALL_1(which, arg0) SBI_CALL(which, arg0, 0, 0)
+#define SBI_CALL_2(which, arg0, arg1) SBI_CALL(which, arg0, arg1, 0)
+
+static inline void sbi_console_putchar(int ch)
+{
+	SBI_CALL_1(SBI_CONSOLE_PUTCHAR, ch);
+}
+
+static inline int sbi_console_getchar(void)
+{
+	return SBI_CALL_0(SBI_CONSOLE_GETCHAR);
+}
+
+static inline void sbi_set_timer(uint64_t stime_value)
+{
+#if __riscv_xlen == 32
+	SBI_CALL_2(SBI_SET_TIMER, stime_value, stime_value >> 32);
+#else
+	SBI_CALL_1(SBI_SET_TIMER, stime_value);
+#endif
+}
+
+static inline void sbi_shutdown(void)
+{
+	SBI_CALL_0(SBI_SHUTDOWN);
+}
+
+static inline void sbi_clear_ipi(void)
+{
+	SBI_CALL_0(SBI_CLEAR_IPI);
+}
+
+static inline void sbi_send_ipi(const unsigned long *hart_mask)
+{
+	SBI_CALL_1(SBI_SEND_IPI, hart_mask);
+}
+
+static inline void sbi_remote_fence_i(const unsigned long *hart_mask)
+{
+	SBI_CALL_1(SBI_REMOTE_FENCE_I, hart_mask);
+}
+
+static inline void sbi_remote_sfence_vma(const unsigned long *hart_mask,
+					 unsigned long start,
+					 unsigned long size)
+{
+	SBI_CALL_1(SBI_REMOTE_SFENCE_VMA, hart_mask);
+}
+
+static inline void sbi_remote_sfence_vma_asid(const unsigned long *hart_mask,
+					      unsigned long start,
+					      unsigned long size,
+					      unsigned long asid)
+{
+	SBI_CALL_1(SBI_REMOTE_SFENCE_VMA_ASID, hart_mask);
+}
+
+#endif
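(All of the SBI wrappers above reduce to one calling convention: the call number goes in a7, up to three arguments in a0-a2, ecall traps into the firmware, and the result comes back in a0.  Note also that the remote-fence helpers accept start/size/asid parameters but the SBI_CALL_1 wrappers do not forward them yet, consistent with the "lazy implementations" note.  As a sketch only -- it assumes execution in supervisor mode on top of SBI firmware, uses illustrative names, and is not part of the patch -- the putchar call could be written long-hand as:

  /* Long-hand equivalent of sbi_console_putchar(ch); S-mode only. */
  static void sbi_putchar_longhand(int ch)
  {
          register unsigned long a0 asm("a0") = (unsigned long)ch;
          register unsigned long a7 asm("a7") = 1;  /* SBI_CONSOLE_PUTCHAR */

          asm volatile ("ecall"
                        : "+r" (a0)       /* a0 carries the return value */
                        : "r" (a7)
                        : "memory");
  }

  /* A trivial early-console write loop built on top of it. */
  static void sbi_puts(const char *s)
  {
          while (*s)
                  sbi_putchar_longhand(*s++);
  }
)
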
diff --git a/arch/riscv/include/asm/smp.h b/arch/riscv/include/asm/smp.h
new file mode 100644
index 000000000000..f61f8c25f95b
--- /dev/null
+++ b/arch/riscv/include/asm/smp.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SMP_H
+#define _ASM_RISCV_SMP_H
+
+#include <linux/cpumask.h>
+#include <linux/irqreturn.h>
+
+#ifdef CONFIG_SMP
+
+/* SMP initialization hook for setup_arch */
+void __init init_clockevent(void);
+
+/* SMP initialization hook for setup_arch */
+void __init setup_smp(void);
+
+/* Hook for the generic smp_call_function_many() routine. */
+void arch_send_call_function_ipi_mask(struct cpumask *mask);
+
+/* Hook for the generic smp_call_function_single() routine. */
+void arch_send_call_function_single_ipi(int cpu);
+
+#define raw_smp_processor_id() (current_thread_info()->cpu)
+
+/* Interprocessor interrupt handler */
+irqreturn_t handle_ipi(void);
+
+#endif /* CONFIG_SMP */
+
+#endif /* _ASM_RISCV_SMP_H */
diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
new file mode 100644
index 000000000000..9736f5714e54
--- /dev/null
+++ b/arch/riscv/include/asm/spinlock.h
@@ -0,0 +1,155 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SPINLOCK_H
+#define _ASM_RISCV_SPINLOCK_H
+
+#include <linux/kernel.h>
+#include <asm/current.h>
+
+/*
+ * Simple spin lock operations.  These provide no fairness guarantees.
+ */
+
+#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
+#define arch_spin_is_locked(x)	((x)->lock != 0)
+#define arch_spin_unlock_wait(x) \
+		do { cpu_relax(); } while ((x)->lock)
+
+static inline void arch_spin_unlock(arch_spinlock_t *lock)
+{
+	__asm__ __volatile__ (
+		"amoswap.w.rl x0, x0, %0"
+		: "=A" (lock->lock)
+		:: "memory");
+}
+
+static inline int arch_spin_trylock(arch_spinlock_t *lock)
+{
+	int tmp = 1, busy;
+
+	__asm__ __volatile__ (
+		"amoswap.w.aq %0, %2, %1"
+		: "=r" (busy), "+A" (lock->lock)
+		: "r" (tmp)
+		: "memory");
+
+	return !busy;
+}
+
+static inline void arch_spin_lock(arch_spinlock_t *lock)
+{
+	while (1) {
+		if (arch_spin_is_locked(lock))
+			continue;
+
+		if (arch_spin_trylock(lock))
+			break;
+	}
+}
+
+/***********************************************************/
+
+static inline int arch_read_can_lock(arch_rwlock_t *lock)
+{
+	return lock->lock >= 0;
+}
+
+static inline int arch_write_can_lock(arch_rwlock_t *lock)
+{
+	return lock->lock == 0;
+}
+
+static inline void arch_read_lock(arch_rwlock_t *lock)
+{
+	int tmp;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bltz	%1, 1b\n"
+		"	addi	%1, %1, 1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		: "+A" (lock->lock), "=&r" (tmp)
+		:: "memory");
+}
+
+static inline void arch_write_lock(arch_rwlock_t *lock)
+{
+	int tmp;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bnez	%1, 1b\n"
+		"	li	%1, -1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		: "+A" (lock->lock), "=&r" (tmp)
+		:: "memory");
+}
+
+static inline int arch_read_trylock(arch_rwlock_t *lock)
+{
+	int busy;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bltz	%1, 1f\n"
+		"	addi	%1, %1, 1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		"1:\n"
+		: "+A" (lock->lock), "=&r" (busy)
+		:: "memory");
+
+	return !busy;
+}
+
+static inline int arch_write_trylock(arch_rwlock_t *lock)
+{
+	int busy;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bnez	%1, 1f\n"
+		"	li	%1, -1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		"1:\n"
+		: "+A" (lock->lock), "=&r" (busy)
+		:: "memory");
+
+	return !busy;
+}
+
+static inline void arch_read_unlock(arch_rwlock_t *lock)
+{
+	__asm__ __volatile__(
+		"amoadd.w.rl x0, %1, %0"
+		: "+A" (lock->lock)
+		: "r" (-1)
+		: "memory");
+}
+
+static inline void arch_write_unlock(arch_rwlock_t *lock)
+{
+	__asm__ __volatile__ (
+		"amoswap.w.rl x0, x0, %0"
+		: "=A" (lock->lock)
+		:: "memory");
+}
+
+#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
+#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
+
+#endif /* _ASM_RISCV_SPINLOCK_H */
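(The spinlock above is a plain test-and-set lock: spin on ordinary loads while the word looks held, try to take it with an acquire-ordered amoswap, and release it with a release-ordered store of zero.  The same algorithm written against C11 atomics, so it can be compiled and tried in user space, is shown below; it is an illustrative analogue with made-up names (demo_spin_lock etc.), not the kernel implementation:

  #include <stdatomic.h>

  typedef struct { atomic_uint lock; } demo_spinlock_t;

  static void demo_spin_lock(demo_spinlock_t *l)
  {
          for (;;) {
                  /* Test: spin on plain loads while the lock is held. */
                  if (atomic_load_explicit(&l->lock, memory_order_relaxed))
                          continue;
                  /* Test-and-set: acquire-ordered swap, like amoswap.w.aq. */
                  if (!atomic_exchange_explicit(&l->lock, 1,
                                                memory_order_acquire))
                          return;
          }
  }

  static void demo_spin_unlock(demo_spinlock_t *l)
  {
          /* Release-ordered store of zero, like amoswap.w.rl with x0. */
          atomic_store_explicit(&l->lock, 0, memory_order_release);
  }

A lock declared as "demo_spinlock_t lock = { 0 };" then brackets the critical section with demo_spin_lock()/demo_spin_unlock(), just as arch_spin_lock()/arch_spin_unlock() do above.)
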
diff --git a/arch/riscv/include/asm/spinlock_types.h b/arch/riscv/include/asm/spinlock_types.h
new file mode 100644
index 000000000000..83ac4ac9e2ac
--- /dev/null
+++ b/arch/riscv/include/asm/spinlock_types.h
@@ -0,0 +1,33 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SPINLOCK_TYPES_H
+#define _ASM_RISCV_SPINLOCK_TYPES_H
+
+#ifndef __LINUX_SPINLOCK_TYPES_H
+# error "please don't include this file directly"
+#endif
+
+typedef struct {
+	volatile unsigned int lock;
+} arch_spinlock_t;
+
+#define __ARCH_SPIN_LOCK_UNLOCKED	{ 0 }
+
+typedef struct {
+	volatile unsigned int lock;
+} arch_rwlock_t;
+
+#define __ARCH_RW_LOCK_UNLOCKED		{ 0 }
+
+#endif
diff --git a/arch/riscv/include/asm/string.h b/arch/riscv/include/asm/string.h
new file mode 100644
index 000000000000..1c59cf4e8ccc
--- /dev/null
+++ b/arch/riscv/include/asm/string.h
@@ -0,0 +1,30 @@
+/*
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_STRING_H
+#define _ASM_RISCV_STRING_H
+
+#ifdef __KERNEL__
+
+#include <linux/types.h>
+#include <linux/linkage.h>
+
+#define __HAVE_ARCH_MEMSET
+extern asmlinkage void *memset(void *, int, size_t);
+
+#define __HAVE_ARCH_MEMCPY
+extern asmlinkage void *memcpy(void *, const void *, size_t);
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_STRING_H */
diff --git a/arch/riscv/include/asm/switch_to.h b/arch/riscv/include/asm/switch_to.h
new file mode 100644
index 000000000000..dd6b05bff75b
--- /dev/null
+++ b/arch/riscv/include/asm/switch_to.h
@@ -0,0 +1,69 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SWITCH_TO_H
+#define _ASM_RISCV_SWITCH_TO_H
+
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/csr.h>
+
+extern void __fstate_save(struct task_struct *save_to);
+extern void __fstate_restore(struct task_struct *restore_from);
+
+static inline void __fstate_clean(struct pt_regs *regs)
+{
+	regs->sstatus = (regs->sstatus & ~SR_FS) | SR_FS_CLEAN;
+}
+
+static inline void fstate_save(struct task_struct *task,
+			       struct pt_regs *regs)
+{
+	if ((regs->sstatus & SR_FS) == SR_FS_DIRTY) {
+		__fstate_save(task);
+		__fstate_clean(regs);
+	}
+}
+
+static inline void fstate_restore(struct task_struct *task,
+				  struct pt_regs *regs)
+{
+	if ((regs->sstatus & SR_FS) != SR_FS_OFF) {
+		__fstate_restore(task);
+		__fstate_clean(regs);
+	}
+}
+
+static inline void __switch_to_aux(struct task_struct *prev,
+				   struct task_struct *next)
+{
+	struct pt_regs *regs;
+
+	regs = task_pt_regs(prev);
+	if (unlikely(regs->sstatus & SR_SD))
+		fstate_save(prev, regs);
+	fstate_restore(next, task_pt_regs(next));
+}
+
+extern struct task_struct *__switch_to(struct task_struct *,
+				       struct task_struct *);
+
+#define switch_to(prev, next, last)			\
+do {							\
+	struct task_struct *__prev = (prev);		\
+	struct task_struct *__next = (next);		\
+	__switch_to_aux(__prev, __next);		\
+	((last) = __switch_to(__prev, __next));		\
+} while (0)
+
+#endif /* _ASM_RISCV_SWITCH_TO_H */
diff --git a/arch/riscv/include/asm/syscall.h b/arch/riscv/include/asm/syscall.h
new file mode 100644
index 000000000000..c70f0e7a06b8
--- /dev/null
+++ b/arch/riscv/include/asm/syscall.h
@@ -0,0 +1,90 @@
+/*
+ * Copyright (C) 2008-2009 Red Hat, Inc.  All rights reserved.
+ * Copyright 2010 Tilera Corporation. All Rights Reserved.
+ * Copyright 2015 Regents of the University of California, Berkeley
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ *
+ * See asm-generic/syscall.h for descriptions of what we must do here.
+ */
+
+#ifndef _ASM_RISCV_SYSCALL_H
+#define _ASM_RISCV_SYSCALL_H
+
+#include <linux/sched.h>
+#include <linux/err.h>
+
+/* The array of function pointers for syscalls. */
+extern void *sys_call_table[];
+
+/*
+ * The syscall number is passed in a7; only its low 32 bits are
+ * meaningful, so we return int.  This importantly ignores the high
+ * bits on 64-bit, so comparisons sign-extend the low 32 bits.
+ */
+static inline int syscall_get_nr(struct task_struct *task,
+				 struct pt_regs *regs)
+{
+	return regs->a7;
+}
+
+static inline void syscall_set_nr(struct task_struct *task,
+				  struct pt_regs *regs,
+				  int sysno)
+{
+	regs->a7 = sysno;
+}
+
+static inline void syscall_rollback(struct task_struct *task,
+				    struct pt_regs *regs)
+{
+	/* FIXME: We can't do this... */
+}
+
+static inline long syscall_get_error(struct task_struct *task,
+				     struct pt_regs *regs)
+{
+	unsigned long error = regs->a0;
+
+	return IS_ERR_VALUE(error) ? error : 0;
+}
+
+static inline long syscall_get_return_value(struct task_struct *task,
+					    struct pt_regs *regs)
+{
+	return regs->a0;
+}
+
+static inline void syscall_set_return_value(struct task_struct *task,
+					    struct pt_regs *regs,
+					    int error, long val)
+{
+	regs->a0 = (long) error ?: val;
+}
+
+static inline void syscall_get_arguments(struct task_struct *task,
+					 struct pt_regs *regs,
+					 unsigned int i, unsigned int n,
+					 unsigned long *args)
+{
+	BUG_ON(i + n > 6);
+	memcpy(args, &regs->a0 + i, n * sizeof(args[0]));
+}
+
+static inline void syscall_set_arguments(struct task_struct *task,
+					 struct pt_regs *regs,
+					 unsigned int i, unsigned int n,
+					 const unsigned long *args)
+{
+	BUG_ON(i + n > 6);
+	memcpy(&regs->a0 + i, args, n * sizeof(regs->a0));
+}
+
+#endif	/* _ASM_RISCV_SYSCALL_H */
diff --git a/arch/riscv/include/asm/syscalls.h b/arch/riscv/include/asm/syscalls.h
new file mode 100644
index 000000000000..d85267c4f7ea
--- /dev/null
+++ b/arch/riscv/include/asm/syscalls.h
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2014 Darius Rad <darius@bluespec.com>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SYSCALLS_H
+#define _ASM_RISCV_SYSCALLS_H
+
+#include <linux/linkage.h>
+
+#include <asm-generic/syscalls.h>
+
+/* kernel/sys_riscv.c */
+asmlinkage long sys_sysriscv(unsigned long, unsigned long,
+	unsigned long, unsigned long);
+
+#endif /* _ASM_RISCV_SYSCALLS_H */
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
new file mode 100644
index 000000000000..f05b7d68c1ad
--- /dev/null
+++ b/arch/riscv/include/asm/thread_info.h
@@ -0,0 +1,96 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_THREAD_INFO_H
+#define _ASM_RISCV_THREAD_INFO_H
+
+#ifdef __KERNEL__
+
+#include <asm/page.h>
+#include <linux/const.h>
+
+/* thread information allocation */
+#define THREAD_SIZE_ORDER	(1)
+#define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)
+
+#ifndef __ASSEMBLY__
+
+#include <asm/processor.h>
+#include <asm/csr.h>
+
+typedef unsigned long mm_segment_t;
+
+/*
+ * low level task data that entry.S needs immediate access to
+ * - this struct should fit entirely inside of one cache line
+ * - if the members of this struct change, the assembly constants
+ *   in asm-offsets.c must be updated accordingly
+ * - thread_info is included in task_struct at an offset of 0.  This means that
+ *   tp points to both thread_info and task_struct.
+ */
+struct thread_info {
+	unsigned long		flags;		/* low level flags */
+	int                     preempt_count;  /* 0=>preemptible, <0=>BUG */
+	mm_segment_t		addr_limit;
+	/* These stack pointers are overwritten on every system call or
+	 * exception.  SP is also saved to the stack so it can be recovered when
+	 * overwritten.
+	 */
+	long			kernel_sp;	/* Kernel stack pointer */
+	long			user_sp;	/* User stack pointer */
+};
+
+/*
+ * macros/functions for gaining access to the thread information structure
+ *
+ * preempt_count needs to be 1 initially, until the scheduler is functional.
+ */
+#define INIT_THREAD_INFO(tsk)			\
+{						\
+	.flags		= 0,			\
+	.preempt_count	= INIT_PREEMPT_COUNT,	\
+	.addr_limit	= KERNEL_DS,		\
+}
+
+#define init_stack		(init_thread_union.stack)
+
+#endif /* !__ASSEMBLY__ */
+
+/*
+ * thread information flags
+ * - these are process state flags that various assembly files may need to
+ *   access
+ * - pending work-to-be-done flags are in lowest half-word
+ * - other flags in upper half-word(s)
+ */
+#define TIF_SYSCALL_TRACE	0	/* syscall trace active */
+#define TIF_NOTIFY_RESUME	1	/* callback before returning to user */
+#define TIF_SIGPENDING		2	/* signal pending */
+#define TIF_NEED_RESCHED	3	/* rescheduling necessary */
+#define TIF_RESTORE_SIGMASK	4	/* restore signal mask in do_signal() */
+#define TIF_MEMDIE		5	/* is terminating due to OOM killer */
+#define TIF_SYSCALL_TRACEPOINT  6       /* syscall tracepoint instrumentation */
+
+#define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
+#define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
+#define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
+#define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
+
+#define _TIF_WORK_MASK \
+	(_TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NEED_RESCHED)
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_THREAD_INFO_H */
diff --git a/arch/riscv/include/asm/timex.h b/arch/riscv/include/asm/timex.h
new file mode 100644
index 000000000000..bb66b83352fb
--- /dev/null
+++ b/arch/riscv/include/asm/timex.h
@@ -0,0 +1,43 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_TIMEX_H
+#define _ASM_RISCV_TIMEX_H
+
+#include <asm/param.h>
+
+#ifdef __KERNEL__
+
+typedef unsigned long cycles_t;
+
+static inline cycles_t get_cycles(void)
+{
+	cycles_t n;
+
+	__asm__ __volatile__ (
+		"rdtime %0"
+		: "=r" (n));
+	return n;
+}
+
+#define ARCH_HAS_READ_CURRENT_TIMER
+
+static inline int read_current_timer(unsigned long *timer_val)
+{
+	*timer_val = get_cycles();
+	return 0;
+}
+
+#endif
+
+#endif /* _ASM_RISCV_TIMEX_H */
diff --git a/arch/riscv/include/asm/tlb.h b/arch/riscv/include/asm/tlb.h
new file mode 100644
index 000000000000..c229509288ea
--- /dev/null
+++ b/arch/riscv/include/asm/tlb.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_TLB_H
+#define _ASM_RISCV_TLB_H
+
+#include <asm-generic/tlb.h>
+
+static inline void tlb_flush(struct mmu_gather *tlb)
+{
+	flush_tlb_mm(tlb->mm);
+}
+
+#endif /* _ASM_RISCV_TLB_H */
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
new file mode 100644
index 000000000000..5ee4ae370b5e
--- /dev/null
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -0,0 +1,64 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_TLBFLUSH_H
+#define _ASM_RISCV_TLBFLUSH_H
+
+#ifdef CONFIG_MMU
+
+/* Flush entire local TLB */
+static inline void local_flush_tlb_all(void)
+{
+	__asm__ __volatile__ ("sfence.vma" : : : "memory");
+}
+
+/* Flush one page from local TLB */
+static inline void local_flush_tlb_page(unsigned long addr)
+{
+	__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
+}
+
+#ifndef CONFIG_SMP
+
+#define flush_tlb_all() local_flush_tlb_all()
+#define flush_tlb_page(vma, addr) local_flush_tlb_page(addr)
+#define flush_tlb_range(vma, start, end) local_flush_tlb_all()
+
+#else /* CONFIG_SMP */
+
+#include <asm/sbi.h>
+
+#define flush_tlb_all() sbi_remote_sfence_vma(0, 0, -1)
+#define flush_tlb_page(vma, addr) flush_tlb_range(vma, addr, 0)
+#define flush_tlb_range(vma, start, end) \
+	sbi_remote_sfence_vma(0, start, (end) - (start))
+
+#endif /* CONFIG_SMP */
+
+/* Flush the TLB entries of the specified mm context */
+static inline void flush_tlb_mm(struct mm_struct *mm)
+{
+	flush_tlb_all();
+}
+
+/* Flush a range of kernel pages */
+static inline void flush_tlb_kernel_range(unsigned long start,
+	unsigned long end)
+{
+	flush_tlb_all();
+}
+
+#endif /* CONFIG_MMU */
+
+#endif /* _ASM_RISCV_TLBFLUSH_H */
diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
new file mode 100644
index 000000000000..f3ece74219d7
--- /dev/null
+++ b/arch/riscv/include/asm/uaccess.h
@@ -0,0 +1,440 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ *
+ * This file was copied from include/asm-generic/uaccess.h
+ */
+
+#ifndef _ASM_RISCV_UACCESS_H
+#define _ASM_RISCV_UACCESS_H
+
+/*
+ * User space memory access functions
+ */
+#include <linux/errno.h>
+#include <linux/compiler.h>
+#include <linux/thread_info.h>
+#include <asm/byteorder.h>
+#include <asm/asm.h>
+
+#ifdef CONFIG_RV_PUM
+#define __enable_user_access()							\
+	__asm__ __volatile__ ("csrs sstatus, %0" : : "r" (SR_SUM) : "memory")
+#define __disable_user_access()							\
+	__asm__ __volatile__ ("csrc sstatus, %0" : : "r" (SR_SUM) : "memory")
+#else
+#define __enable_user_access()
+#define __disable_user_access()
+#endif
+
+/*
+ * The fs value determines whether argument validity checking should be
+ * performed or not.  If get_fs() == USER_DS, checking is performed; with
+ * get_fs() == KERNEL_DS, checking is bypassed.
+ *
+ * For historical reasons, these macros are grossly misnamed.
+ */
+
+#define KERNEL_DS	(~0UL)
+#define USER_DS		(TASK_SIZE)
+
+#define get_ds()	(KERNEL_DS)
+#define get_fs()	(current_thread_info()->addr_limit)
+
+static inline void set_fs(mm_segment_t fs)
+{
+	current_thread_info()->addr_limit = fs;
+}
+
+#define segment_eq(a, b) ((a) == (b))
+
+#define user_addr_max()	(get_fs())
+
+
+#define VERIFY_READ	0
+#define VERIFY_WRITE	1
+
+/**
+ * access_ok: - Checks if a user space pointer is valid
+ * @type: Type of access: %VERIFY_READ or %VERIFY_WRITE.  Note that
+ *        %VERIFY_WRITE is a superset of %VERIFY_READ - if it is safe
+ *        to write to a block, it is always safe to read from it.
+ * @addr: User space pointer to start of block to check
+ * @size: Size of block to check
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * Checks if a pointer to a block of memory in user space is valid.
+ *
+ * Returns true (nonzero) if the memory block may be valid, false (zero)
+ * if it is definitely invalid.
+ *
+ * Note that, depending on architecture, this function probably just
+ * checks that the pointer is in the user space range - after calling
+ * this function, memory access functions may still return -EFAULT.
+ */
+#define access_ok(type, addr, size) ({					\
+	__chk_user_ptr(addr);						\
+	likely(__access_ok((unsigned long __force)(addr), (size)));	\
+})
+
+/* Ensure that the range [addr, addr+size) is within the process's
+ * address space
+ */
+static inline int __access_ok(unsigned long addr, unsigned long size)
+{
+	const mm_segment_t fs = get_fs();
+
+	return (size <= fs) && (addr <= (fs - size));
+}
+
+/*
+ * The exception table consists of pairs of addresses: the first is the
+ * address of an instruction that is allowed to fault, and the second is
+ * the address at which the program should continue.  No registers are
+ * modified, so it is entirely up to the continuation code to figure out
+ * what to do.
+ *
+ * All the routines below use bits of fixup code that are out of line
+ * with the main instruction path.  This means when everything is well,
+ * we don't even have to jump over them.  Further, they do not intrude
+ * on our cache or tlb entries.
+ */
+
+struct exception_table_entry {
+	unsigned long insn, fixup;
+};
+
+extern int fixup_exception(struct pt_regs *state);
+
+#if defined(__LITTLE_ENDIAN)
+#define __MSW	1
+#define __LSW	0
+#elif defined(__BIG_ENDIAN)
+#define __MSW	0
+#define	__LSW	1
+#else
+#error "Unknown endianness"
+#endif
+
+/*
+ * The "__xxx" versions of the user access functions do not verify the address
+ * space - it must have been done previously with a separate "access_ok()"
+ * call.
+ */
+
+#ifdef CONFIG_MMU
+#define __get_user_asm(insn, x, ptr, err)			\
+do {								\
+	uintptr_t __tmp;					\
+	__typeof__(x) __x;					\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	" insn " %1, %3\n"			\
+		"2:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"3:\n"						\
+		"	li %0, %4\n"				\
+		"	li %1, 0\n"				\
+		"	jump 2b, %2\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " RISCV_SZPTR "\n"			\
+		"	" RISCV_PTR " 1b, 3b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=&r" (__x), "=r" (__tmp)		\
+		: "m" (*(ptr)), "i" (-EFAULT));			\
+	__disable_user_access();				\
+	(x) = __x;						\
+} while (0)
+#endif /* CONFIG_MMU */
+
+#ifdef CONFIG_64BIT
+#define __get_user_8(x, ptr, err) \
+	__get_user_asm("ld", x, ptr, err)
+#else /* !CONFIG_64BIT */
+#ifdef CONFIG_MMU
+#define __get_user_8(x, ptr, err)				\
+do {								\
+	u32 __user *__ptr = (u32 __user *)(ptr);		\
+	u32 __lo, __hi;						\
+	uintptr_t __tmp;					\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	lw %1, %4\n"				\
+		"2:\n"						\
+		"	lw %2, %5\n"				\
+		"3:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"4:\n"						\
+		"	li %0, %6\n"				\
+		"	li %1, 0\n"				\
+		"	li %2, 0\n"				\
+		"	jump 3b, %3\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " RISCV_SZPTR "\n"			\
+		"	" RISCV_PTR " 1b, 4b\n"			\
+		"	" RISCV_PTR " 2b, 4b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=&r" (__lo), "=r" (__hi),	\
+			"=r" (__tmp)				\
+		: "m" (__ptr[__LSW]), "m" (__ptr[__MSW]),	\
+			"i" (-EFAULT));				\
+	__disable_user_access();				\
+	(x) = (__typeof__(x))((__typeof__((x)-(x)))(		\
+		(((u64)__hi << 32) | __lo)));			\
+} while (0)
+#endif /* CONFIG_MMU */
+#endif /* CONFIG_64BIT */
+
+
+/**
+ * __get_user: - Get a simple variable from user space, with less checking.
+ * @x:   Variable to store result.
+ * @ptr: Source address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple variable from user space to kernel
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and the result of
+ * dereferencing @ptr must be assignable to @x without a cast.
+ *
+ * Caller must check the pointer with access_ok() before calling this
+ * function.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ * On error, the variable @x is set to zero.
+ */
+#define __get_user(x, ptr)					\
+({								\
+	register long __gu_err = 0;				\
+	const __typeof__(*(ptr)) __user *__gu_ptr = (ptr);	\
+	__chk_user_ptr(__gu_ptr);				\
+	switch (sizeof(*__gu_ptr)) {				\
+	case 1:							\
+		__get_user_asm("lb", (x), __gu_ptr, __gu_err);	\
+		break;						\
+	case 2:							\
+		__get_user_asm("lh", (x), __gu_ptr, __gu_err);	\
+		break;						\
+	case 4:							\
+		__get_user_asm("lw", (x), __gu_ptr, __gu_err);	\
+		break;						\
+	case 8:							\
+		__get_user_8((x), __gu_ptr, __gu_err);	\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+	}							\
+	__gu_err;						\
+})
+
+/**
+ * get_user: - Get a simple variable from user space.
+ * @x:   Variable to store result.
+ * @ptr: Source address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple variable from user space to kernel
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and the result of
+ * dereferencing @ptr must be assignable to @x without a cast.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ * On error, the variable @x is set to zero.
+ */
+#define get_user(x, ptr)					\
+({								\
+	const __typeof__(*(ptr)) __user *__p = (ptr);		\
+	might_fault();						\
+	access_ok(VERIFY_READ, __p, sizeof(*__p)) ?		\
+		__get_user((x), __p) :				\
+		((x) = 0, -EFAULT);				\
+})
+
+
+#ifdef CONFIG_MMU
+#define __put_user_asm(insn, x, ptr, err)			\
+do {								\
+	uintptr_t __tmp;					\
+	__typeof__(*(ptr)) __x = x;				\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	" insn " %z3, %2\n"			\
+		"2:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"3:\n"						\
+		"	li %0, %4\n"				\
+		"	jump 2b, %1\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " RISCV_SZPTR "\n"			\
+		"	" RISCV_PTR " 1b, 3b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=r" (__tmp), "=m" (*(ptr))	\
+		: "rJ" (__x), "i" (-EFAULT));			\
+	__disable_user_access();				\
+} while (0)
+#endif /* CONFIG_MMU */
+
+
+#ifdef CONFIG_64BIT
+#define __put_user_8(x, ptr, err) \
+	__put_user_asm("sd", x, ptr, err)
+#else /* !CONFIG_64BIT */
+#ifdef CONFIG_MMU
+#define __put_user_8(x, ptr, err)				\
+do {								\
+	u32 __user *__ptr = (u32 __user *)(ptr);		\
+	u64 __x = (__typeof__((x)-(x)))(x);			\
+	uintptr_t __tmp;					\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	sw %z4, %2\n"				\
+		"2:\n"						\
+		"	sw %z5, %3\n"				\
+		"3:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"4:\n"						\
+		"	li %0, %6\n"				\
+		"	jump 2b, %1\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " RISCV_SZPTR "\n"			\
+		"	" RISCV_PTR " 1b, 4b\n"			\
+		"	" RISCV_PTR " 2b, 4b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=r" (__tmp),			\
+			"=m" (__ptr[__LSW]),			\
+			"=m" (__ptr[__MSW])			\
+		: "rJ" (__x), "rJ" (__x >> 32), "i" (-EFAULT));	\
+	__disable_user_access();				\
+} while (0)
+#endif /* CONFIG_MMU */
+#endif /* CONFIG_64BIT */
+
+
+/**
+ * __put_user: - Write a simple value into user space, with less checking.
+ * @x:   Value to copy to user space.
+ * @ptr: Destination address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple value from kernel space to user
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and @x must be assignable
+ * to the result of dereferencing @ptr.
+ *
+ * Caller must check the pointer with access_ok() before calling this
+ * function.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ */
+#define __put_user(x, ptr)					\
+({								\
+	register long __pu_err = 0;				\
+	__typeof__(*(ptr)) __user *__gu_ptr = (ptr);		\
+	__chk_user_ptr(__gu_ptr);				\
+	switch (sizeof(*__gu_ptr)) {				\
+	case 1:							\
+		__put_user_asm("sb", (x), __gu_ptr, __pu_err);	\
+		break;						\
+	case 2:							\
+		__put_user_asm("sh", (x), __gu_ptr, __pu_err);	\
+		break;						\
+	case 4:							\
+		__put_user_asm("sw", (x), __gu_ptr, __pu_err);	\
+		break;						\
+	case 8:							\
+		__put_user_8((x), __gu_ptr, __pu_err);	\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+	}							\
+	__pu_err;						\
+})
+
+/**
+ * put_user: - Write a simple value into user space.
+ * @x:   Value to copy to user space.
+ * @ptr: Destination address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple value from kernel space to user
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and @x must be assignable
+ * to the result of dereferencing @ptr.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ */
+#define put_user(x, ptr)					\
+({								\
+	__typeof__(*(ptr)) __user *__p = (ptr);			\
+	might_fault();						\
+	access_ok(VERIFY_WRITE, __p, sizeof(*__p)) ?		\
+		__put_user((x), __p) :				\
+		-EFAULT;					\
+})
+
+
+extern unsigned long __must_check __copy_user(void __user *to,
+	const void __user *from, unsigned long n);
+
+static inline unsigned long
+raw_copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+	return __copy_user(to, from, n);
+}
+
+static inline unsigned long
+raw_copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+	return __copy_user(to, from, n);
+}
+
+extern long strncpy_from_user(char *dest, const char __user *src, long count);
+
+extern long __must_check strlen_user(const char __user *str);
+extern long __must_check strnlen_user(const char __user *str, long n);
+
+extern
+unsigned long __must_check __clear_user(void __user *addr, unsigned long n);
+
+static inline
+unsigned long __must_check clear_user(void __user *to, unsigned long n)
+{
+	might_fault();
+	return access_ok(VERIFY_WRITE, to, n) ?
+		__clear_user(to, n) : n;
+}
+
+#endif /* _ASM_RISCV_UACCESS_H */
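(One detail of __access_ok() above is worth spelling out: it tests "size <= fs && addr <= fs - size" rather than the naive "addr + size <= fs", because the latter can wrap around for a hostile addr/size pair.  A small stand-alone check -- constants chosen arbitrarily for illustration, not part of the patch -- shows the difference:

  #include <stdio.h>

  /* Same shape as __access_ok(): [addr, addr+size) must fit below fs. */
  static int range_ok(unsigned long addr, unsigned long size, unsigned long fs)
  {
          return (size <= fs) && (addr <= (fs - size));
  }

  int main(void)
  {
          unsigned long fs = 0x40000000UL;  /* pretend user limit */

          /* A wrapping request: addr + size overflows to a small value,
           * so the naive test would wrongly accept it.
           */
          unsigned long addr = 0x3fffffffUL, size = ~0UL;

          printf("naive check:        %d\n", addr + size <= fs);   /* 1, wrong */
          printf("__access_ok-style:  %d\n", range_ok(addr, size, fs)); /* 0 */
          return 0;
  }
)
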
diff --git a/arch/riscv/include/asm/unistd.h b/arch/riscv/include/asm/unistd.h
new file mode 100644
index 000000000000..9f250ed007cd
--- /dev/null
+++ b/arch/riscv/include/asm/unistd.h
@@ -0,0 +1,16 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#define __ARCH_HAVE_MMU
+#define __ARCH_WANT_SYS_CLONE
+#include <uapi/asm/unistd.h>
diff --git a/arch/riscv/include/asm/vdso.h b/arch/riscv/include/asm/vdso.h
new file mode 100644
index 000000000000..95768a3810a7
--- /dev/null
+++ b/arch/riscv/include/asm/vdso.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright (C) 2012 ARM Limited
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _ASM_RISCV_VDSO_H
+#define _ASM_RISCV_VDSO_H
+
+#include <linux/types.h>
+
+struct vdso_data {
+};
+
+#define VDSO_SYMBOL(base, name)					\
+({								\
+	extern const char __vdso_##name[];			\
+	(void __user *)((unsigned long)(base) + __vdso_##name);	\
+})
+
+#endif /* _ASM_RISCV_VDSO_H */
diff --git a/arch/riscv/include/asm/word-at-a-time.h b/arch/riscv/include/asm/word-at-a-time.h
new file mode 100644
index 000000000000..aa6238791d3e
--- /dev/null
+++ b/arch/riscv/include/asm/word-at-a-time.h
@@ -0,0 +1,55 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ * Derived from arch/x86/include/asm/word-at-a-time.h
+ */
+
+#ifndef _ASM_RISCV_WORD_AT_A_TIME_H
+#define _ASM_RISCV_WORD_AT_A_TIME_H
+
+
+#include <linux/kernel.h>
+
+struct word_at_a_time {
+	const unsigned long one_bits, high_bits;
+};
+
+#define WORD_AT_A_TIME_CONSTANTS { REPEAT_BYTE(0x01), REPEAT_BYTE(0x80) }
+
+static inline unsigned long has_zero(unsigned long val,
+	unsigned long *bits, const struct word_at_a_time *c)
+{
+	unsigned long mask = ((val - c->one_bits) & ~val) & c->high_bits;
+	*bits = mask;
+	return mask;
+}
+
+static inline unsigned long prep_zero_mask(unsigned long val,
+	unsigned long bits, const struct word_at_a_time *c)
+{
+	return bits;
+}
+
+static inline unsigned long create_zero_mask(unsigned long bits)
+{
+	bits = (bits - 1) & ~bits;
+	return bits >> 7;
+}
+
+static inline unsigned long find_zero(unsigned long mask)
+{
+	return fls64(mask) >> 3;
+}
+
+/* The mask we created is directly usable as a bytemask */
+#define zero_bytemask(mask) (mask)
+
+#endif /* _ASM_RISCV_WORD_AT_A_TIME_H */
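
The helpers above implement the classic zero-byte test: (x - 0x01...01) & ~x & 0x80...80 sets the high bit of every zero byte, and the follow-up masking turns the lowest such bit into a byte index (little-endian, so byte 0 is the least significant). A standalone userspace sketch of the same arithmetic, independent of the kernel headers:

	#include <stdio.h>
	#include <stdint.h>

	#define ONES  0x0101010101010101ULL
	#define HIGHS 0x8080808080808080ULL

	int main(void)
	{
		uint64_t word = 0x1122003344556677ULL;		/* byte 5 is zero */
		uint64_t mask = (word - ONES) & ~word & HIGHS;	/* has_zero() */

		if (mask) {
			uint64_t bits = ((mask - 1) & ~mask) >> 7;	/* create_zero_mask() */
			int fls = bits ? 64 - __builtin_clzll(bits) : 0;

			/* find_zero(): fls64(mask) >> 3 -- prints 5 for this word */
			printf("first zero byte at index %d\n", fls >> 3);
		}
		return 0;
	}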
diff --git a/arch/riscv/include/uapi/asm/Kbuild b/arch/riscv/include/uapi/asm/Kbuild
new file mode 100644
index 000000000000..b15bf6bc0e94
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/Kbuild
@@ -0,0 +1,2 @@
+# UAPI Header export list
+include include/uapi/asm-generic/Kbuild.asm
diff --git a/arch/riscv/include/uapi/asm/auxvec.h b/arch/riscv/include/uapi/asm/auxvec.h
new file mode 100644
index 000000000000..1376515547cd
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/auxvec.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _UAPI_ASM_RISCV_AUXVEC_H
+#define _UAPI_ASM_RISCV_AUXVEC_H
+
+/* vDSO location */
+#define AT_SYSINFO_EHDR 33
+
+#endif /* _UAPI_ASM_RISCV_AUXVEC_H */
diff --git a/arch/riscv/include/uapi/asm/bitsperlong.h b/arch/riscv/include/uapi/asm/bitsperlong.h
new file mode 100644
index 000000000000..0b3cb52fd29d
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/bitsperlong.h
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _UAPI_ASM_RISCV_BITSPERLONG_H
+#define _UAPI_ASM_RISCV_BITSPERLONG_H
+
+#define __BITS_PER_LONG (__SIZEOF_POINTER__ * 8)
+
+#include <asm-generic/bitsperlong.h>
+
+#endif /* _UAPI_ASM_RISCV_BITSPERLONG_H */
diff --git a/arch/riscv/include/uapi/asm/byteorder.h b/arch/riscv/include/uapi/asm/byteorder.h
new file mode 100644
index 000000000000..4ca38af2cd32
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/byteorder.h
@@ -0,0 +1,23 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _UAPI_ASM_RISCV_BYTEORDER_H
+#define _UAPI_ASM_RISCV_BYTEORDER_H
+
+#include <linux/byteorder/little_endian.h>
+
+#endif /* _UAPI_ASM_RISCV_BYTEORDER_H */
diff --git a/arch/riscv/include/uapi/asm/elf.h b/arch/riscv/include/uapi/asm/elf.h
new file mode 100644
index 000000000000..e438edd97589
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/elf.h
@@ -0,0 +1,83 @@
+/*
+ * Copyright (C) 2003 Matjaz Breskvar <phoenix@bsemi.com>
+ * Copyright (C) 2010-2011 Jonas Bonn <jonas@southpole.se>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef _UAPI_ASM_ELF_H
+#define _UAPI_ASM_ELF_H
+
+#include <asm/ptrace.h>
+
+/* ELF register definitions */
+typedef unsigned long elf_greg_t;
+typedef struct user_regs_struct elf_gregset_t;
+#define ELF_NGREG (sizeof(elf_gregset_t) / sizeof(elf_greg_t))
+
+typedef struct user_fpregs_struct elf_fpregset_t;
+
+#define ELF_RISCV_R_SYM(r_info) ((r_info) >> 32)
+#define ELF_RISCV_R_TYPE(r_info) ((r_info) & 0xffffffff)
+
+/*
+ * RISC-V relocation types
+ */
+
+/* Relocation types used by the dynamic linker */
+#define R_RISCV_NONE		0
+#define R_RISCV_32		1
+#define R_RISCV_64		2
+#define R_RISCV_RELATIVE	3
+#define R_RISCV_COPY		4
+#define R_RISCV_JUMP_SLOT	5
+#define R_RISCV_TLS_DTPMOD32	6
+#define R_RISCV_TLS_DTPMOD64	7
+#define R_RISCV_TLS_DTPREL32	8
+#define R_RISCV_TLS_DTPREL64	9
+#define R_RISCV_TLS_TPREL32	10
+#define R_RISCV_TLS_TPREL64	11
+
+/* Relocation types not used by the dynamic linker */
+#define R_RISCV_BRANCH		16
+#define R_RISCV_JAL		17
+#define R_RISCV_CALL		18
+#define R_RISCV_CALL_PLT	19
+#define R_RISCV_GOT_HI20	20
+#define R_RISCV_TLS_GOT_HI20	21
+#define R_RISCV_TLS_GD_HI20	22
+#define R_RISCV_PCREL_HI20	23
+#define R_RISCV_PCREL_LO12_I	24
+#define R_RISCV_PCREL_LO12_S	25
+#define R_RISCV_HI20		26
+#define R_RISCV_LO12_I		27
+#define R_RISCV_LO12_S		28
+#define R_RISCV_TPREL_HI20	29
+#define R_RISCV_TPREL_LO12_I	30
+#define R_RISCV_TPREL_LO12_S	31
+#define R_RISCV_TPREL_ADD	32
+#define R_RISCV_ADD8		33
+#define R_RISCV_ADD16		34
+#define R_RISCV_ADD32		35
+#define R_RISCV_ADD64		36
+#define R_RISCV_SUB8		37
+#define R_RISCV_SUB16		38
+#define R_RISCV_SUB32		39
+#define R_RISCV_SUB64		40
+#define R_RISCV_GNU_VTINHERIT	41
+#define R_RISCV_GNU_VTENTRY	42
+#define R_RISCV_ALIGN		43
+#define R_RISCV_RVC_BRANCH	44
+#define R_RISCV_RVC_JUMP	45
+#define R_RISCV_LUI			46
+#define R_RISCV_GPREL_I		47
+#define R_RISCV_GPREL_S		48
+#define R_RISCV_TPREL_I		49
+#define R_RISCV_TPREL_S		50
+#define R_RISCV_RELAX		51
+
+#endif /* _UAPI_ASM_ELF_H */
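
For reference, ELF_RISCV_R_SYM()/ELF_RISCV_R_TYPE() follow the standard ELF64 r_info packing: symbol index in the upper 32 bits, relocation type in the lower 32. A small userspace sketch with arbitrary values:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t r_info = ((uint64_t)12 << 32) | 3;	/* sym 12, R_RISCV_RELATIVE */

		printf("sym=%u type=%u\n",
		       (uint32_t)(r_info >> 32),		/* ELF_RISCV_R_SYM()  */
		       (uint32_t)(r_info & 0xffffffff));	/* ELF_RISCV_R_TYPE() */
		return 0;
	}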
diff --git a/arch/riscv/include/uapi/asm/ptrace.h b/arch/riscv/include/uapi/asm/ptrace.h
new file mode 100644
index 000000000000..b95ee16e6896
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/ptrace.h
@@ -0,0 +1,68 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _UAPI_ASM_RISCV_PTRACE_H
+#define _UAPI_ASM_RISCV_PTRACE_H
+
+#ifndef __ASSEMBLY__
+
+#include <linux/types.h>
+
+/* User-mode register state for core dumps, ptrace, sigcontext
+ *
+ * This decouples struct pt_regs from the userspace ABI.
+ * struct user_regs_struct must form a prefix of struct pt_regs.
+ */
+struct user_regs_struct {
+	unsigned long pc;
+	unsigned long ra;
+	unsigned long sp;
+	unsigned long gp;
+	unsigned long tp;
+	unsigned long t0;
+	unsigned long t1;
+	unsigned long t2;
+	unsigned long s0;
+	unsigned long s1;
+	unsigned long a0;
+	unsigned long a1;
+	unsigned long a2;
+	unsigned long a3;
+	unsigned long a4;
+	unsigned long a5;
+	unsigned long a6;
+	unsigned long a7;
+	unsigned long s2;
+	unsigned long s3;
+	unsigned long s4;
+	unsigned long s5;
+	unsigned long s6;
+	unsigned long s7;
+	unsigned long s8;
+	unsigned long s9;
+	unsigned long s10;
+	unsigned long s11;
+	unsigned long t3;
+	unsigned long t4;
+	unsigned long t5;
+	unsigned long t6;
+};
+
+struct user_fpregs_struct {
+	__u64 f[32];
+	__u32 fcsr;
+};
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _UAPI_ASM_RISCV_PTRACE_H */
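
struct user_regs_struct is what a debugger sees. Assuming the port exposes it as the NT_PRSTATUS regset (the usual arrangement, and what the kernel side of this series does), a tracer would read it roughly like this; the helper is illustrative only and assumes a riscv toolchain's <asm/ptrace.h>:

	#include <stdio.h>
	#include <sys/ptrace.h>
	#include <sys/types.h>
	#include <sys/uio.h>
	#include <elf.h>
	#include <asm/ptrace.h>

	static int dump_pc_sp(pid_t pid)
	{
		struct user_regs_struct regs;
		struct iovec iov = { .iov_base = &regs, .iov_len = sizeof(regs) };

		/* Fill user_regs_struct from a stopped tracee. */
		if (ptrace(PTRACE_GETREGSET, pid, NT_PRSTATUS, &iov) < 0)
			return -1;
		printf("pc=%#lx sp=%#lx\n", regs.pc, regs.sp);
		return 0;
	}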
diff --git a/arch/riscv/include/uapi/asm/sigcontext.h b/arch/riscv/include/uapi/asm/sigcontext.h
new file mode 100644
index 000000000000..dc882d598834
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/sigcontext.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _UAPI_ASM_RISCV_SIGCONTEXT_H
+#define _UAPI_ASM_RISCV_SIGCONTEXT_H
+
+#include <asm/ptrace.h>
+
+/* Signal context structure
+ *
+ * This contains the context saved before a signal handler is invoked;
+ * it is restored by sys_sigreturn / sys_rt_sigreturn.
+ */
+struct sigcontext {
+	struct user_regs_struct sc_regs;
+	struct user_fpregs_struct sc_fpregs;
+};
+
+#endif /* _UAPI_ASM_RISCV_SIGCONTEXT_H */
diff --git a/arch/riscv/include/uapi/asm/siginfo.h b/arch/riscv/include/uapi/asm/siginfo.h
new file mode 100644
index 000000000000..f96849aac662
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/siginfo.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2016 SiFive, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_SIGINFO_H
+#define __ASM_SIGINFO_H
+
+#define __ARCH_SI_PREAMBLE_SIZE	(__SIZEOF_POINTER__ == 4 ? 12 : 16)
+
+#include <asm-generic/siginfo.h>
+
+#endif
diff --git a/arch/riscv/include/uapi/asm/unistd.h b/arch/riscv/include/uapi/asm/unistd.h
new file mode 100644
index 000000000000..aedba7d32370
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/unistd.h
@@ -0,0 +1,22 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <asm-generic/unistd.h>
+
+#define __NR_sysriscv  __NR_arch_specific_syscall
+#ifndef __riscv_atomic
+__SYSCALL(__NR_sysriscv, sys_sysriscv)
+#endif
+
+#define RISCV_ATOMIC_CMPXCHG    1
+#define RISCV_ATOMIC_CMPXCHG64  2
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 13/17] RISC-V: Add include subdirectory
@ 2017-06-06 23:00     ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 23:00 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Palmer Dabbelt

This patch adds the include files for the RISC-V port.  These are mostly
based on the score port, but there are a lot of arm64-based files as
well.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/Kbuild             |  61 +++++
 arch/riscv/include/asm/asm-offsets.h      |   1 +
 arch/riscv/include/asm/asm.h              |  76 ++++++
 arch/riscv/include/asm/atomic.h           | 349 ++++++++++++++++++++++++
 arch/riscv/include/asm/atomic64.h         | 355 ++++++++++++++++++++++++
 arch/riscv/include/asm/barrier.h          |  41 +++
 arch/riscv/include/asm/bitops.h           | 228 ++++++++++++++++
 arch/riscv/include/asm/bug.h              |  88 ++++++
 arch/riscv/include/asm/cache.h            |  22 ++
 arch/riscv/include/asm/cacheflush.h       |  39 +++
 arch/riscv/include/asm/cmpxchg.h          | 124 +++++++++
 arch/riscv/include/asm/compat.h           |  31 +++
 arch/riscv/include/asm/csr.h              | 125 +++++++++
 arch/riscv/include/asm/current.h          |  42 +++
 arch/riscv/include/asm/delay.h            |  28 ++
 arch/riscv/include/asm/device.h           |  27 ++
 arch/riscv/include/asm/dma-mapping.h      |  41 +++
 arch/riscv/include/asm/elf.h              |  85 ++++++
 arch/riscv/include/asm/io.h               | 152 +++++++++++
 arch/riscv/include/asm/irq.h              |  31 +++
 arch/riscv/include/asm/irqflags.h         |  63 +++++
 arch/riscv/include/asm/kprobes.h          |  22 ++
 arch/riscv/include/asm/linkage.h          |  20 ++
 arch/riscv/include/asm/mmu.h              |  26 ++
 arch/riscv/include/asm/mmu_context.h      |  69 +++++
 arch/riscv/include/asm/page.h             | 138 ++++++++++
 arch/riscv/include/asm/pci.h              |  50 ++++
 arch/riscv/include/asm/pgalloc.h          | 124 +++++++++
 arch/riscv/include/asm/pgtable-32.h       |  25 ++
 arch/riscv/include/asm/pgtable-64.h       |  84 ++++++
 arch/riscv/include/asm/pgtable-bits.h     |  48 ++++
 arch/riscv/include/asm/pgtable.h          | 427 +++++++++++++++++++++++++++++
 arch/riscv/include/asm/processor.h        | 102 +++++++
 arch/riscv/include/asm/ptrace.h           | 116 ++++++++
 arch/riscv/include/asm/sbi.h              | 100 +++++++
 arch/riscv/include/asm/smp.h              |  41 +++
 arch/riscv/include/asm/spinlock.h         | 155 +++++++++++
 arch/riscv/include/asm/spinlock_types.h   |  33 +++
 arch/riscv/include/asm/string.h           |  30 ++
 arch/riscv/include/asm/switch_to.h        |  69 +++++
 arch/riscv/include/asm/syscall.h          |  90 ++++++
 arch/riscv/include/asm/syscalls.h         |  25 ++
 arch/riscv/include/asm/thread_info.h      |  96 +++++++
 arch/riscv/include/asm/timex.h            |  43 +++
 arch/riscv/include/asm/tlb.h              |  24 ++
 arch/riscv/include/asm/tlbflush.h         |  64 +++++
 arch/riscv/include/asm/uaccess.h          | 440 ++++++++++++++++++++++++++++++
 arch/riscv/include/asm/unistd.h           |  16 ++
 arch/riscv/include/asm/vdso.h             |  32 +++
 arch/riscv/include/asm/word-at-a-time.h   |  55 ++++
 arch/riscv/include/uapi/asm/Kbuild        |   2 +
 arch/riscv/include/uapi/asm/auxvec.h      |  24 ++
 arch/riscv/include/uapi/asm/bitsperlong.h |  25 ++
 arch/riscv/include/uapi/asm/byteorder.h   |  23 ++
 arch/riscv/include/uapi/asm/elf.h         |  83 ++++++
 arch/riscv/include/uapi/asm/ptrace.h      |  68 +++++
 arch/riscv/include/uapi/asm/sigcontext.h  |  29 ++
 arch/riscv/include/uapi/asm/siginfo.h     |  24 ++
 arch/riscv/include/uapi/asm/unistd.h      |  22 ++
 59 files changed, 4873 insertions(+)
 create mode 100644 arch/riscv/include/asm/Kbuild
 create mode 100644 arch/riscv/include/asm/asm-offsets.h
 create mode 100644 arch/riscv/include/asm/asm.h
 create mode 100644 arch/riscv/include/asm/atomic.h
 create mode 100644 arch/riscv/include/asm/atomic64.h
 create mode 100644 arch/riscv/include/asm/barrier.h
 create mode 100644 arch/riscv/include/asm/bitops.h
 create mode 100644 arch/riscv/include/asm/bug.h
 create mode 100644 arch/riscv/include/asm/cache.h
 create mode 100644 arch/riscv/include/asm/cacheflush.h
 create mode 100644 arch/riscv/include/asm/cmpxchg.h
 create mode 100644 arch/riscv/include/asm/compat.h
 create mode 100644 arch/riscv/include/asm/csr.h
 create mode 100644 arch/riscv/include/asm/current.h
 create mode 100644 arch/riscv/include/asm/delay.h
 create mode 100644 arch/riscv/include/asm/device.h
 create mode 100644 arch/riscv/include/asm/dma-mapping.h
 create mode 100644 arch/riscv/include/asm/elf.h
 create mode 100644 arch/riscv/include/asm/io.h
 create mode 100644 arch/riscv/include/asm/irq.h
 create mode 100644 arch/riscv/include/asm/irqflags.h
 create mode 100644 arch/riscv/include/asm/kprobes.h
 create mode 100644 arch/riscv/include/asm/linkage.h
 create mode 100644 arch/riscv/include/asm/mmu.h
 create mode 100644 arch/riscv/include/asm/mmu_context.h
 create mode 100644 arch/riscv/include/asm/page.h
 create mode 100644 arch/riscv/include/asm/pci.h
 create mode 100644 arch/riscv/include/asm/pgalloc.h
 create mode 100644 arch/riscv/include/asm/pgtable-32.h
 create mode 100644 arch/riscv/include/asm/pgtable-64.h
 create mode 100644 arch/riscv/include/asm/pgtable-bits.h
 create mode 100644 arch/riscv/include/asm/pgtable.h
 create mode 100644 arch/riscv/include/asm/processor.h
 create mode 100644 arch/riscv/include/asm/ptrace.h
 create mode 100644 arch/riscv/include/asm/sbi.h
 create mode 100644 arch/riscv/include/asm/smp.h
 create mode 100644 arch/riscv/include/asm/spinlock.h
 create mode 100644 arch/riscv/include/asm/spinlock_types.h
 create mode 100644 arch/riscv/include/asm/string.h
 create mode 100644 arch/riscv/include/asm/switch_to.h
 create mode 100644 arch/riscv/include/asm/syscall.h
 create mode 100644 arch/riscv/include/asm/syscalls.h
 create mode 100644 arch/riscv/include/asm/thread_info.h
 create mode 100644 arch/riscv/include/asm/timex.h
 create mode 100644 arch/riscv/include/asm/tlb.h
 create mode 100644 arch/riscv/include/asm/tlbflush.h
 create mode 100644 arch/riscv/include/asm/uaccess.h
 create mode 100644 arch/riscv/include/asm/unistd.h
 create mode 100644 arch/riscv/include/asm/vdso.h
 create mode 100644 arch/riscv/include/asm/word-at-a-time.h
 create mode 100644 arch/riscv/include/uapi/asm/Kbuild
 create mode 100644 arch/riscv/include/uapi/asm/auxvec.h
 create mode 100644 arch/riscv/include/uapi/asm/bitsperlong.h
 create mode 100644 arch/riscv/include/uapi/asm/byteorder.h
 create mode 100644 arch/riscv/include/uapi/asm/elf.h
 create mode 100644 arch/riscv/include/uapi/asm/ptrace.h
 create mode 100644 arch/riscv/include/uapi/asm/sigcontext.h
 create mode 100644 arch/riscv/include/uapi/asm/siginfo.h
 create mode 100644 arch/riscv/include/uapi/asm/unistd.h

diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
new file mode 100644
index 000000000000..710397395981
--- /dev/null
+++ b/arch/riscv/include/asm/Kbuild
@@ -0,0 +1,61 @@
+generic-y += bugs.h
+generic-y += cacheflush.h
+generic-y += checksum.h
+generic-y += clkdev.h
+generic-y += cputime.h
+generic-y += div64.h
+generic-y += dma.h
+generic-y += dma-contiguous.h
+generic-y += emergency-restart.h
+generic-y += errno.h
+generic-y += exec.h
+generic-y += fb.h
+generic-y += fcntl.h
+generic-y += ftrace.h
+generic-y += futex.h
+generic-y += hardirq.h
+generic-y += hash.h
+generic-y += hw_irq.h
+generic-y += ioctl.h
+generic-y += ioctls.h
+generic-y += ipcbuf.h
+generic-y += irq_regs.h
+generic-y += irq_work.h
+generic-y += kdebug.h
+generic-y += kmap_types.h
+generic-y += kvm_para.h
+generic-y += local.h
+generic-y += mm-arch-hooks.h
+generic-y += mman.h
+generic-y += module.h
+generic-y += msgbuf.h
+generic-y += mutex.h
+generic-y += param.h
+generic-y += percpu.h
+generic-y += poll.h
+generic-y += posix_types.h
+generic-y += preempt.h
+generic-y += resource.h
+generic-y += scatterlist.h
+generic-y += sections.h
+generic-y += sembuf.h
+generic-y += setup.h
+generic-y += shmbuf.h
+generic-y += shmparam.h
+generic-y += signal.h
+generic-y += socket.h
+generic-y += sockios.h
+generic-y += stat.h
+generic-y += statfs.h
+generic-y += swab.h
+generic-y += termbits.h
+generic-y += termios.h
+generic-y += topology.h
+generic-y += trace_clock.h
+generic-y += types.h
+generic-y += ucontext.h
+generic-y += unaligned.h
+generic-y += user.h
+generic-y += vga.h
+generic-y += vmlinux.lds.h
+generic-y += xor.h
diff --git a/arch/riscv/include/asm/asm-offsets.h b/arch/riscv/include/asm/asm-offsets.h
new file mode 100644
index 000000000000..d370ee36a182
--- /dev/null
+++ b/arch/riscv/include/asm/asm-offsets.h
@@ -0,0 +1 @@
+#include <generated/asm-offsets.h>
diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
new file mode 100644
index 000000000000..6cbbb6a68d76
--- /dev/null
+++ b/arch/riscv/include/asm/asm.h
@@ -0,0 +1,76 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_ASM_H
+#define _ASM_RISCV_ASM_H
+
+#ifdef __ASSEMBLY__
+#define __ASM_STR(x)	x
+#else
+#define __ASM_STR(x)	#x
+#endif
+
+#if __riscv_xlen == 64
+#define __REG_SEL(a, b)	__ASM_STR(a)
+#elif __riscv_xlen == 32
+#define __REG_SEL(a, b)	__ASM_STR(b)
+#else
+#error "Unexpected __riscv_xlen"
+#endif
+
+#define REG_L		__REG_SEL(ld, lw)
+#define REG_S		__REG_SEL(sd, sw)
+#define SZREG		__REG_SEL(8, 4)
+#define LGREG		__REG_SEL(3, 2)
+
+#if __SIZEOF_POINTER__ == 8
+#ifdef __ASSEMBLY__
+#define RISCV_PTR		.dword
+#define RISCV_SZPTR		8
+#define RISCV_LGPTR		3
+#else
+#define RISCV_PTR		".dword"
+#define RISCV_SZPTR		"8"
+#define RISCV_LGPTR		"3"
+#endif
+#elif __SIZEOF_POINTER__ == 4
+#ifdef __ASSEMBLY__
+#define RISCV_PTR		.word
+#define RISCV_SZPTR		4
+#define RISCV_LGPTR		2
+#else
+#define RISCV_PTR		".word"
+#define RISCV_SZPTR		"4"
+#define RISCV_LGPTR		"2"
+#endif
+#else
+#error "Unexpected __SIZEOF_POINTER__"
+#endif
+
+#if (__SIZEOF_INT__ == 4)
+#define INT		__ASM_STR(.word)
+#define SZINT		__ASM_STR(4)
+#define LGINT		__ASM_STR(2)
+#else
+#error "Unexpected __SIZEOF_INT__"
+#endif
+
+#if (__SIZEOF_SHORT__ == 2)
+#define SHORT		__ASM_STR(.half)
+#define SZSHORT		__ASM_STR(2)
+#define LGSHORT		__ASM_STR(1)
+#else
+#error "Unexpected __SIZEOF_SHORT__"
+#endif
+
+#endif /* _ASM_RISCV_ASM_H */
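
The REG_L/REG_S/SZREG selectors exist so one source file can assemble for both RV32 and RV64. Outside of __ASSEMBLY__ they expand to strings, so they can also be pasted into inline asm; a minimal sketch:

	#include <asm/asm.h>

	/* Assembles to "ld" on RV64 and "lw" on RV32. */
	static inline unsigned long load_native_word(const unsigned long *p)
	{
		unsigned long v;

		__asm__ (REG_L " %0, 0(%1)" : "=r" (v) : "r" (p) : "memory");
		return v;
	}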
diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
new file mode 100644
index 000000000000..17ab22036e1a
--- /dev/null
+++ b/arch/riscv/include/asm/atomic.h
@@ -0,0 +1,349 @@
+/*
+ * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+
+#ifndef _ASM_RISCV_ATOMIC_H
+#define _ASM_RISCV_ATOMIC_H
+
+#ifdef CONFIG_ISA_A
+
+#include <asm/cmpxchg.h>
+#include <asm/barrier.h>
+
+#define ATOMIC_INIT(i)	{ (i) }
+
+/**
+ * atomic_read - read atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically reads the value of @v.
+ */
+static inline int atomic_read(const atomic_t *v)
+{
+	return READ_ONCE(v->counter);
+}
+
+/**
+ * atomic_set - set atomic variable
+ * @v: pointer of type atomic_t
+ * @i: required value
+ *
+ * Atomically sets the value of @v to @i.
+ */
+static inline void atomic_set(atomic_t *v, int i)
+{
+	WRITE_ONCE(v->counter, i);
+}
+
+/**
+ * atomic_add - add integer to atomic variable
+ * @i: integer value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically adds @i to @v.
+ */
+static inline void atomic_add(int i, atomic_t *v)
+{
+	__asm__ __volatile__ (
+		"amoadd.w zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (i));
+}
+
+#define atomic_fetch_add atomic_fetch_add
+static inline int atomic_fetch_add(unsigned int mask, atomic_t *v)
+{
+	int out;
+
+	__asm__ __volatile__ (
+		"amoadd.w %2, %1, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/**
+ * atomic_sub - subtract integer from atomic variable
+ * @i: integer value to subtract
+ * @v: pointer of type atomic_t
+ *
+ * Atomically subtracts @i from @v.
+ */
+static inline void atomic_sub(int i, atomic_t *v)
+{
+	atomic_add(-i, v);
+}
+
+#define atomic_fetch_sub atomic_fetch_sub
+static inline int atomic_fetch_sub(unsigned int mask, atomic_t *v)
+{
+	int out;
+
+	__asm__ __volatile__ (
+		"amosub.w %2, %1, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/**
+ * atomic_add_return - add integer to atomic variable
+ * @i: integer value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically adds @i to @v and returns the result
+ */
+static inline int atomic_add_return(int i, atomic_t *v)
+{
+	register int c;
+
+	__asm__ __volatile__ (
+		"amoadd.w %0, %2, %1"
+		: "=r" (c), "+A" (v->counter)
+		: "r" (i));
+	return (c + i);
+}
+
+/**
+ * atomic_sub_return - subtract integer from atomic variable
+ * @i: integer value to subtract
+ * @v: pointer of type atomic_t
+ *
+ * Atomically subtracts @i from @v and returns the result
+ */
+static inline int atomic_sub_return(int i, atomic_t *v)
+{
+	return atomic_add_return(-i, v);
+}
+
+/**
+ * atomic_inc - increment atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1.
+ */
+static inline void atomic_inc(atomic_t *v)
+{
+	atomic_add(1, v);
+}
+
+/**
+ * atomic_dec - decrement atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1.
+ */
+static inline void atomic_dec(atomic_t *v)
+{
+	atomic_add(-1, v);
+}
+
+static inline int atomic_inc_return(atomic_t *v)
+{
+	return atomic_add_return(1, v);
+}
+
+static inline int atomic_dec_return(atomic_t *v)
+{
+	return atomic_sub_return(1, v);
+}
+
+/**
+ * atomic_sub_and_test - subtract value from variable and test result
+ * @i: integer value to subtract
+ * @v: pointer of type atomic_t
+ *
+ * Atomically subtracts @i from @v and returns
+ * true if the result is zero, or false for all
+ * other cases.
+ */
+static inline int atomic_sub_and_test(int i, atomic_t *v)
+{
+	return (atomic_sub_return(i, v) == 0);
+}
+
+/**
+ * atomic_inc_and_test - increment and test
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1
+ * and returns true if the result is zero, or false for all
+ * other cases.
+ */
+static inline int atomic_inc_and_test(atomic_t *v)
+{
+	return (atomic_inc_return(v) == 0);
+}
+
+/**
+ * atomic_dec_and_test - decrement and test
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1 and
+ * returns true if the result is 0, or false for all other
+ * cases.
+ */
+static inline int atomic_dec_and_test(atomic_t *v)
+{
+	return (atomic_dec_return(v) == 0);
+}
+
+/**
+ * atomic_add_negative - add and test if negative
+ * @i: integer value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically adds @i to @v and returns true
+ * if the result is negative, or false when
+ * result is greater than or equal to zero.
+ */
+static inline int atomic_add_negative(int i, atomic_t *v)
+{
+	return (atomic_add_return(i, v) < 0);
+}
+
+
+static inline int atomic_xchg(atomic_t *v, int n)
+{
+	register int c;
+
+	__asm__ __volatile__ (
+		"amoswap.w %0, %2, %1"
+		: "=r" (c), "+A" (v->counter)
+		: "r" (n));
+	return c;
+}
+
+static inline int atomic_cmpxchg(atomic_t *v, int o, int n)
+{
+	return cmpxchg(&(v->counter), o, n);
+}
+
+/**
+ * __atomic_add_unless - add unless the number is already a given value
+ * @v: pointer of type atomic_t
+ * @a: the amount to add to v...
+ * @u: ...unless v is equal to u.
+ *
+ * Atomically adds @a to @v, so long as @v was not already @u.
+ * Returns the old value of @v.
+ */
+static inline int __atomic_add_unless(atomic_t *v, int a, int u)
+{
+	register int prev, rc;
+
+	__asm__ __volatile__ (
+	"0:\n"
+		"lr.w %0, %2\n"
+		"beq  %0, %4, 1f\n"
+		"add  %1, %0, %3\n"
+		"sc.w %1, %1, %2\n"
+		"bnez %1, 0b\n"
+	"1:"
+		: "=&r" (prev), "=&r" (rc), "+A" (v->counter)
+		: "r" (a), "r" (u));
+	return prev;
+}
+
+/**
+ * atomic_and - Atomically clear bits in atomic variable
+ * @mask: Mask of the bits to be retained
+ * @v: pointer of type atomic_t
+ *
+ * Atomically retains the bits set in @mask from @v
+ */
+static inline void atomic_and(unsigned int mask, atomic_t *v)
+{
+	__asm__ __volatile__ (
+		"amoand.w zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (mask));
+}
+
+#define atomic_fetch_and atomic_fetch_and
+static inline int atomic_fetch_and(unsigned int mask, atomic_t *v)
+{
+	int out;
+
+	__asm__ __volatile__ (
+		"amoand.w %2, %1, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/**
+ * atomic_or - Atomically set bits in atomic variable
+ * @mask: Mask of the bits to be set
+ * @v: pointer of type atomic_t
+ *
+ * Atomically sets the bits set in @mask in @v
+ */
+static inline void atomic_or(unsigned int mask, atomic_t *v)
+{
+	__asm__ __volatile__ (
+		"amoor.w zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (mask));
+}
+
+#define atomic_fetch_or atomic_fetch_or
+static inline int atomic_fetch_or(unsigned int mask, atomic_t *v)
+{
+	int out;
+
+	__asm__ __volatile__ (
+		"amoor.w %2, %1, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/**
+ * atomic_xor - Atomically flips bits in atomic variable
+ * @mask: Mask of the bits to be flipped
+ * @v: pointer of type atomic_t
+ *
+ * Atomically flips the bits set in @mask in @v
+ */
+static inline void atomic_xor(unsigned int mask, atomic_t *v)
+{
+	__asm__ __volatile__ (
+		"amoxor.w zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (mask));
+}
+
+#define atomic_fetch_xor atomic_fetch_xor
+static inline int atomic_fetch_xor(unsigned int mask, atomic_t *v)
+{
+	int out;
+
+	__asm__ __volatile__ (
+		"amoxor.w %2, %1, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/* Assume that atomic operations are already serializing */
+#define smp_mb__before_atomic_dec()	barrier()
+#define smp_mb__after_atomic_dec()	barrier()
+#define smp_mb__before_atomic_inc()	barrier()
+#define smp_mb__after_atomic_inc()	barrier()
+
+#else /* !CONFIG_ISA_A */
+
+#include <asm-generic/atomic.h>
+
+#endif /* CONFIG_ISA_A */
+
+#include <asm/atomic64.h>
+
+#endif /* _ASM_RISCV_ATOMIC_H */
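
As a usage sketch, the classic reference-count pattern maps directly onto these primitives; the object type and release hook below are hypothetical:

	#include <linux/atomic.h>
	#include <linux/types.h>

	struct kobj_like {
		atomic_t refs;
	};

	static inline bool kobj_like_get(struct kobj_like *o)
	{
		/* Only take a reference if the count has not already hit zero. */
		return __atomic_add_unless(&o->refs, 1, 0) != 0;
	}

	static inline void kobj_like_put(struct kobj_like *o,
					 void (*release)(struct kobj_like *))
	{
		if (atomic_dec_and_test(&o->refs))
			release(o);
	}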
diff --git a/arch/riscv/include/asm/atomic64.h b/arch/riscv/include/asm/atomic64.h
new file mode 100644
index 000000000000..5822c4755f06
--- /dev/null
+++ b/arch/riscv/include/asm/atomic64.h
@@ -0,0 +1,355 @@
+/*
+ * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+
+#ifndef _ASM_RISCV_ATOMIC64_H
+#define _ASM_RISCV_ATOMIC64_H
+
+#ifdef CONFIG_GENERIC_ATOMIC64
+#include <asm-generic/atomic64.h>
+#else /* !CONFIG_GENERIC_ATOMIC64 */
+
+#include <linux/types.h>
+
+#define ATOMIC64_INIT(i)	{ (i) }
+
+/**
+ * atomic64_read - read atomic64 variable
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically reads the value of @v.
+ */
+static inline s64 atomic64_read(const atomic64_t *v)
+{
+	return READ_ONCE(v->counter);
+}
+
+/**
+ * atomic64_set - set atomic64 variable
+ * @v: pointer to type atomic64_t
+ * @i: required value
+ *
+ * Atomically sets the value of @v to @i.
+ */
+static inline void atomic64_set(atomic64_t *v, s64 i)
+{
+	WRITE_ONCE(v->counter, i);
+}
+
+/**
+ * atomic64_add - add integer to atomic64 variable
+ * @a: integer value to add
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically adds @a to @v.
+ */
+static inline void atomic64_add(s64 a, atomic64_t *v)
+{
+	__asm__ __volatile__ (
+		"amoadd.d zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (a));
+}
+
+static inline long atomic64_fetch_add(unsigned long mask, atomic64_t *v)
+{
+	long out;
+
+	 __asm__ __volatile__ (
+		"amoadd.d %2, %1, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/**
+ * atomic64_sub - subtract integer from atomic64 variable
+ * @a: integer value to subtract
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically subtracts @a from @v.
+ */
+static inline void atomic64_sub(s64 a, atomic64_t *v)
+{
+	atomic64_add(-a, v);
+}
+
+static inline long atomic64_fetch_sub(unsigned long mask, atomic64_t *v)
+{
+	long out;
+
+	__asm__ __volatile__ (
+		"amosub.d %2, %1, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/**
+ * atomic64_add_return - add and return
+ * @a: integer value to add
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically adds @a to @v and returns @a + @v
+ */
+static inline s64 atomic64_add_return(s64 a, atomic64_t *v)
+{
+	register s64 c;
+
+	__asm__ __volatile__ (
+		"amoadd.d %0, %2, %1"
+		: "=r" (c), "+A" (v->counter)
+		: "r" (a));
+	return (c + a);
+}
+
+static inline s64 atomic64_sub_return(s64 a, atomic64_t *v)
+{
+	return atomic64_add_return(-a, v);
+}
+
+/**
+ * atomic64_inc - increment atomic64 variable
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically increments @v by 1.
+ */
+static inline void atomic64_inc(atomic64_t *v)
+{
+	atomic64_add(1L, v);
+}
+
+/**
+ * atomic64_dec - decrement atomic64 variable
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically decrements @v by 1.
+ */
+static inline void atomic64_dec(atomic64_t *v)
+{
+	atomic64_add(-1L, v);
+}
+
+static inline s64 atomic64_inc_return(atomic64_t *v)
+{
+	return atomic64_add_return(1L, v);
+}
+
+static inline s64 atomic64_dec_return(atomic64_t *v)
+{
+	return atomic64_add_return(-1L, v);
+}
+
+/**
+ * atomic64_inc_and_test - increment and test
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically increments @v by 1
+ * and returns true if the result is zero, or false for all
+ * other cases.
+ */
+static inline int atomic64_inc_and_test(atomic64_t *v)
+{
+	return (atomic64_inc_return(v) == 0);
+}
+
+/**
+ * atomic64_dec_and_test - decrement and test
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically decrements @v by 1 and
+ * returns true if the result is 0, or false for all other
+ * cases.
+ */
+static inline int atomic64_dec_and_test(atomic64_t *v)
+{
+	return (atomic64_dec_return(v) == 0);
+}
+
+/**
+ * atomic64_sub_and_test - subtract value from variable and test result
+ * @a: integer value to subtract
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically subtracts @a from @v and returns
+ * true if the result is zero, or false for all
+ * other cases.
+ */
+static inline int atomic64_sub_and_test(s64 a, atomic64_t *v)
+{
+	return (atomic64_sub_return(a, v) == 0);
+}
+
+/**
+ * atomic64_add_negative - add and test if negative
+ * @a: integer value to add
+ * @v: pointer to type atomic64_t
+ *
+ * Atomically adds @a to @v and returns true
+ * if the result is negative, or false when
+ * result is greater than or equal to zero.
+ */
+static inline int atomic64_add_negative(s64 a, atomic64_t *v)
+{
+	return (atomic64_add_return(a, v) < 0);
+}
+
+
+static inline s64 atomic64_xchg(atomic64_t *v, s64 n)
+{
+	register s64 c;
+
+	__asm__ __volatile__ (
+		"amoswap.d %0, %2, %1"
+		: "=r" (c), "+A" (v->counter)
+		: "r" (n));
+	return c;
+}
+
+static inline s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
+{
+	return cmpxchg(&(v->counter), o, n);
+}
+
+/*
+ * atomic64_dec_if_positive - decrement by 1 if old value positive
+ * @v: pointer of type atomic64_t
+ *
+ * The function returns the old value of *v minus 1, even if
+ * the atomic variable, v, was not decremented.
+ */
+static inline s64 atomic64_dec_if_positive(atomic64_t *v)
+{
+	register s64 prev, rc;
+
+	__asm__ __volatile__ (
+	"0:\n"
+		"lr.d %0, %2\n"
+		"add  %0, %0, -1\n"
+		"bltz %0, 1f\n"
+		"sc.w %1, %0, %2\n"
+		"bnez %1, 0b\n"
+	"1:\n"
+		: "=&r" (prev), "=r" (rc), "+A" (v->counter));
+	return prev;
+}
+
+/**
+ * atomic64_add_unless - add unless the number is a given value
+ * @v: pointer of type atomic64_t
+ * @a: the amount to add to v...
+ * @u: ...unless v is equal to u.
+ *
+ * Atomically adds @a to @v, so long as it was not @u.
+ * Returns true if the addition occurred and false otherwise.
+ */
+static inline int atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+	register s64 tmp;
+	register int rc = 1;
+
+	__asm__ __volatile__ (
+	"0:\n"
+		"lr.d %0, %2\n"
+		"beq  %0, %z4, 1f\n"
+		"add  %0, %0, %3\n"
+		"sc.d %1, %0, %2\n"
+		"bnez %1, 0b\n"
+	"1:"
+		: "=&r" (tmp), "=&r" (rc), "+A" (v->counter)
+		: "rI" (a), "rJ" (u));
+	return !rc;
+}
+
+static inline int atomic64_inc_not_zero(atomic64_t *v)
+{
+	return atomic64_add_unless(v, 1, 0);
+}
+
+/**
+ * atomic64_and - Atomically clear bits in atomic variable
+ * @mask: Mask of the bits to be retained
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically retains the bits set in @mask from @v
+ */
+static inline void atomic64_and(s64 mask, atomic64_t *v)
+{
+	__asm__ __volatile__ (
+		"amoand.d zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (mask));
+}
+
+static inline long atomic64_fetch_and(unsigned long mask, atomic64_t *v)
+{
+	long out;
+
+	__asm__ __volatile__ (
+		"amoand.d %2, %1, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/**
+ * atomic64_or - Atomically set bits in atomic variable
+ * @mask: Mask of the bits to be set
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically sets the bits set in @mask in @v
+ */
+static inline void atomic64_or(s64 mask, atomic64_t *v)
+{
+	__asm__ __volatile__ (
+		"amoor.d zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (mask));
+}
+
+static inline long atomic64_fetch_or(unsigned long mask, atomic64_t *v)
+{
+	long out;
+
+	__asm__ __volatile__ (
+		"amoor.d %2, %1, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+/**
+ * atomic64_xor - Atomically flips bits in atomic variable
+ * @mask: Mask of the bits to be flipped
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically flips the bits set in @mask in @v
+ */
+static inline void atomic64_xor(s64 mask, atomic64_t *v)
+{
+	__asm__ __volatile__ (
+		"amoxor.d zero, %1, %0"
+		: "+A" (v->counter)
+		: "r" (mask));
+}
+
+static inline long atomic64_fetch_xor(unsigned long mask, atomic64_t *v)
+{
+	long out;
+
+	__asm__ __volatile__ (
+		"amoxor.d %2, %1, %0"
+		: "+A" (v->counter), "=r" (out)
+		: "r" (mask));
+	return out;
+}
+
+#endif /* CONFIG_GENERIC_ATOMIC64 */
+
+#endif /* _ASM_RISCV_ATOMIC64_H */
diff --git a/arch/riscv/include/asm/barrier.h b/arch/riscv/include/asm/barrier.h
new file mode 100644
index 000000000000..5eb0e501e618
--- /dev/null
+++ b/arch/riscv/include/asm/barrier.h
@@ -0,0 +1,41 @@
+/*
+ * Based on arch/arm/include/asm/barrier.h
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _ASM_RISCV_BARRIER_H
+#define _ASM_RISCV_BARRIER_H
+
+#ifndef __ASSEMBLY__
+
+#define nop()		__asm__ __volatile__ ("nop")
+
+/* These barriers need to enforce ordering on both devices and memory. */
+#define mb()		__asm__ __volatile__ ("fence iorw,iorw" : : : "memory")
+#define rmb()		__asm__ __volatile__ ("fence ir,ir"     : : : "memory")
+#define wmb()		__asm__ __volatile__ ("fence ow,ow"     : : : "memory")
+
+/* These barriers do not need to enforce ordering on devices, just memory. */
+#define smp_mb()	__asm__ __volatile__ ("fence rw,rw" : : : "memory")
+#define smp_rmb()	__asm__ __volatile__ ("fence r,r"   : : : "memory")
+#define smp_wmb()	__asm__ __volatile__ ("fence w,w"   : : : "memory")
+
+#include <asm-generic/barrier.h>
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_BARRIER_H */
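
The smp_* variants map to plain memory fences ("fence rw,rw" and friends), which is enough for the usual message-passing idiom; a sketch with a hypothetical flag/data pair:

	#include <linux/compiler.h>
	#include <asm/barrier.h>

	static int payload;
	static int ready;

	static void producer(void)
	{
		WRITE_ONCE(payload, 42);
		smp_wmb();			/* order payload before ready: "fence w,w" */
		WRITE_ONCE(ready, 1);
	}

	static int consumer(void)
	{
		if (READ_ONCE(ready)) {
			smp_rmb();		/* order ready before payload: "fence r,r" */
			return READ_ONCE(payload);
		}
		return -1;
	}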
diff --git a/arch/riscv/include/asm/bitops.h b/arch/riscv/include/asm/bitops.h
new file mode 100644
index 000000000000..27e47858c6b1
--- /dev/null
+++ b/arch/riscv/include/asm/bitops.h
@@ -0,0 +1,228 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_BITOPS_H
+#define _ASM_RISCV_BITOPS_H
+
+#ifndef _LINUX_BITOPS_H
+#error "Only <linux/bitops.h> can be included directly"
+#endif /* _LINUX_BITOPS_H */
+
+#ifdef __KERNEL__
+
+#include <linux/compiler.h>
+#include <linux/irqflags.h>
+#include <asm/barrier.h>
+#include <asm/bitsperlong.h>
+
+#ifdef CONFIG_ISA_A
+
+#ifndef smp_mb__before_clear_bit
+#define smp_mb__before_clear_bit()  smp_mb()
+#define smp_mb__after_clear_bit()   smp_mb()
+#endif /* smp_mb__before_clear_bit */
+
+#include <asm-generic/bitops/__ffs.h>
+#include <asm-generic/bitops/ffz.h>
+#include <asm-generic/bitops/fls.h>
+#include <asm-generic/bitops/__fls.h>
+#include <asm-generic/bitops/fls64.h>
+#include <asm-generic/bitops/find.h>
+#include <asm-generic/bitops/sched.h>
+#include <asm-generic/bitops/ffs.h>
+
+#include <asm-generic/bitops/hweight.h>
+
+#if (BITS_PER_LONG == 64)
+#define __AMO(op)	"amo" #op ".d"
+#elif (BITS_PER_LONG == 32)
+#define __AMO(op)	"amo" #op ".w"
+#else
+#error "Unexpected BITS_PER_LONG"
+#endif
+
+#define __test_and_op_bit(op, mod, nr, addr)			\
+({								\
+	unsigned long __res, __mask;				\
+	__mask = BIT_MASK(nr);					\
+	__asm__ __volatile__ (					\
+		__AMO(op) " %0, %2, %1"				\
+		: "=r" (__res), "+A" (addr[BIT_WORD(nr)])	\
+		: "r" (mod(__mask)));				\
+	((__res & __mask) != 0);				\
+})
+
+#define __op_bit(op, mod, nr, addr)				\
+	__asm__ __volatile__ (					\
+		__AMO(op) " zero, %1, %0"			\
+		: "+A" (addr[BIT_WORD(nr)])			\
+		: "r" (mod(BIT_MASK(nr))))
+
+/* Bitmask modifiers */
+#define __NOP(x)	(x)
+#define __NOT(x)	(~(x))
+
+/**
+ * test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It may be reordered on architectures other than x86.
+ * It also implies a memory barrier.
+ */
+static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
+{
+	return __test_and_op_bit(or, __NOP, nr, addr);
+}
+
+/**
+ * test_and_clear_bit - Clear a bit and return its old value
+ * @nr: Bit to clear
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It can be reordered on architectures other than x86.
+ * It also implies a memory barrier.
+ */
+static inline int test_and_clear_bit(int nr, volatile unsigned long *addr)
+{
+	return __test_and_op_bit(and, __NOT, nr, addr);
+}
+
+/**
+ * test_and_change_bit - Change a bit and return its old value
+ * @nr: Bit to change
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It also implies a memory barrier.
+ */
+static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
+{
+	return __test_and_op_bit(xor, __NOP, nr, addr);
+}
+
+/**
+ * set_bit - Atomically set a bit in memory
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This function is atomic and may not be reordered.  See __set_bit()
+ * if you do not require the atomic guarantees.
+ *
+ * Note: there are no guarantees that this function will not be reordered
+ * on non-x86 architectures, so if you are writing portable code,
+ * make sure not to rely on its reordering guarantees.
+ *
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ */
+static inline void set_bit(int nr, volatile unsigned long *addr)
+{
+	__op_bit(or, __NOP, nr, addr);
+}
+
+/**
+ * clear_bit - Clears a bit in memory
+ * @nr: Bit to clear
+ * @addr: Address to start counting from
+ *
+ * clear_bit() is atomic and may not be reordered.  However, it does
+ * not contain a memory barrier, so if it is used for locking purposes,
+ * you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
+ * in order to ensure changes are visible on other processors.
+ */
+static inline void clear_bit(int nr, volatile unsigned long *addr)
+{
+	__op_bit(and, __NOT, nr, addr);
+}
+
+/**
+ * change_bit - Toggle a bit in memory
+ * @nr: Bit to change
+ * @addr: Address to start counting from
+ *
+ * change_bit() is atomic and may not be reordered. It may be
+ * reordered on architectures other than x86.
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ */
+static inline void change_bit(int nr, volatile unsigned long *addr)
+{
+	__op_bit(xor, __NOP, nr, addr);
+}
+
+/**
+ * test_and_set_bit_lock - Set a bit and return its old value, for lock
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is atomic and provides acquire barrier semantics.
+ * It can be used to implement bit locks.
+ */
+static inline int test_and_set_bit_lock(
+	unsigned long nr, volatile unsigned long *addr)
+{
+	return test_and_set_bit(nr, addr);
+}
+
+/**
+ * clear_bit_unlock - Clear a bit in memory, for unlock
+ * @nr: the bit to clear
+ * @addr: the address to start counting from
+ *
+ * This operation is atomic and provides release barrier semantics.
+ */
+static inline void clear_bit_unlock(
+	unsigned long nr, volatile unsigned long *addr)
+{
+	clear_bit(nr, addr);
+}
+
+/**
+ * __clear_bit_unlock - Clear a bit in memory, for unlock
+ * @nr: the bit to clear
+ * @addr: the address to start counting from
+ *
+ * This operation is like clear_bit_unlock, however it is not atomic.
+ * It does provide release barrier semantics so it can be used to unlock
+ * a bit lock, however it would only be used if no other CPU can modify
+ * any bits in the memory until the lock is released (a good example is
+ * if the bit lock itself protects access to the other bits in the word).
+ */
+static inline void __clear_bit_unlock(
+	unsigned long nr, volatile unsigned long *addr)
+{
+	clear_bit(nr, addr);
+}
+
+#undef __test_and_op_bit
+#undef __op_bit
+#undef __NOP
+#undef __NOT
+#undef __AMO
+
+#include <asm-generic/bitops/non-atomic.h>
+#include <asm-generic/bitops/le.h>
+#include <asm-generic/bitops/ext2-atomic.h>
+
+#else /* !CONFIG_ISA_A */
+
+#include <asm-generic/bitops.h>
+
+#endif /* CONFIG_ISA_A */
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_BITOPS_H */
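
test_and_set_bit_lock()/clear_bit_unlock() are the acquire/release pair that generic bit-lock code builds on; a stripped-down sketch of that idea (the flags word and helpers are hypothetical):

	#include <linux/bitops.h>
	#include <asm/processor.h>	/* cpu_relax() */

	#define MY_LOCK_BIT	0

	static unsigned long my_flags;

	static void my_lock(void)
	{
		while (test_and_set_bit_lock(MY_LOCK_BIT, &my_flags))
			cpu_relax();	/* spin until the holder runs clear_bit_unlock() */
	}

	static void my_unlock(void)
	{
		clear_bit_unlock(MY_LOCK_BIT, &my_flags);
	}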
diff --git a/arch/riscv/include/asm/bug.h b/arch/riscv/include/asm/bug.h
new file mode 100644
index 000000000000..e2f690c20729
--- /dev/null
+++ b/arch/riscv/include/asm/bug.h
@@ -0,0 +1,88 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_BUG_H
+#define _ASM_RISCV_BUG_H
+
+#include <linux/compiler.h>
+#include <linux/const.h>
+#include <linux/types.h>
+
+#include <asm/asm.h>
+
+#ifdef CONFIG_GENERIC_BUG
+#define __BUG_INSN	_AC(0x00100073, UL) /* sbreak */
+
+#ifndef __ASSEMBLY__
+typedef u32 bug_insn_t;
+
+#ifdef CONFIG_GENERIC_BUG_RELATIVE_POINTERS
+#define __BUG_ENTRY_ADDR	INT " 1b - 2b"
+#define __BUG_ENTRY_FILE	INT " %0 - 2b"
+#else
+#define __BUG_ENTRY_ADDR	RISCV_PTR " 1b"
+#define __BUG_ENTRY_FILE	RISCV_PTR " %0"
+#endif
+
+#ifdef CONFIG_DEBUG_BUGVERBOSE
+#define __BUG_ENTRY			\
+	__BUG_ENTRY_ADDR "\n\t"		\
+	__BUG_ENTRY_FILE "\n\t"		\
+	SHORT " %1"
+#else
+#define __BUG_ENTRY			\
+	__BUG_ENTRY_ADDR
+#endif
+
+#define BUG()							\
+do {								\
+	__asm__ __volatile__ (					\
+		"1:\n\t"					\
+			"sbreak\n"				\
+			".pushsection __bug_table,\"a\"\n\t"	\
+		"2:\n\t"					\
+			__BUG_ENTRY "\n\t"			\
+			".org 2b + %2\n\t"			\
+			".popsection"				\
+		:						\
+		: "i" (__FILE__), "i" (__LINE__),		\
+		  "i" (sizeof(struct bug_entry)));		\
+	unreachable();						\
+} while (0)
+#endif /* !__ASSEMBLY__ */
+#else /* CONFIG_GENERIC_BUG */
+#ifndef __ASSEMBLY__
+#define BUG()							\
+do {								\
+	__asm__ __volatile__ ("sbreak\n");			\
+	unreachable();						\
+} while (0)
+#endif /* !__ASSEMBLY__ */
+#endif /* CONFIG_GENERIC_BUG */
+
+#define HAVE_ARCH_BUG
+
+#include <asm-generic/bug.h>
+
+#ifndef __ASSEMBLY__
+
+struct pt_regs;
+struct task_struct;
+
+extern void die(struct pt_regs *regs, const char *str);
+extern void do_trap(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, struct task_struct *tsk);
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_BUG_H */
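
With CONFIG_GENERIC_BUG, BUG() emits the sbreak trap inline and records the address (plus file and line under BUGVERBOSE) in the __bug_table section, so the trap handler can print a report. Typical use is via BUG_ON() from the generic header; a trivial sketch with a hypothetical helper:

	#include <linux/bug.h>

	static void consume_descriptor(void *desc)
	{
		/* Traps via "sbreak" and is reported by the bug-table handler. */
		BUG_ON(desc == NULL);
		/* ... */
	}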
diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h
new file mode 100644
index 000000000000..e8f0d1110d74
--- /dev/null
+++ b/arch/riscv/include/asm/cache.h
@@ -0,0 +1,22 @@
+/*
+ * Copyright (C) 2017 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CACHE_H
+#define _ASM_RISCV_CACHE_H
+
+#define L1_CACHE_SHIFT		6
+
+#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)
+
+#endif /* _ASM_RISCV_CACHE_H */
diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
new file mode 100644
index 000000000000..0595585013b0
--- /dev/null
+++ b/arch/riscv/include/asm/cacheflush.h
@@ -0,0 +1,39 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CACHEFLUSH_H
+#define _ASM_RISCV_CACHEFLUSH_H
+
+#include <asm-generic/cacheflush.h>
+
+#undef flush_icache_range
+#undef flush_icache_user_range
+
+static inline void local_flush_icache_all(void)
+{
+	asm volatile ("fence.i" ::: "memory");
+}
+
+#ifndef CONFIG_SMP
+
+#define flush_icache_range(start, end) local_flush_icache_all()
+#define flush_icache_user_range(vma, pg, addr, len) local_flush_icache_all()
+
+#else /* CONFIG_SMP */
+
+#define flush_icache_range(start, end) sbi_remote_fence_i(0)
+#define flush_icache_user_range(vma, pg, addr, len) sbi_remote_fence_i(0)
+
+#endif /* CONFIG_SMP */
+
+#endif /* _ASM_RISCV_CACHEFLUSH_H */
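
flush_icache_range() is what code-patching paths (modules, kprobes and the like) call after writing instructions: a local fence.i, or an SBI remote fence on SMP. A sketch; the patching helper is hypothetical:

	#include <linux/types.h>
	#include <asm/cacheflush.h>

	static void patch_insn(u32 *slot, u32 insn)
	{
		*slot = insn;
		/* Make the new instruction visible to instruction fetch. */
		flush_icache_range((unsigned long)slot,
				   (unsigned long)slot + sizeof(insn));
	}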
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
new file mode 100644
index 000000000000..c7ee1321ac18
--- /dev/null
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -0,0 +1,124 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CMPXCHG_H
+#define _ASM_RISCV_CMPXCHG_H
+
+#include <linux/bug.h>
+
+#ifdef CONFIG_ISA_A
+
+#include <asm/barrier.h>
+
+#define __xchg(new, ptr, size)					\
+({								\
+	__typeof__(ptr) __ptr = (ptr);				\
+	__typeof__(new) __new = (new);				\
+	__typeof__(*(ptr)) __ret;				\
+	switch (size) {						\
+	case 4:							\
+		__asm__ __volatile__ (				\
+			"amoswap.w %0, %2, %1"			\
+			: "=r" (__ret), "+A" (*__ptr)		\
+			: "r" (__new));				\
+		break;						\
+	case 8:							\
+		__asm__ __volatile__ (				\
+			"amoswap.d %0, %2, %1"			\
+			: "=r" (__ret), "+A" (*__ptr)		\
+			: "r" (__new));				\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+	}							\
+	__ret;							\
+})
+
+#define xchg(ptr, x)    (__xchg((x), (ptr), sizeof(*(ptr))))
+
+
+/*
+ * Atomic compare and exchange.  Compare OLD with MEM, if identical,
+ * store NEW in MEM.  Return the initial value in MEM.  Success is
+ * indicated by comparing RETURN with OLD.
+ */
+#define __cmpxchg(ptr, old, new, size)					\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(old) __old = (old);					\
+	__typeof__(new) __new = (new);					\
+	__typeof__(*(ptr)) __ret;					\
+	register unsigned int __rc;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+		"0:"							\
+			"lr.w %0, %2\n"					\
+			"bne  %0, %z3, 1f\n"				\
+			"sc.w %1, %z4, %2\n"				\
+			"bnez %1, 0b\n"					\
+		"1:"							\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new));			\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+		"0:"							\
+			"lr.d %0, %2\n"					\
+			"bne  %0, %z3, 1f\n"				\
+			"sc.d %1, %z4, %2\n"				\
+			"bnez %1, 0b\n"					\
+		"1:"							\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new));			\
+		break;							\
+	default:							\
+		BUILD_BUG();						\
+	}								\
+	__ret;								\
+})
+
+#define __cmpxchg_mb(ptr, old, new, size)			\
+({								\
+	__typeof__(*(ptr)) __ret;				\
+	smp_mb();						\
+	__ret = __cmpxchg((ptr), (old), (new), (size));		\
+	smp_mb();						\
+	__ret;							\
+})
+
+#define cmpxchg(ptr, o, n) \
+	(__cmpxchg_mb((ptr), (o), (n), sizeof(*(ptr))))
+
+#define cmpxchg_local(ptr, o, n) \
+	(__cmpxchg((ptr), (o), (n), sizeof(*(ptr))))
+
+#define cmpxchg64(ptr, o, n)			\
+({						\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);	\
+	cmpxchg((ptr), (o), (n));		\
+})
+
+#define cmpxchg64_local(ptr, o, n)		\
+({						\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);	\
+	cmpxchg_local((ptr), (o), (n));		\
+})
+
+#else /* !CONFIG_ISA_A */
+
+#include <asm-generic/cmpxchg.h>
+
+#endif /* CONFIG_ISA_A */
+
+#endif /* _ASM_RISCV_CMPXCHG_H */
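
cmpxchg() is the fully ordered variant (smp_mb() on both sides of the LR/SC loop) and cmpxchg_local() the unordered one. The usual way to consume it is a retry loop; a sketch of a saturating counter built on top:

	#include <linux/compiler.h>
	#include <asm/cmpxchg.h>

	static unsigned long saturating_inc(unsigned long *ctr, unsigned long max)
	{
		unsigned long old, new;

		do {
			old = READ_ONCE(*ctr);
			if (old == max)
				return old;	/* already saturated */
			new = old + 1;
		} while (cmpxchg(ctr, old, new) != old);

		return new;
	}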
diff --git a/arch/riscv/include/asm/compat.h b/arch/riscv/include/asm/compat.h
new file mode 100644
index 000000000000..b42bf54f42e4
--- /dev/null
+++ b/arch/riscv/include/asm/compat.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_COMPAT_H
+#define __ASM_COMPAT_H
+#ifdef __KERNEL__
+#ifdef CONFIG_COMPAT
+
+#if defined(CONFIG_64BIT)
+#define COMPAT_UTS_MACHINE "riscv64\0\0"
+#elif defined(CONFIG_32BIT)
+#define COMPAT_UTS_MACHINE "riscv32\0\0"
+#else
+#error "Unknown RISC-V base ISA"
+#endif
+
+#endif /*CONFIG_COMPAT*/
+#endif /*__KERNEL__*/
+#endif /*__ASM_COMPAT_H*/
diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
new file mode 100644
index 000000000000..387d0dbf0073
--- /dev/null
+++ b/arch/riscv/include/asm/csr.h
@@ -0,0 +1,125 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CSR_H
+#define _ASM_RISCV_CSR_H
+
+#include <linux/const.h>
+
+/* Status register flags */
+#define SR_IE   _AC(0x00000002, UL) /* Interrupt Enable */
+#define SR_PIE  _AC(0x00000020, UL) /* Previous IE */
+#define SR_PS   _AC(0x00000100, UL) /* Previously Supervisor */
+#define SR_SUM  _AC(0x00040000, UL) /* Supervisor may access User Memory */
+
+#define SR_FS           _AC(0x00006000, UL) /* Floating-point Status */
+#define SR_FS_OFF       _AC(0x00000000, UL)
+#define SR_FS_INITIAL   _AC(0x00002000, UL)
+#define SR_FS_CLEAN     _AC(0x00004000, UL)
+#define SR_FS_DIRTY     _AC(0x00006000, UL)
+
+#define SR_XS           _AC(0x00018000, UL) /* Extension Status */
+#define SR_XS_OFF       _AC(0x00000000, UL)
+#define SR_XS_INITIAL   _AC(0x00008000, UL)
+#define SR_XS_CLEAN     _AC(0x00010000, UL)
+#define SR_XS_DIRTY     _AC(0x00018000, UL)
+
+#ifndef CONFIG_64BIT
+#define SR_SD   _AC(0x80000000, UL) /* FS/XS dirty */
+#else
+#define SR_SD   _AC(0x8000000000000000, UL) /* FS/XS dirty */
+#endif
+
+/* SPTBR flags */
+#if __riscv_xlen == 32
+#define SPTBR_PPN     _AC(0x003FFFFF, UL)
+#define SPTBR_MODE_32 _AC(0x80000000, UL)
+#define SPTBR_MODE    SPTBR_MODE_32
+#else
+#define SPTBR_PPN     _AC(0x00000FFFFFFFFFFF, UL)
+#define SPTBR_MODE_39 _AC(0x8000000000000000, UL)
+#define SPTBR_MODE    SPTBR_MODE_39
+#endif
+
+/* Interrupt Enable and Interrupt Pending flags */
+#define SIE_SSIE _AC(0x00000002, UL) /* Software Interrupt Enable */
+#define SIE_STIE _AC(0x00000020, UL) /* Timer Interrupt Enable */
+
+#define EXC_INST_MISALIGNED     0
+#define EXC_INST_ACCESS         1
+#define EXC_BREAKPOINT          3
+#define EXC_LOAD_ACCESS         5
+#define EXC_STORE_ACCESS        7
+#define EXC_SYSCALL             8
+#define EXC_INST_PAGE_FAULT     12
+#define EXC_LOAD_PAGE_FAULT     13
+#define EXC_STORE_PAGE_FAULT    15
+
+#ifndef __ASSEMBLY__
+
+#define csr_swap(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrw %0, " #csr ", %1"		\
+			      : "=r" (__v) : "rK" (__v));	\
+	__v;							\
+})
+
+#define csr_read(csr)						\
+({								\
+	register unsigned long __v;				\
+	__asm__ __volatile__ ("csrr %0, " #csr			\
+			      : "=r" (__v));			\
+	__v;							\
+})
+
+#define csr_write(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrw " #csr ", %0"		\
+			      : : "rK" (__v));			\
+})
+
+#define csr_read_set(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrs %0, " #csr ", %1"		\
+			      : "=r" (__v) : "rK" (__v));	\
+	__v;							\
+})
+
+#define csr_set(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrs " #csr ", %0"		\
+			      : : "rK" (__v));			\
+})
+
+#define csr_read_clear(csr, val)				\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrc %0, " #csr ", %1"		\
+			      : "=r" (__v) : "rK" (__v));	\
+	__v;							\
+})
+
+#define csr_clear(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrc " #csr ", %0"		\
+			      : : "rK" (__v));			\
+})
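+
+/* Usage sketch: these expand to single CSR instructions on the named control
+ * register, e.g. csr_set(sstatus, SR_IE) sets the interrupt-enable bit and
+ * csr_read_clear(sstatus, SR_IE) returns the old sstatus value while clearing
+ * the bit; <asm/irqflags.h> builds arch_local_irq_*() on exactly this. */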
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_CSR_H */
diff --git a/arch/riscv/include/asm/current.h b/arch/riscv/include/asm/current.h
new file mode 100644
index 000000000000..44612c32fc4a
--- /dev/null
+++ b/arch/riscv/include/asm/current.h
@@ -0,0 +1,42 @@
+/*
+ * Based on arm/arm64/include/asm/current.h
+ *
+ * Copyright (C) 2016 ARM
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef __ASM_CURRENT_H
+#define __ASM_CURRENT_H
+
+#include <linux/compiler.h>
+
+#ifndef __ASSEMBLY__
+
+struct task_struct;
+
+/* This only works because "struct thread_info" is at offset 0 from "struct
+ * task_struct".  This constraint seems to be necessary on other architectures
+ * as well, but __switch_to enforces it.  We can't check TASK_TI here because
+ * <asm/asm-offsets.h> includes this, and I can't get the definition of "struct
+ * task_struct" here due to some header ordering problems.  */
+static __always_inline struct task_struct *get_current(void)
+{
+	register struct task_struct *tp __asm__("tp");
+	return tp;
+}
+
+#define current get_current()
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __ASM_CURRENT_H */
diff --git a/arch/riscv/include/asm/delay.h b/arch/riscv/include/asm/delay.h
new file mode 100644
index 000000000000..cbb0c9eb96cb
--- /dev/null
+++ b/arch/riscv/include/asm/delay.h
@@ -0,0 +1,28 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2016 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_DELAY_H
+#define _ASM_RISCV_DELAY_H
+
+extern unsigned long riscv_timebase;
+
+#define udelay udelay
+extern void udelay(unsigned long usecs);
+
+#define ndelay ndelay
+extern void ndelay(unsigned long nsecs);
+
+extern void __delay(unsigned long cycles);
+
+#endif /* _ASM_RISCV_DELAY_H */
diff --git a/arch/riscv/include/asm/device.h b/arch/riscv/include/asm/device.h
new file mode 100644
index 000000000000..28975e528d2f
--- /dev/null
+++ b/arch/riscv/include/asm/device.h
@@ -0,0 +1,27 @@
+/*
+ * Copyright (C) 2016 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef _ASM_RISCV_DEVICE_H
+#define _ASM_RISCV_DEVICE_H
+
+#include <linux/sysfs.h>
+
+struct dev_archdata {
+	struct dma_map_ops *dma_ops;
+};
+
+struct pdev_archdata {
+};
+
+#endif /* _ASM_RISCV_DEVICE_H */
diff --git a/arch/riscv/include/asm/dma-mapping.h b/arch/riscv/include/asm/dma-mapping.h
new file mode 100644
index 000000000000..9485a58e839e
--- /dev/null
+++ b/arch/riscv/include/asm/dma-mapping.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright (C) 2003-2004 Hewlett-Packard Co
+ *	David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2016 SiFive, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_RISCV_DMA_MAPPING_H
+#define __ASM_RISCV_DMA_MAPPING_H
+
+#ifdef __KERNEL__
+
+/* Use ops->dma_mapping_error (if it exists) or assume success */
+// #undef DMA_ERROR_CODE
+
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
+{
+	return &dma_noop_ops;
+}
+
+static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
+{
+	if (!dev->dma_mask)
+		return false;
+
+	return addr + size - 1 <= *dev->dma_mask;
+}
+
+#endif	/* __KERNEL__ */
+#endif	/* __ASM_RISCV_DMA_MAPPING_H */
diff --git a/arch/riscv/include/asm/elf.h b/arch/riscv/include/asm/elf.h
new file mode 100644
index 000000000000..add1245690f6
--- /dev/null
+++ b/arch/riscv/include/asm/elf.h
@@ -0,0 +1,85 @@
+/*
+ * Copyright (C) 2003 Matjaz Breskvar <phoenix@bsemi.com>
+ * Copyright (C) 2010-2011 Jonas Bonn <jonas@southpole.se>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef _ASM_RISCV_ELF_H
+#define _ASM_RISCV_ELF_H
+
+#include <uapi/asm/elf.h>
+#include <asm/auxvec.h>
+#include <asm/byteorder.h>
+
+/* TODO: Move definition into include/uapi/linux/elf-em.h */
+#define EM_RISCV	0xF3
+
+/*
+ * These are used to set parameters in the core dumps.
+ */
+#define ELF_ARCH	EM_RISCV
+
+#ifdef CONFIG_64BIT
+#define ELF_CLASS	ELFCLASS64
+#else
+#define ELF_CLASS	ELFCLASS32
+#endif
+
+#if defined(__LITTLE_ENDIAN)
+#define ELF_DATA	ELFDATA2LSB
+#elif defined(__BIG_ENDIAN)
+#define ELF_DATA	ELFDATA2MSB
+#else
+#error "Unknown endianness"
+#endif
+
+/*
+ * This is used to ensure we don't load something for the wrong architecture.
+ */
+#define elf_check_arch(x) ((x)->e_machine == EM_RISCV)
+
+#define CORE_DUMP_USE_REGSET
+#define ELF_EXEC_PAGESIZE	(PAGE_SIZE)
+
+/*
+ * This is the location that an ET_DYN program is loaded if exec'ed.  Typical
+ * use of this is to invoke "./ld.so someprog" to test out a new version of
+ * the loader.  We need to make sure that it is out of the way of the program
+ * that it will "exec", and that there is sufficient room for the brk.
+ */
+#define ELF_ET_DYN_BASE		((TASK_SIZE / 3) * 2)
+
+/*
+ * This yields a mask that user programs can use to figure out what
+ * instruction set this CPU supports.  This could be done in user space,
+ * but it's not easy, and we've already done it here.
+ */
+#define ELF_HWCAP	(0)
+
+/*
+ * This yields a string that ld.so will use to load implementation
+ * specific libraries for optimization.  This is more specific in
+ * intent than poking at uname or /proc/cpuinfo.
+ */
+#define ELF_PLATFORM	(NULL)
+
+#define ARCH_DLINFO						\
+do {								\
+	NEW_AUX_ENT(AT_SYSINFO_EHDR,				\
+		(elf_addr_t)current->mm->context.vdso);		\
+} while (0)
+
+
+#ifdef __KERNEL__
+#define ARCH_HAS_SETUP_ADDITIONAL_PAGES
+struct linux_binprm;
+extern int arch_setup_additional_pages(struct linux_binprm *bprm,
+	int uses_interp);
+#endif
+
+#endif /* _ASM_RISCV_ELF_H */
diff --git a/arch/riscv/include/asm/io.h b/arch/riscv/include/asm/io.h
new file mode 100644
index 000000000000..a3ee80082b88
--- /dev/null
+++ b/arch/riscv/include/asm/io.h
@@ -0,0 +1,152 @@
+/*
+ * {read,write}{b,w,l,q} based on arch/arm64/include/asm/io.h
+ *   which was based on arch/arm/include/io.h
+ *
+ * Copyright (C) 1996-2000 Russell King
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_IO_H
+#define _ASM_RISCV_IO_H
+
+#ifdef __KERNEL__
+
+#ifdef CONFIG_MMU
+
+extern void __iomem *ioremap(phys_addr_t offset, unsigned long size);
+
+#define ioremap_nocache(addr, size) ioremap((addr), (size))
+#define ioremap_wc(addr, size) ioremap((addr), (size))
+#define ioremap_wt(addr, size) ioremap((addr), (size))
+
+extern void iounmap(void __iomem *addr);
+
+#endif /* CONFIG_MMU */
+
+/*
+ * Generic IO read/write.  These perform native-endian accesses.
+ */
+#define __raw_writeb __raw_writeb
+static inline void __raw_writeb(u8 val, volatile void __iomem *addr)
+{
+	asm volatile("sb %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+
+#define __raw_writew __raw_writew
+static inline void __raw_writew(u16 val, volatile void __iomem *addr)
+{
+	asm volatile("sh %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+
+#define __raw_writel __raw_writel
+static inline void __raw_writel(u32 val, volatile void __iomem *addr)
+{
+	asm volatile("sw %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+
+#ifdef __riscv64
+#define __raw_writeq __raw_writeq
+static inline void __raw_writeq(u64 val, volatile void __iomem *addr)
+{
+	asm volatile("sd %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+#endif
+
+#define __raw_readb __raw_readb
+static inline u8 __raw_readb(const volatile void __iomem *addr)
+{
+	u8 val;
+
+	asm volatile("lb %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+
+#define __raw_readw __raw_readw
+static inline u16 __raw_readw(const volatile void __iomem *addr)
+{
+	u16 val;
+
+	asm volatile("lh %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+
+#define __raw_readl __raw_readl
+static inline u32 __raw_readl(const volatile void __iomem *addr)
+{
+	u32 val;
+
+	asm volatile("lw %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+
+#ifdef __riscv64
+#define __raw_readq __raw_readq
+static inline u64 __raw_readq(const volatile void __iomem *addr)
+{
+	u64 val;
+
+	asm volatile("ld %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+#endif
+
+/* IO barriers.  These only fence on the IO bits because they're only required
+ * to order device access.  We're defining mmiowb because our AMO instructions
+ * (which are used to implement locks) don't specify ordering.  From Chapter 7
+ * of v2.2 of the user ISA:
+ * "The bits order accesses to one of the two address domains, memory or I/O,
+ * depending on which address domain the atomic instruction is accessing. No
+ * ordering constraint is implied to accesses to the other domain, and a FENCE
+ * instruction should be used to order across both domains."
+ */
+
+#define __iormb()	__asm__ __volatile__ ("fence i,io" : : : "memory");
+#define __iowmb()	__asm__ __volatile__ ("fence io,o" : : : "memory");
+
+#define mmiowb()	__asm__ __volatile__ ("fence io,io" : : : "memory");
+
+/*
+ * Relaxed I/O memory access primitives. These follow the Device memory
+ * ordering rules but do not guarantee any ordering relative to Normal memory
+ * accesses.
+ */
+#define readb_relaxed(c)	({ u8  __r = __raw_readb(c); __r; })
+#define readw_relaxed(c)	({ u16 __r = le16_to_cpu((__force __le16)__raw_readw(c)); __r; })
+#define readl_relaxed(c)	({ u32 __r = le32_to_cpu((__force __le32)__raw_readl(c)); __r; })
+#define readq_relaxed(c)	({ u64 __r = le64_to_cpu((__force __le64)__raw_readq(c)); __r; })
+
+#define writeb_relaxed(v,c)	((void)__raw_writeb((v),(c)))
+#define writew_relaxed(v,c)	((void)__raw_writew((__force u16)cpu_to_le16(v),(c)))
+#define writel_relaxed(v,c)	((void)__raw_writel((__force u32)cpu_to_le32(v),(c)))
+#define writeq_relaxed(v,c)	((void)__raw_writeq((__force u64)cpu_to_le64(v),(c)))
+
+/*
+ * I/O memory access primitives. Reads are ordered relative to any
+ * following Normal memory access. Writes are ordered relative to any prior
+ * Normal memory access.
+ */
+#define readb(c)		({ u8  __v = readb_relaxed(c); __iormb(); __v; })
+#define readw(c)		({ u16 __v = readw_relaxed(c); __iormb(); __v; })
+#define readl(c)		({ u32 __v = readl_relaxed(c); __iormb(); __v; })
+#define readq(c)		({ u64 __v = readq_relaxed(c); __iormb(); __v; })
+
+#define writeb(v,c)		({ __iowmb(); writeb_relaxed((v),(c)); })
+#define writew(v,c)		({ __iowmb(); writew_relaxed((v),(c)); })
+#define writel(v,c)		({ __iowmb(); writel_relaxed((v),(c)); })
+#define writeq(v,c)		({ __iowmb(); writeq_relaxed((v),(c)); })
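+
+/* Usage sketch, assuming a hypothetical memory-mapped device at DEV_BASE
+ * (the base address and register offsets below are made up):
+ *
+ *	void __iomem *regs = ioremap(DEV_BASE, 0x100);
+ *	u32 status = readl(regs + 0x04);
+ *	writel(0x1, regs + 0x08);
+ *	iounmap(regs);
+ *
+ * readl()/writel() wrap the relaxed accessors above with the __iormb() and
+ * __iowmb() fences. */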
+
+#include <asm-generic/io.h>
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_IO_H */
diff --git a/arch/riscv/include/asm/irq.h b/arch/riscv/include/asm/irq.h
new file mode 100644
index 000000000000..e64c61e89f76
--- /dev/null
+++ b/arch/riscv/include/asm/irq.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_IRQ_H
+#define _ASM_RISCV_IRQ_H
+
+#define NR_IRQS         0
+
+#define INTERRUPT_CAUSE_SOFTWARE    1
+#define INTERRUPT_CAUSE_TIMER       5
+#define INTERRUPT_CAUSE_EXTERNAL    9
+
+void riscv_timer_interrupt(void);
+
+#include <asm-generic/irq.h>
+
+/* The value of csr sie before init_traps runs (core is up) */
+DECLARE_PER_CPU(atomic_long_t, riscv_early_sie);
+
+#endif /* _ASM_RISCV_IRQ_H */
diff --git a/arch/riscv/include/asm/irqflags.h b/arch/riscv/include/asm/irqflags.h
new file mode 100644
index 000000000000..6fdc860d7f84
--- /dev/null
+++ b/arch/riscv/include/asm/irqflags.h
@@ -0,0 +1,63 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef _ASM_RISCV_IRQFLAGS_H
+#define _ASM_RISCV_IRQFLAGS_H
+
+#include <asm/processor.h>
+#include <asm/csr.h>
+
+/* read interrupt enabled status */
+static inline unsigned long arch_local_save_flags(void)
+{
+	return csr_read(sstatus);
+}
+
+/* unconditionally enable interrupts */
+static inline void arch_local_irq_enable(void)
+{
+	csr_set(sstatus, SR_IE);
+}
+
+/* unconditionally disable interrupts */
+static inline void arch_local_irq_disable(void)
+{
+	csr_clear(sstatus, SR_IE);
+}
+
+/* get status and disable interrupts */
+static inline unsigned long arch_local_irq_save(void)
+{
+	return csr_read_clear(sstatus, SR_IE);
+}
+
+/* test flags */
+static inline int arch_irqs_disabled_flags(unsigned long flags)
+{
+	return !(flags & SR_IE);
+}
+
+/* test hardware interrupt enable bit */
+static inline int arch_irqs_disabled(void)
+{
+	return arch_irqs_disabled_flags(arch_local_save_flags());
+}
+
+/* set interrupt enabled status */
+static inline void arch_local_irq_restore(unsigned long flags)
+{
+	csr_set(sstatus, flags & SR_IE);
+}
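+
+/* Typical pattern (sketch): flags = arch_local_irq_save(); <critical section>;
+ * arch_local_irq_restore(flags);  The generic <linux/irqflags.h> wrappers
+ * (local_irq_save() and friends) are built on top of these primitives. */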
+
+#endif /* _ASM_RISCV_IRQFLAGS_H */
diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
new file mode 100644
index 000000000000..1190de7a0f74
--- /dev/null
+++ b/arch/riscv/include/asm/kprobes.h
@@ -0,0 +1,22 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef ASM_RISCV_KPROBES_H
+#define ASM_RISCV_KPROBES_H
+
+#ifdef CONFIG_KPROBES
+#error "RISC-V doesn't skpport CONFIG_KPROBES"
+#endif
+
+#endif
diff --git a/arch/riscv/include/asm/linkage.h b/arch/riscv/include/asm/linkage.h
new file mode 100644
index 000000000000..b7b304ca89c4
--- /dev/null
+++ b/arch/riscv/include/asm/linkage.h
@@ -0,0 +1,20 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_LINKAGE_H
+#define _ASM_RISCV_LINKAGE_H
+
+#define __ALIGN		.balign 4
+#define __ALIGN_STR	".balign 4"
+
+#endif /* _ASM_RISCV_LINKAGE_H */
diff --git a/arch/riscv/include/asm/mmu.h b/arch/riscv/include/asm/mmu.h
new file mode 100644
index 000000000000..66805cba9a27
--- /dev/null
+++ b/arch/riscv/include/asm/mmu.h
@@ -0,0 +1,26 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef _ASM_RISCV_MMU_H
+#define _ASM_RISCV_MMU_H
+
+#ifndef __ASSEMBLY__
+
+typedef struct {
+	void *vdso;
+} mm_context_t;
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_MMU_H */
diff --git a/arch/riscv/include/asm/mmu_context.h b/arch/riscv/include/asm/mmu_context.h
new file mode 100644
index 000000000000..de1fc1631fc4
--- /dev/null
+++ b/arch/riscv/include/asm/mmu_context.h
@@ -0,0 +1,69 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_MMU_CONTEXT_H
+#define _ASM_RISCV_MMU_CONTEXT_H
+
+#include <asm-generic/mm_hooks.h>
+
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <asm/tlbflush.h>
+
+static inline void enter_lazy_tlb(struct mm_struct *mm,
+	struct task_struct *task)
+{
+}
+
+/* Initialize context-related info for a new mm_struct */
+static inline int init_new_context(struct task_struct *task,
+	struct mm_struct *mm)
+{
+	return 0;
+}
+
+static inline void destroy_context(struct mm_struct *mm)
+{
+}
+
+static inline pgd_t *current_pgdir(void)
+{
+	return pfn_to_virt(csr_read(sptbr) & SPTBR_PPN);
+}
+
+static inline void set_pgdir(pgd_t *pgd)
+{
+	csr_write(sptbr, virt_to_pfn(pgd) | SPTBR_MODE);
+}
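+
+/* Note (sketch): the sptbr CSR holds the physical page number of the root
+ * page table in SPTBR_PPN and selects the translation mode (Sv32 or Sv39)
+ * via SPTBR_MODE, per the definitions in <asm/csr.h>. */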
+
+static inline void switch_mm(struct mm_struct *prev,
+	struct mm_struct *next, struct task_struct *task)
+{
+	if (likely(prev != next)) {
+		set_pgdir(next->pgd);
+		local_flush_tlb_all();
+	}
+}
+
+static inline void activate_mm(struct mm_struct *prev,
+			       struct mm_struct *next)
+{
+	switch_mm(prev, next, NULL);
+}
+
+static inline void deactivate_mm(struct task_struct *task,
+	struct mm_struct *mm)
+{
+}
+
+#endif /* _ASM_RISCV_MMU_CONTEXT_H */
diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
new file mode 100644
index 000000000000..e1491c20d6fd
--- /dev/null
+++ b/arch/riscv/include/asm/page.h
@@ -0,0 +1,138 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ * Copyright (C) 2017 XiaojingZhu <zhuxiaoj@ict.ac.cn>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PAGE_H
+#define _ASM_RISCV_PAGE_H
+
+#include <linux/pfn.h>
+#include <linux/const.h>
+
+#define PAGE_SHIFT	(12)
+#define PAGE_SIZE	(_AC(1, UL) << PAGE_SHIFT)
+#define PAGE_MASK	(~(PAGE_SIZE - 1))
+
+#ifdef __KERNEL__
+
+/*
+ * PAGE_OFFSET -- the first address of the first page of memory.
+ * When not using MMU this corresponds to the first free page in
+ * physical memory (aligned on a page boundary).
+ */
+#ifdef CONFIG_64BIT
+#define PAGE_OFFSET		_AC(0xffffffff80000000, UL)
+#else
+#define PAGE_OFFSET		_AC(0xc0000000, UL)
+#endif
+
+#define KERN_VIRT_SIZE (-PAGE_OFFSET)
+
+#ifndef __ASSEMBLY__
+
+#define PAGE_UP(addr)	(((addr)+((PAGE_SIZE)-1))&(~((PAGE_SIZE)-1)))
+#define PAGE_DOWN(addr)	((addr)&(~((PAGE_SIZE)-1)))
+
+/* align addr on a size boundary - adjust address up/down if needed */
+#define _ALIGN_UP(addr, size)	(((addr)+((size)-1))&(~((size)-1)))
+#define _ALIGN_DOWN(addr, size)	((addr)&(~((size)-1)))
+
+/* align addr on a size boundary - adjust address up if needed */
+#define _ALIGN(addr, size)	_ALIGN_UP(addr, size)
+
+#define clear_page(pgaddr)			memset((pgaddr), 0, PAGE_SIZE)
+#define copy_page(to, from)			memcpy((to), (from), PAGE_SIZE)
+
+#define clear_user_page(pgaddr, vaddr, page)	memset((pgaddr), 0, PAGE_SIZE)
+#define copy_user_page(vto, vfrom, vaddr, topg) \
+			memcpy((vto), (vfrom), PAGE_SIZE)
+
+/*
+ * Use struct definitions to apply C type checking
+ */
+
+/* Page Global Directory entry */
+typedef struct {
+	unsigned long pgd;
+} pgd_t;
+
+/* Page Table entry */
+typedef struct {
+	unsigned long pte;
+} pte_t;
+
+typedef struct {
+	unsigned long pgprot;
+} pgprot_t;
+
+typedef struct page *pgtable_t;
+
+#define pte_val(x)	((x).pte)
+#define pgd_val(x)	((x).pgd)
+#define pgprot_val(x)	((x).pgprot)
+
+#define __pte(x)	((pte_t) { (x) })
+#define __pgd(x)	((pgd_t) { (x) })
+#define __pgprot(x)	((pgprot_t) { (x) })
+
+#ifdef CONFIG_64BIT
+#define PTE_FMT "%016lx"
+#else
+#define PTE_FMT "%08lx"
+#endif
+
+extern unsigned long va_pa_offset;
+extern unsigned long pfn_base;
+
+extern unsigned long max_low_pfn;
+extern unsigned long min_low_pfn;
+
+#define __pa(x)		((unsigned long)(x) - va_pa_offset)
+#define __va(x)		((void *)((unsigned long) (x) + va_pa_offset))
+
+#define phys_to_pfn(phys)	(PFN_DOWN(phys))
+#define pfn_to_phys(pfn)	(PFN_PHYS(pfn))
+
+#define virt_to_pfn(vaddr)	(phys_to_pfn(__pa(vaddr)))
+#define pfn_to_virt(pfn)	(__va(pfn_to_phys(pfn)))
+
+#define virt_to_page(vaddr)	(pfn_to_page(virt_to_pfn(vaddr)))
+#define page_to_virt(page)	(pfn_to_virt(page_to_pfn(page)))
+
+#define page_to_phys(page)	(pfn_to_phys(page_to_pfn(page)))
+#define page_to_bus(page)	(page_to_phys(page))
+#define phys_to_page(paddr)	(pfn_to_page(phys_to_pfn(paddr)))
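+
+/* Conversion sketch: for a kernel virtual address v, virt_to_pfn(v) is
+ * PFN_DOWN(v - va_pa_offset), and pfn_to_virt() inverts it, so
+ * page_to_virt(virt_to_page(v)) gives back the page-aligned address. */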
+
+#define pfn_valid(pfn) \
+	(((pfn) >= pfn_base) && (((pfn)-pfn_base) < max_mapnr))
+
+#define ARCH_PFN_OFFSET		(pfn_base)
+
+#endif /* __ASSEMBLY__ */
+
+#define virt_addr_valid(vaddr)	(pfn_valid(virt_to_pfn(vaddr)))
+
+#endif /* __KERNEL__ */
+
+#define VM_DATA_DEFAULT_FLAGS	(VM_READ | VM_WRITE | \
+				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
+
+#include <asm-generic/memory_model.h>
+#include <asm-generic/getorder.h>
+
+/* vDSO support */
+/* We do define AT_SYSINFO_EHDR but don't use the gate mechanism */
+#define __HAVE_ARCH_GATE_AREA
+
+#endif /* _ASM_RISCV_PAGE_H */
diff --git a/arch/riscv/include/asm/pci.h b/arch/riscv/include/asm/pci.h
new file mode 100644
index 000000000000..c829a4ce8d25
--- /dev/null
+++ b/arch/riscv/include/asm/pci.h
@@ -0,0 +1,50 @@
+/*
+ * Copyright (C) 2016 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef __ASM_RISCV_PCI_H
+#define __ASM_RISCV_PCI_H
+#ifdef __KERNEL__
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/io.h>
+
+#define PCIBIOS_MIN_IO		0
+#define PCIBIOS_MIN_MEM		0
+
+/* The RISC-V shim does not initialize the PCI bus */
+#define pcibios_assign_all_busses() 1
+
+/* RISC-V TileLink and PCIe share the same address space */
+#define PCI_DMA_BUS_IS_PHYS 1
+
+extern int isa_dma_bridge_buggy;
+
+#ifdef CONFIG_PCI
+static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
+{
+	/* no legacy IRQs on RISC-V */
+	return -ENODEV;
+}
+
+static inline int pci_proc_domain(struct pci_bus *bus)
+{
+	/* always show the domain in /proc */
+	return 1;
+}
+#endif  /* CONFIG_PCI */
+
+#endif  /* __KERNEL__ */
+#endif  /* __ASM_RISCV_PCI_H */
diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
new file mode 100644
index 000000000000..b40074bcb164
--- /dev/null
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -0,0 +1,124 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGALLOC_H
+#define _ASM_RISCV_PGALLOC_H
+
+#include <linux/mm.h>
+#include <asm/tlb.h>
+
+static inline void pmd_populate_kernel(struct mm_struct *mm,
+	pmd_t *pmd, pte_t *pte)
+{
+	unsigned long pfn = virt_to_pfn(pte);
+
+	set_pmd(pmd, __pmd((pfn << _PAGE_PFN_SHIFT) | _PAGE_TABLE));
+}
+
+static inline void pmd_populate(struct mm_struct *mm,
+	pmd_t *pmd, pgtable_t pte)
+{
+	unsigned long pfn = virt_to_pfn(page_address(pte));
+
+	set_pmd(pmd, __pmd((pfn << _PAGE_PFN_SHIFT) | _PAGE_TABLE));
+}
+
+#ifndef __PAGETABLE_PMD_FOLDED
+static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
+{
+	unsigned long pfn = virt_to_pfn(pmd);
+
+	set_pud(pud, __pud((pfn << _PAGE_PFN_SHIFT) | _PAGE_TABLE));
+}
+#endif /* __PAGETABLE_PMD_FOLDED */
+
+#define pmd_pgtable(pmd)	pmd_page(pmd)
+
+static inline pgd_t *pgd_alloc(struct mm_struct *mm)
+{
+	pgd_t *pgd;
+
+	pgd = (pgd_t *)__get_free_page(GFP_KERNEL);
+	if (likely(pgd != NULL)) {
+		memset(pgd, 0, USER_PTRS_PER_PGD * sizeof(pgd_t));
+		/* Copy kernel mappings */
+		memcpy(pgd + USER_PTRS_PER_PGD,
+			init_mm.pgd + USER_PTRS_PER_PGD,
+			(PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
+	}
+	return pgd;
+}
+
+static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
+{
+	free_page((unsigned long)pgd);
+}
+
+#ifndef __PAGETABLE_PMD_FOLDED
+
+static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
+{
+	return (pmd_t *)__get_free_page(
+		GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+}
+
+static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
+{
+	free_page((unsigned long)pmd);
+}
+
+#define __pmd_free_tlb(tlb, pmd, addr)  pmd_free((tlb)->mm, pmd)
+
+#endif /* __PAGETABLE_PMD_FOLDED */
+
+static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
+	unsigned long address)
+{
+	return (pte_t *)__get_free_page(
+		GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+}
+
+static inline struct page *pte_alloc_one(struct mm_struct *mm,
+	unsigned long address)
+{
+	struct page *pte;
+
+	pte = alloc_page(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+	if (likely(pte != NULL))
+		pgtable_page_ctor(pte);
+	return pte;
+}
+
+static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
+{
+	free_page((unsigned long)pte);
+}
+
+static inline void pte_free(struct mm_struct *mm, pgtable_t pte)
+{
+	pgtable_page_dtor(pte);
+	__free_page(pte);
+}
+
+#define __pte_free_tlb(tlb, pte, buf)   \
+do {                                    \
+	pgtable_page_dtor(pte);         \
+	tlb_remove_page((tlb), pte);    \
+} while (0)
+
+static inline void check_pgt_cache(void)
+{
+}
+
+#endif /* _ASM_RISCV_PGALLOC_H */
diff --git a/arch/riscv/include/asm/pgtable-32.h b/arch/riscv/include/asm/pgtable-32.h
new file mode 100644
index 000000000000..d61974b74182
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable-32.h
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_32_H
+#define _ASM_RISCV_PGTABLE_32_H
+
+#include <asm-generic/pgtable-nopmd.h>
+#include <linux/const.h>
+
+/* Size of region mapped by a page global directory */
+#define PGDIR_SHIFT     22
+#define PGDIR_SIZE      (_AC(1, UL) << PGDIR_SHIFT)
+#define PGDIR_MASK      (~(PGDIR_SIZE - 1))
+
+#endif /* _ASM_RISCV_PGTABLE_32_H */
diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
new file mode 100644
index 000000000000..7aa0ea9bd8bb
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -0,0 +1,84 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_64_H
+#define _ASM_RISCV_PGTABLE_64_H
+
+#include <linux/const.h>
+
+#define PGDIR_SHIFT     30
+/* Size of region mapped by a page global directory */
+#define PGDIR_SIZE      (_AC(1, UL) << PGDIR_SHIFT)
+#define PGDIR_MASK      (~(PGDIR_SIZE - 1))
+
+#define PMD_SHIFT       21
+/* Size of region mapped by a page middle directory */
+#define PMD_SIZE        (_AC(1, UL) << PMD_SHIFT)
+#define PMD_MASK        (~(PMD_SIZE - 1))
+
+/* Page Middle Directory entry */
+typedef struct {
+	unsigned long pmd;
+} pmd_t;
+
+#define pmd_val(x)      ((x).pmd)
+#define __pmd(x)        ((pmd_t) { (x) })
+
+#define PTRS_PER_PMD    (PAGE_SIZE / sizeof(pmd_t))
+
+static inline int pud_present(pud_t pud)
+{
+	return (pud_val(pud) & _PAGE_PRESENT);
+}
+
+static inline int pud_none(pud_t pud)
+{
+	return (pud_val(pud) == 0);
+}
+
+static inline int pud_bad(pud_t pud)
+{
+	return !pud_present(pud);
+}
+
+static inline void set_pud(pud_t *pudp, pud_t pud)
+{
+	*pudp = pud;
+}
+
+static inline void pud_clear(pud_t *pudp)
+{
+	set_pud(pudp, __pud(0));
+}
+
+static inline unsigned long pud_page_vaddr(pud_t pud)
+{
+	return (unsigned long)pfn_to_virt(pud_val(pud) >> _PAGE_PFN_SHIFT);
+}
+
+#define pmd_index(addr) (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
+
+static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
+{
+	return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(addr);
+}
+
+static inline pmd_t pfn_pmd(unsigned long pfn, pgprot_t prot)
+{
+	return __pmd((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
+}
+
+#define pmd_ERROR(e) \
+	pr_err("%s:%d: bad pmd %016lx.\n", __FILE__, __LINE__, pmd_val(e))
+
+#endif /* _ASM_RISCV_PGTABLE_64_H */
diff --git a/arch/riscv/include/asm/pgtable-bits.h b/arch/riscv/include/asm/pgtable-bits.h
new file mode 100644
index 000000000000..997ddbb1d370
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable-bits.h
@@ -0,0 +1,48 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_BITS_H
+#define _ASM_RISCV_PGTABLE_BITS_H
+
+/*
+ * PTE format:
+ * | XLEN-1  10 | 9             8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
+ *       PFN      reserved for SW   D   A   G   U   X   W   R   V
+ */
+
+#define _PAGE_ACCESSED_OFFSET 6
+
+#define _PAGE_PRESENT   (1 << 0)
+#define _PAGE_READ      (1 << 1)    /* Readable */
+#define _PAGE_WRITE     (1 << 2)    /* Writable */
+#define _PAGE_EXEC      (1 << 3)    /* Executable */
+#define _PAGE_USER      (1 << 4)    /* User */
+#define _PAGE_GLOBAL    (1 << 5)    /* Global */
+#define _PAGE_ACCESSED  (1 << 6)    /* Set by hardware on any access */
+#define _PAGE_DIRTY     (1 << 7)    /* Set by hardware on any write */
+#define _PAGE_SOFT      (1 << 8)    /* Reserved for software */
+
+#define _PAGE_SPECIAL   _PAGE_SOFT
+#define _PAGE_TABLE     _PAGE_PRESENT
+
+#define _PAGE_PFN_SHIFT 10
+
+/* Set of bits to preserve across pte_modify() */
+#define _PAGE_CHG_MASK  (~(unsigned long)(_PAGE_PRESENT | _PAGE_READ |	\
+					  _PAGE_WRITE | _PAGE_EXEC |	\
+					  _PAGE_USER | _PAGE_GLOBAL))
+
+/* Advertise support for _PAGE_SPECIAL */
+#define __HAVE_ARCH_PTE_SPECIAL
+
+#endif /* _ASM_RISCV_PGTABLE_BITS_H */
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
new file mode 100644
index 000000000000..8bb44014f5c3
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable.h
@@ -0,0 +1,427 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_H
+#define _ASM_RISCV_PGTABLE_H
+
+#include <linux/mmzone.h>
+
+#include <asm/pgtable-bits.h>
+
+#ifndef __ASSEMBLY__
+
+#ifdef CONFIG_MMU
+
+/* Page Upper Directory not used in RISC-V */
+#include <asm-generic/pgtable-nopud.h>
+#include <asm/page.h>
+#include <asm/tlbflush.h>
+#include <linux/mm_types.h>
+
+#ifdef CONFIG_64BIT
+#include <asm/pgtable-64.h>
+#else
+#include <asm/pgtable-32.h>
+#endif /* CONFIG_64BIT */
+
+/* Number of entries in the page global directory */
+#define PTRS_PER_PGD    (PAGE_SIZE / sizeof(pgd_t))
+/* Number of entries in the page table */
+#define PTRS_PER_PTE    (PAGE_SIZE / sizeof(pte_t))
+
+/* Number of PGD entries that a user-mode program can use */
+#define USER_PTRS_PER_PGD   (TASK_SIZE / PGDIR_SIZE)
+#define FIRST_USER_ADDRESS  0
+
+/* Page protection bits */
+#define _PAGE_BASE	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_USER)
+
+#define PAGE_NONE		__pgprot(0)
+#define PAGE_READ		__pgprot(_PAGE_BASE | _PAGE_READ)
+#define PAGE_WRITE		__pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_WRITE)
+#define PAGE_EXEC		__pgprot(_PAGE_BASE | _PAGE_EXEC)
+#define PAGE_READ_EXEC		__pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_EXEC)
+#define PAGE_WRITE_EXEC		__pgprot(_PAGE_BASE | _PAGE_READ |	\
+					 _PAGE_EXEC | _PAGE_WRITE)
+
+#define PAGE_COPY		PAGE_READ
+#define PAGE_COPY_EXEC		PAGE_EXEC
+#define PAGE_COPY_READ_EXEC	PAGE_READ_EXEC
+#define PAGE_SHARED		PAGE_WRITE
+#define PAGE_SHARED_EXEC	PAGE_WRITE_EXEC
+
+#define _PAGE_KERNEL		(_PAGE_READ \
+				| _PAGE_WRITE \
+				| _PAGE_PRESENT \
+				| _PAGE_ACCESSED \
+				| _PAGE_DIRTY)
+
+#define PAGE_KERNEL		__pgprot(_PAGE_KERNEL)
+#define PAGE_KERNEL_EXEC	__pgprot(_PAGE_KERNEL | _PAGE_EXEC)
+
+extern pgd_t swapper_pg_dir[];
+
+/* MAP_PRIVATE permissions: xwr (copy-on-write) */
+#define __P000	PAGE_NONE
+#define __P001	PAGE_READ
+#define __P010	PAGE_COPY
+#define __P011	PAGE_COPY
+#define __P100	PAGE_EXEC
+#define __P101	PAGE_READ_EXEC
+#define __P110	PAGE_COPY_EXEC
+#define __P111	PAGE_COPY_READ_EXEC
+
+/* MAP_SHARED permissions: xwr */
+#define __S000	PAGE_NONE
+#define __S001	PAGE_READ
+#define __S010	PAGE_SHARED
+#define __S011	PAGE_SHARED
+#define __S100	PAGE_EXEC
+#define __S101	PAGE_READ_EXEC
+#define __S110	PAGE_SHARED_EXEC
+#define __S111	PAGE_SHARED_EXEC
+
+/*
+ * ZERO_PAGE is a global shared page that is always zero,
+ * used for zero-mapped memory areas, etc.
+ */
+extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
+#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
+
+static inline int pmd_present(pmd_t pmd)
+{
+	return (pmd_val(pmd) & _PAGE_PRESENT);
+}
+
+static inline int pmd_none(pmd_t pmd)
+{
+	return (pmd_val(pmd) == 0);
+}
+
+static inline int pmd_bad(pmd_t pmd)
+{
+	return !pmd_present(pmd);
+}
+
+static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
+{
+	*pmdp = pmd;
+}
+
+static inline void pmd_clear(pmd_t *pmdp)
+{
+	set_pmd(pmdp, __pmd(0));
+}
+
+
+static inline pgd_t pfn_pgd(unsigned long pfn, pgprot_t prot)
+{
+	return __pgd((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
+}
+
+#define pgd_index(addr) (((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
+
+/* Locate an entry in the page global directory */
+static inline pgd_t *pgd_offset(const struct mm_struct *mm, unsigned long addr)
+{
+	return mm->pgd + pgd_index(addr);
+}
+/* Locate an entry in the kernel page global directory */
+#define pgd_offset_k(addr)      pgd_offset(&init_mm, (addr))
+
+static inline struct page *pmd_page(pmd_t pmd)
+{
+	return pfn_to_page(pmd_val(pmd) >> _PAGE_PFN_SHIFT);
+}
+
+static inline unsigned long pmd_page_vaddr(pmd_t pmd)
+{
+	return (unsigned long)pfn_to_virt(pmd_val(pmd) >> _PAGE_PFN_SHIFT);
+}
+
+/* Yields the page frame number (PFN) of a page table entry */
+static inline unsigned long pte_pfn(pte_t pte)
+{
+	return (pte_val(pte) >> _PAGE_PFN_SHIFT);
+}
+
+#define pte_page(x)     pfn_to_page(pte_pfn(x))
+
+/* Constructs a page table entry */
+static inline pte_t pfn_pte(unsigned long pfn, pgprot_t prot)
+{
+	return __pte((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
+}
+
+static inline pte_t mk_pte(struct page *page, pgprot_t prot)
+{
+	return pfn_pte(page_to_pfn(page), prot);
+}
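+
+/* Sketch: pfn_pte(pfn, PAGE_KERNEL) places the PFN in bits XLEN-1..10 and ORs
+ * in the protection bits, matching the PTE layout described in
+ * <asm/pgtable-bits.h>; mk_pte() does the same with the PFN of a page. */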
+
+#define pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+
+static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long addr)
+{
+	return (pte_t *)pmd_page_vaddr(*pmd) + pte_index(addr);
+}
+
+#define pte_offset_map(dir, addr)	pte_offset_kernel((dir), (addr))
+#define pte_unmap(pte)			((void)(pte))
+
+/*
+ * Certain architectures need to do special things when PTEs within
+ * a page table are directly modified.  Thus, the following hook is
+ * made available.
+ */
+static inline void set_pte(pte_t *ptep, pte_t pteval)
+{
+	*ptep = pteval;
+}
+
+static inline void set_pte_at(struct mm_struct *mm,
+	unsigned long addr, pte_t *ptep, pte_t pteval)
+{
+	set_pte(ptep, pteval);
+}
+
+static inline void pte_clear(struct mm_struct *mm,
+	unsigned long addr, pte_t *ptep)
+{
+	set_pte_at(mm, addr, ptep, __pte(0));
+}
+
+static inline int pte_present(pte_t pte)
+{
+	return (pte_val(pte) & _PAGE_PRESENT);
+}
+
+static inline int pte_none(pte_t pte)
+{
+	return (pte_val(pte) == 0);
+}
+
+/* static inline int pte_read(pte_t pte) */
+
+static inline int pte_write(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_WRITE;
+}
+
+static inline int pte_huge(pte_t pte)
+{
+	return pte_present(pte)
+		&& (pte_val(pte) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC));
+}
+
+/* static inline int pte_exec(pte_t pte) */
+
+static inline int pte_dirty(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_DIRTY;
+}
+
+static inline int pte_young(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_ACCESSED;
+}
+
+static inline int pte_special(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_SPECIAL;
+}
+
+/* static inline pte_t pte_rdprotect(pte_t pte) */
+
+static inline pte_t pte_wrprotect(pte_t pte)
+{
+	return __pte(pte_val(pte) & ~(_PAGE_WRITE));
+}
+
+/* static inline pte_t pte_mkread(pte_t pte) */
+
+static inline pte_t pte_mkwrite(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_WRITE);
+}
+
+/* static inline pte_t pte_mkexec(pte_t pte) */
+
+static inline pte_t pte_mkdirty(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_DIRTY);
+}
+
+static inline pte_t pte_mkclean(pte_t pte)
+{
+	return __pte(pte_val(pte) & ~(_PAGE_DIRTY));
+}
+
+static inline pte_t pte_mkyoung(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_ACCESSED);
+}
+
+static inline pte_t pte_mkold(pte_t pte)
+{
+	return __pte(pte_val(pte) & ~(_PAGE_ACCESSED));
+}
+
+static inline pte_t pte_mkspecial(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_SPECIAL);
+}
+
+/* Modify page protection bits */
+static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+{
+	return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
+}
+
+#define pgd_ERROR(e) \
+	pr_err("%s:%d: bad pgd " PTE_FMT ".\n", __FILE__, __LINE__, pgd_val(e))
+
+
+/* Commit new configuration to MMU hardware */
+static inline void update_mmu_cache(struct vm_area_struct *vma,
+	unsigned long address, pte_t *ptep)
+{
+	/* The kernel assumes that TLBs don't cache invalid entries, but
+	 * in RISC-V, SFENCE.VMA specifies an ordering constraint, not a
+	 * cache flush; it is necessary even after writing invalid entries.
+	 * Relying on flush_tlb_fix_spurious_fault would suffice, but
+	 * the extra traps reduce performance.  So, eagerly SFENCE.VMA.
+	 */
+	local_flush_tlb_page(address);
+}
+
+#define __HAVE_ARCH_PTE_SAME
+static inline int pte_same(pte_t pte_a, pte_t pte_b)
+{
+	return pte_val(pte_a) == pte_val(pte_b);
+}
+
+#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
+static inline int ptep_set_access_flags(struct vm_area_struct *vma,
+					unsigned long address, pte_t *ptep,
+					pte_t entry, int dirty)
+{
+	if (!pte_same(*ptep, entry))
+		set_pte_at(vma->vm_mm, address, ptep, entry);
+	/* update_mmu_cache will unconditionally execute, handling both
+	 * the case that the PTE changed and the spurious fault case.
+	 */
+	return true;
+}
+
+#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
+static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
+				       unsigned long address, pte_t *ptep)
+{
+	return __pte(atomic_long_xchg((atomic_long_t *)ptep, 0));
+}
+
+#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
+static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
+					    unsigned long address,
+					    pte_t *ptep)
+{
+	if (!pte_young(*ptep))
+		return 0;
+	return test_and_clear_bit(_PAGE_ACCESSED_OFFSET, &pte_val(*ptep));
+}
+
+#define __HAVE_ARCH_PTEP_SET_WRPROTECT
+static inline void ptep_set_wrprotect(struct mm_struct *mm,
+				      unsigned long address, pte_t *ptep)
+{
+	atomic_long_and(~(unsigned long)_PAGE_WRITE, (atomic_long_t *)ptep);
+}
+
+#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
+static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
+					 unsigned long address, pte_t *ptep)
+{
+	/*
+	 * This comment is borrowed from x86, but applies equally to RISC-V:
+	 *
+	 * Clearing the accessed bit without a TLB flush
+	 * doesn't cause data corruption. [ It could cause incorrect
+	 * page aging and the (mistaken) reclaim of hot pages, but the
+	 * chance of that should be relatively low. ]
+	 *
+	 * So as a performance optimization don't flush the TLB when
+	 * clearing the accessed bit, it will eventually be flushed by
+	 * a context switch or a VM operation anyway. [ In the rare
+	 * event of it not getting flushed for a long time the delay
+	 * shouldn't really matter because there's no real memory
+	 * pressure for swapout to react to. ]
+	 */
+	return ptep_test_and_clear_young(vma, address, ptep);
+}
+
+/*
+ * Encode and decode a swap entry
+ *
+ * Format of swap PTE:
+ *	bit            0:	_PAGE_PRESENT (zero)
+ *	bit            1:	reserved for future use (zero)
+ *	bits      2 to 6:	swap type
+ *	bits 7 to XLEN-1:	swap offset
+ */
+#define __SWP_TYPE_SHIFT	2
+#define __SWP_TYPE_BITS		5
+#define __SWP_TYPE_MASK		((1UL << __SWP_TYPE_BITS) - 1)
+#define __SWP_OFFSET_SHIFT	(__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)
+
+#define MAX_SWAPFILES_CHECK()	\
+	BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > __SWP_TYPE_BITS)
+
+#define __swp_type(x)	(((x).val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK)
+#define __swp_offset(x)	((x).val >> __SWP_OFFSET_SHIFT)
+#define __swp_entry(type, offset) ((swp_entry_t) \
+	{ ((type) << __SWP_TYPE_SHIFT) | ((offset) << __SWP_OFFSET_SHIFT) })
+
+#define __pte_to_swp_entry(pte)	((swp_entry_t) { pte_val(pte) })
+#define __swp_entry_to_pte(x)	((pte_t) { (x).val })
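+
+/* Worked example (sketch): __swp_entry(3, 0x10) encodes to
+ * (3 << 2) | (0x10 << 7) = 0x80c.  Bit 0 (_PAGE_PRESENT) stays clear, so the
+ * hardware sees an invalid PTE while __swp_type()/__swp_offset() recover the
+ * type and offset. */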
+
+#ifdef CONFIG_FLATMEM
+#define kern_addr_valid(addr)   (1) /* FIXME */
+#endif
+
+extern void paging_init(void);
+
+static inline void pgtable_cache_init(void)
+{
+	/* No page table caches to initialize */
+}
+
+#endif /* CONFIG_MMU */
+
+#define VMALLOC_SIZE     (KERN_VIRT_SIZE >> 1)
+#define VMALLOC_END      (PAGE_OFFSET - 1)
+#define VMALLOC_START    (PAGE_OFFSET - VMALLOC_SIZE)
+
+/* Task size is 0x4000000000 (256 GiB) for RV64 or 0xa0000000 (VMALLOC_START)
+ * for RV32, per the definitions below.
+ * Note that PGDIR_SIZE must evenly divide TASK_SIZE.
+ */
+#ifdef CONFIG_64BIT
+#define TASK_SIZE (PGDIR_SIZE * PTRS_PER_PGD / 2)
+#else
+#define TASK_SIZE VMALLOC_START
+#endif
+
+#include <asm-generic/pgtable.h>
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PGTABLE_H */
diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
new file mode 100644
index 000000000000..0e523a75a8a0
--- /dev/null
+++ b/arch/riscv/include/asm/processor.h
@@ -0,0 +1,102 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PROCESSOR_H
+#define _ASM_RISCV_PROCESSOR_H
+
+#include <linux/const.h>
+
+#include <asm/ptrace.h>
+
+/*
+ * This decides where the kernel will search for a free chunk of vm
+ * space during mmap's.
+ */
+#define TASK_UNMAPPED_BASE	PAGE_ALIGN(TASK_SIZE >> 1)
+
+#ifdef __KERNEL__
+#define STACK_TOP		TASK_SIZE
+#define STACK_TOP_MAX		STACK_TOP
+#define STACK_ALIGN		16
+#endif /* __KERNEL__ */
+
+#ifndef __ASSEMBLY__
+
+struct task_struct;
+struct pt_regs;
+
+/*
+ * Default implementation of macro that returns current
+ * instruction pointer ("program counter").
+ */
+#define current_text_addr()	({ __label__ _l; _l: &&_l; })
+
+/* CPU-specific state of a task */
+struct thread_struct {
+	/* Callee-saved registers */
+	unsigned long ra;
+	unsigned long sp;	/* Kernel mode stack */
+	unsigned long s[12];	/* s[0]: frame pointer */
+	struct user_fpregs_struct fstate;
+};
+
+#define INIT_THREAD {					\
+	.sp = sizeof(init_stack) + (long)&init_stack,	\
+}
+
+/* Return saved (kernel) PC of a blocked thread. */
+#define thread_saved_pc(t)	((t)->thread.ra)
+#define thread_saved_sp(t)	((t)->thread.sp)
+#define thread_saved_fp(t)	((t)->thread.s[0])
+
+#define task_pt_regs(tsk)						\
+	((struct pt_regs *)(task_stack_page(tsk) + THREAD_SIZE		\
+			    - ALIGN(sizeof(struct pt_regs), STACK_ALIGN)))
+
+#define KSTK_EIP(tsk)		(task_pt_regs(tsk)->sepc)
+#define KSTK_ESP(tsk)		(task_pt_regs(tsk)->sp)
+
+
+/* Do necessary setup to start up a newly executed thread. */
+extern void start_thread(struct pt_regs *regs,
+			unsigned long pc, unsigned long sp);
+
+/* Free all resources held by a thread. */
+static inline void release_thread(struct task_struct *dead_task)
+{
+}
+
+extern unsigned long get_wchan(struct task_struct *p);
+
+
+static inline void cpu_relax(void)
+{
+#ifdef __riscv_muldiv
+	int dummy;
+	/* In lieu of a halt instruction, induce a long-latency stall. */
+	__asm__ __volatile__ ("div %0, %0, zero" : "=r" (dummy));
+#endif
+	barrier();
+}
+
+static inline void wait_for_interrupt(void)
+{
+	__asm__ __volatile__ ("wfi");
+}
+
+struct device_node;
+extern int riscv_of_processor_hart(struct device_node *node);
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PROCESSOR_H */
diff --git a/arch/riscv/include/asm/ptrace.h b/arch/riscv/include/asm/ptrace.h
new file mode 100644
index 000000000000..340201868842
--- /dev/null
+++ b/arch/riscv/include/asm/ptrace.h
@@ -0,0 +1,116 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PTRACE_H
+#define _ASM_RISCV_PTRACE_H
+
+#include <uapi/asm/ptrace.h>
+#include <asm/csr.h>
+
+#ifndef __ASSEMBLY__
+
+struct pt_regs {
+	unsigned long sepc;
+	unsigned long ra;
+	unsigned long sp;
+	unsigned long gp;
+	unsigned long tp;
+	unsigned long t0;
+	unsigned long t1;
+	unsigned long t2;
+	unsigned long s0;
+	unsigned long s1;
+	unsigned long a0;
+	unsigned long a1;
+	unsigned long a2;
+	unsigned long a3;
+	unsigned long a4;
+	unsigned long a5;
+	unsigned long a6;
+	unsigned long a7;
+	unsigned long s2;
+	unsigned long s3;
+	unsigned long s4;
+	unsigned long s5;
+	unsigned long s6;
+	unsigned long s7;
+	unsigned long s8;
+	unsigned long s9;
+	unsigned long s10;
+	unsigned long s11;
+	unsigned long t3;
+	unsigned long t4;
+	unsigned long t5;
+	unsigned long t6;
+	/* Supervisor CSRs */
+	unsigned long sstatus;
+	unsigned long sbadaddr;
+	unsigned long scause;
+};
+
+#ifdef CONFIG_64BIT
+#define REG_FMT "%016lx"
+#else
+#define REG_FMT "%08lx"
+#endif
+
+#define user_mode(regs) (((regs)->sstatus & SR_PS) == 0)
+
+
+/* Helpers for working with the instruction pointer */
+#define GET_IP(regs) ((regs)->sepc)
+#define SET_IP(regs, val) (GET_IP(regs) = (val))
+
+static inline unsigned long instruction_pointer(struct pt_regs *regs)
+{
+	return GET_IP(regs);
+}
+static inline void instruction_pointer_set(struct pt_regs *regs,
+					   unsigned long val)
+{
+	SET_IP(regs, val);
+}
+
+#define profile_pc(regs) instruction_pointer(regs)
+
+/* Helpers for working with the user stack pointer */
+#define GET_USP(regs) ((regs)->sp)
+#define SET_USP(regs, val) (GET_USP(regs) = (val))
+
+static inline unsigned long user_stack_pointer(struct pt_regs *regs)
+{
+	return GET_USP(regs);
+}
+static inline void user_stack_pointer_set(struct pt_regs *regs,
+					  unsigned long val)
+{
+	SET_USP(regs, val);
+}
+
+/* Helpers for working with the frame pointer */
+#define GET_FP(regs) ((regs)->s0)
+#define SET_FP(regs, val) (GET_FP(regs) = (val))
+
+static inline unsigned long frame_pointer(struct pt_regs *regs)
+{
+	return GET_FP(regs);
+}
+static inline void frame_pointer_set(struct pt_regs *regs,
+				     unsigned long val)
+{
+	SET_FP(regs, val);
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PTRACE_H */
diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
new file mode 100644
index 000000000000..b6bb10b92fe2
--- /dev/null
+++ b/arch/riscv/include/asm/sbi.h
@@ -0,0 +1,100 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SBI_H
+#define _ASM_RISCV_SBI_H
+
+#include <linux/types.h>
+
+#define SBI_SET_TIMER 0
+#define SBI_CONSOLE_PUTCHAR 1
+#define SBI_CONSOLE_GETCHAR 2
+#define SBI_CLEAR_IPI 3
+#define SBI_SEND_IPI 4
+#define SBI_REMOTE_FENCE_I 5
+#define SBI_REMOTE_SFENCE_VMA 6
+#define SBI_REMOTE_SFENCE_VMA_ASID 7
+#define SBI_SHUTDOWN 8
+
+#define SBI_CALL(which, arg0, arg1, arg2) ({			\
+	register uintptr_t a0 asm ("a0") = (uintptr_t)(arg0);	\
+	register uintptr_t a1 asm ("a1") = (uintptr_t)(arg1);	\
+	register uintptr_t a2 asm ("a2") = (uintptr_t)(arg2);	\
+	register uintptr_t a7 asm ("a7") = (uintptr_t)(which);	\
+	asm volatile ("ecall"					\
+		      : "+r" (a0)				\
+		      : "r" (a1), "r" (a2), "r" (a7)		\
+		      : "memory");				\
+	a0;							\
+})
+
+/* Lazy implementations until SBI is finalized */
+#define SBI_CALL_0(which) SBI_CALL(which, 0, 0, 0)
+#define SBI_CALL_1(which, arg0) SBI_CALL(which, arg0, 0, 0)
+#define SBI_CALL_2(which, arg0, arg1) SBI_CALL(which, arg0, arg1, 0)
+
+static inline void sbi_console_putchar(int ch)
+{
+	SBI_CALL_1(SBI_CONSOLE_PUTCHAR, ch);
+}
+
+static inline int sbi_console_getchar(void)
+{
+	return SBI_CALL_0(SBI_CONSOLE_GETCHAR);
+}
+
+static inline void sbi_set_timer(uint64_t stime_value)
+{
+#if __riscv_xlen == 32
+	SBI_CALL_2(SBI_SET_TIMER, stime_value, stime_value >> 32);
+#else
+	SBI_CALL_1(SBI_SET_TIMER, stime_value);
+#endif
+}
+
+static inline void sbi_shutdown(void)
+{
+	SBI_CALL_0(SBI_SHUTDOWN);
+}
+
+static inline void sbi_clear_ipi(void)
+{
+	SBI_CALL_0(SBI_CLEAR_IPI);
+}
+
+static inline void sbi_send_ipi(const unsigned long *hart_mask)
+{
+	SBI_CALL_1(SBI_SEND_IPI, hart_mask);
+}
+
+static inline void sbi_remote_fence_i(const unsigned long *hart_mask)
+{
+	SBI_CALL_1(SBI_REMOTE_FENCE_I, hart_mask);
+}
+
+static inline void sbi_remote_sfence_vma(const unsigned long *hart_mask,
+					 unsigned long start,
+					 unsigned long size)
+{
+	/* start and size are not yet passed to the SBI (see the note above) */
+	SBI_CALL_1(SBI_REMOTE_SFENCE_VMA, hart_mask);
+}
+
+static inline void sbi_remote_sfence_vma_asid(const unsigned long *hart_mask,
+					      unsigned long start,
+					      unsigned long size,
+					      unsigned long asid)
+{
+	/* start, size, and asid are not yet passed to the SBI (see above) */
+	SBI_CALL_1(SBI_REMOTE_SFENCE_VMA_ASID, hart_mask);
+}
+
+#endif
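
To show how these wrappers are consumed, here is a minimal sketch of an
early-console write path built on sbi_console_putchar().  The function name
and the struct console hookup are assumptions for the example; only the
sbi_console_putchar() call comes from this header.

    #include <linux/console.h>
    #include <asm/sbi.h>

    /* Sketch: push a buffer to the SBI console one byte at a time. */
    static void sbi_console_write_sketch(struct console *con, const char *buf,
                                         unsigned int n)
    {
            unsigned int i;

            for (i = 0; i < n; i++) {
                    if (buf[i] == '\n')
                            sbi_console_putchar('\r');
                    sbi_console_putchar(buf[i]);
            }
    }
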
diff --git a/arch/riscv/include/asm/smp.h b/arch/riscv/include/asm/smp.h
new file mode 100644
index 000000000000..f61f8c25f95b
--- /dev/null
+++ b/arch/riscv/include/asm/smp.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SMP_H
+#define _ASM_RISCV_SMP_H
+
+#include <linux/cpumask.h>
+#include <linux/irqreturn.h>
+
+#ifdef CONFIG_SMP
+
+/* SMP initialization hook for setup_arch */
+void __init init_clockevent(void);
+
+/* SMP initialization hook for setup_arch */
+void __init setup_smp(void);
+
+/* Hook for the generic smp_call_function_many() routine. */
+void arch_send_call_function_ipi_mask(struct cpumask *mask);
+
+/* Hook for the generic smp_call_function_single() routine. */
+void arch_send_call_function_single_ipi(int cpu);
+
+#define raw_smp_processor_id() (current_thread_info()->cpu)
+
+/* Interprocessor interrupt handler */
+irqreturn_t handle_ipi(void);
+
+#endif /* CONFIG_SMP */
+
+#endif /* _ASM_RISCV_SMP_H */
diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
new file mode 100644
index 000000000000..9736f5714e54
--- /dev/null
+++ b/arch/riscv/include/asm/spinlock.h
@@ -0,0 +1,155 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SPINLOCK_H
+#define _ASM_RISCV_SPINLOCK_H
+
+#include <linux/kernel.h>
+#include <asm/current.h>
+
+/*
+ * Simple spin lock operations.  These provide no fairness guarantees.
+ */
+
+#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
+#define arch_spin_is_locked(x)	((x)->lock != 0)
+#define arch_spin_unlock_wait(x) \
+		do { cpu_relax(); } while ((x)->lock)
+
+static inline void arch_spin_unlock(arch_spinlock_t *lock)
+{
+	__asm__ __volatile__ (
+		"amoswap.w.rl x0, x0, %0"
+		: "=A" (lock->lock)
+		:: "memory");
+}
+
+static inline int arch_spin_trylock(arch_spinlock_t *lock)
+{
+	int tmp = 1, busy;
+
+	__asm__ __volatile__ (
+		"amoswap.w.aq %0, %2, %1"
+		: "=r" (busy), "+A" (lock->lock)
+		: "r" (tmp)
+		: "memory");
+
+	return !busy;
+}
+
+static inline void arch_spin_lock(arch_spinlock_t *lock)
+{
+	while (1) {
+		if (arch_spin_is_locked(lock))
+			continue;
+
+		if (arch_spin_trylock(lock))
+			break;
+	}
+}
+
+/***********************************************************/
+
+static inline int arch_read_can_lock(arch_rwlock_t *lock)
+{
+	return lock->lock >= 0;
+}
+
+static inline int arch_write_can_lock(arch_rwlock_t *lock)
+{
+	return lock->lock == 0;
+}
+
+static inline void arch_read_lock(arch_rwlock_t *lock)
+{
+	int tmp;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bltz	%1, 1b\n"
+		"	addi	%1, %1, 1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		: "+A" (lock->lock), "=&r" (tmp)
+		:: "memory");
+}
+
+static inline void arch_write_lock(arch_rwlock_t *lock)
+{
+	int tmp;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bnez	%1, 1b\n"
+		"	li	%1, -1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		: "+A" (lock->lock), "=&r" (tmp)
+		:: "memory");
+}
+
+static inline int arch_read_trylock(arch_rwlock_t *lock)
+{
+	int busy;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bltz	%1, 1f\n"
+		"	addi	%1, %1, 1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		"1:\n"
+		: "+A" (lock->lock), "=&r" (busy)
+		:: "memory");
+
+	return !busy;
+}
+
+static inline int arch_write_trylock(arch_rwlock_t *lock)
+{
+	int busy;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bnez	%1, 1f\n"
+		"	li	%1, -1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		"1:\n"
+		: "+A" (lock->lock), "=&r" (busy)
+		:: "memory");
+
+	return !busy;
+}
+
+static inline void arch_read_unlock(arch_rwlock_t *lock)
+{
+	__asm__ __volatile__(
+		"amoadd.w.rl x0, %1, %0"
+		: "+A" (lock->lock)
+		: "r" (-1)
+		: "memory");
+}
+
+static inline void arch_write_unlock(arch_rwlock_t *lock)
+{
+	__asm__ __volatile__ (
+		"amoswap.w.rl x0, x0, %0"
+		: "=A" (lock->lock)
+		:: "memory");
+}
+
+#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
+#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
+
+#endif /* _ASM_RISCV_SPINLOCK_H */
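
For readers who do not want to decode the LR/SC and AMO sequences, here is a
C-level sketch of the acquire path above: a test-and-test-and-set loop in
which the generic xchg_acquire() helper stands in for the amoswap.w.aq
instruction.  The function name is an assumption for the example.

    #include <linux/atomic.h>
    #include <linux/compiler.h>
    #include <asm/spinlock_types.h>

    /* Sketch: spin on a plain read, then attempt the atomic swap. */
    static inline void arch_spin_lock_sketch(arch_spinlock_t *lock)
    {
            while (1) {
                    if (READ_ONCE(lock->lock))              /* test */
                            continue;
                    if (!xchg_acquire(&lock->lock, 1))      /* test-and-set */
                            return;
            }
    }
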
diff --git a/arch/riscv/include/asm/spinlock_types.h b/arch/riscv/include/asm/spinlock_types.h
new file mode 100644
index 000000000000..83ac4ac9e2ac
--- /dev/null
+++ b/arch/riscv/include/asm/spinlock_types.h
@@ -0,0 +1,33 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SPINLOCK_TYPES_H
+#define _ASM_RISCV_SPINLOCK_TYPES_H
+
+#ifndef __LINUX_SPINLOCK_TYPES_H
+# error "please don't include this file directly"
+#endif
+
+typedef struct {
+	volatile unsigned int lock;
+} arch_spinlock_t;
+
+#define __ARCH_SPIN_LOCK_UNLOCKED	{ 0 }
+
+typedef struct {
+	volatile unsigned int lock;
+} arch_rwlock_t;
+
+#define __ARCH_RW_LOCK_UNLOCKED		{ 0 }
+
+#endif
diff --git a/arch/riscv/include/asm/string.h b/arch/riscv/include/asm/string.h
new file mode 100644
index 000000000000..1c59cf4e8ccc
--- /dev/null
+++ b/arch/riscv/include/asm/string.h
@@ -0,0 +1,30 @@
+/*
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_STRING_H
+#define _ASM_RISCV_STRING_H
+
+#ifdef __KERNEL__
+
+#include <linux/types.h>
+#include <linux/linkage.h>
+
+#define __HAVE_ARCH_MEMSET
+extern asmlinkage void *memset(void *, int, size_t);
+
+#define __HAVE_ARCH_MEMCPY
+extern asmlinkage void *memcpy(void *, const void *, size_t);
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_STRING_H */
diff --git a/arch/riscv/include/asm/switch_to.h b/arch/riscv/include/asm/switch_to.h
new file mode 100644
index 000000000000..dd6b05bff75b
--- /dev/null
+++ b/arch/riscv/include/asm/switch_to.h
@@ -0,0 +1,69 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SWITCH_TO_H
+#define _ASM_RISCV_SWITCH_TO_H
+
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/csr.h>
+
+extern void __fstate_save(struct task_struct *save_to);
+extern void __fstate_restore(struct task_struct *restore_from);
+
+static inline void __fstate_clean(struct pt_regs *regs)
+{
+	regs->sstatus = (regs->sstatus & ~(SR_FS)) | SR_FS_CLEAN;
+}
+
+static inline void fstate_save(struct task_struct *task,
+			       struct pt_regs *regs)
+{
+	if ((regs->sstatus & SR_FS) == SR_FS_DIRTY) {
+		__fstate_save(task);
+		__fstate_clean(regs);
+	}
+}
+
+static inline void fstate_restore(struct task_struct *task,
+				  struct pt_regs *regs)
+{
+	if ((regs->sstatus & SR_FS) != SR_FS_OFF) {
+		__fstate_restore(task);
+		__fstate_clean(regs);
+	}
+}
+
+static inline void __switch_to_aux(struct task_struct *prev,
+				   struct task_struct *next)
+{
+	struct pt_regs *regs;
+
+	regs = task_pt_regs(prev);
+	if (unlikely(regs->sstatus & SR_SD))
+		fstate_save(prev, regs);
+	fstate_restore(next, task_pt_regs(next));
+}
+
+extern struct task_struct *__switch_to(struct task_struct *,
+				       struct task_struct *);
+
+#define switch_to(prev, next, last)			\
+do {							\
+	struct task_struct *__prev = (prev);		\
+	struct task_struct *__next = (next);		\
+	__switch_to_aux(__prev, __next);		\
+	((last) = __switch_to(__prev, __next));		\
+} while (0)
+
+#endif /* _ASM_RISCV_SWITCH_TO_H */
diff --git a/arch/riscv/include/asm/syscall.h b/arch/riscv/include/asm/syscall.h
new file mode 100644
index 000000000000..c70f0e7a06b8
--- /dev/null
+++ b/arch/riscv/include/asm/syscall.h
@@ -0,0 +1,90 @@
+/*
+ * Copyright (C) 2008-2009 Red Hat, Inc.  All rights reserved.
+ * Copyright 2010 Tilera Corporation. All Rights Reserved.
+ * Copyright 2015 Regents of the University of California, Berkeley
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ *
+ * See asm-generic/syscall.h for descriptions of what we must do here.
+ */
+
+#ifndef _ASM_RISCV_SYSCALL_H
+#define _ASM_RISCV_SYSCALL_H
+
+#include <linux/sched.h>
+#include <linux/err.h>
+
+/* The array of function pointers for syscalls. */
+extern void *sys_call_table[];
+
+/*
+ * Only the low 32 bits of the syscall number (held in a7) are meaningful,
+ * so we return int.  This importantly ignores the high bits on 64-bit, so
+ * comparisons sign-extend the low 32 bits.
+ */
+static inline int syscall_get_nr(struct task_struct *task,
+				 struct pt_regs *regs)
+{
+	return regs->a7;
+}
+
+static inline void syscall_set_nr(struct task_struct *task,
+				  struct pt_regs *regs,
+				  int sysno)
+{
+	regs->a7 = sysno;
+}
+
+static inline void syscall_rollback(struct task_struct *task,
+				    struct pt_regs *regs)
+{
+	/* FIXME: We can't do this... */
+}
+
+static inline long syscall_get_error(struct task_struct *task,
+				     struct pt_regs *regs)
+{
+	unsigned long error = regs->a0;
+
+	return IS_ERR_VALUE(error) ? error : 0;
+}
+
+static inline long syscall_get_return_value(struct task_struct *task,
+					    struct pt_regs *regs)
+{
+	return regs->a0;
+}
+
+static inline void syscall_set_return_value(struct task_struct *task,
+					    struct pt_regs *regs,
+					    int error, long val)
+{
+	regs->a0 = (long) error ?: val;
+}
+
+static inline void syscall_get_arguments(struct task_struct *task,
+					 struct pt_regs *regs,
+					 unsigned int i, unsigned int n,
+					 unsigned long *args)
+{
+	BUG_ON(i + n > 6);
+	memcpy(args, &regs->a0 + i * sizeof(regs->a0), n * sizeof(args[0]));
+}
+
+static inline void syscall_set_arguments(struct task_struct *task,
+					 struct pt_regs *regs,
+					 unsigned int i, unsigned int n,
+					 const unsigned long *args)
+{
+	BUG_ON(i + n > 6);
+	memcpy(&regs->a0 + i, args, n * sizeof(regs->a0));
+}
+
+#endif	/* _ASM_RISCV_SYSCALL_H */
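
These accessors exist so that generic code (tracepoints, seccomp, audit) can
inspect syscall state without knowing the RISC-V register layout.  A small
sketch of such a consumer follows; the function name and the pr_debug()
reporting are assumptions for the example, while the accessors and REG_FMT
are the ones defined in this patch.

    #include <linux/printk.h>
    #include <asm/ptrace.h>
    #include <asm/syscall.h>

    /* Sketch: log the syscall number and its first three arguments. */
    static void trace_syscall_entry_sketch(struct task_struct *task,
                                           struct pt_regs *regs)
    {
            unsigned long args[3];

            syscall_get_arguments(task, regs, 0, 3, args);
            pr_debug("syscall %d(" REG_FMT ", " REG_FMT ", " REG_FMT ")\n",
                     syscall_get_nr(task, regs), args[0], args[1], args[2]);
    }
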
diff --git a/arch/riscv/include/asm/syscalls.h b/arch/riscv/include/asm/syscalls.h
new file mode 100644
index 000000000000..d85267c4f7ea
--- /dev/null
+++ b/arch/riscv/include/asm/syscalls.h
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2014 Darius Rad <darius@bluespec.com>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SYSCALLS_H
+#define _ASM_RISCV_SYSCALLS_H
+
+#include <linux/linkage.h>
+
+#include <asm-generic/syscalls.h>
+
+/* kernel/sys_riscv.c */
+asmlinkage long sys_sysriscv(unsigned long, unsigned long,
+	unsigned long, unsigned long);
+
+#endif /* _ASM_RISCV_SYSCALLS_H */
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
new file mode 100644
index 000000000000..f05b7d68c1ad
--- /dev/null
+++ b/arch/riscv/include/asm/thread_info.h
@@ -0,0 +1,96 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_THREAD_INFO_H
+#define _ASM_RISCV_THREAD_INFO_H
+
+#ifdef __KERNEL__
+
+#include <asm/page.h>
+#include <linux/const.h>
+
+/* thread information allocation */
+#define THREAD_SIZE_ORDER	(1)
+#define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)
+
+#ifndef __ASSEMBLY__
+
+#include <asm/processor.h>
+#include <asm/csr.h>
+
+typedef unsigned long mm_segment_t;
+
+/*
+ * low level task data that entry.S needs immediate access to
+ * - this struct should fit entirely inside of one cache line
+ * - if the members of this struct change, the assembly constants
+ *   in asm-offsets.c must be updated accordingly
+ * - thread_info is included in task_struct at an offset of 0.  This means that
+ *   tp points to both thread_info and task_struct.
+ */
+struct thread_info {
+	unsigned long		flags;		/* low level flags */
+	int                     preempt_count;  /* 0=>preemptible, <0=>BUG */
+	mm_segment_t		addr_limit;
+	/* These stack pointers are overwritten on every system call or
+	 * exception.  SP is also saved to the stack so that it can be
+	 * recovered when overwritten.
+	 */
+	long			kernel_sp;	/* Kernel stack pointer */
+	long			user_sp;	/* User stack pointer */
+};
+
+/*
+ * macros/functions for gaining access to the thread information structure
+ *
+ * preempt_count needs to be 1 initially, until the scheduler is functional.
+ */
+#define INIT_THREAD_INFO(tsk)			\
+{						\
+	.flags		= 0,			\
+	.preempt_count	= INIT_PREEMPT_COUNT,	\
+	.addr_limit	= KERNEL_DS,		\
+}
+
+#define init_stack		(init_thread_union.stack)
+
+#endif /* !__ASSEMBLY__ */
+
+/*
+ * thread information flags
+ * - these are process state flags that various assembly files may need to
+ *   access
+ * - pending work-to-be-done flags are in lowest half-word
+ * - other flags in upper half-word(s)
+ */
+#define TIF_SYSCALL_TRACE	0	/* syscall trace active */
+#define TIF_NOTIFY_RESUME	1	/* callback before returning to user */
+#define TIF_SIGPENDING		2	/* signal pending */
+#define TIF_NEED_RESCHED	3	/* rescheduling necessary */
+#define TIF_RESTORE_SIGMASK	4	/* restore signal mask in do_signal() */
+#define TIF_MEMDIE		5	/* is terminating due to OOM killer */
+#define TIF_SYSCALL_TRACEPOINT  6       /* syscall tracepoint instrumentation */
+
+#define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
+#define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
+#define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
+#define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
+
+#define _TIF_WORK_MASK \
+	(_TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NEED_RESCHED)
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_THREAD_INFO_H */
diff --git a/arch/riscv/include/asm/timex.h b/arch/riscv/include/asm/timex.h
new file mode 100644
index 000000000000..bb66b83352fb
--- /dev/null
+++ b/arch/riscv/include/asm/timex.h
@@ -0,0 +1,43 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_TIMEX_H
+#define _ASM_RISCV_TIMEX_H
+
+#include <asm/param.h>
+
+#ifdef __KERNEL__
+
+typedef unsigned long cycles_t;
+
+static inline cycles_t get_cycles(void)
+{
+	cycles_t n;
+
+	__asm__ __volatile__ (
+		"rdtime %0"
+		: "=r" (n));
+	return n;
+}
+
+#define ARCH_HAS_READ_CURRENT_TIMER
+
+static inline int read_current_timer(unsigned long *timer_val)
+{
+	*timer_val = get_cycles();
+	return 0;
+}
+
+#endif
+
+#endif /* _ASM_RISCV_TIMEX_H */
diff --git a/arch/riscv/include/asm/tlb.h b/arch/riscv/include/asm/tlb.h
new file mode 100644
index 000000000000..c229509288ea
--- /dev/null
+++ b/arch/riscv/include/asm/tlb.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_TLB_H
+#define _ASM_RISCV_TLB_H
+
+#include <asm-generic/tlb.h>
+
+static inline void tlb_flush(struct mmu_gather *tlb)
+{
+	flush_tlb_mm(tlb->mm);
+}
+
+#endif /* _ASM_RISCV_TLB_H */
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
new file mode 100644
index 000000000000..5ee4ae370b5e
--- /dev/null
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -0,0 +1,64 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_TLBFLUSH_H
+#define _ASM_RISCV_TLBFLUSH_H
+
+#ifdef CONFIG_MMU
+
+/* Flush entire local TLB */
+static inline void local_flush_tlb_all(void)
+{
+	__asm__ __volatile__ ("sfence.vma" : : : "memory");
+}
+
+/* Flush one page from local TLB */
+static inline void local_flush_tlb_page(unsigned long addr)
+{
+	__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
+}
+
+#ifndef CONFIG_SMP
+
+#define flush_tlb_all() local_flush_tlb_all()
+#define flush_tlb_page(vma, addr) local_flush_tlb_page(addr)
+#define flush_tlb_range(vma, start, end) local_flush_tlb_all()
+
+#else /* CONFIG_SMP */
+
+#include <asm/sbi.h>
+
+#define flush_tlb_all() sbi_remote_sfence_vma(0, 0, -1)
+#define flush_tlb_page(vma, addr) flush_tlb_range(vma, addr, (addr) + PAGE_SIZE)
+#define flush_tlb_range(vma, start, end) \
+	sbi_remote_sfence_vma(0, start, (end) - (start))
+
+#endif /* CONFIG_SMP */
+
+/* Flush the TLB entries of the specified mm context */
+static inline void flush_tlb_mm(struct mm_struct *mm)
+{
+	flush_tlb_all();
+}
+
+/* Flush a range of kernel pages */
+static inline void flush_tlb_kernel_range(unsigned long start,
+	unsigned long end)
+{
+	flush_tlb_all();
+}
+
+#endif /* CONFIG_MMU */
+
+#endif /* _ASM_RISCV_TLBFLUSH_H */
diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
new file mode 100644
index 000000000000..f3ece74219d7
--- /dev/null
+++ b/arch/riscv/include/asm/uaccess.h
@@ -0,0 +1,440 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ *
+ * This file was copied from include/asm-generic/uaccess.h
+ */
+
+#ifndef _ASM_RISCV_UACCESS_H
+#define _ASM_RISCV_UACCESS_H
+
+/*
+ * User space memory access functions
+ */
+#include <linux/errno.h>
+#include <linux/compiler.h>
+#include <linux/thread_info.h>
+#include <asm/byteorder.h>
+#include <asm/asm.h>
+
+#ifdef CONFIG_RV_PUM
+#define __enable_user_access()							\
+	__asm__ __volatile__ ("csrs sstatus, %0" : : "r" (SR_SUM) : "memory")
+#define __disable_user_access()							\
+	__asm__ __volatile__ ("csrc sstatus, %0" : : "r" (SR_SUM) : "memory")
+#else
+#define __enable_user_access()
+#define __disable_user_access()
+#endif
+
+/*
+ * The fs value determines whether argument validity checking should be
+ * performed or not.  If get_fs() == USER_DS, checking is performed, with
+ * get_fs() == KERNEL_DS, checking is bypassed.
+ *
+ * For historical reasons, these macros are grossly misnamed.
+ */
+
+#define KERNEL_DS	(~0UL)
+#define USER_DS		(TASK_SIZE)
+
+#define get_ds()	(KERNEL_DS)
+#define get_fs()	(current_thread_info()->addr_limit)
+
+static inline void set_fs(mm_segment_t fs)
+{
+	current_thread_info()->addr_limit = fs;
+}
+
+#define segment_eq(a, b) ((a) == (b))
+
+#define user_addr_max()	(get_fs())
+
+
+#define VERIFY_READ	0
+#define VERIFY_WRITE	1
+
+/**
+ * access_ok: - Checks if a user space pointer is valid
+ * @type: Type of access: %VERIFY_READ or %VERIFY_WRITE.  Note that
+ *        %VERIFY_WRITE is a superset of %VERIFY_READ - if it is safe
+ *        to write to a block, it is always safe to read from it.
+ * @addr: User space pointer to start of block to check
+ * @size: Size of block to check
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * Checks if a pointer to a block of memory in user space is valid.
+ *
+ * Returns true (nonzero) if the memory block may be valid, false (zero)
+ * if it is definitely invalid.
+ *
+ * Note that, depending on architecture, this function probably just
+ * checks that the pointer is in the user space range - after calling
+ * this function, memory access functions may still return -EFAULT.
+ */
+#define access_ok(type, addr, size) ({					\
+	__chk_user_ptr(addr);						\
+	likely(__access_ok((unsigned long __force)(addr), (size)));	\
+})
+
+/* Ensure that the range [addr, addr+size) is within the process's
+ * address space
+ */
+static inline int __access_ok(unsigned long addr, unsigned long size)
+{
+	const mm_segment_t fs = get_fs();
+
+	return (size <= fs) && (addr <= (fs - size));
+}
+
+/*
+ * The exception table consists of pairs of addresses: the first is the
+ * address of an instruction that is allowed to fault, and the second is
+ * the address at which the program should continue.  No registers are
+ * modified, so it is entirely up to the continuation code to figure out
+ * what to do.
+ *
+ * All the routines below use bits of fixup code that are out of line
+ * with the main instruction path.  This means when everything is well,
+ * we don't even have to jump over them.  Further, they do not intrude
+ * on our cache or tlb entries.
+ */
+
+struct exception_table_entry {
+	unsigned long insn, fixup;
+};
+
+extern int fixup_exception(struct pt_regs *state);
+
+#if defined(__LITTLE_ENDIAN)
+#define __MSW	1
+#define __LSW	0
+#elif defined(__BIG_ENDIAN)
+#define __MSW	0
+#define	__LSW	1
+#else
+#error "Unknown endianness"
+#endif
+
+/*
+ * The "__xxx" versions of the user access functions do not verify the address
+ * space - it must have been done previously with a separate "access_ok()"
+ * call.
+ */
+
+#ifdef CONFIG_MMU
+#define __get_user_asm(insn, x, ptr, err)			\
+do {								\
+	uintptr_t __tmp;					\
+	__typeof__(x) __x;					\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	" insn " %1, %3\n"			\
+		"2:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"3:\n"						\
+		"	li %0, %4\n"				\
+		"	li %1, 0\n"				\
+		"	jump 2b, %2\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " RISCV_SZPTR "\n"			\
+		"	" RISCV_PTR " 1b, 3b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=&r" (__x), "=r" (__tmp)		\
+		: "m" (*(ptr)), "i" (-EFAULT));			\
+	__disable_user_access();				\
+	(x) = __x;						\
+} while (0)
+#endif /* CONFIG_MMU */
+
+#ifdef CONFIG_64BIT
+#define __get_user_8(x, ptr, err) \
+	__get_user_asm("ld", x, ptr, err)
+#else /* !CONFIG_64BIT */
+#ifdef CONFIG_MMU
+#define __get_user_8(x, ptr, err)				\
+do {								\
+	u32 __user *__ptr = (u32 __user *)(ptr);		\
+	u32 __lo, __hi;						\
+	uintptr_t __tmp;					\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	lw %1, %4\n"				\
+		"2:\n"						\
+		"	lw %2, %5\n"				\
+		"3:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"4:\n"						\
+		"	li %0, %6\n"				\
+		"	li %1, 0\n"				\
+		"	li %2, 0\n"				\
+		"	jump 3b, %3\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " RISCV_SZPTR "\n"			\
+		"	" RISCV_PTR " 1b, 4b\n"			\
+		"	" RISCV_PTR " 2b, 4b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=&r" (__lo), "=r" (__hi),	\
+			"=r" (__tmp)				\
+		: "m" (__ptr[__LSW]), "m" (__ptr[__MSW]),	\
+			"i" (-EFAULT));				\
+	__disable_user_access();				\
+	(x) = (__typeof__(x))((__typeof__((x)-(x)))(		\
+		(((u64)__hi << 32) | __lo)));			\
+} while (0)
+#endif /* CONFIG_MMU */
+#endif /* CONFIG_64BIT */
+
+
+/**
+ * __get_user: - Get a simple variable from user space, with less checking.
+ * @x:   Variable to store result.
+ * @ptr: Source address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple variable from user space to kernel
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and the result of
+ * dereferencing @ptr must be assignable to @x without a cast.
+ *
+ * Caller must check the pointer with access_ok() before calling this
+ * function.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ * On error, the variable @x is set to zero.
+ */
+#define __get_user(x, ptr)					\
+({								\
+	register long __gu_err = 0;				\
+	const __typeof__(*(ptr)) __user *__gu_ptr = (ptr);	\
+	__chk_user_ptr(__gu_ptr);				\
+	switch (sizeof(*__gu_ptr)) {				\
+	case 1:							\
+		__get_user_asm("lb", (x), __gu_ptr, __gu_err);	\
+		break;						\
+	case 2:							\
+		__get_user_asm("lh", (x), __gu_ptr, __gu_err);	\
+		break;						\
+	case 4:							\
+		__get_user_asm("lw", (x), __gu_ptr, __gu_err);	\
+		break;						\
+	case 8:							\
+		__get_user_8((x), __gu_ptr, __gu_err);	\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+	}							\
+	__gu_err;						\
+})
+
+/**
+ * get_user: - Get a simple variable from user space.
+ * @x:   Variable to store result.
+ * @ptr: Source address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple variable from user space to kernel
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and the result of
+ * dereferencing @ptr must be assignable to @x without a cast.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ * On error, the variable @x is set to zero.
+ */
+#define get_user(x, ptr)					\
+({								\
+	const __typeof__(*(ptr)) __user *__p = (ptr);		\
+	might_fault();						\
+	access_ok(VERIFY_READ, __p, sizeof(*__p)) ?		\
+		__get_user((x), __p) :				\
+		((x) = 0, -EFAULT);				\
+})
+
+
+#ifdef CONFIG_MMU
+#define __put_user_asm(insn, x, ptr, err)			\
+do {								\
+	uintptr_t __tmp;					\
+	__typeof__(*(ptr)) __x = x;				\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	" insn " %z3, %2\n"			\
+		"2:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"3:\n"						\
+		"	li %0, %4\n"				\
+		"	jump 2b, %1\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " RISCV_SZPTR "\n"			\
+		"	" RISCV_PTR " 1b, 3b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=r" (__tmp), "=m" (*(ptr))	\
+		: "rJ" (__x), "i" (-EFAULT));			\
+	__disable_user_access();				\
+} while (0)
+#endif /* CONFIG_MMU */
+
+
+#ifdef CONFIG_64BIT
+#define __put_user_8(x, ptr, err) \
+	__put_user_asm("sd", x, ptr, err)
+#else /* !CONFIG_64BIT */
+#ifdef CONFIG_MMU
+#define __put_user_8(x, ptr, err)				\
+do {								\
+	u32 __user *__ptr = (u32 __user *)(ptr);		\
+	u64 __x = (__typeof__((x)-(x)))(x);			\
+	uintptr_t __tmp;					\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	sw %z4, %2\n"				\
+		"2:\n"						\
+		"	sw %z5, %3\n"				\
+		"3:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"4:\n"						\
+		"	li %0, %6\n"				\
+		"	jump 2b, %1\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " RISCV_SZPTR "\n"			\
+		"	" RISCV_PTR " 1b, 4b\n"			\
+		"	" RISCV_PTR " 2b, 4b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=r" (__tmp),			\
+			"=m" (__ptr[__LSW]),			\
+			"=m" (__ptr[__MSW])			\
+		: "rJ" (__x), "rJ" (__x >> 32), "i" (-EFAULT));	\
+	__disable_user_access();				\
+} while (0)
+#endif /* CONFIG_MMU */
+#endif /* CONFIG_64BIT */
+
+
+/**
+ * __put_user: - Write a simple value into user space, with less checking.
+ * @x:   Value to copy to user space.
+ * @ptr: Destination address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple value from kernel space to user
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and @x must be assignable
+ * to the result of dereferencing @ptr.
+ *
+ * Caller must check the pointer with access_ok() before calling this
+ * function.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ */
+#define __put_user(x, ptr)					\
+({								\
+	register long __pu_err = 0;				\
+	__typeof__(*(ptr)) __user *__gu_ptr = (ptr);		\
+	__chk_user_ptr(__gu_ptr);				\
+	switch (sizeof(*__gu_ptr)) {				\
+	case 1:							\
+		__put_user_asm("sb", (x), __gu_ptr, __pu_err);	\
+		break;						\
+	case 2:							\
+		__put_user_asm("sh", (x), __gu_ptr, __pu_err);	\
+		break;						\
+	case 4:							\
+		__put_user_asm("sw", (x), __gu_ptr, __pu_err);	\
+		break;						\
+	case 8:							\
+		__put_user_8((x), __gu_ptr, __pu_err);	\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+	}							\
+	__pu_err;						\
+})
+
+/**
+ * put_user: - Write a simple value into user space.
+ * @x:   Value to copy to user space.
+ * @ptr: Destination address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple value from kernel space to user
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and @x must be assignable
+ * to the result of dereferencing @ptr.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ */
+#define put_user(x, ptr)					\
+({								\
+	__typeof__(*(ptr)) __user *__p = (ptr);			\
+	might_fault();						\
+	access_ok(VERIFY_WRITE, __p, sizeof(*__p)) ?		\
+		__put_user((x), __p) :				\
+		-EFAULT;					\
+})
+
+
+extern unsigned long __must_check __copy_user(void __user *to,
+	const void __user *from, unsigned long n);
+
+static inline unsigned long
+raw_copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+	return __copy_user(to, from, n);
+}
+
+static inline unsigned long
+raw_copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+	return __copy_user(to, from, n);
+}
+
+extern long strncpy_from_user(char *dest, const char __user *src, long count);
+
+extern long __must_check strlen_user(const char __user *str);
+extern long __must_check strnlen_user(const char __user *str, long n);
+
+extern
+unsigned long __must_check __clear_user(void __user *addr, unsigned long n);
+
+static inline
+unsigned long __must_check clear_user(void __user *to, unsigned long n)
+{
+	might_fault();
+	return access_ok(VERIFY_WRITE, to, n) ?
+		__clear_user(to, n) : n;
+}
+
+#endif /* _ASM_RISCV_UACCESS_H */
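
The access_ok()/get_user()/put_user() documentation above is easiest to read
next to a typical caller, so here is a minimal sketch of one.  The function
and its semantics are assumptions for the example; the error convention
(zero on success, -EFAULT on a faulting access) is the one documented above.

    #include <linux/errno.h>
    #include <linux/uaccess.h>

    /* Sketch: read a user-supplied integer, double it, write it back. */
    static long double_user_value_sketch(unsigned int __user *uptr)
    {
            unsigned int val;

            if (get_user(val, uptr))        /* performs the access_ok() check */
                    return -EFAULT;

            val *= 2;
            return put_user(val, uptr);     /* 0 on success, -EFAULT on fault */
    }
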
diff --git a/arch/riscv/include/asm/unistd.h b/arch/riscv/include/asm/unistd.h
new file mode 100644
index 000000000000..9f250ed007cd
--- /dev/null
+++ b/arch/riscv/include/asm/unistd.h
@@ -0,0 +1,16 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#define __ARCH_HAVE_MMU
+#define __ARCH_WANT_SYS_CLONE
+#include <uapi/asm/unistd.h>
diff --git a/arch/riscv/include/asm/vdso.h b/arch/riscv/include/asm/vdso.h
new file mode 100644
index 000000000000..95768a3810a7
--- /dev/null
+++ b/arch/riscv/include/asm/vdso.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright (C) 2012 ARM Limited
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _ASM_RISCV_VDSO_H
+#define _ASM_RISCV_VDSO_H
+
+#include <linux/types.h>
+
+struct vdso_data {
+};
+
+#define VDSO_SYMBOL(base, name)					\
+({								\
+	extern const char __vdso_##name[];			\
+	(void __user *)((unsigned long)(base) + __vdso_##name);	\
+})
+
+#endif /* _ASM_RISCV_VDSO_H */
diff --git a/arch/riscv/include/asm/word-at-a-time.h b/arch/riscv/include/asm/word-at-a-time.h
new file mode 100644
index 000000000000..aa6238791d3e
--- /dev/null
+++ b/arch/riscv/include/asm/word-at-a-time.h
@@ -0,0 +1,55 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ * Derived from arch/x86/include/asm/word-at-a-time.h
+ */
+
+#ifndef _ASM_RISCV_WORD_AT_A_TIME_H
+#define _ASM_RISCV_WORD_AT_A_TIME_H
+
+
+#include <linux/kernel.h>
+
+struct word_at_a_time {
+	const unsigned long one_bits, high_bits;
+};
+
+#define WORD_AT_A_TIME_CONSTANTS { REPEAT_BYTE(0x01), REPEAT_BYTE(0x80) }
+
+static inline unsigned long has_zero(unsigned long val,
+	unsigned long *bits, const struct word_at_a_time *c)
+{
+	unsigned long mask = ((val - c->one_bits) & ~val) & c->high_bits;
+	*bits = mask;
+	return mask;
+}
+
+static inline unsigned long prep_zero_mask(unsigned long val,
+	unsigned long bits, const struct word_at_a_time *c)
+{
+	return bits;
+}
+
+static inline unsigned long create_zero_mask(unsigned long bits)
+{
+	bits = (bits - 1) & ~bits;
+	return bits >> 7;
+}
+
+static inline unsigned long find_zero(unsigned long mask)
+{
+	return fls64(mask) >> 3;
+}
+
+/* The mask we created is directly usable as a bytemask */
+#define zero_bytemask(mask) (mask)
+
+#endif /* _ASM_RISCV_WORD_AT_A_TIME_H */
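
The helpers above implement the usual "(x - 0x01..01) & ~x & 0x80..80" trick
for spotting a zero byte anywhere in a word: for example, the little-endian
bytes 'a', 'b', NUL, 'c' load as 0x63006261, and the helpers report byte
index 2.  Here is a sketch of how a strlen()-style caller would drive them;
the function name is an assumption, and the helper calls mirror the generic
word-at-a-time string code.

    #include <asm/word-at-a-time.h>

    /* Sketch: return the byte index of the first NUL in a word, or
     * sizeof(word) if the word contains no NUL. */
    static unsigned long first_nul_byte_sketch(unsigned long word)
    {
            const struct word_at_a_time c = WORD_AT_A_TIME_CONSTANTS;
            unsigned long bits;

            if (!has_zero(word, &bits, &c))
                    return sizeof(word);

            bits = prep_zero_mask(word, bits, &c);
            return find_zero(create_zero_mask(bits));
    }
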
diff --git a/arch/riscv/include/uapi/asm/Kbuild b/arch/riscv/include/uapi/asm/Kbuild
new file mode 100644
index 000000000000..b15bf6bc0e94
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/Kbuild
@@ -0,0 +1,2 @@
+# UAPI Header export list
+include include/uapi/asm-generic/Kbuild.asm
diff --git a/arch/riscv/include/uapi/asm/auxvec.h b/arch/riscv/include/uapi/asm/auxvec.h
new file mode 100644
index 000000000000..1376515547cd
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/auxvec.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _UAPI_ASM_RISCV_AUXVEC_H
+#define _UAPI_ASM_RISCV_AUXVEC_H
+
+/* vDSO location */
+#define AT_SYSINFO_EHDR 33
+
+#endif /* _UAPI_ASM_RISCV_AUXVEC_H */
diff --git a/arch/riscv/include/uapi/asm/bitsperlong.h b/arch/riscv/include/uapi/asm/bitsperlong.h
new file mode 100644
index 000000000000..0b3cb52fd29d
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/bitsperlong.h
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _UAPI_ASM_RISCV_BITSPERLONG_H
+#define _UAPI_ASM_RISCV_BITSPERLONG_H
+
+#define __BITS_PER_LONG (__SIZEOF_POINTER__ * 8)
+
+#include <asm-generic/bitsperlong.h>
+
+#endif /* _UAPI_ASM_RISCV_BITSPERLONG_H */
diff --git a/arch/riscv/include/uapi/asm/byteorder.h b/arch/riscv/include/uapi/asm/byteorder.h
new file mode 100644
index 000000000000..4ca38af2cd32
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/byteorder.h
@@ -0,0 +1,23 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _UAPI_ASM_RISCV_BYTEORDER_H
+#define _UAPI_ASM_RISCV_BYTEORDER_H
+
+#include <linux/byteorder/little_endian.h>
+
+#endif /* _UAPI_ASM_RISCV_BYTEORDER_H */
diff --git a/arch/riscv/include/uapi/asm/elf.h b/arch/riscv/include/uapi/asm/elf.h
new file mode 100644
index 000000000000..e438edd97589
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/elf.h
@@ -0,0 +1,83 @@
+/*
+ * Copyright (C) 2003 Matjaz Breskvar <phoenix@bsemi.com>
+ * Copyright (C) 2010-2011 Jonas Bonn <jonas@southpole.se>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef _UAPI_ASM_ELF_H
+#define _UAPI_ASM_ELF_H
+
+#include <asm/ptrace.h>
+
+/* ELF register definitions */
+typedef unsigned long elf_greg_t;
+typedef struct user_regs_struct elf_gregset_t;
+#define ELF_NGREG (sizeof(elf_gregset_t) / sizeof(elf_greg_t))
+
+typedef struct user_fpregs_struct elf_fpregset_t;
+
+#define ELF_RISCV_R_SYM(r_info) ((r_info) >> 32)
+#define ELF_RISCV_R_TYPE(r_info) ((r_info) & 0xffffffff)
+
+/*
+ * RISC-V relocation types
+ */
+
+/* Relocation types used by the dynamic linker */
+#define R_RISCV_NONE		0
+#define R_RISCV_32		1
+#define R_RISCV_64		2
+#define R_RISCV_RELATIVE	3
+#define R_RISCV_COPY		4
+#define R_RISCV_JUMP_SLOT	5
+#define R_RISCV_TLS_DTPMOD32	6
+#define R_RISCV_TLS_DTPMOD64	7
+#define R_RISCV_TLS_DTPREL32	8
+#define R_RISCV_TLS_DTPREL64	9
+#define R_RISCV_TLS_TPREL32	10
+#define R_RISCV_TLS_TPREL64	11
+
+/* Relocation types not used by the dynamic linker */
+#define R_RISCV_BRANCH		16
+#define R_RISCV_JAL		17
+#define R_RISCV_CALL		18
+#define R_RISCV_CALL_PLT	19
+#define R_RISCV_GOT_HI20	20
+#define R_RISCV_TLS_GOT_HI20	21
+#define R_RISCV_TLS_GD_HI20	22
+#define R_RISCV_PCREL_HI20	23
+#define R_RISCV_PCREL_LO12_I	24
+#define R_RISCV_PCREL_LO12_S	25
+#define R_RISCV_HI20		26
+#define R_RISCV_LO12_I		27
+#define R_RISCV_LO12_S		28
+#define R_RISCV_TPREL_HI20	29
+#define R_RISCV_TPREL_LO12_I	30
+#define R_RISCV_TPREL_LO12_S	31
+#define R_RISCV_TPREL_ADD	32
+#define R_RISCV_ADD8		33
+#define R_RISCV_ADD16		34
+#define R_RISCV_ADD32		35
+#define R_RISCV_ADD64		36
+#define R_RISCV_SUB8		37
+#define R_RISCV_SUB16		38
+#define R_RISCV_SUB32		39
+#define R_RISCV_SUB64		40
+#define R_RISCV_GNU_VTINHERIT	41
+#define R_RISCV_GNU_VTENTRY	42
+#define R_RISCV_ALIGN		43
+#define R_RISCV_RVC_BRANCH	44
+#define R_RISCV_RVC_JUMP	45
+#define R_RISCV_LUI			46
+#define R_RISCV_GPREL_I		47
+#define R_RISCV_GPREL_S		48
+#define R_RISCV_TPREL_I		49
+#define R_RISCV_TPREL_S		50
+#define R_RISCV_RELAX		51
+
+#endif /* _UAPI_ASM_ELF_H */
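
The ELF_RISCV_R_SYM()/ELF_RISCV_R_TYPE() macros split the 64-bit r_info field
the way a module loader or dynamic linker would.  Here is a small sketch of
that usage; the function name and the pr_debug() reporting are assumptions
for the example.

    #include <linux/elf.h>
    #include <linux/printk.h>

    /* Sketch: decode one RELA entry into its symbol index and relocation type. */
    static int decode_rela_sketch(const Elf64_Rela *rel)
    {
            u32 sym  = ELF_RISCV_R_SYM(rel->r_info);
            u32 type = ELF_RISCV_R_TYPE(rel->r_info);

            if (type == R_RISCV_NONE)
                    return 0;

            pr_debug("rela: sym %u, type %u, offset 0x%llx\n",
                     sym, type, (unsigned long long)rel->r_offset);
            return 0;
    }
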
diff --git a/arch/riscv/include/uapi/asm/ptrace.h b/arch/riscv/include/uapi/asm/ptrace.h
new file mode 100644
index 000000000000..b95ee16e6896
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/ptrace.h
@@ -0,0 +1,68 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _UAPI_ASM_RISCV_PTRACE_H
+#define _UAPI_ASM_RISCV_PTRACE_H
+
+#ifndef __ASSEMBLY__
+
+#include <linux/types.h>
+
+/* User-mode register state for core dumps, ptrace, sigcontext
+ *
+ * This decouples struct pt_regs from the userspace ABI.
+ * struct user_regs_struct must form a prefix of struct pt_regs.
+ */
+struct user_regs_struct {
+	unsigned long pc;
+	unsigned long ra;
+	unsigned long sp;
+	unsigned long gp;
+	unsigned long tp;
+	unsigned long t0;
+	unsigned long t1;
+	unsigned long t2;
+	unsigned long s0;
+	unsigned long s1;
+	unsigned long a0;
+	unsigned long a1;
+	unsigned long a2;
+	unsigned long a3;
+	unsigned long a4;
+	unsigned long a5;
+	unsigned long a6;
+	unsigned long a7;
+	unsigned long s2;
+	unsigned long s3;
+	unsigned long s4;
+	unsigned long s5;
+	unsigned long s6;
+	unsigned long s7;
+	unsigned long s8;
+	unsigned long s9;
+	unsigned long s10;
+	unsigned long s11;
+	unsigned long t3;
+	unsigned long t4;
+	unsigned long t5;
+	unsigned long t6;
+};
+
+struct user_fpregs_struct {
+	__u64 f[32];
+	__u32 fcsr;
+};
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _UAPI_ASM_RISCV_PTRACE_H */
diff --git a/arch/riscv/include/uapi/asm/sigcontext.h b/arch/riscv/include/uapi/asm/sigcontext.h
new file mode 100644
index 000000000000..dc882d598834
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/sigcontext.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _UAPI_ASM_RISCV_SIGCONTEXT_H
+#define _UAPI_ASM_RISCV_SIGCONTEXT_H
+
+#include <asm/ptrace.h>
+
+/* Signal context structure
+ *
+ * This contains the context saved before a signal handler is invoked;
+ * it is restored by sys_sigreturn / sys_rt_sigreturn.
+ */
+struct sigcontext {
+	struct user_regs_struct sc_regs;
+	struct user_fpregs_struct sc_fpregs;
+};
+
+#endif /* _UAPI_ASM_RISCV_SIGCONTEXT_H */
diff --git a/arch/riscv/include/uapi/asm/siginfo.h b/arch/riscv/include/uapi/asm/siginfo.h
new file mode 100644
index 000000000000..f96849aac662
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/siginfo.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2016 SiFive, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_SIGINFO_H
+#define __ASM_SIGINFO_H
+
+#define __ARCH_SI_PREAMBLE_SIZE	(__SIZEOF_POINTER__ == 4 ? 12 : 16)
+
+#include <asm-generic/siginfo.h>
+
+#endif
diff --git a/arch/riscv/include/uapi/asm/unistd.h b/arch/riscv/include/uapi/asm/unistd.h
new file mode 100644
index 000000000000..aedba7d32370
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/unistd.h
@@ -0,0 +1,22 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <asm-generic/unistd.h>
+
+#define __NR_sysriscv  __NR_arch_specific_syscall
+#ifndef __riscv_atomic
+__SYSCALL(__NR_sysriscv, sys_sysriscv)
+#endif
+
+#define RISCV_ATOMIC_CMPXCHG    1
+#define RISCV_ATOMIC_CMPXCHG64  2
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 14/17] RISC-V: lib files
  2017-06-06 22:59   ` Palmer Dabbelt
@ 2017-06-06 23:00     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 23:00 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Palmer Dabbelt

Most of these files are based on code from GCC, but the delay code is
mostly from ARM.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/lib/Makefile  |   5 ++
 arch/riscv/lib/delay.c   | 107 ++++++++++++++++++++++++++++++++++++++++
 arch/riscv/lib/memcpy.S  |  98 +++++++++++++++++++++++++++++++++++++
 arch/riscv/lib/memset.S  | 118 ++++++++++++++++++++++++++++++++++++++++++++
 arch/riscv/lib/uaccess.S | 125 +++++++++++++++++++++++++++++++++++++++++++++++
 arch/riscv/lib/udivdi3.S |  39 +++++++++++++++
 6 files changed, 492 insertions(+)
 create mode 100644 arch/riscv/lib/Makefile
 create mode 100644 arch/riscv/lib/delay.c
 create mode 100644 arch/riscv/lib/memcpy.S
 create mode 100644 arch/riscv/lib/memset.S
 create mode 100644 arch/riscv/lib/uaccess.S
 create mode 100644 arch/riscv/lib/udivdi3.S

diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
new file mode 100644
index 000000000000..120c38e77a46
--- /dev/null
+++ b/arch/riscv/lib/Makefile
@@ -0,0 +1,5 @@
+lib-y	:= delay.o memcpy.o memset.o uaccess.o
+
+ifeq ($(CONFIG_64BIT),)
+lib-y += udivdi3.o
+endif
diff --git a/arch/riscv/lib/delay.c b/arch/riscv/lib/delay.c
new file mode 100644
index 000000000000..ceff08a5b762
--- /dev/null
+++ b/arch/riscv/lib/delay.c
@@ -0,0 +1,107 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/delay.h>
+#include <linux/param.h>
+#include <linux/timex.h>
+#include <linux/export.h>
+
+/*
+ * This is copied from arch/arm/include/asm/delay.h
+ *
+ * Loop (or tick) based delay:
+ *
+ * loops = loops_per_jiffy * jiffies_per_sec * delay_us / us_per_sec
+ *
+ * where:
+ *
+ * jiffies_per_sec = HZ
+ * us_per_sec = 1000000
+ *
+ * Therefore the constant part is HZ / 1000000 which is a small
+ * fractional number. To make this usable with integer math, we
+ * scale up this constant by 2^31, perform the actual multiplication,
+ * and scale the result back down by 2^31 with a simple shift:
+ *
+ * loops = (loops_per_jiffy * delay_us * UDELAY_MULT) >> 31
+ *
+ * where:
+ *
+ * UDELAY_MULT = 2^31 * HZ / 1000000
+ *             = (2^31 / 1000000) * HZ
+ *             = 2147.483648 * HZ
+ *             = 2147 * HZ + 483648 * HZ / 1000000
+ *
+ * 31 is the biggest scale shift value that won't overflow 32 bits for
+ * delay_us * UDELAY_MULT assuming HZ <= 1000 and delay_us <= 2000.
+ */
+#define MAX_UDELAY_US	2000
+#define MAX_UDELAY_HZ	1000
+#define UDELAY_MULT	(2147UL * HZ + 483648UL * HZ / 1000000UL)
+#define UDELAY_SHIFT	31
+
+#if HZ > MAX_UDELAY_HZ
+#error "HZ > MAX_UDELAY_HZ"
+#endif
+
+/* RISC-V supports both UDELAY and NDELAY.  This is largely the same as above,
+ * but with different constants.  I added 10 bits to the shift to get this, but
+ * the result is that I need a 64-bit multiply, which is slow on 32-bit
+ * platforms.
+ *
+ * NDELAY_MULT = 2^41 * HZ / 1000000000
+ *             = (2^41 / 1000000000) * HZ
+ *             = 2199.02325555 * HZ
+ *             = 2199 * HZ + 23255550 * HZ / 1000000000
+ *
+ * The maximum here is to avoid 64-bit overflow, but it isn't checked as it
+ * won't happen.
+ */
+#define MAX_NDELAY_NS   (1ULL << 42)
+#define MAX_NDELAY_HZ	MAX_UDELAY_HZ
+#define NDELAY_MULT	((unsigned long long)(2199ULL * HZ + 23255550ULL * HZ / 1000000000ULL))
+#define NDELAY_SHIFT	41
+
+#if HZ > MAX_NDELAY_HZ
+#error "HZ > MAX_NDELAY_HZ"
+#endif
+
+void __delay(unsigned long cycles)
+{
+	u64 t0 = get_cycles();
+
+	while ((unsigned long)(get_cycles() - t0) < cycles)
+		cpu_relax();
+}
+
+void udelay(unsigned long usecs)
+{
+	unsigned long ucycles = usecs * lpj_fine * UDELAY_MULT;
+
+	if (usecs > MAX_UDELAY_US) {
+		__delay((u64)usecs * riscv_timebase / 1000000ULL);
+		return;
+	}
+
+	__delay(ucycles >> UDELAY_SHIFT);
+}
+EXPORT_SYMBOL(udelay);
+
+void ndelay(unsigned long nsecs)
+{
+	/* This doesn't bother checking for overflow, as that would take
+	 * over an hour of delay. */
+	unsigned long long ncycles = nsecs * lpj_fine * NDELAY_MULT;
+	__delay(ncycles >> NDELAY_SHIFT);
+}
+EXPORT_SYMBOL(ndelay);
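
For illustration only (not part of this patch), the fixed-point trick in the
comment above works out as follows in plain C.  EXAMPLE_HZ and the
loops_per_jiffy value are invented numbers, not taken from any real RISC-V
configuration:

#include <stdio.h>
#include <stdint.h>

#define EXAMPLE_HZ	100UL	/* assumed tick rate, for illustration only */
#define UDELAY_MULT	(2147UL * EXAMPLE_HZ + 483648UL * EXAMPLE_HZ / 1000000UL)
#define UDELAY_SHIFT	31

int main(void)
{
	unsigned long lpj = 1000000UL;	/* assumed loops_per_jiffy */
	unsigned long usecs = 150;	/* requested delay, in microseconds */

	/* loops = (loops_per_jiffy * delay_us * UDELAY_MULT) >> 31 */
	uint64_t loops = ((uint64_t)usecs * lpj * UDELAY_MULT) >> UDELAY_SHIFT;

	/* Exact value for comparison: lpj * HZ * delay_us / 10^6 */
	double exact = (double)lpj * EXAMPLE_HZ * usecs / 1000000.0;

	printf("fixed-point loops = %llu, exact loops = %.1f\n",
	       (unsigned long long)loops, exact);
	return 0;
}
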
diff --git a/arch/riscv/lib/memcpy.S b/arch/riscv/lib/memcpy.S
new file mode 100644
index 000000000000..5b576cf041eb
--- /dev/null
+++ b/arch/riscv/lib/memcpy.S
@@ -0,0 +1,98 @@
+/*
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/linkage.h>
+#include <asm/asm.h>
+
+/* void *memcpy(void *, const void *, size_t) */
+ENTRY(memcpy)
+	move t6, a0  /* Preserve return value */
+
+	/* Defer to byte-oriented copy for small sizes */
+	sltiu a3, a2, 128
+	bnez a3, 4f
+	/* Use word-oriented copy only if low-order bits match */
+	andi a3, t6, SZREG-1
+	andi a4, a1, SZREG-1
+	bne a3, a4, 4f
+
+	beqz a3, 2f  /* Skip if already aligned */
+	/* Round to nearest double word-aligned address
+	   greater than or equal to start address */
+	andi a3, a1, ~(SZREG-1)
+	addi a3, a3, SZREG
+	/* Handle initial misalignment */
+	sub a4, a3, a1
+1:
+	lb a5, 0(a1)
+	addi a1, a1, 1
+	sb a5, 0(t6)
+	addi t6, t6, 1
+	bltu a1, a3, 1b
+	sub a2, a2, a4  /* Update count */
+
+2:
+	andi a4, a2, ~((16*SZREG)-1)
+	beqz a4, 4f
+	add a3, a1, a4
+3:
+	REG_L a4,       0(a1)
+	REG_L a5,   SZREG(a1)
+	REG_L a6, 2*SZREG(a1)
+	REG_L a7, 3*SZREG(a1)
+	REG_L t0, 4*SZREG(a1)
+	REG_L t1, 5*SZREG(a1)
+	REG_L t2, 6*SZREG(a1)
+	REG_L t3, 7*SZREG(a1)
+	REG_L t4, 8*SZREG(a1)
+	REG_L t5, 9*SZREG(a1)
+	REG_S a4,       0(t6)
+	REG_S a5,   SZREG(t6)
+	REG_S a6, 2*SZREG(t6)
+	REG_S a7, 3*SZREG(t6)
+	REG_S t0, 4*SZREG(t6)
+	REG_S t1, 5*SZREG(t6)
+	REG_S t2, 6*SZREG(t6)
+	REG_S t3, 7*SZREG(t6)
+	REG_S t4, 8*SZREG(t6)
+	REG_S t5, 9*SZREG(t6)
+	REG_L a4, 10*SZREG(a1)
+	REG_L a5, 11*SZREG(a1)
+	REG_L a6, 12*SZREG(a1)
+	REG_L a7, 13*SZREG(a1)
+	REG_L t0, 14*SZREG(a1)
+	REG_L t1, 15*SZREG(a1)
+	addi a1, a1, 16*SZREG
+	REG_S a4, 10*SZREG(t6)
+	REG_S a5, 11*SZREG(t6)
+	REG_S a6, 12*SZREG(t6)
+	REG_S a7, 13*SZREG(t6)
+	REG_S t0, 14*SZREG(t6)
+	REG_S t1, 15*SZREG(t6)
+	addi t6, t6, 16*SZREG
+	bltu a1, a3, 3b
+	andi a2, a2, (16*SZREG)-1  /* Update count */
+
+4:
+	/* Handle trailing misalignment */
+	beqz a2, 6f
+	add a3, a1, a2
+5:
+	lb a4, 0(a1)
+	addi a1, a1, 1
+	sb a4, 0(t6)
+	addi t6, t6, 1
+	bltu a1, a3, 5b
+6:
+	ret
+END(memcpy)
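
A C-level rendering of the copy strategy above may help when reading the
assembly: byte copies for small or differently-aligned buffers, otherwise
copy register-sized words and mop up the tail.  This is only a sketch with
an invented function name; the assembly additionally unrolls the word loop
to 16 words per iteration:

#include <stddef.h>
#include <stdint.h>

static void *memcpy_sketch(void *dst, const void *src, size_t n)
{
	unsigned char *d = dst;
	const unsigned char *s = src;
	const size_t word = sizeof(unsigned long);

	/* Word copies only pay off when both pointers share alignment. */
	if (n >= 128 && ((uintptr_t)d % word) == ((uintptr_t)s % word)) {
		while ((uintptr_t)s % word) {	/* initial misalignment */
			*d++ = *s++;
			n--;
		}
		while (n >= word) {		/* aligned word-sized copies */
			*(unsigned long *)d = *(const unsigned long *)s;
			d += word;
			s += word;
			n -= word;
		}
	}
	while (n--)				/* trailing bytes */
		*d++ = *s++;
	return dst;
}
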
diff --git a/arch/riscv/lib/memset.S b/arch/riscv/lib/memset.S
new file mode 100644
index 000000000000..d83c38099653
--- /dev/null
+++ b/arch/riscv/lib/memset.S
@@ -0,0 +1,118 @@
+/*
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#include <linux/linkage.h>
+#include <asm/asm.h>
+
+/* void *memset(void *, int, size_t) */
+ENTRY(memset)
+	move t0, a0  /* Preserve return value */
+
+	/* Defer to byte-oriented fill for small sizes */
+	sltiu a3, a2, 16
+	bnez a3, 4f
+
+	/* Round to nearest XLEN-aligned address
+	   greater than or equal to start address */
+	addi a3, t0, SZREG-1
+	andi a3, a3, ~(SZREG-1)
+	beq a3, t0, 2f  /* Skip if already aligned */
+	/* Handle initial misalignment */
+	sub a4, a3, t0
+1:
+	sb a1, 0(t0)
+	addi t0, t0, 1
+	bltu t0, a3, 1b
+	sub a2, a2, a4  /* Update count */
+
+2: /* Duff's device with 32 XLEN stores per iteration */
+	/* Broadcast value into all bytes */
+	andi a1, a1, 0xff
+	slli a3, a1, 8
+	or a1, a3, a1
+	slli a3, a1, 16
+	or a1, a3, a1
+#ifdef CONFIG_64BIT
+	slli a3, a1, 32
+	or a1, a3, a1
+#endif
+
+	/* Calculate end address */
+	andi a4, a2, ~(SZREG-1)
+	add a3, t0, a4
+
+	andi a4, a4, 31*SZREG  /* Calculate remainder */
+	beqz a4, 3f            /* Shortcut if no remainder */
+	neg a4, a4
+	addi a4, a4, 32*SZREG  /* Calculate initial offset */
+
+	/* Adjust start address with offset */
+	sub t0, t0, a4
+
+	/* Jump into loop body */
+	/* Assumes 32-bit instruction lengths */
+	la a5, 3f
+#ifdef CONFIG_64BIT
+	srli a4, a4, 1
+#endif
+	add a5, a5, a4
+	jr a5
+3:
+	REG_S a1,        0(t0)
+	REG_S a1,    SZREG(t0)
+	REG_S a1,  2*SZREG(t0)
+	REG_S a1,  3*SZREG(t0)
+	REG_S a1,  4*SZREG(t0)
+	REG_S a1,  5*SZREG(t0)
+	REG_S a1,  6*SZREG(t0)
+	REG_S a1,  7*SZREG(t0)
+	REG_S a1,  8*SZREG(t0)
+	REG_S a1,  9*SZREG(t0)
+	REG_S a1, 10*SZREG(t0)
+	REG_S a1, 11*SZREG(t0)
+	REG_S a1, 12*SZREG(t0)
+	REG_S a1, 13*SZREG(t0)
+	REG_S a1, 14*SZREG(t0)
+	REG_S a1, 15*SZREG(t0)
+	REG_S a1, 16*SZREG(t0)
+	REG_S a1, 17*SZREG(t0)
+	REG_S a1, 18*SZREG(t0)
+	REG_S a1, 19*SZREG(t0)
+	REG_S a1, 20*SZREG(t0)
+	REG_S a1, 21*SZREG(t0)
+	REG_S a1, 22*SZREG(t0)
+	REG_S a1, 23*SZREG(t0)
+	REG_S a1, 24*SZREG(t0)
+	REG_S a1, 25*SZREG(t0)
+	REG_S a1, 26*SZREG(t0)
+	REG_S a1, 27*SZREG(t0)
+	REG_S a1, 28*SZREG(t0)
+	REG_S a1, 29*SZREG(t0)
+	REG_S a1, 30*SZREG(t0)
+	REG_S a1, 31*SZREG(t0)
+	addi t0, t0, 32*SZREG
+	bltu t0, a3, 3b
+	andi a2, a2, SZREG-1  /* Update count */
+
+4:
+	/* Handle trailing misalignment */
+	beqz a2, 6f
+	add a3, t0, a2
+5:
+	sb a1, 0(t0)
+	addi t0, t0, 1
+	bltu t0, a3, 5b
+6:
+	ret
+END(memset)
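
The computed jump into the unrolled store run above is the assembly form of
Duff's device.  A rough C analogue, with an invented helper name and a word
count instead of a byte count, looks like this; the real code instead scales
the remainder into an instruction offset and jumps into the middle of the
unrolled block:

#include <stddef.h>

/* Fill nwords machine words with v, eight stores per loop iteration,
 * entering mid-loop to handle the remainder (Duff's device). */
static void fill_words_sketch(unsigned long *p, unsigned long v, size_t nwords)
{
	size_t n = (nwords + 7) / 8;

	if (nwords == 0)
		return;

	switch (nwords % 8) {
	case 0: do {	*p++ = v;
	case 7:		*p++ = v;
	case 6:		*p++ = v;
	case 5:		*p++ = v;
	case 4:		*p++ = v;
	case 3:		*p++ = v;
	case 2:		*p++ = v;
	case 1:		*p++ = v;
		} while (--n > 0);
	}
}
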
diff --git a/arch/riscv/lib/uaccess.S b/arch/riscv/lib/uaccess.S
new file mode 100644
index 000000000000..cba994696aff
--- /dev/null
+++ b/arch/riscv/lib/uaccess.S
@@ -0,0 +1,125 @@
+#include <linux/linkage.h>
+#include <asm/asm.h>
+#include <asm/csr.h>
+
+	.altmacro
+	.macro fixup op reg addr lbl
+	LOCAL _epc
+_epc:
+	\op \reg, \addr
+	.section __ex_table,"a"
+	.balign RISCV_SZPTR
+	RISCV_PTR _epc, \lbl
+	.previous
+	.endm
+
+ENTRY(__copy_user)
+
+#ifdef CONFIG_RV_PUM
+	/* Enable access to user memory */
+	li t6, SR_SUM
+	csrs sstatus, t6
+#endif
+
+	add a3, a1, a2
+	/* Use word-oriented copy only if low-order bits match */
+	andi t0, a0, SZREG-1
+	andi t1, a1, SZREG-1
+	bne t0, t1, 2f
+
+	addi t0, a1, SZREG-1
+	andi t1, a3, ~(SZREG-1)
+	andi t0, t0, ~(SZREG-1)
+	/* a3: terminal address of source region
+	 * t0: lowest XLEN-aligned address in source
+	 * t1: highest XLEN-aligned address in source
+	 */
+	bgeu t0, t1, 2f
+	bltu a1, t0, 4f
+1:
+	fixup REG_L, t2, (a1), 10f
+	fixup REG_S, t2, (a0), 10f
+	addi a1, a1, SZREG
+	addi a0, a0, SZREG
+	bltu a1, t1, 1b
+2:
+	bltu a1, a3, 5f
+
+3:
+#ifdef CONFIG_RV_PUM
+	/* Disable access to user memory */
+	csrc sstatus, t6
+#endif
+	li a0, 0
+	ret
+4: /* Edge case: unalignment */
+	fixup lbu, t2, (a1), 10f
+	fixup sb, t2, (a0), 10f
+	addi a1, a1, 1
+	addi a0, a0, 1
+	bltu a1, t0, 4b
+	j 1b
+5: /* Edge case: remainder */
+	fixup lbu, t2, (a1), 10f
+	fixup sb, t2, (a0), 10f
+	addi a1, a1, 1
+	addi a0, a0, 1
+	bltu a1, a3, 5b
+	j 3b
+ENDPROC(__copy_user)
+
+
+ENTRY(__clear_user)
+
+#ifdef CONFIG_RV_PUM
+	/* Enable access to user memory */
+	li t6, SR_SUM
+	csrs sstatus, t6
+#endif
+
+	add a3, a0, a1
+	addi t0, a0, SZREG-1
+	andi t1, a3, ~(SZREG-1)
+	andi t0, t0, ~(SZREG-1)
+	/* a3: terminal address of target region
+	 * t0: lowest XLEN-aligned address in target region
+	 * t1: highest XLEN-aligned address in target region
+	 */
+	bgeu t0, t1, 2f
+	bltu a0, t0, 4f
+1:
+	fixup REG_S, zero, (a0), 10f
+	addi a0, a0, SZREG
+	bltu a0, t1, 1b
+2:
+	bltu a0, a3, 5f
+
+3:
+#ifdef CONFIG_RV_PUM
+	/* Disable access to user memory */
+	csrc sstatus, t6
+#endif
+	li a0, 0
+	ret
+4: /* Edge case: unalignment */
+	fixup sb, zero, (a0), 10f
+	addi a0, a0, 1
+	bltu a0, t0, 4b
+	j 1b
+5: /* Edge case: remainder */
+	fixup sb, zero, (a0), 10f
+	addi a0, a0, 1
+	bltu a0, a3, 5b
+	j 3b
+ENDPROC(__clear_user)
+
+	.section .fixup,"ax"
+	.balign 4
+10:
+#ifdef CONFIG_RV_PUM
+	/* Disable access to user memory */
+	csrc sstatus, t6
+#endif
+	sub a0, a3, a0
+	ret
+	.previous
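
The fixup macro above is what populates the exception table consumed by the
trap handler: each potentially-faulting user access records a pair of
addresses, the instruction that may fault and the place to resume if it does.
A minimal userspace sketch of that lookup, with fabricated addresses and
illustrative names rather than the kernel's own structures, is:

#include <stdio.h>
#include <stddef.h>

/* Illustrative stand-in for a __ex_table entry. */
struct ex_entry {
	unsigned long insn;	/* address of the access that may fault */
	unsigned long fixup;	/* where to continue if it does */
};

static const struct ex_entry ex_table[] = {
	{ 0x1000, 0x2000 },	/* fabricated example addresses */
	{ 0x1008, 0x2000 },
};

static unsigned long search_fixup(unsigned long epc)
{
	size_t i;

	for (i = 0; i < sizeof(ex_table) / sizeof(ex_table[0]); i++)
		if (ex_table[i].insn == epc)
			return ex_table[i].fixup;
	return 0;	/* no fixup: a real kernel would oops here */
}

int main(void)
{
	printf("fixup for 0x1008 -> 0x%lx\n", search_fixup(0x1008));
	return 0;
}
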
diff --git a/arch/riscv/lib/udivdi3.S b/arch/riscv/lib/udivdi3.S
new file mode 100644
index 000000000000..d8a7a1d03615
--- /dev/null
+++ b/arch/riscv/lib/udivdi3.S
@@ -0,0 +1,39 @@
+/*
+ * Copyright (C) 2016-2017 Free Software Foundation, Inc.
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+  .globl __udivdi3
+__udivdi3:
+  mv    a2, a1
+  mv    a1, a0
+  li    a0, -1
+  beqz  a2, .L5
+  li    a3, 1
+  bgeu  a2, a1, .L2
+.L1:
+  blez  a2, .L2
+  slli  a2, a2, 1
+  slli  a3, a3, 1
+  bgtu  a1, a2, .L1
+.L2:
+  li    a0, 0
+.L3:
+  bltu  a1, a2, .L4
+  sub   a1, a1, a2
+  or    a0, a0, a3
+.L4:
+  srli  a3, a3, 1
+  srli  a2, a2, 1
+  bnez  a3, .L3
+.L5:
+  ret
+
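
For reference, __udivdi3 above is the classic shift-and-subtract division
loop.  A plain C rendering of the same algorithm, purely illustrative and not
the libgcc source, would be:

#include <stdio.h>
#include <stdint.h>

static uint64_t udiv64_sketch(uint64_t num, uint64_t den)
{
	uint64_t quot = 0, bit = 1;

	if (den == 0)
		return UINT64_MAX;	/* mirrors the -1 on divide-by-zero above */

	/* Shift the divisor up until it is just below the dividend. */
	while (den < num && !(den & (1ULL << 63))) {
		den <<= 1;
		bit <<= 1;
	}

	/* Subtract out each shifted divisor that still fits. */
	while (bit) {
		if (num >= den) {
			num -= den;
			quot |= bit;
		}
		den >>= 1;
		bit >>= 1;
	}
	return quot;
}

int main(void)
{
	printf("%llu\n", (unsigned long long)udiv64_sketch(1000003ULL, 17ULL));
	return 0;
}
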
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 15/17] RISC-V: Add mm subdirectory
  2017-06-06 22:59   ` Palmer Dabbelt
@ 2017-06-06 23:00     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 23:00 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Palmer Dabbelt

These files are mostly based on the score port, but since all the
non-stub functions are very ISA-specific, they've been heavily modified.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/mm/Makefile  |   1 +
 arch/riscv/mm/extable.c |  37 +++++++
 arch/riscv/mm/fault.c   | 280 ++++++++++++++++++++++++++++++++++++++++++++++++
 arch/riscv/mm/init.c    |  72 +++++++++++++
 arch/riscv/mm/ioremap.c |  93 ++++++++++++++++
 5 files changed, 483 insertions(+)
 create mode 100644 arch/riscv/mm/Makefile
 create mode 100644 arch/riscv/mm/extable.c
 create mode 100644 arch/riscv/mm/fault.c
 create mode 100644 arch/riscv/mm/init.c
 create mode 100644 arch/riscv/mm/ioremap.c

diff --git a/arch/riscv/mm/Makefile b/arch/riscv/mm/Makefile
new file mode 100644
index 000000000000..36ebe6feb5d6
--- /dev/null
+++ b/arch/riscv/mm/Makefile
@@ -0,0 +1 @@
+obj-y := init.o fault.o extable.o ioremap.o
diff --git a/arch/riscv/mm/extable.c b/arch/riscv/mm/extable.c
new file mode 100644
index 000000000000..11bb9417123b
--- /dev/null
+++ b/arch/riscv/mm/extable.c
@@ -0,0 +1,37 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+
+#include <linux/extable.h>
+#include <linux/module.h>
+#include <linux/uaccess.h>
+
+int fixup_exception(struct pt_regs *regs)
+{
+	const struct exception_table_entry *fixup;
+
+	fixup = search_exception_tables(regs->sepc);
+	if (fixup) {
+		regs->sepc = fixup->fixup;
+		return 1;
+	}
+	return 0;
+}
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
new file mode 100644
index 000000000000..b2a431c7f233
--- /dev/null
+++ b/arch/riscv/mm/fault.c
@@ -0,0 +1,280 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+
+#include <linux/mm.h>
+#include <linux/kernel.h>
+#include <linux/interrupt.h>
+#include <linux/perf_event.h>
+#include <linux/signal.h>
+#include <linux/uaccess.h>
+
+#include <asm/pgalloc.h>
+#include <asm/ptrace.h>
+#include <asm/uaccess.h>
+
+/*
+ * This routine handles page faults.  It determines the address and the
+ * problem, and then passes it off to one of the appropriate routines.
+ */
+asmlinkage void do_page_fault(struct pt_regs *regs)
+{
+	struct task_struct *tsk;
+	struct vm_area_struct *vma;
+	struct mm_struct *mm;
+	unsigned long addr, cause;
+	unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
+	int fault, code = SEGV_MAPERR;
+
+	cause = regs->scause;
+	addr = regs->sbadaddr;
+
+	tsk = current;
+	mm = tsk->mm;
+
+	/*
+	 * Fault-in kernel-space virtual memory on-demand.
+	 * The 'reference' page table is init_mm.pgd.
+	 *
+	 * NOTE! We MUST NOT take any locks for this case. We may
+	 * be in an interrupt or a critical region, and should
+	 * only copy the information from the master page table,
+	 * nothing more.
+	 */
+	if (unlikely((addr >= VMALLOC_START) && (addr <= VMALLOC_END)))
+		goto vmalloc_fault;
+
+	/* Enable interrupts if they were enabled in the parent context. */
+	if (likely(regs->sstatus & SR_PIE))
+		local_irq_enable();
+
+	/*
+	 * If we're in an interrupt, have no user context, or are running
+	 * in an atomic region, then we must not take the fault.
+	 */
+	if (unlikely(faulthandler_disabled() || !mm))
+		goto no_context;
+
+	if (user_mode(regs))
+		flags |= FAULT_FLAG_USER;
+
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
+
+retry:
+	down_read(&mm->mmap_sem);
+	vma = find_vma(mm, addr);
+	if (unlikely(!vma))
+		goto bad_area;
+	if (likely(vma->vm_start <= addr))
+		goto good_area;
+	if (unlikely(!(vma->vm_flags & VM_GROWSDOWN)))
+		goto bad_area;
+	if (unlikely(expand_stack(vma, addr)))
+		goto bad_area;
+
+	/*
+	 * Ok, we have a good vm_area for this memory access, so
+	 * we can handle it.
+	 */
+good_area:
+	code = SEGV_ACCERR;
+
+	switch (cause) {
+	case EXC_INST_PAGE_FAULT:
+		if (!(vma->vm_flags & VM_EXEC))
+			goto bad_area;
+		break;
+	case EXC_LOAD_PAGE_FAULT:
+		if (!(vma->vm_flags & VM_READ))
+			goto bad_area;
+		break;
+	case EXC_STORE_PAGE_FAULT:
+		if (!(vma->vm_flags & VM_WRITE))
+			goto bad_area;
+		flags |= FAULT_FLAG_WRITE;
+		break;
+	default:
+		panic("%s: unhandled cause %lu", __func__, cause);
+	}
+
+	/*
+	 * If for any reason at all we could not handle the fault,
+	 * make sure we exit gracefully rather than endlessly redo
+	 * the fault.
+	 */
+	fault = handle_mm_fault(vma, addr, flags);
+
+	/*
+	 * If we need to retry but a fatal signal is pending, handle the
+	 * signal first. We do not need to release the mmap_sem because it
+	 * would already be released in __lock_page_or_retry in mm/filemap.c.
+	 */
+	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(tsk))
+		return;
+
+	if (unlikely(fault & VM_FAULT_ERROR)) {
+		if (fault & VM_FAULT_OOM)
+			goto out_of_memory;
+		else if (fault & VM_FAULT_SIGBUS)
+			goto do_sigbus;
+		BUG();
+	}
+
+	/*
+	 * Major/minor page fault accounting is only done on the
+	 * initial attempt. If we go through a retry, it is extremely
+	 * likely that the page will be found in page cache at that point.
+	 */
+	if (flags & FAULT_FLAG_ALLOW_RETRY) {
+		if (fault & VM_FAULT_MAJOR) {
+			tsk->maj_flt++;
+			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ,
+				      1, regs, addr);
+		} else {
+			tsk->min_flt++;
+			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN,
+				      1, regs, addr);
+		}
+		if (fault & VM_FAULT_RETRY) {
+			/*
+			 * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
+			 * of starvation.
+			 */
+			flags &= ~(FAULT_FLAG_ALLOW_RETRY);
+			flags |= FAULT_FLAG_TRIED;
+
+			/*
+			 * No need to up_read(&mm->mmap_sem) as we would
+			 * have already released it in __lock_page_or_retry
+			 * in mm/filemap.c.
+			 */
+			goto retry;
+		}
+	}
+
+	up_read(&mm->mmap_sem);
+	return;
+
+	/*
+	 * Something tried to access memory that isn't in our memory map.
+	 * Fix it, but check if it's kernel or user first.
+	 */
+bad_area:
+	up_read(&mm->mmap_sem);
+	/* User mode accesses just cause a SIGSEGV */
+	if (user_mode(regs)) {
+		do_trap(regs, SIGSEGV, code, addr, tsk);
+		return;
+	}
+
+no_context:
+	/* Are we prepared to handle this kernel fault? */
+	if (fixup_exception(regs))
+		return;
+
+	/*
+	 * Oops. The kernel tried to access some bad page. We'll have to
+	 * terminate things with extreme prejudice.
+	 */
+	bust_spinlocks(1);
+	pr_alert("Unable to handle kernel %s at virtual address " REG_FMT "\n",
+		(addr < PAGE_SIZE) ? "NULL pointer dereference" :
+		"paging request", addr);
+	die(regs, "Oops");
+	do_exit(SIGKILL);
+
+	/*
+	 * We ran out of memory, call the OOM killer, and return the userspace
+	 * (which will retry the fault, or kill us if we got oom-killed).
+	 */
+out_of_memory:
+	up_read(&mm->mmap_sem);
+	if (!user_mode(regs))
+		goto no_context;
+	pagefault_out_of_memory();
+	return;
+
+do_sigbus:
+	up_read(&mm->mmap_sem);
+	/* Kernel mode? Handle exceptions or die */
+	if (!user_mode(regs))
+		goto no_context;
+	do_trap(regs, SIGBUS, BUS_ADRERR, addr, tsk);
+	return;
+
+vmalloc_fault:
+	{
+		pgd_t *pgd, *pgd_k;
+		pud_t *pud, *pud_k;
+		p4d_t *p4d, *p4d_k;
+		pmd_t *pmd, *pmd_k;
+		pte_t *pte_k;
+		int index;
+
+		if (user_mode(regs))
+			goto bad_area;
+
+		/*
+		 * Synchronize this task's top level page-table
+		 * with the 'reference' page table.
+		 *
+		 * Do _not_ use "tsk->active_mm->pgd" here.
+		 * We might be inside an interrupt in the middle
+		 * of a task switch.
+		 */
+		index = pgd_index(addr);
+		pgd = (pgd_t *)pfn_to_virt(csr_read(sptbr)) + index;
+		pgd_k = init_mm.pgd + index;
+
+		if (!pgd_present(*pgd_k))
+			goto no_context;
+		set_pgd(pgd, *pgd_k);
+
+		p4d = p4d_offset(pgd, addr);
+		p4d_k = p4d_offset(pgd_k, addr);
+		if (!p4d_present(*p4d_k))
+			goto no_context;
+
+		pud = pud_offset(p4d, addr);
+		pud_k = pud_offset(p4d_k, addr);
+		if (!pud_present(*pud_k))
+			goto no_context;
+
+		/* Since the vmalloc area is global, it is unnecessary
+		 * to copy individual PTEs
+		 */
+		pmd = pmd_offset(pud, addr);
+		pmd_k = pmd_offset(pud_k, addr);
+		if (!pmd_present(*pmd_k))
+			goto no_context;
+		set_pmd(pmd, *pmd_k);
+
+		/* Make sure the actual PTE exists as well to
+		 * catch kernel vmalloc-area accesses to non-mapped
+		 * addresses. If we don't do this, this will just
+		 * silently loop forever.
+		 */
+		pte_k = pte_offset_kernel(pmd_k, addr);
+		if (!pte_present(*pte_k))
+			goto no_context;
+		return;
+	}
+}
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
new file mode 100644
index 000000000000..8ad464ce4a4c
--- /dev/null
+++ b/arch/riscv/mm/init.c
@@ -0,0 +1,72 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/bootmem.h>
+#include <linux/initrd.h>
+#include <linux/memblock.h>
+#include <linux/swap.h>
+
+#include <asm/tlbflush.h>
+#include <asm/sections.h>
+#include <asm/pgtable.h>
+#include <asm/io.h>
+
+static void __init zone_sizes_init(void)
+{
+	unsigned long zones_size[MAX_NR_ZONES];
+
+	memset(zones_size, 0, sizeof(zones_size));
+	zones_size[ZONE_NORMAL] = pfn_base + max_mapnr;
+	free_area_init_node(0, zones_size, pfn_base, NULL);
+}
+
+void setup_zero_page(void)
+{
+	memset((void *)empty_zero_page, 0, PAGE_SIZE);
+}
+
+void __init paging_init(void)
+{
+	init_mm.pgd = (pgd_t *)pfn_to_virt(csr_read(sptbr));
+
+	setup_zero_page();
+	local_flush_tlb_all();
+	zone_sizes_init();
+}
+
+void __init mem_init(void)
+{
+#ifdef CONFIG_FLATMEM
+	BUG_ON(!mem_map);
+#endif /* CONFIG_FLATMEM */
+
+	high_memory = (void *)(__va(PFN_PHYS(max_low_pfn)));
+	free_all_bootmem();
+
+	mem_init_print_info(NULL);
+}
+
+void free_initmem(void)
+{
+	free_initmem_default(0);
+}
+
+#ifdef CONFIG_BLK_DEV_INITRD
+void free_initrd_mem(unsigned long start, unsigned long end)
+{
+//	free_reserved_area(start, end, 0, "initrd");
+}
+#endif /* CONFIG_BLK_DEV_INITRD */
+
diff --git a/arch/riscv/mm/ioremap.c b/arch/riscv/mm/ioremap.c
new file mode 100644
index 000000000000..c5cc0935096d
--- /dev/null
+++ b/arch/riscv/mm/ioremap.c
@@ -0,0 +1,93 @@
+/*
+ * (C) Copyright 1995 1996 Linus Torvalds
+ * (C) Copyright 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/export.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <linux/io.h>
+
+#include <asm/pgtable.h>
+
+/*
+ * Remap an arbitrary physical address space into the kernel virtual
+ * address space. Needed when the kernel wants to access high addresses
+ * directly.
+ *
+ * NOTE! We need to allow non-page-aligned mappings too: we will obviously
+ * have to convert them into an offset in a page-aligned mapping, but the
+ * caller shouldn't need to know that small detail.
+ */
+static void __iomem *__ioremap_caller(phys_addr_t addr, size_t size,
+	pgprot_t prot, void *caller)
+{
+	phys_addr_t last_addr;
+	unsigned long offset, vaddr;
+	struct vm_struct *area;
+
+	/* Disallow wrap-around or zero size */
+	last_addr = addr + size - 1;
+	if (!size || last_addr < addr)
+		return NULL;
+
+	/* Page-align mappings */
+	offset = addr & (~PAGE_MASK);
+	addr &= PAGE_MASK;
+	size = PAGE_ALIGN(size + offset);
+
+	area = get_vm_area_caller(size, VM_IOREMAP, caller);
+	if (!area)
+		return NULL;
+	vaddr = (unsigned long)area->addr;
+
+	if (ioremap_page_range(vaddr, vaddr + size, addr, prot)) {
+		free_vm_area(area);
+		return NULL;
+	}
+
+	return (void __iomem *)(vaddr + offset);
+}
+
+/*
+ * ioremap     -   map bus memory into CPU space
+ * @offset:    bus address of the memory
+ * @size:      size of the resource to map
+ *
+ * ioremap performs a platform specific sequence of operations to
+ * make bus memory CPU accessible via the readb/readw/readl/writeb/
+ * writew/writel functions and the other mmio helpers. The returned
+ * address is not guaranteed to be usable directly as a virtual
+ * address.
+ *
+ * Must be freed with iounmap.
+ */
+void __iomem *ioremap(phys_addr_t offset, unsigned long size)
+{
+	return __ioremap_caller(offset, size, PAGE_KERNEL,
+		__builtin_return_address(0));
+}
+EXPORT_SYMBOL(ioremap);
+
+
+/**
+ * iounmap - Free an IO remapping
+ * @addr: virtual address from ioremap_*
+ *
+ * Caller must ensure there is only one unmapping for the same pointer.
+ */
+void iounmap(void __iomem *addr)
+{
+	vunmap((void *)((unsigned long)addr & PAGE_MASK));
+}
+EXPORT_SYMBOL(iounmap);
+
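
Purely as a usage sketch (not part of this patch), a driver mapping its
device registers with the ioremap()/iounmap() pair added above would look
roughly like the following; the base address, size, and register offset are
invented and don't correspond to any real device:

#include <linux/errno.h>
#include <linux/io.h>
#include <linux/types.h>

#define EXAMPLE_MMIO_BASE	0x10013000UL	/* made-up device base */
#define EXAMPLE_MMIO_SIZE	0x1000UL
#define EXAMPLE_REG_STATUS	0x04		/* made-up register offset */

static int example_probe(void)
{
	void __iomem *regs;
	u32 status;

	regs = ioremap(EXAMPLE_MMIO_BASE, EXAMPLE_MMIO_SIZE);
	if (!regs)
		return -ENOMEM;

	status = readl(regs + EXAMPLE_REG_STATUS);	/* MMIO read through the mapping */
	(void)status;

	iounmap(regs);
	return 0;
}
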
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 16/17] RISC-V: Add kernel subdirectory
  2017-06-06 22:59   ` Palmer Dabbelt
@ 2017-06-06 23:00     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 23:00 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Palmer Dabbelt

These files were mostly based on the score port, but many of them are
very ISA specific.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/kernel/.gitignore       |   1 +
 arch/riscv/kernel/Makefile         |  16 ++
 arch/riscv/kernel/asm-offsets.c    | 316 +++++++++++++++++++++++++++
 arch/riscv/kernel/cacheinfo.c      | 103 +++++++++
 arch/riscv/kernel/cpu.c            |  89 ++++++++
 arch/riscv/kernel/entry.S          | 437 +++++++++++++++++++++++++++++++++++++
 arch/riscv/kernel/head.S           | 147 +++++++++++++
 arch/riscv/kernel/irq.c            |  20 ++
 arch/riscv/kernel/module.c         | 215 ++++++++++++++++++
 arch/riscv/kernel/process.c        | 132 +++++++++++
 arch/riscv/kernel/ptrace.c         | 147 +++++++++++++
 arch/riscv/kernel/reset.c          |  36 +++
 arch/riscv/kernel/riscv_ksyms.c    |  16 ++
 arch/riscv/kernel/setup.c          | 240 ++++++++++++++++++++
 arch/riscv/kernel/signal.c         | 257 ++++++++++++++++++++++
 arch/riscv/kernel/smp.c            | 110 ++++++++++
 arch/riscv/kernel/smpboot.c        | 103 +++++++++
 arch/riscv/kernel/stacktrace.c     | 177 +++++++++++++++
 arch/riscv/kernel/sys_riscv.c      |  85 ++++++++
 arch/riscv/kernel/syscall_table.c  |  25 +++
 arch/riscv/kernel/traps.c          | 183 ++++++++++++++++
 arch/riscv/kernel/vdso.c           | 125 +++++++++++
 arch/riscv/kernel/vdso/.gitignore  |   1 +
 arch/riscv/kernel/vdso/Makefile    |  61 ++++++
 arch/riscv/kernel/vdso/sigreturn.S |  24 ++
 arch/riscv/kernel/vdso/vdso.S      |  27 +++
 arch/riscv/kernel/vdso/vdso.lds.S  |  76 +++++++
 arch/riscv/kernel/vmlinux.lds.S    |  92 ++++++++
 28 files changed, 3261 insertions(+)
 create mode 100644 arch/riscv/kernel/.gitignore
 create mode 100644 arch/riscv/kernel/Makefile
 create mode 100644 arch/riscv/kernel/asm-offsets.c
 create mode 100644 arch/riscv/kernel/cacheinfo.c
 create mode 100644 arch/riscv/kernel/cpu.c
 create mode 100644 arch/riscv/kernel/entry.S
 create mode 100644 arch/riscv/kernel/head.S
 create mode 100644 arch/riscv/kernel/irq.c
 create mode 100644 arch/riscv/kernel/module.c
 create mode 100644 arch/riscv/kernel/process.c
 create mode 100644 arch/riscv/kernel/ptrace.c
 create mode 100644 arch/riscv/kernel/reset.c
 create mode 100644 arch/riscv/kernel/riscv_ksyms.c
 create mode 100644 arch/riscv/kernel/setup.c
 create mode 100644 arch/riscv/kernel/signal.c
 create mode 100644 arch/riscv/kernel/smp.c
 create mode 100644 arch/riscv/kernel/smpboot.c
 create mode 100644 arch/riscv/kernel/stacktrace.c
 create mode 100644 arch/riscv/kernel/sys_riscv.c
 create mode 100644 arch/riscv/kernel/syscall_table.c
 create mode 100644 arch/riscv/kernel/traps.c
 create mode 100644 arch/riscv/kernel/vdso.c
 create mode 100644 arch/riscv/kernel/vdso/.gitignore
 create mode 100644 arch/riscv/kernel/vdso/Makefile
 create mode 100644 arch/riscv/kernel/vdso/sigreturn.S
 create mode 100644 arch/riscv/kernel/vdso/vdso.S
 create mode 100644 arch/riscv/kernel/vdso/vdso.lds.S
 create mode 100644 arch/riscv/kernel/vmlinux.lds.S

diff --git a/arch/riscv/kernel/.gitignore b/arch/riscv/kernel/.gitignore
new file mode 100644
index 000000000000..b51634f6a7cd
--- /dev/null
+++ b/arch/riscv/kernel/.gitignore
@@ -0,0 +1 @@
+/vmlinux.lds
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
new file mode 100644
index 000000000000..b6de129d4a23
--- /dev/null
+++ b/arch/riscv/kernel/Makefile
@@ -0,0 +1,16 @@
+#
+# Makefile for the RISC-V Linux kernel
+#
+
+extra-y := head.o vmlinux.lds
+
+obj-y	:= cpu.o entry.o irq.o process.o ptrace.o reset.o setup.o \
+	   signal.o syscall_table.o sys_riscv.o traps.o \
+	   riscv_ksyms.o stacktrace.o vdso.o cacheinfo.o vdso/
+
+CFLAGS_setup.o := -mcmodel=medany
+
+obj-$(CONFIG_SMP)		+= smpboot.o smp.o
+obj-$(CONFIG_MODULES)		+= module.o
+
+clean:
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
new file mode 100644
index 000000000000..2ead5037528c
--- /dev/null
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -0,0 +1,316 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/kbuild.h>
+#include <linux/sched.h>
+#include <asm/thread_info.h>
+#include <asm/ptrace.h>
+
+void asm_offsets(void)
+{
+	OFFSET(TASK_THREAD_RA, task_struct, thread.ra);
+	OFFSET(TASK_THREAD_SP, task_struct, thread.sp);
+	OFFSET(TASK_THREAD_S0, task_struct, thread.s[0]);
+	OFFSET(TASK_THREAD_S1, task_struct, thread.s[1]);
+	OFFSET(TASK_THREAD_S2, task_struct, thread.s[2]);
+	OFFSET(TASK_THREAD_S3, task_struct, thread.s[3]);
+	OFFSET(TASK_THREAD_S4, task_struct, thread.s[4]);
+	OFFSET(TASK_THREAD_S5, task_struct, thread.s[5]);
+	OFFSET(TASK_THREAD_S6, task_struct, thread.s[6]);
+	OFFSET(TASK_THREAD_S7, task_struct, thread.s[7]);
+	OFFSET(TASK_THREAD_S8, task_struct, thread.s[8]);
+	OFFSET(TASK_THREAD_S9, task_struct, thread.s[9]);
+	OFFSET(TASK_THREAD_S10, task_struct, thread.s[10]);
+	OFFSET(TASK_THREAD_S11, task_struct, thread.s[11]);
+	OFFSET(TASK_STACK, task_struct, stack);
+	OFFSET(TASK_TI, task_struct, thread_info);
+	OFFSET(TASK_TI_FLAGS, task_struct, thread_info.flags);
+	OFFSET(TASK_TI_KERNEL_SP, task_struct, thread_info.kernel_sp);
+	OFFSET(TASK_TI_USER_SP, task_struct, thread_info.user_sp);
+
+	OFFSET(TASK_THREAD_F0,  task_struct, thread.fstate.f[0]);
+	OFFSET(TASK_THREAD_F1,  task_struct, thread.fstate.f[1]);
+	OFFSET(TASK_THREAD_F2,  task_struct, thread.fstate.f[2]);
+	OFFSET(TASK_THREAD_F3,  task_struct, thread.fstate.f[3]);
+	OFFSET(TASK_THREAD_F4,  task_struct, thread.fstate.f[4]);
+	OFFSET(TASK_THREAD_F5,  task_struct, thread.fstate.f[5]);
+	OFFSET(TASK_THREAD_F6,  task_struct, thread.fstate.f[6]);
+	OFFSET(TASK_THREAD_F7,  task_struct, thread.fstate.f[7]);
+	OFFSET(TASK_THREAD_F8,  task_struct, thread.fstate.f[8]);
+	OFFSET(TASK_THREAD_F9,  task_struct, thread.fstate.f[9]);
+	OFFSET(TASK_THREAD_F10, task_struct, thread.fstate.f[10]);
+	OFFSET(TASK_THREAD_F11, task_struct, thread.fstate.f[11]);
+	OFFSET(TASK_THREAD_F12, task_struct, thread.fstate.f[12]);
+	OFFSET(TASK_THREAD_F13, task_struct, thread.fstate.f[13]);
+	OFFSET(TASK_THREAD_F14, task_struct, thread.fstate.f[14]);
+	OFFSET(TASK_THREAD_F15, task_struct, thread.fstate.f[15]);
+	OFFSET(TASK_THREAD_F16, task_struct, thread.fstate.f[16]);
+	OFFSET(TASK_THREAD_F17, task_struct, thread.fstate.f[17]);
+	OFFSET(TASK_THREAD_F18, task_struct, thread.fstate.f[18]);
+	OFFSET(TASK_THREAD_F19, task_struct, thread.fstate.f[19]);
+	OFFSET(TASK_THREAD_F20, task_struct, thread.fstate.f[20]);
+	OFFSET(TASK_THREAD_F21, task_struct, thread.fstate.f[21]);
+	OFFSET(TASK_THREAD_F22, task_struct, thread.fstate.f[22]);
+	OFFSET(TASK_THREAD_F23, task_struct, thread.fstate.f[23]);
+	OFFSET(TASK_THREAD_F24, task_struct, thread.fstate.f[24]);
+	OFFSET(TASK_THREAD_F25, task_struct, thread.fstate.f[25]);
+	OFFSET(TASK_THREAD_F26, task_struct, thread.fstate.f[26]);
+	OFFSET(TASK_THREAD_F27, task_struct, thread.fstate.f[27]);
+	OFFSET(TASK_THREAD_F28, task_struct, thread.fstate.f[28]);
+	OFFSET(TASK_THREAD_F29, task_struct, thread.fstate.f[29]);
+	OFFSET(TASK_THREAD_F30, task_struct, thread.fstate.f[30]);
+	OFFSET(TASK_THREAD_F31, task_struct, thread.fstate.f[31]);
+	OFFSET(TASK_THREAD_FCSR, task_struct, thread.fstate.fcsr);
+
+	DEFINE(PT_SIZE, sizeof(struct pt_regs));
+	OFFSET(PT_SEPC, pt_regs, sepc);
+	OFFSET(PT_RA, pt_regs, ra);
+	OFFSET(PT_FP, pt_regs, s0);
+	OFFSET(PT_S0, pt_regs, s0);
+	OFFSET(PT_S1, pt_regs, s1);
+	OFFSET(PT_S2, pt_regs, s2);
+	OFFSET(PT_S3, pt_regs, s3);
+	OFFSET(PT_S4, pt_regs, s4);
+	OFFSET(PT_S5, pt_regs, s5);
+	OFFSET(PT_S6, pt_regs, s6);
+	OFFSET(PT_S7, pt_regs, s7);
+	OFFSET(PT_S8, pt_regs, s8);
+	OFFSET(PT_S9, pt_regs, s9);
+	OFFSET(PT_S10, pt_regs, s10);
+	OFFSET(PT_S11, pt_regs, s11);
+	OFFSET(PT_SP, pt_regs, sp);
+	OFFSET(PT_TP, pt_regs, tp);
+	OFFSET(PT_A0, pt_regs, a0);
+	OFFSET(PT_A1, pt_regs, a1);
+	OFFSET(PT_A2, pt_regs, a2);
+	OFFSET(PT_A3, pt_regs, a3);
+	OFFSET(PT_A4, pt_regs, a4);
+	OFFSET(PT_A5, pt_regs, a5);
+	OFFSET(PT_A6, pt_regs, a6);
+	OFFSET(PT_A7, pt_regs, a7);
+	OFFSET(PT_T0, pt_regs, t0);
+	OFFSET(PT_T1, pt_regs, t1);
+	OFFSET(PT_T2, pt_regs, t2);
+	OFFSET(PT_T3, pt_regs, t3);
+	OFFSET(PT_T4, pt_regs, t4);
+	OFFSET(PT_T5, pt_regs, t5);
+	OFFSET(PT_T6, pt_regs, t6);
+	OFFSET(PT_GP, pt_regs, gp);
+	OFFSET(PT_SSTATUS, pt_regs, sstatus);
+	OFFSET(PT_SBADADDR, pt_regs, sbadaddr);
+	OFFSET(PT_SCAUSE, pt_regs, scause);
+
+	/* THREAD_{F,X}* might be larger than an S-type offset can handle, but
+	 * these are used in performance-sensitive assembly, so we can't resort
+	 * to loading the long immediate every time.
+	 */
+	DEFINE(TASK_THREAD_RA_RA,
+		  offsetof(struct task_struct, thread.ra)
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_SP_RA,
+		  offsetof(struct task_struct, thread.sp)
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S0_RA,
+		  offsetof(struct task_struct, thread.s[0])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S1_RA,
+		  offsetof(struct task_struct, thread.s[1])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S2_RA,
+		  offsetof(struct task_struct, thread.s[2])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S3_RA,
+		  offsetof(struct task_struct, thread.s[3])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S4_RA,
+		  offsetof(struct task_struct, thread.s[4])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S5_RA,
+		  offsetof(struct task_struct, thread.s[5])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S6_RA,
+		  offsetof(struct task_struct, thread.s[6])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S7_RA,
+		  offsetof(struct task_struct, thread.s[7])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S8_RA,
+		  offsetof(struct task_struct, thread.s[8])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S9_RA,
+		  offsetof(struct task_struct, thread.s[9])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S10_RA,
+		  offsetof(struct task_struct, thread.s[10])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S11_RA,
+		  offsetof(struct task_struct, thread.s[11])
+		- offsetof(struct task_struct, thread.ra)
+	);
+
+	DEFINE(TASK_THREAD_F0_F0,
+		  offsetof(struct task_struct, thread.fstate.f[0])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F1_F0,
+		  offsetof(struct task_struct, thread.fstate.f[1])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F2_F0,
+		  offsetof(struct task_struct, thread.fstate.f[2])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F3_F0,
+		  offsetof(struct task_struct, thread.fstate.f[3])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F4_F0,
+		  offsetof(struct task_struct, thread.fstate.f[4])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F5_F0,
+		  offsetof(struct task_struct, thread.fstate.f[5])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F6_F0,
+		  offsetof(struct task_struct, thread.fstate.f[6])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F7_F0,
+		  offsetof(struct task_struct, thread.fstate.f[7])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F8_F0,
+		  offsetof(struct task_struct, thread.fstate.f[8])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F9_F0,
+		  offsetof(struct task_struct, thread.fstate.f[9])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F10_F0,
+		  offsetof(struct task_struct, thread.fstate.f[10])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F11_F0,
+		  offsetof(struct task_struct, thread.fstate.f[11])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F12_F0,
+		  offsetof(struct task_struct, thread.fstate.f[12])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F13_F0,
+		  offsetof(struct task_struct, thread.fstate.f[13])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F14_F0,
+		  offsetof(struct task_struct, thread.fstate.f[14])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F15_F0,
+		  offsetof(struct task_struct, thread.fstate.f[15])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F16_F0,
+		  offsetof(struct task_struct, thread.fstate.f[16])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F17_F0,
+		  offsetof(struct task_struct, thread.fstate.f[17])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F18_F0,
+		  offsetof(struct task_struct, thread.fstate.f[18])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F19_F0,
+		  offsetof(struct task_struct, thread.fstate.f[19])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F20_F0,
+		  offsetof(struct task_struct, thread.fstate.f[20])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F21_F0,
+		  offsetof(struct task_struct, thread.fstate.f[21])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F22_F0,
+		  offsetof(struct task_struct, thread.fstate.f[22])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F23_F0,
+		  offsetof(struct task_struct, thread.fstate.f[23])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F24_F0,
+		  offsetof(struct task_struct, thread.fstate.f[24])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F25_F0,
+		  offsetof(struct task_struct, thread.fstate.f[25])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F26_F0,
+		  offsetof(struct task_struct, thread.fstate.f[26])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F27_F0,
+		  offsetof(struct task_struct, thread.fstate.f[27])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F28_F0,
+		  offsetof(struct task_struct, thread.fstate.f[28])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F29_F0,
+		  offsetof(struct task_struct, thread.fstate.f[29])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F30_F0,
+		  offsetof(struct task_struct, thread.fstate.f[30])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F31_F0,
+		  offsetof(struct task_struct, thread.fstate.f[31])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_FCSR_F0,
+		  offsetof(struct task_struct, thread.fstate.fcsr)
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+
+	/* The assembler needs access to THREAD_SIZE as well. */
+	DEFINE(ASM_THREAD_SIZE, THREAD_SIZE);
+
+	/* We allocate a pt_regs on the stack when entering the kernel.  This
+	 * ensures the alignment is sane.
+	 */
+	DEFINE(PT_SIZE_ON_STACK, ALIGN(sizeof(struct pt_regs), STACK_ALIGN));
+}
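
For reference, the relative-offset trick described in the comment above can be demonstrated outside the kernel: the generated *_RA and *_F0 constants are small enough to fit a 12-bit immediate even when the absolute offset of the field inside task_struct is not. A standalone sketch, with a made-up demo_task structure standing in for task_struct:

#include <stddef.h>
#include <stdio.h>

struct demo_thread { unsigned long ra, sp, s[12]; };
struct demo_task   { char other_state[4096]; struct demo_thread thread; };

int main(void)
{
	/* The absolute offset can exceed the +/-2 KiB reach of a RISC-V
	 * S-type immediate...
	 */
	printf("TASK_THREAD_S11    = %zu\n",
	       offsetof(struct demo_task, thread.s[11]));
	/* ...but the distance from thread.ra always fits, so assembly can
	 * materialize one base address and use short offsets from it.
	 */
	printf("TASK_THREAD_S11_RA = %zu\n",
	       offsetof(struct demo_task, thread.s[11]) -
	       offsetof(struct demo_task, thread.ra));
	return 0;
}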
diff --git a/arch/riscv/kernel/cacheinfo.c b/arch/riscv/kernel/cacheinfo.c
new file mode 100644
index 000000000000..76ed95850a22
--- /dev/null
+++ b/arch/riscv/kernel/cacheinfo.c
@@ -0,0 +1,103 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/cacheinfo.h>
+#include <linux/cpu.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+
+static void ci_leaf_init(struct cacheinfo *this_leaf,
+			 struct device_node *node,
+			 enum cache_type type, unsigned int level)
+{
+	this_leaf->of_node = node;
+	this_leaf->level = level;
+	this_leaf->type = type;
+	this_leaf->physical_line_partition = 1; /* not a sector cache */
+	this_leaf->attributes =
+		CACHE_WRITE_BACK
+		| CACHE_READ_ALLOCATE
+		| CACHE_WRITE_ALLOCATE; /* TODO: add to DTS */
+}
+
+static int __init_cache_level(unsigned int cpu)
+{
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+	struct device_node *np = of_cpu_device_node_get(cpu);
+	int levels = 0, leaves = 0, level;
+
+	if (of_property_read_bool(np, "cache-size"))
+		++leaves;
+	if (of_property_read_bool(np, "i-cache-size"))
+		++leaves;
+	if (of_property_read_bool(np, "d-cache-size"))
+		++leaves;
+	if (leaves > 0)
+		levels = 1;
+
+	while ((np = of_find_next_cache_node(np))) {
+		if (!of_device_is_compatible(np, "cache"))
+			break;
+		if (of_property_read_u32(np, "cache-level", &level))
+			break;
+		if (level <= levels)
+			break;
+		if (of_property_read_bool(np, "cache-size"))
+			++leaves;
+		if (of_property_read_bool(np, "i-cache-size"))
+			++leaves;
+		if (of_property_read_bool(np, "d-cache-size"))
+			++leaves;
+		levels = level;
+	}
+
+	this_cpu_ci->num_levels = levels;
+	this_cpu_ci->num_leaves = leaves;
+	return 0;
+}
+
+static int __populate_cache_leaves(unsigned int cpu)
+{
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+	struct cacheinfo *this_leaf = this_cpu_ci->info_list;
+	struct device_node *np = of_cpu_device_node_get(cpu);
+	int levels = 1, level = 1;
+
+	if (of_property_read_bool(np, "cache-size"))
+		ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
+	if (of_property_read_bool(np, "i-cache-size"))
+		ci_leaf_init(this_leaf++, np, CACHE_TYPE_INST, level);
+	if (of_property_read_bool(np, "d-cache-size"))
+		ci_leaf_init(this_leaf++, np, CACHE_TYPE_DATA, level);
+
+	while ((np = of_find_next_cache_node(np))) {
+		if (!of_device_is_compatible(np, "cache"))
+			break;
+		if (of_property_read_u32(np, "cache-level", &level))
+			break;
+		if (level <= levels)
+			break;
+		if (of_property_read_bool(np, "cache-size"))
+			ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
+		if (of_property_read_bool(np, "i-cache-size"))
+			ci_leaf_init(this_leaf++, np, CACHE_TYPE_INST, level);
+		if (of_property_read_bool(np, "d-cache-size"))
+			ci_leaf_init(this_leaf++, np, CACHE_TYPE_DATA, level);
+		levels = level;
+	}
+
+	return 0;
+}
+
+DEFINE_SMP_CALL_CACHE_FUNCTION(init_cache_level)
+DEFINE_SMP_CALL_CACHE_FUNCTION(populate_cache_leaves)
diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
new file mode 100644
index 000000000000..20004bd7a216
--- /dev/null
+++ b/arch/riscv/kernel/cpu.c
@@ -0,0 +1,89 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/seq_file.h>
+#include <linux/of.h>
+
+/* Return -ENODEV if this is not a valid hart */
+int riscv_of_processor_hart(struct device_node *node)
+{
+	const char *isa, *status;
+	u32 hart;
+
+	if (!of_device_is_compatible(node, "riscv"))
+		return -ENODEV;
+	if (of_property_read_u32(node, "reg", &hart)
+	    || hart >= NR_CPUS)
+		return -ENODEV;
+	if (of_property_read_string(node, "status", &status)
+	    || strcmp(status, "okay"))
+		return -ENODEV;
+	if (of_property_read_string(node, "riscv,isa", &isa)
+	    || isa[0] != 'r'
+	    || isa[1] != 'v')
+		return -ENODEV;
+
+	return hart;
+}
+
+#ifdef CONFIG_PROC_FS
+
+static void *c_start(struct seq_file *m, loff_t *pos)
+{
+	*pos = cpumask_next(*pos - 1, cpu_online_mask);
+	if ((*pos) < nr_cpu_ids)
+		return (void *)(uintptr_t)(1 + *pos);
+	return NULL;
+}
+
+static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+{
+	(*pos)++;
+	return c_start(m, pos);
+}
+
+static void c_stop(struct seq_file *m, void *v)
+{
+}
+
+static int c_show(struct seq_file *m, void *v)
+{
+	unsigned long hart_id = (unsigned long)v - 1;
+	struct device_node *node = of_get_cpu_node(hart_id, NULL);
+	const char *compat, *isa, *mmu;
+
+	seq_printf(m, "hart\t: %lu\n", hart_id);
+	if (!of_property_read_string(node, "riscv,isa", &isa)
+	    && isa[0] == 'r'
+	    && isa[1] == 'v')
+		seq_printf(m, "isa\t: %s\n", isa);
+	if (!of_property_read_string(node, "mmu-type", &mmu)
+	    && !strncmp(mmu, "riscv,", 6))
+		seq_printf(m, "mmu\t: %s\n", mmu+6);
+	if (!of_property_read_string(node, "compatible", &compat)
+	    && strcmp(compat, "riscv"))
+		seq_printf(m, "uarch\t: %s\n", compat);
+	seq_puts(m, "\n");
+
+	return 0;
+}
+
+const struct seq_operations cpuinfo_op = {
+	.start	= c_start,
+	.next	= c_next,
+	.stop	= c_stop,
+	.show	= c_show
+};
+
+#endif /* CONFIG_PROC_FS */
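
For reference, riscv_of_processor_hart() above validates a device-tree CPU node (compatible "riscv", status "okay", a "reg" below NR_CPUS, an ISA string starting with "rv") and returns its hart ID. A hedged sketch of a caller walking the CPU nodes; the loop is illustrative and assumes the prototype exported by the arch headers elsewhere in this series:

#include <linux/of.h>
#include <linux/printk.h>

static void __init example_count_harts(void)
{
	struct device_node *dn;
	int nr_harts = 0;

	for_each_node_by_type(dn, "cpu") {
		if (riscv_of_processor_hart(dn) >= 0)
			nr_harts++;
	}
	pr_info("found %d usable harts\n", nr_harts);
}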
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
new file mode 100644
index 000000000000..0be9ca33e8fc
--- /dev/null
+++ b/arch/riscv/kernel/entry.S
@@ -0,0 +1,437 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+
+#include <asm/asm.h>
+#include <asm/csr.h>
+#include <asm/unistd.h>
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+
+	.text
+	.altmacro
+
+/* Prepares to enter a system call or exception by saving all registers to the
+ * stack.
+ */
+	.macro SAVE_ALL
+	LOCAL _restore_kernel_tpsp
+	LOCAL _save_context
+
+	/* If coming from userspace, preserve the user thread pointer and load
+	   the kernel thread pointer.  If we came from the kernel, sscratch
+	   will contain 0, and we should continue on the current TP. */
+	csrrw tp, sscratch, tp
+	bnez tp, _save_context
+
+_restore_kernel_tpsp:
+	csrr tp, sscratch
+	REG_S sp, TASK_TI_KERNEL_SP(tp)
+_save_context:
+	REG_S sp, TASK_TI_USER_SP(tp)
+	REG_L sp, TASK_TI_KERNEL_SP(tp)
+	addi sp, sp, -(PT_SIZE_ON_STACK)
+	REG_S x1,  PT_RA(sp)
+	REG_S x3,  PT_GP(sp)
+	REG_S x5,  PT_T0(sp)
+	REG_S x6,  PT_T1(sp)
+	REG_S x7,  PT_T2(sp)
+	REG_S x8,  PT_S0(sp)
+	REG_S x9,  PT_S1(sp)
+	REG_S x10, PT_A0(sp)
+	REG_S x11, PT_A1(sp)
+	REG_S x12, PT_A2(sp)
+	REG_S x13, PT_A3(sp)
+	REG_S x14, PT_A4(sp)
+	REG_S x15, PT_A5(sp)
+	REG_S x16, PT_A6(sp)
+	REG_S x17, PT_A7(sp)
+	REG_S x18, PT_S2(sp)
+	REG_S x19, PT_S3(sp)
+	REG_S x20, PT_S4(sp)
+	REG_S x21, PT_S5(sp)
+	REG_S x22, PT_S6(sp)
+	REG_S x23, PT_S7(sp)
+	REG_S x24, PT_S8(sp)
+	REG_S x25, PT_S9(sp)
+	REG_S x26, PT_S10(sp)
+	REG_S x27, PT_S11(sp)
+	REG_S x28, PT_T3(sp)
+	REG_S x29, PT_T4(sp)
+	REG_S x30, PT_T5(sp)
+	REG_S x31, PT_T6(sp)
+
+	/* Disable FPU to detect illegal usage of
+	   floating point in kernel space */
+	li t0, SR_FS
+
+	REG_L s0, TASK_TI_USER_SP(tp)
+	csrrc s1, sstatus, t0
+	csrr s2, sepc
+	csrr s3, sbadaddr
+	csrr s4, scause
+	csrr s5, sscratch
+	REG_S s0, PT_SP(sp)
+	REG_S s1, PT_SSTATUS(sp)
+	REG_S s2, PT_SEPC(sp)
+	REG_S s3, PT_SBADADDR(sp)
+	REG_S s4, PT_SCAUSE(sp)
+	REG_S s5, PT_TP(sp)
+	.endm
+
+/* Prepares to return from a system call or exception by restoring all
+ * registers from the stack.
+ */
+	.macro RESTORE_ALL
+	REG_L a0, PT_SSTATUS(sp)
+	REG_L a2, PT_SEPC(sp)
+	csrw sstatus, a0
+	csrw sepc, a2
+
+	REG_L x1,  PT_RA(sp)
+	REG_L x3,  PT_GP(sp)
+	REG_L x4,  PT_TP(sp)
+	REG_L x5,  PT_T0(sp)
+	REG_L x6,  PT_T1(sp)
+	REG_L x7,  PT_T2(sp)
+	REG_L x8,  PT_S0(sp)
+	REG_L x9,  PT_S1(sp)
+	REG_L x10, PT_A0(sp)
+	REG_L x11, PT_A1(sp)
+	REG_L x12, PT_A2(sp)
+	REG_L x13, PT_A3(sp)
+	REG_L x14, PT_A4(sp)
+	REG_L x15, PT_A5(sp)
+	REG_L x16, PT_A6(sp)
+	REG_L x17, PT_A7(sp)
+	REG_L x18, PT_S2(sp)
+	REG_L x19, PT_S3(sp)
+	REG_L x20, PT_S4(sp)
+	REG_L x21, PT_S5(sp)
+	REG_L x22, PT_S6(sp)
+	REG_L x23, PT_S7(sp)
+	REG_L x24, PT_S8(sp)
+	REG_L x25, PT_S9(sp)
+	REG_L x26, PT_S10(sp)
+	REG_L x27, PT_S11(sp)
+	REG_L x28, PT_T3(sp)
+	REG_L x29, PT_T4(sp)
+	REG_L x30, PT_T5(sp)
+	REG_L x31, PT_T6(sp)
+
+	REG_L x2,  PT_SP(sp)
+	.endm
+
+ENTRY(handle_exception)
+	SAVE_ALL
+
+	/* Set sscratch register to 0, so that if a recursive exception
+	   occurs, the exception vector knows it came from the kernel */
+	csrw sscratch, x0
+
+	la gp, __global_pointer$
+
+	la ra, ret_from_exception
+	/* MSB of cause differentiates between
+	   interrupts and exceptions */
+	bge s4, zero, 1f
+
+	/* Handle interrupts */
+	slli a0, s4, 1
+	srli a0, a0, 1
+	move a1, sp /* pt_regs */
+	tail do_IRQ
+1:
+	/* Handle syscalls */
+	li t0, EXC_SYSCALL
+	beq s4, t0, handle_syscall
+
+	/* Handle other exceptions */
+	slli t0, s4, RISCV_LGPTR
+	la t1, excp_vect_table
+	la t2, excp_vect_table_end
+	move a0, sp /* pt_regs */
+	add t0, t1, t0
+	/* Check if exception code lies within bounds */
+	bgeu t0, t2, 1f
+	REG_L t0, 0(t0)
+	jr t0
+1:
+	tail do_trap_unknown
+
+handle_syscall:
+	/* Advance SEPC to avoid executing the original
+	   scall instruction on sret */
+	addi s2, s2, 0x4
+	REG_S s2, PT_SEPC(sp)
+	/* System calls run with interrupts enabled */
+	csrs sstatus, SR_IE
+	/* Trace syscalls, but only if requested by the user. */
+	REG_L t0, TASK_TI_FLAGS(tp)
+	andi t0, t0, _TIF_SYSCALL_TRACE
+	bnez t0, handle_syscall_trace_enter
+check_syscall_nr:
+	/* Check to make sure we don't jump to a bogus syscall number. */
+	li t0, __NR_syscalls
+	la s0, sys_ni_syscall
+	/* Syscall number held in a7 */
+	bgeu a7, t0, 1f
+	la s0, sys_call_table
+	slli t0, a7, RISCV_LGPTR
+	add s0, s0, t0
+	REG_L s0, 0(s0)
+1:
+	jalr s0
+
+ret_from_syscall:
+	/* Set user a0 to kernel a0 */
+	REG_S a0, PT_A0(sp)
+	/* Trace syscalls, but only if requested by the user. */
+	REG_L t0, TASK_TI_FLAGS(tp)
+	andi t0, t0, _TIF_SYSCALL_TRACE
+	bnez t0, handle_syscall_trace_exit
+
+ret_from_exception:
+	REG_L s0, PT_SSTATUS(sp)
+	csrc sstatus, SR_IE
+	andi s0, s0, SR_PS
+	bnez s0, restore_all
+
+resume_userspace:
+	/* Interrupts must be disabled here so flags are checked atomically */
+	REG_L s0, TASK_TI_FLAGS(tp) /* current_thread_info->flags */
+	andi s1, s0, _TIF_WORK_MASK
+	bnez s1, work_pending
+
+	/* Save unwound kernel stack pointer in thread_info */
+	addi s0, sp, PT_SIZE_ON_STACK
+	REG_S s0, TASK_TI_KERNEL_SP(tp)
+
+	/* Save TP into sscratch, so we can find the kernel data structures
+	 * again. */
+	csrw sscratch, tp
+
+restore_all:
+	RESTORE_ALL
+	sret
+
+work_pending:
+	/* Enter slow path for supplementary processing */
+	la ra, ret_from_exception
+	andi s1, s0, _TIF_NEED_RESCHED
+	bnez s1, work_resched
+work_notifysig:
+	/* Handle pending signals and notify-resume requests */
+	csrs sstatus, SR_IE /* Enable interrupts for do_notify_resume() */
+	move a0, sp /* pt_regs */
+	move a1, s0 /* current_thread_info->flags */
+	tail do_notify_resume
+work_resched:
+	tail schedule
+
+/* Slow paths for ptrace. */
+handle_syscall_trace_enter:
+	move a0, sp
+	call do_syscall_trace_enter
+	REG_L a0, PT_A0(sp)
+	REG_L a1, PT_A1(sp)
+	REG_L a2, PT_A2(sp)
+	REG_L a3, PT_A3(sp)
+	REG_L a4, PT_A4(sp)
+	REG_L a5, PT_A5(sp)
+	REG_L a6, PT_A6(sp)
+	REG_L a7, PT_A7(sp)
+	j check_syscall_nr
+handle_syscall_trace_exit:
+	move a0, sp
+	call do_syscall_trace_exit
+	j ret_from_exception
+
+END(handle_exception)
+
+ENTRY(ret_from_fork)
+	la ra, ret_from_exception
+	tail schedule_tail
+ENDPROC(ret_from_fork)
+
+ENTRY(ret_from_kernel_thread)
+	call schedule_tail
+	/* Call fn(arg) */
+	la ra, ret_from_exception
+	move a0, s1
+	jr s0
+ENDPROC(ret_from_kernel_thread)
+
+
+/*
+ * Integer register context switch
+ * The callee-saved registers must be saved and restored.
+ *
+ *   a0: previous task_struct (must be preserved across the switch)
+ *   a1: next task_struct
+ */
+ENTRY(__switch_to)
+	/* Save context into prev->thread */
+	li    a2,  TASK_THREAD_RA
+	add   a0, a0, a2
+	add   a2, a1, a2
+	REG_S ra,  TASK_THREAD_RA_RA(a0)
+	REG_S sp,  TASK_THREAD_SP_RA(a0)
+	REG_S s0,  TASK_THREAD_S0_RA(a0)
+	REG_S s1,  TASK_THREAD_S1_RA(a0)
+	REG_S s2,  TASK_THREAD_S2_RA(a0)
+	REG_S s3,  TASK_THREAD_S3_RA(a0)
+	REG_S s4,  TASK_THREAD_S4_RA(a0)
+	REG_S s5,  TASK_THREAD_S5_RA(a0)
+	REG_S s6,  TASK_THREAD_S6_RA(a0)
+	REG_S s7,  TASK_THREAD_S7_RA(a0)
+	REG_S s8,  TASK_THREAD_S8_RA(a0)
+	REG_S s9,  TASK_THREAD_S9_RA(a0)
+	REG_S s10, TASK_THREAD_S10_RA(a0)
+	REG_S s11, TASK_THREAD_S11_RA(a0)
+	/* Restore context from next->thread */
+	REG_L ra,  TASK_THREAD_RA_RA(a2)
+	REG_L sp,  TASK_THREAD_SP_RA(a2)
+	REG_L s0,  TASK_THREAD_S0_RA(a2)
+	REG_L s1,  TASK_THREAD_S1_RA(a2)
+	REG_L s2,  TASK_THREAD_S2_RA(a2)
+	REG_L s3,  TASK_THREAD_S3_RA(a2)
+	REG_L s4,  TASK_THREAD_S4_RA(a2)
+	REG_L s5,  TASK_THREAD_S5_RA(a2)
+	REG_L s6,  TASK_THREAD_S6_RA(a2)
+	REG_L s7,  TASK_THREAD_S7_RA(a2)
+	REG_L s8,  TASK_THREAD_S8_RA(a2)
+	REG_L s9,  TASK_THREAD_S9_RA(a2)
+	REG_L s10, TASK_THREAD_S10_RA(a2)
+	REG_L s11, TASK_THREAD_S11_RA(a2)
+#if TASK_TI != 0
+#error "TASK_TI != 0: tp will contain a 'struct thread_info', not a 'struct task_struct' so get_current() won't work."
+	addi tp, a1, TASK_TI
+#else
+	move tp, a1
+#endif
+	ret
+ENDPROC(__switch_to)
+
+ENTRY(__fstate_save)
+	li  a2,  TASK_THREAD_F0
+	add a0, a0, a2
+	li t1, SR_FS
+	csrs sstatus, t1
+	frcsr t0
+	fsd f0,  TASK_THREAD_F0_F0(a0)
+	fsd f1,  TASK_THREAD_F1_F0(a0)
+	fsd f2,  TASK_THREAD_F2_F0(a0)
+	fsd f3,  TASK_THREAD_F3_F0(a0)
+	fsd f4,  TASK_THREAD_F4_F0(a0)
+	fsd f5,  TASK_THREAD_F5_F0(a0)
+	fsd f6,  TASK_THREAD_F6_F0(a0)
+	fsd f7,  TASK_THREAD_F7_F0(a0)
+	fsd f8,  TASK_THREAD_F8_F0(a0)
+	fsd f9,  TASK_THREAD_F9_F0(a0)
+	fsd f10, TASK_THREAD_F10_F0(a0)
+	fsd f11, TASK_THREAD_F11_F0(a0)
+	fsd f12, TASK_THREAD_F12_F0(a0)
+	fsd f13, TASK_THREAD_F13_F0(a0)
+	fsd f14, TASK_THREAD_F14_F0(a0)
+	fsd f15, TASK_THREAD_F15_F0(a0)
+	fsd f16, TASK_THREAD_F16_F0(a0)
+	fsd f17, TASK_THREAD_F17_F0(a0)
+	fsd f18, TASK_THREAD_F18_F0(a0)
+	fsd f19, TASK_THREAD_F19_F0(a0)
+	fsd f20, TASK_THREAD_F20_F0(a0)
+	fsd f21, TASK_THREAD_F21_F0(a0)
+	fsd f22, TASK_THREAD_F22_F0(a0)
+	fsd f23, TASK_THREAD_F23_F0(a0)
+	fsd f24, TASK_THREAD_F24_F0(a0)
+	fsd f25, TASK_THREAD_F25_F0(a0)
+	fsd f26, TASK_THREAD_F26_F0(a0)
+	fsd f27, TASK_THREAD_F27_F0(a0)
+	fsd f28, TASK_THREAD_F28_F0(a0)
+	fsd f29, TASK_THREAD_F29_F0(a0)
+	fsd f30, TASK_THREAD_F30_F0(a0)
+	fsd f31, TASK_THREAD_F31_F0(a0)
+	sw t0, TASK_THREAD_FCSR_F0(a0)
+	csrc sstatus, t1
+	ret
+ENDPROC(__fstate_save)
+
+ENTRY(__fstate_restore)
+	li  a2,  TASK_THREAD_F0
+	add a0, a0, a2
+	li t1, SR_FS
+	lw t0, TASK_THREAD_FCSR_F0(a0)
+	csrs sstatus, t1
+	fld f0,  TASK_THREAD_F0_F0(a0)
+	fld f1,  TASK_THREAD_F1_F0(a0)
+	fld f2,  TASK_THREAD_F2_F0(a0)
+	fld f3,  TASK_THREAD_F3_F0(a0)
+	fld f4,  TASK_THREAD_F4_F0(a0)
+	fld f5,  TASK_THREAD_F5_F0(a0)
+	fld f6,  TASK_THREAD_F6_F0(a0)
+	fld f7,  TASK_THREAD_F7_F0(a0)
+	fld f8,  TASK_THREAD_F8_F0(a0)
+	fld f9,  TASK_THREAD_F9_F0(a0)
+	fld f10, TASK_THREAD_F10_F0(a0)
+	fld f11, TASK_THREAD_F11_F0(a0)
+	fld f12, TASK_THREAD_F12_F0(a0)
+	fld f13, TASK_THREAD_F13_F0(a0)
+	fld f14, TASK_THREAD_F14_F0(a0)
+	fld f15, TASK_THREAD_F15_F0(a0)
+	fld f16, TASK_THREAD_F16_F0(a0)
+	fld f17, TASK_THREAD_F17_F0(a0)
+	fld f18, TASK_THREAD_F18_F0(a0)
+	fld f19, TASK_THREAD_F19_F0(a0)
+	fld f20, TASK_THREAD_F20_F0(a0)
+	fld f21, TASK_THREAD_F21_F0(a0)
+	fld f22, TASK_THREAD_F22_F0(a0)
+	fld f23, TASK_THREAD_F23_F0(a0)
+	fld f24, TASK_THREAD_F24_F0(a0)
+	fld f25, TASK_THREAD_F25_F0(a0)
+	fld f26, TASK_THREAD_F26_F0(a0)
+	fld f27, TASK_THREAD_F27_F0(a0)
+	fld f28, TASK_THREAD_F28_F0(a0)
+	fld f29, TASK_THREAD_F29_F0(a0)
+	fld f30, TASK_THREAD_F30_F0(a0)
+	fld f31, TASK_THREAD_F31_F0(a0)
+	fscsr t0
+	csrc sstatus, t1
+	ret
+ENDPROC(__fstate_restore)
+
+
+	.section ".rodata"
+	/* Exception vector table */
+ENTRY(excp_vect_table)
+	RISCV_PTR do_trap_insn_misaligned
+	RISCV_PTR do_trap_insn_fault
+	RISCV_PTR do_trap_insn_illegal
+	RISCV_PTR do_trap_break
+	RISCV_PTR do_trap_load_misaligned
+	RISCV_PTR do_trap_load_fault
+	RISCV_PTR do_trap_store_misaligned
+	RISCV_PTR do_trap_store_fault
+	RISCV_PTR do_trap_ecall_u /* system call, gets intercepted */
+	RISCV_PTR do_trap_ecall_s
+	RISCV_PTR do_trap_unknown
+	RISCV_PTR do_trap_ecall_m
+	RISCV_PTR do_page_fault   /* instruction page fault */
+	RISCV_PTR do_page_fault   /* load page fault */
+	RISCV_PTR do_trap_unknown
+	RISCV_PTR do_page_fault   /* store page fault */
+excp_vect_table_end:
+END(excp_vect_table)
+
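
For reference, handle_syscall above picks the syscall number out of a7 and writes the return value back through a0, which is the user-visible calling convention the glibc port will rely on. A userspace sketch of that convention on rv64 (the wrapper name is invented; only the register usage is taken from the code above):

#include <sys/syscall.h>
#include <unistd.h>

static long raw_syscall3(long nr, long arg0, long arg1, long arg2)
{
	register long a0 __asm__("a0") = arg0;
	register long a1 __asm__("a1") = arg1;
	register long a2 __asm__("a2") = arg2;
	register long a7 __asm__("a7") = nr;

	/* The kernel's RESTORE_ALL path preserves everything except a0,
	 * which carries the return value (or a negative errno).
	 */
	__asm__ volatile ("ecall"
			  : "+r" (a0)
			  : "r" (a1), "r" (a2), "r" (a7)
			  : "memory");
	return a0;
}

int main(void)
{
	return raw_syscall3(SYS_write, 1, (long)"hi\n", 3) == 3 ? 0 : 1;
}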
diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
new file mode 100644
index 000000000000..608e57d4531f
--- /dev/null
+++ b/arch/riscv/kernel/head.S
@@ -0,0 +1,147 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+#include <asm/asm.h>
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <asm/thread_info.h>
+#include <asm/page.h>
+#include <asm/csr.h>
+
+__INIT
+ENTRY(_start)
+	/* Mask all interrupts */
+	csrw sie, zero
+
+	/* Disable FPU to detect illegal usage of
+	   floating point in kernel space */
+	li t0, SR_FS
+	csrc sstatus, t0
+
+#ifndef CONFIG_RV_PUM
+	/* Allow access to user memory */
+	li t0, SR_SUM
+	csrs sstatus, t0
+#endif
+
+#ifdef CONFIG_ISA_A
+	/* Pick one hart to run the main boot sequence */
+	la a3, hart_lottery
+	li a2, 1
+	amoadd.w a3, a2, (a3)
+	bnez a3, .Lsecondary_start
+#else
+	/* We don't have atomic support, so the boot hart must be picked
+	 * statically.  Hart 0 is the only sane choice.
+	 */
+	bnez a0, .Lsecondary_park
+#endif
+
+	/* Save hart ID and DTB physical address */
+	mv s0, a0
+	mv s1, a1
+
+	/* Initialize page tables and relocate to virtual addresses */
+	la sp, init_thread_union + THREAD_SIZE
+	call setup_vm
+	call relocate
+
+	/* Restore C environment */
+	la tp, init_task
+
+	la sp, init_thread_union
+	li a0, ASM_THREAD_SIZE
+	add sp, sp, a0
+
+	/* Start the kernel */
+	mv a0, s0
+	mv a1, s1
+	call sbi_save
+	tail start_kernel
+
+relocate:
+	/* Relocate return address */
+	li a1, PAGE_OFFSET
+	la a0, _start
+	sub a1, a1, a0
+	add ra, ra, a1
+
+	/* Point stvec to the virtual address of the instruction after the sptbr write */
+	la a0, 1f
+	add a0, a0, a1
+	csrw stvec, a0
+
+	/* Compute sptbr for kernel page tables, but don't load it yet */
+	la a2, swapper_pg_dir
+	srl a2, a2, PAGE_SHIFT
+	li a1, SPTBR_MODE
+	or a2, a2, a1
+
+	/* Load trampoline page directory, which will cause us to trap to
+	   stvec if VA != PA, or simply fall through if VA == PA */
+	la a0, trampoline_pg_dir
+	srl a0, a0, PAGE_SHIFT
+	or a0, a0, a1
+	sfence.vma
+	csrw sptbr, a0
+1:
+	/* Set trap vector to spin forever to help debug */
+	la a0, .Lsecondary_park
+	csrw stvec, a0
+
+	/* Load the global pointer */
+	la gp, __global_pointer$
+
+	/* Switch to kernel page tables */
+	csrw sptbr, a2
+
+	ret
+
+.Lsecondary_start:
+#ifdef CONFIG_SMP
+	li a1, CONFIG_NR_CPUS
+	bgeu a0, a1, .Lsecondary_park
+
+	la a1, __cpu_up_stack_pointer
+	slli a0, a0, LGREG
+	add a0, a0, a1
+
+.Lwait_for_cpu_up:
+	REG_L sp, (a0)
+	beqz sp, .Lwait_for_cpu_up
+	fence
+
+	/* Enable virtual memory and relocate to virtual address */
+	call relocate
+
+	/* Initialize task_struct pointer */
+	li tp, -THREAD_SIZE
+	add tp, tp, sp
+
+	tail smp_callin
+#endif
+
+.Lsecondary_park:
+	/* We lack SMP support or have too many harts, so park this hart */
+	wfi
+	j .Lsecondary_park
+END(_start)
+
+__PAGE_ALIGNED_BSS
+	/* Empty zero page */
+	.balign PAGE_SIZE
+ENTRY(empty_zero_page)
+	.fill (empty_zero_page + PAGE_SIZE) - ., 1, 0x00
+END(empty_zero_page)
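
For reference, the "hart lottery" in _start relies on amoadd.w returning the old counter value, so exactly one hart reads back zero and runs the one-time boot path while the others fall through to the secondary/parking path. The same pattern expressed in C11 atomics, purely as an illustration of the idea:

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int hart_lottery;

/* Mirrors "amoadd.w a3, a2, (a3)" with a2 == 1: the winner is whichever
 * hart observes the pre-increment value 0.
 */
static bool i_am_boot_hart(void)
{
	return atomic_fetch_add(&hart_lottery, 1) == 0;
}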
diff --git a/arch/riscv/kernel/irq.c b/arch/riscv/kernel/irq.c
new file mode 100644
index 000000000000..737d7cce2c6d
--- /dev/null
+++ b/arch/riscv/kernel/irq.c
@@ -0,0 +1,20 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/irqchip.h>
+
+void __init init_IRQ(void)
+{
+	irqchip_init();
+}
diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
new file mode 100644
index 000000000000..753cb9894feb
--- /dev/null
+++ b/arch/riscv/kernel/module.c
@@ -0,0 +1,215 @@
+/*
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  Copyright (C) 2017 Zihao Yu
+ */
+
+#include <linux/elf.h>
+#include <linux/err.h>
+#include <linux/errno.h>
+#include <linux/moduleloader.h>
+
+static int apply_r_riscv_64_rela(struct module *me, u32 *location, Elf_Addr v)
+{
+	*(u64 *)location = v;
+	return 0;
+}
+
+static int apply_r_riscv_branch_rela(struct module *me, u32 *location,
+				     Elf_Addr v)
+{
+	s64 offset = (void *)v - (void *)location;
+	u32 imm12 = (offset & 0x1000) << (31 - 12);
+	u32 imm11 = (offset & 0x800) >> (11 - 7);
+	u32 imm10_5 = (offset & 0x7e0) << (30 - 10);
+	u32 imm4_1 = (offset & 0x1e) << (11 - 4);
+
+	*location = (*location & 0x1fff07f) | imm12 | imm11 | imm10_5 | imm4_1;
+	return 0;
+}
+
+static int apply_r_riscv_jal_rela(struct module *me, u32 *location,
+				  Elf_Addr v)
+{
+	s64 offset = (void *)v - (void *)location;
+	u32 imm20 = (offset & 0x100000) << (31 - 20);
+	u32 imm19_12 = (offset & 0xff000);
+	u32 imm11 = (offset & 0x800) << (20 - 11);
+	u32 imm10_1 = (offset & 0x7fe) << (30 - 10);
+
+	*location = (*location & 0xfff) | imm20 | imm19_12 | imm11 | imm10_1;
+	return 0;
+}
+
+static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location,
+					 Elf_Addr v)
+{
+	s64 offset = (void *)v - (void *)location;
+	s32 hi20;
+
+	if (offset != (s32)offset) {
+		pr_err(
+		  "%s: target %016llx cannot be addressed by the 32-bit offset from PC = %p\n",
+		  me->name, v, location);
+		return -EINVAL;
+	}
+
+	hi20 = (offset + 0x800) & 0xfffff000;
+	*location = (*location & 0xfff) | hi20;
+	return 0;
+}
+
+static int apply_r_riscv_pcrel_lo12_i_rela(struct module *me, u32 *location,
+					   Elf_Addr v)
+{
+	/* v is the lo12 value to fill. It is calculated before calling this
+	 * handler.
+	 */
+	*location = (*location & 0xfffff) | ((v & 0xfff) << 20);
+	return 0;
+}
+
+static int apply_r_riscv_pcrel_lo12_s_rela(struct module *me, u32 *location,
+					   Elf_Addr v)
+{
+	/* v is the lo12 value to fill. It is calculated before calling this
+	 * handler.
+	 */
+	u32 imm11_5 = (v & 0xfe0) << (31 - 11);
+	u32 imm4_0 = (v & 0x1f) << (11 - 4);
+
+	*location = (*location & 0x1fff07f) | imm11_5 | imm4_0;
+	return 0;
+}
+
+static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location,
+				       Elf_Addr v)
+{
+	s64 offset = (void *)v - (void *)location;
+	s32 fill_v = offset;
+	u32 hi20, lo12;
+
+	if (offset != fill_v) {
+		pr_err(
+		  "%s: target %016llx cannot be addressed by the 32-bit offset from PC = %p\n",
+		  me->name, v, location);
+		return -EINVAL;
+	}
+
+	hi20 = (offset + 0x800) & 0xfffff000;
+	lo12 = (offset - hi20) & 0xfff;
+	*location = (*location & 0xfff) | hi20;
+	*(location + 1) = (*(location + 1) & 0xfffff) | (lo12 << 20);
+	return 0;
+}
+
+static int apply_r_riscv_relax_rela(struct module *me, u32 *location,
+				    Elf_Addr v)
+{
+	return 0;
+}
+
+static int (*reloc_handlers_rela[]) (struct module *me, u32 *location,
+				Elf_Addr v) = {
+	[R_RISCV_64]			= apply_r_riscv_64_rela,
+	[R_RISCV_BRANCH]		= apply_r_riscv_branch_rela,
+	[R_RISCV_JAL]			= apply_r_riscv_jal_rela,
+	[R_RISCV_PCREL_HI20]		= apply_r_riscv_pcrel_hi20_rela,
+	[R_RISCV_PCREL_LO12_I]		= apply_r_riscv_pcrel_lo12_i_rela,
+	[R_RISCV_PCREL_LO12_S]		= apply_r_riscv_pcrel_lo12_s_rela,
+	[R_RISCV_CALL_PLT]		= apply_r_riscv_call_plt_rela,
+	[R_RISCV_RELAX]			= apply_r_riscv_relax_rela,
+};
+
+int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
+		       unsigned int symindex, unsigned int relsec,
+		       struct module *me)
+{
+	Elf_Rela *rel = (void *) sechdrs[relsec].sh_addr;
+	int (*handler)(struct module *me, u32 *location, Elf_Addr v);
+	Elf_Sym *sym;
+	u32 *location;
+	unsigned int i, type;
+	Elf_Addr v;
+	int res;
+
+	pr_debug("Applying relocate section %u to %u\n", relsec,
+	       sechdrs[relsec].sh_info);
+
+	for (i = 0; i < sechdrs[relsec].sh_size / sizeof(*rel); i++) {
+		/* This is where to make the change */
+		location = (void *)sechdrs[sechdrs[relsec].sh_info].sh_addr
+			+ rel[i].r_offset;
+		/* This is the symbol it is referring to */
+		sym = (Elf_Sym *)sechdrs[symindex].sh_addr
+			+ ELF_RISCV_R_SYM(rel[i].r_info);
+		if (IS_ERR_VALUE(sym->st_value)) {
+			/* Ignore unresolved weak symbol */
+			if (ELF_ST_BIND(sym->st_info) == STB_WEAK)
+				continue;
+			pr_warn("%s: Unknown symbol %s\n",
+				me->name, strtab + sym->st_name);
+			return -ENOENT;
+		}
+
+		type = ELF_RISCV_R_TYPE(rel[i].r_info);
+
+		if (type < ARRAY_SIZE(reloc_handlers_rela))
+			handler = reloc_handlers_rela[type];
+		else
+			handler = NULL;
+
+		if (!handler) {
+			pr_err("%s: Unknown relocation type %u\n",
+			       me->name, type);
+			return -EINVAL;
+		}
+
+		v = sym->st_value + rel[i].r_addend;
+
+		if (type == R_RISCV_PCREL_LO12_I || type == R_RISCV_PCREL_LO12_S) {
+			unsigned int j;
+
+			for (j = 0; j < sechdrs[relsec].sh_size / sizeof(*rel); j++) {
+				u64 hi20_loc =
+					sechdrs[sechdrs[relsec].sh_info].sh_addr
+					+ rel[j].r_offset;
+				/* Find the corresponding HI20 PC-relative relocation entry */
+				if (hi20_loc == sym->st_value) {
+					Elf_Sym *hi20_sym =
+						(Elf_Sym *)sechdrs[symindex].sh_addr
+						+ ELF_RISCV_R_SYM(rel[j].r_info);
+					u64 hi20_sym_val =
+						hi20_sym->st_value
+						+ rel[j].r_addend;
+					/* Calculate lo12 */
+					s64 offset = hi20_sym_val - hi20_loc;
+					s32 hi20 = (offset + 0x800) & 0xfffff000;
+					s32 lo12 = offset - hi20;
+					v = lo12;
+					break;
+				}
+			}
+			if (j == sechdrs[relsec].sh_size / sizeof(*rel)) {
+				pr_err(
+				  "%s: Cannot find HI20 PC-relative relocation information\n",
+				  me->name);
+				return -EINVAL;
+			}
+		}
+
+		res = handler(me, location, v);
+		if (res)
+			return res;
+	}
+
+	return 0;
+}
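
For reference, the HI20/LO12 handlers above split a 32-bit PC-relative offset so that hi20 plus the sign-extended lo12 reproduces the original value; the "+ 0x800" rounds the upper part up whenever the low 12 bits would sign-extend negative. A standalone check of that identity, in plain C mirroring the arithmetic in the handlers:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int32_t offsets[] = { 0, 4, 0x7ff, 0x800, -0x800, 0x12345678, -0x12345678 };
	unsigned int i;

	for (i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++) {
		int32_t offset = offsets[i];
		int32_t hi20 = (int32_t)(((uint32_t)offset + 0x800) & 0xfffff000);
		int32_t lo12 = offset - hi20;	/* always within [-0x800, 0x7ff] */

		assert(hi20 + lo12 == offset);
		assert(lo12 >= -0x800 && lo12 <= 0x7ff);
	}
	printf("HI20/LO12 split round-trips for all test offsets\n");
	return 0;
}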
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
new file mode 100644
index 000000000000..c10146ae317d
--- /dev/null
+++ b/arch/riscv/kernel/process.c
@@ -0,0 +1,132 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/sched/task_stack.h>
+#include <linux/tick.h>
+#include <linux/ptrace.h>
+
+#include <asm/unistd.h>
+#include <asm/uaccess.h>
+#include <asm/processor.h>
+#include <asm/csr.h>
+#include <asm/string.h>
+#include <asm/switch_to.h>
+
+extern asmlinkage void ret_from_fork(void);
+extern asmlinkage void ret_from_kernel_thread(void);
+
+void arch_cpu_idle(void)
+{
+	wait_for_interrupt();
+	local_irq_enable();
+}
+
+void show_regs(struct pt_regs *regs)
+{
+	show_regs_print_info(KERN_DEFAULT);
+
+	printk(KERN_CONT "sepc: " REG_FMT " ra : " REG_FMT " sp : " REG_FMT "\n",
+		regs->sepc, regs->ra, regs->sp);
+	printk(KERN_CONT " gp : " REG_FMT " tp : " REG_FMT " t0 : " REG_FMT "\n",
+		regs->gp, regs->tp, regs->t0);
+	printk(KERN_CONT " t1 : " REG_FMT " t2 : " REG_FMT " s0 : " REG_FMT "\n",
+		regs->t1, regs->t2, regs->s0);
+	printk(KERN_CONT " s1 : " REG_FMT " a0 : " REG_FMT " a1 : " REG_FMT "\n",
+		regs->s1, regs->a0, regs->a1);
+	printk(KERN_CONT " a2 : " REG_FMT " a3 : " REG_FMT " a4 : " REG_FMT "\n",
+		regs->a2, regs->a3, regs->a4);
+	printk(KERN_CONT " a5 : " REG_FMT " a6 : " REG_FMT " a7 : " REG_FMT "\n",
+		regs->a5, regs->a6, regs->a7);
+	printk(KERN_CONT " s2 : " REG_FMT " s3 : " REG_FMT " s4 : " REG_FMT "\n",
+		regs->s2, regs->s3, regs->s4);
+	printk(KERN_CONT " s5 : " REG_FMT " s6 : " REG_FMT " s7 : " REG_FMT "\n",
+		regs->s5, regs->s6, regs->s7);
+	printk(KERN_CONT " s8 : " REG_FMT " s9 : " REG_FMT " s10: " REG_FMT "\n",
+		regs->s8, regs->s9, regs->s10);
+	printk(KERN_CONT " s11: " REG_FMT " t3 : " REG_FMT " t4 : " REG_FMT "\n",
+		regs->s11, regs->t3, regs->t4);
+	printk(KERN_CONT " t5 : " REG_FMT " t6 : " REG_FMT "\n",
+		regs->t5, regs->t6);
+
+	printk(KERN_CONT "sstatus: " REG_FMT " sbadaddr: " REG_FMT " scause: " REG_FMT "\n",
+		regs->sstatus, regs->sbadaddr, regs->scause);
+}
+
+void start_thread(struct pt_regs *regs, unsigned long pc,
+	unsigned long sp)
+{
+	regs->sstatus = SR_PIE /* User mode, irqs on */ | SR_FS_INITIAL;
+#ifndef CONFIG_RV_PUM
+	regs->sstatus |= SR_SUM;
+#endif
+	regs->sepc = pc;
+	regs->sp = sp;
+	set_fs(USER_DS);
+}
+
+void flush_thread(void)
+{
+	/* Reset FPU context
+	 *	frm: round to nearest, ties to even (IEEE default)
+	 *	fflags: accrued exceptions cleared
+	 */
+	memset(&current->thread.fstate, 0,
+		sizeof(struct user_fpregs_struct));
+}
+
+int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+{
+	fstate_save(src, task_pt_regs(src));
+	*dst = *src;
+	return 0;
+}
+
+int copy_thread(unsigned long clone_flags, unsigned long usp,
+	unsigned long arg, struct task_struct *p)
+{
+	struct pt_regs *childregs = task_pt_regs(p);
+
+	/* p->thread holds context to be restored by __switch_to() */
+	if (unlikely(p->flags & PF_KTHREAD)) {
+		/* Kernel thread */
+		const register unsigned long gp __asm__ ("gp");
+		memset(childregs, 0, sizeof(struct pt_regs));
+		childregs->gp = gp;
+		childregs->sstatus = SR_PS | SR_PIE; /* Supervisor, irqs on */
+
+		p->thread.ra = (unsigned long)ret_from_kernel_thread;
+		p->thread.s[0] = usp; /* fn */
+		p->thread.s[1] = arg;
+	} else {
+		*childregs = *(current_pt_regs());
+		if (usp) /* User fork */
+			childregs->sp = usp;
+		if (clone_flags & CLONE_SETTLS)
+			childregs->tp = childregs->a5;
+		childregs->a0 = 0; /* Return value of fork() */
+		p->thread.ra = (unsigned long)ret_from_fork;
+	}
+	p->thread.sp = (unsigned long)childregs; /* kernel sp */
+	return 0;
+}
diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
new file mode 100644
index 000000000000..69b3b2d10664
--- /dev/null
+++ b/arch/riscv/kernel/ptrace.c
@@ -0,0 +1,147 @@
+/*
+ * Copyright 2010 Tilera Corporation. All Rights Reserved.
+ * Copyright 2015 Regents of the University of California
+ * Copyright 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ *
+ * Copied from arch/tile/kernel/ptrace.c
+ */
+
+#include <asm/ptrace.h>
+#include <asm/syscall.h>
+#include <asm/thread_info.h>
+#include <linux/ptrace.h>
+#include <linux/elf.h>
+#include <linux/regset.h>
+#include <linux/sched.h>
+#include <linux/sched/task_stack.h>
+#include <linux/tracehook.h>
+#include <trace/events/syscalls.h>
+
+enum riscv_regset {
+	REGSET_X,
+};
+
+/*
+ * Get registers from task and ready the result for userspace.
+ */
+static char *getregs(struct task_struct *child, struct pt_regs *uregs)
+{
+	*uregs = *task_pt_regs(child);
+	return (char *)uregs;
+}
+
+/* Put registers back to task. */
+static void putregs(struct task_struct *child, struct pt_regs *uregs)
+{
+	struct pt_regs *regs = task_pt_regs(child);
+	*regs = *uregs;
+}
+
+static int riscv_gpr_get(struct task_struct *target,
+			 const struct user_regset *regset,
+			 unsigned int pos, unsigned int count,
+			 void *kbuf, void __user *ubuf)
+{
+	struct pt_regs regs;
+
+	getregs(target, &regs);
+
+	return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &regs, 0,
+				   sizeof(regs));
+}
+
+static int riscv_gpr_set(struct task_struct *target,
+			 const struct user_regset *regset,
+			 unsigned int pos, unsigned int count,
+			 const void *kbuf, const void __user *ubuf)
+{
+	int ret;
+	struct pt_regs regs;
+
+	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &regs, 0,
+				 sizeof(regs));
+	if (ret)
+		return ret;
+
+	putregs(target, &regs);
+
+	return 0;
+}
+
+
+static const struct user_regset riscv_user_regset[] = {
+	[REGSET_X] = {
+		.core_note_type = NT_PRSTATUS,
+		.n = ELF_NGREG,
+		.size = sizeof(elf_greg_t),
+		.align = sizeof(elf_greg_t),
+		.get = &riscv_gpr_get,
+		.set = &riscv_gpr_set,
+	},
+};
+
+static const struct user_regset_view riscv_user_native_view = {
+	.name = "riscv",
+	.e_machine = EM_RISCV,
+	.regsets = riscv_user_regset,
+	.n = ARRAY_SIZE(riscv_user_regset),
+};
+
+const struct user_regset_view *task_user_regset_view(struct task_struct *task)
+{
+	return &riscv_user_native_view;
+}
+
+void ptrace_disable(struct task_struct *child)
+{
+	clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+}
+
+long arch_ptrace(struct task_struct *child, long request,
+		 unsigned long addr, unsigned long data)
+{
+	long ret = -EIO;
+
+	switch (request) {
+	default:
+		ret = ptrace_request(child, request, addr, data);
+		break;
+	}
+
+	return ret;
+}
+
+/* Allows PTRACE_SYSCALL to work.  These are called from entry.S in
+ * {handle,ret_from}_syscall.
+ */
+void do_syscall_trace_enter(struct pt_regs *regs)
+{
+	if (test_thread_flag(TIF_SYSCALL_TRACE))
+		if (tracehook_report_syscall_entry(regs))
+			syscall_set_nr(current, regs, -1);
+
+#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
+	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
+		trace_sys_enter(regs, syscall_get_nr(current, regs));
+#endif
+}
+
+void do_syscall_trace_exit(struct pt_regs *regs)
+{
+	if (test_thread_flag(TIF_SYSCALL_TRACE))
+		tracehook_report_syscall_exit(regs, 0);
+
+#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
+	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
+		trace_sys_exit(regs, regs->a0);
+#endif
+}
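
For reference, do_syscall_trace_enter() and do_syscall_trace_exit() are what make PTRACE_SYSCALL stops visible to a tracer. A minimal userspace consumer, with error handling omitted, that stops a child at every syscall entry and exit:

#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	pid_t pid = fork();

	if (pid == 0) {
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		execlp("true", "true", (char *)NULL);
		_exit(1);
	}

	waitpid(pid, NULL, 0);			/* stop after the initial exec */
	for (;;) {
		int status;

		ptrace(PTRACE_SYSCALL, pid, NULL, NULL);
		waitpid(pid, &status, 0);
		if (WIFEXITED(status))
			break;
		/* At each stop, the NT_PRSTATUS regset defined above could be
		 * read with PTRACE_GETREGSET to inspect the arguments in
		 * a0-a7 or the return value in a0.
		 */
	}
	return 0;
}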
diff --git a/arch/riscv/kernel/reset.c b/arch/riscv/kernel/reset.c
new file mode 100644
index 000000000000..2a53d26ffdd6
--- /dev/null
+++ b/arch/riscv/kernel/reset.c
@@ -0,0 +1,36 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/reboot.h>
+#include <linux/export.h>
+#include <asm/sbi.h>
+
+void (*pm_power_off)(void) = machine_power_off;
+EXPORT_SYMBOL(pm_power_off);
+
+void machine_restart(char *cmd)
+{
+	do_kernel_restart(cmd);
+	while (1);
+}
+
+void machine_halt(void)
+{
+	machine_power_off();
+}
+
+void machine_power_off(void)
+{
+	sbi_shutdown();
+	while (1);
+}
diff --git a/arch/riscv/kernel/riscv_ksyms.c b/arch/riscv/kernel/riscv_ksyms.c
new file mode 100644
index 000000000000..ab0db6d48101
--- /dev/null
+++ b/arch/riscv/kernel/riscv_ksyms.c
@@ -0,0 +1,16 @@
+/*
+ * Copyright (C) 2017 Zihao Yu
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/export.h>
+#include <linux/uaccess.h>
+
+/*
+ * Assembly functions that may be used (directly or indirectly) by modules
+ */
+EXPORT_SYMBOL(__copy_user);
+
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
new file mode 100644
index 000000000000..9ed70e84d74e
--- /dev/null
+++ b/arch/riscv/kernel/setup.c
@@ -0,0 +1,240 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/memblock.h>
+#include <linux/sched.h>
+#include <linux/initrd.h>
+#include <linux/console.h>
+#include <linux/screen_info.h>
+#include <linux/of_fdt.h>
+#include <linux/of_platform.h>
+#include <linux/sched/task.h>
+
+#include <asm/setup.h>
+#include <asm/sections.h>
+#include <asm/pgtable.h>
+#include <asm/smp.h>
+#include <asm/sbi.h>
+#include <asm/tlbflush.h>
+#include <asm/thread_info.h>
+
+#ifdef CONFIG_DUMMY_CONSOLE
+struct screen_info screen_info = {
+	.orig_video_lines	= 30,
+	.orig_video_cols	= 80,
+	.orig_video_mode	= 0,
+	.orig_video_ega_bx	= 0,
+	.orig_video_isVGA	= 1,
+	.orig_video_points	= 8
+};
+#endif
+
+#ifdef CONFIG_CMDLINE_BOOL
+static char __initdata builtin_cmdline[COMMAND_LINE_SIZE] = CONFIG_CMDLINE;
+#endif /* CONFIG_CMDLINE_BOOL */
+
+unsigned long va_pa_offset;
+unsigned long pfn_base;
+
+/* The lucky hart to first increment this variable will boot the other cores */
+atomic_t hart_lottery;
+
+#ifdef CONFIG_BLK_DEV_INITRD
+static void __init setup_initrd(void)
+{
+	extern char __initramfs_start[];
+	extern unsigned long __initramfs_size;
+	unsigned long size;
+
+	if (__initramfs_size > 0) {
+		initrd_start = (unsigned long)(&__initramfs_start);
+		initrd_end = initrd_start + __initramfs_size;
+	}
+
+	if (initrd_start >= initrd_end) {
+		printk(KERN_INFO "initrd not found or empty");
+		goto disable;
+	}
+	if (__pa(initrd_end) > PFN_PHYS(max_low_pfn)) {
+		printk(KERN_ERR "initrd extends beyond end of memory");
+		goto disable;
+	}
+
+	size =  initrd_end - initrd_start;
+	memblock_reserve(__pa(initrd_start), size);
+	initrd_below_start_ok = 1;
+
+	printk(KERN_INFO "Initial ramdisk at: 0x%p (%lu bytes)\n",
+		(void *)(initrd_start), size);
+	return;
+disable:
+	printk(KERN_CONT " - disabling initrd\n");
+	initrd_start = 0;
+	initrd_end = 0;
+}
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+pgd_t trampoline_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
+
+#ifndef __PAGETABLE_PMD_FOLDED
+#define NUM_SWAPPER_PMDS ((uintptr_t)-PAGE_OFFSET >> PGDIR_SHIFT)
+pmd_t swapper_pmd[PTRS_PER_PMD*((-PAGE_OFFSET)/PGDIR_SIZE)] __page_aligned_bss;
+pmd_t trampoline_pmd[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
+#endif
+
+asmlinkage void __init setup_vm(void)
+{
+	extern char _start;
+	uintptr_t i;
+	uintptr_t pa = (uintptr_t) &_start;
+	pgprot_t prot = __pgprot(pgprot_val(PAGE_KERNEL) | _PAGE_EXEC);
+
+	va_pa_offset = PAGE_OFFSET - pa;
+	pfn_base = PFN_DOWN(pa);
+
+	/* Sanity check alignment and size */
+	BUG_ON((PAGE_OFFSET % PGDIR_SIZE) != 0);
+	BUG_ON((pa % (PAGE_SIZE * PTRS_PER_PTE)) != 0);
+
+#ifndef __PAGETABLE_PMD_FOLDED
+	trampoline_pg_dir[(PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD] =
+		pfn_pgd(PFN_DOWN((uintptr_t)trampoline_pmd),
+			__pgprot(_PAGE_TABLE));
+	trampoline_pmd[0] = pfn_pmd(PFN_DOWN(pa), prot);
+
+	for (i = 0; i < (-PAGE_OFFSET)/PGDIR_SIZE; ++i) {
+		size_t o = (PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD + i;
+		swapper_pg_dir[o] =
+			pfn_pgd(PFN_DOWN((uintptr_t)swapper_pmd) + i,
+				__pgprot(_PAGE_TABLE));
+	}
+	for (i = 0; i < ARRAY_SIZE(swapper_pmd); i++)
+		swapper_pmd[i] = pfn_pmd(PFN_DOWN(pa + i * PMD_SIZE), prot);
+#else
+	trampoline_pg_dir[(PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD] =
+		pfn_pgd(PFN_DOWN(pa), prot);
+
+	for (i = 0; i < (-PAGE_OFFSET)/PGDIR_SIZE; ++i) {
+		size_t o = (PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD + i;
+		swapper_pg_dir[o] =
+			pfn_pgd(PFN_DOWN(pa + i * PGDIR_SIZE), prot);
+	}
+#endif
+}
+
+void __init sbi_save(unsigned int hartid, void *dtb)
+{
+	early_init_dt_scan(__va(dtb));
+}
+
+/* Allow the user to manually add a memory region (in case DTS is broken); "mem_end=nn[KkMmGg]" */
+static int __init mem_end_override(char *p)
+{
+	resource_size_t base, end;
+
+	if (!p)
+		return -EINVAL;
+	base = (uintptr_t) __pa(PAGE_OFFSET);
+	end = memparse(p, &p) & PMD_MASK;
+	if (end == 0)
+		return -EINVAL;
+	memblock_add(base, end - base);
+	return 0;
+}
+early_param("mem_end", mem_end_override);
+
+static void __init setup_bootmem(void)
+{
+	struct memblock_region *reg;
+	phys_addr_t mem_size = 0;
+
+	/* Find the memory region containing the kernel */
+	for_each_memblock(memory, reg) {
+		phys_addr_t vmlinux_end = __pa(_end);
+		phys_addr_t end = reg->base + reg->size;
+
+		if (reg->base <= vmlinux_end && vmlinux_end <= end) {
+			/* Reserve from the start of the region to the end of
+			 * the kernel
+			 */
+			memblock_reserve(reg->base, vmlinux_end - reg->base);
+			mem_size = min(reg->size, (phys_addr_t)-PAGE_OFFSET);
+		}
+	}
+	BUG_ON(mem_size == 0);
+
+	set_max_mapnr(PFN_DOWN(mem_size));
+	max_low_pfn = pfn_base + PFN_DOWN(mem_size);
+
+#ifdef CONFIG_BLK_DEV_INITRD
+	setup_initrd();
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+	early_init_fdt_reserve_self();
+	early_init_fdt_scan_reserved_mem();
+	memblock_allow_resize();
+	memblock_dump_all();
+}
+
+void __init setup_arch(char **cmdline_p)
+{
+#ifdef CONFIG_CMDLINE_BOOL
+#ifdef CONFIG_CMDLINE_OVERRIDE
+	strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+#else
+	if (builtin_cmdline[0] != '\0') {
+		/* Append bootloader command line to built-in */
+		strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
+		strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+		strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+	}
+#endif /* CONFIG_CMDLINE_OVERRIDE */
+#endif /* CONFIG_CMDLINE_BOOL */
+	*cmdline_p = boot_command_line;
+
+	parse_early_param();
+
+	init_mm.start_code = (unsigned long) _stext;
+	init_mm.end_code   = (unsigned long) _etext;
+	init_mm.end_data   = (unsigned long) _edata;
+	init_mm.brk        = (unsigned long) _end;
+
+	setup_bootmem();
+	paging_init();
+	unflatten_device_tree();
+
+#ifdef CONFIG_SMP
+	setup_smp();
+#endif
+
+#ifdef CONFIG_DUMMY_CONSOLE
+	conswitchp = &dummy_con;
+#endif
+}
+
+static int __init riscv_device_init(void)
+{
+	return of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
+}
+subsys_initcall_sync(riscv_device_init);
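
A note on the index arithmetic in setup_vm() above: the PGD slot for the
kernel mapping is (PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD, and
(-PAGE_OFFSET)/PGDIR_SIZE slots are filled from there up to the top of the
address space.  The following stand-alone sketch works the numbers with
illustrative Sv39-style constants (not the values defined by this port):

#include <stdio.h>

/* Illustrative constants only -- not taken from the patch. */
#define PAGE_OFFSET	0xffffffe000000000UL
#define PGDIR_SHIFT	30			/* 1 GiB per PGD entry */
#define PTRS_PER_PGD	512
#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)

int main(void)
{
	unsigned long first = (PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD;
	unsigned long count = (-PAGE_OFFSET) / PGDIR_SIZE;

	/* setup_vm() fills slots [first, first + count), one PGDIR_SIZE
	 * chunk of the kernel's linear mapping per slot. */
	printf("first slot %lu, %lu slots up to the top of the address space\n",
	       first, count);
	return 0;
}
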
diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
new file mode 100644
index 000000000000..ea14c47f28ac
--- /dev/null
+++ b/arch/riscv/kernel/signal.c
@@ -0,0 +1,257 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+#include <linux/signal.h>
+#include <linux/uaccess.h>
+#include <linux/syscalls.h>
+#include <linux/tracehook.h>
+#include <linux/linkage.h>
+
+#include <asm/ucontext.h>
+#include <asm/vdso.h>
+#include <asm/switch_to.h>
+#include <asm/csr.h>
+
+#define DEBUG_SIG 0
+
+struct rt_sigframe {
+	struct siginfo info;
+	struct ucontext uc;
+};
+
+static long restore_sigcontext(struct pt_regs *regs,
+	struct sigcontext __user *sc)
+{
+	struct task_struct *task = current;
+	long err;
+	/* sc_regs is structured the same as the start of pt_regs */
+	err = __copy_from_user(regs, &sc->sc_regs, sizeof(sc->sc_regs));
+	err |= __copy_from_user(&task->thread.fstate, &sc->sc_fpregs,
+		sizeof(sc->sc_fpregs));
+	if (likely(!err))
+		fstate_restore(task, regs);
+	return err;
+}
+
+SYSCALL_DEFINE0(rt_sigreturn)
+{
+	struct pt_regs *regs = current_pt_regs();
+	struct rt_sigframe __user *frame;
+	struct task_struct *task;
+	sigset_t set;
+
+	/* Always make any pending restarted system calls return -EINTR */
+	current->restart_block.fn = do_no_restart_syscall;
+
+	frame = (struct rt_sigframe __user *)regs->sp;
+
+	if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
+		goto badframe;
+
+	if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
+		goto badframe;
+
+	set_current_blocked(&set);
+
+	if (restore_sigcontext(regs, &frame->uc.uc_mcontext))
+		goto badframe;
+
+	if (restore_altstack(&frame->uc.uc_stack))
+		goto badframe;
+
+	return regs->a0;
+
+badframe:
+	task = current;
+	if (show_unhandled_signals) {
+		pr_info_ratelimited(
+			"%s[%d]: bad frame in %s: frame=%p pc=%p sp=%p\n",
+			task->comm, task_pid_nr(task), __func__,
+			frame, (void *)regs->sepc, (void *)regs->sp);
+	}
+	force_sig(SIGSEGV, task);
+	return 0;
+}
+
+static long setup_sigcontext(struct sigcontext __user *sc,
+	struct pt_regs *regs)
+{
+	struct task_struct *task = current;
+	long err;
+	/* sc_regs is structured the same as the start of pt_regs */
+	err = __copy_to_user(&sc->sc_regs, regs, sizeof(sc->sc_regs));
+	fstate_save(task, regs);
+	err |= __copy_to_user(&sc->sc_fpregs, &task->thread.fstate,
+		sizeof(sc->sc_fpregs));
+	return err;
+}
+
+static inline void __user *get_sigframe(struct ksignal *ksig,
+	struct pt_regs *regs, size_t framesize)
+{
+	unsigned long sp;
+	/* Default to using normal stack */
+	sp = regs->sp;
+
+	/*
+	 * If we are on the alternate signal stack and would overflow it, don't.
+	 * Return an always-bogus address instead so we will die with SIGSEGV.
+	 */
+	if (on_sig_stack(sp) && !likely(on_sig_stack(sp - framesize)))
+		return (void __user __force *)(-1UL);
+
+	/* This is the X/Open sanctioned signal stack switching. */
+	sp = sigsp(sp, ksig) - framesize;
+
+	/* Align the stack frame. */
+	sp &= ~0xfUL;
+
+	return (void __user *)sp;
+}
+
+
+static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
+	struct pt_regs *regs)
+{
+	struct rt_sigframe __user *frame;
+	long err = 0;
+
+	frame = get_sigframe(ksig, regs, sizeof(*frame));
+	if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+		return -EFAULT;
+
+	err |= copy_siginfo_to_user(&frame->info, &ksig->info);
+
+	/* Create the ucontext. */
+	err |= __put_user(0, &frame->uc.uc_flags);
+	err |= __put_user(NULL, &frame->uc.uc_link);
+	err |= __save_altstack(&frame->uc.uc_stack, regs->sp);
+	err |= setup_sigcontext(&frame->uc.uc_mcontext, regs);
+	err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
+	if (err)
+		return -EFAULT;
+
+	/* Set up to return from userspace. */
+	regs->ra = (unsigned long)VDSO_SYMBOL(
+		current->mm->context.vdso, rt_sigreturn);
+
+	/*
+	 * Set up registers for signal handler.
+	 * Registers that we don't modify keep the value they had from
+	 * user-space at the time we took the signal.
+	 * We always pass siginfo and mcontext, regardless of SA_SIGINFO,
+	 * since some things rely on this (e.g. glibc's debug/segfault.c).
+	 */
+	regs->sepc = (unsigned long)ksig->ka.sa.sa_handler;
+	regs->sp = (unsigned long)frame;
+	regs->a0 = ksig->sig;                     /* a0: signal number */
+	regs->a1 = (unsigned long)(&frame->info); /* a1: siginfo pointer */
+	regs->a2 = (unsigned long)(&frame->uc);   /* a2: ucontext pointer */
+
+#if DEBUG_SIG
+	pr_info("SIG deliver (%s:%d): sig=%d pc=%p ra=%p sp=%p\n",
+		current->comm, task_pid_nr(current), ksig->sig,
+		(void *)regs->sepc, (void *)regs->ra, frame);
+#endif
+
+	return 0;
+}
+
+static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
+{
+	sigset_t *oldset = sigmask_to_save();
+	int ret;
+
+	/* Are we from a system call? */
+	if (regs->scause == EXC_SYSCALL) {
+		/* If so, check system call restarting.. */
+		switch (regs->a0) {
+		case -ERESTART_RESTARTBLOCK:
+		case -ERESTARTNOHAND:
+			regs->a0 = -EINTR;
+			break;
+
+		case -ERESTARTSYS:
+			if (!(ksig->ka.sa.sa_flags & SA_RESTART)) {
+				regs->a0 = -EINTR;
+				break;
+			}
+			/* fallthrough */
+		case -ERESTARTNOINTR:
+			regs->sepc -= 0x4;
+			break;
+		}
+	}
+
+	/* Set up the stack frame */
+	ret = setup_rt_frame(ksig, oldset, regs);
+
+	signal_setup_done(ret, ksig, 0);
+}
+
+static void do_signal(struct pt_regs *regs)
+{
+	struct ksignal ksig;
+
+	if (get_signal(&ksig)) {
+		/* Actually deliver the signal */
+		handle_signal(&ksig, regs);
+		return;
+	}
+
+	/* Did we come from a system call? */
+	if (regs->scause == EXC_SYSCALL) {
+		/* Restart the system call - no handlers present */
+		switch (regs->a0) {
+		case -ERESTARTNOHAND:
+		case -ERESTARTSYS:
+		case -ERESTARTNOINTR:
+			regs->sepc -= 0x4;
+			break;
+		case -ERESTART_RESTARTBLOCK:
+			regs->a7 = __NR_restart_syscall;
+			regs->sepc -= 0x4;
+			break;
+		}
+	}
+
+	/* If there is no signal to deliver, we just put the saved
+	 * sigmask back.
+	 */
+	restore_saved_sigmask();
+}
+
+/*
+ * notification of userspace execution resumption
+ * - triggered by the _TIF_WORK_MASK flags
+ */
+asmlinkage void do_notify_resume(struct pt_regs *regs,
+	unsigned long thread_info_flags)
+{
+	/* Handle pending signal delivery */
+	if (thread_info_flags & _TIF_SIGPENDING)
+		do_signal(regs);
+
+	if (thread_info_flags & _TIF_NOTIFY_RESUME) {
+		clear_thread_flag(TIF_NOTIFY_RESUME);
+		tracehook_notify_resume(regs);
+	}
+}
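
The -ERESTARTSYS handling above is what makes SA_RESTART work: a blocking
system call interrupted by a signal returns with -ERESTARTSYS in a0, and
handle_signal() either converts that to -EINTR or rewinds sepc by the 4-byte
ecall instruction so the call is re-issued once the handler returns.  The
user-visible difference can be seen with a small test program (nothing
RISC-V specific about it):

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void on_alarm(int sig) { (void)sig; }

int main(int argc, char **argv)
{
	struct sigaction sa = { .sa_handler = on_alarm };
	char c;

	sigemptyset(&sa.sa_mask);
	if (argc > 1 && !strcmp(argv[1], "restart"))
		sa.sa_flags |= SA_RESTART;	/* restart read() after SIGALRM */
	sigaction(SIGALRM, &sa, NULL);

	alarm(1);
	if (read(STDIN_FILENO, &c, 1) < 0)
		printf("read: %s\n", strerror(errno));	/* EINTR without SA_RESTART */
	else
		printf("read returned\n");
	return 0;
}
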
diff --git a/arch/riscv/kernel/smp.c b/arch/riscv/kernel/smp.c
new file mode 100644
index 000000000000..b65c0e1020e3
--- /dev/null
+++ b/arch/riscv/kernel/smp.c
@@ -0,0 +1,110 @@
+/*
+ * SMP initialisation and IPI support
+ * Based on arch/arm64/kernel/smp.c
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/interrupt.h>
+#include <linux/smp.h>
+#include <linux/sched.h>
+
+#include <asm/sbi.h>
+#include <asm/tlbflush.h>
+#include <asm/cacheflush.h>
+
+/* A collection of single bit ipi messages.  */
+static struct {
+	unsigned long bits ____cacheline_aligned;
+} ipi_data[NR_CPUS] __cacheline_aligned;
+
+enum ipi_message_type {
+	IPI_RESCHEDULE,
+	IPI_CALL_FUNC,
+	IPI_MAX
+};
+
+irqreturn_t handle_ipi(void)
+{
+	unsigned long *pending_ipis = &ipi_data[smp_processor_id()].bits;
+
+	/* Clear pending IPI */
+	csr_clear(sip, SIE_SSIE);
+
+	while (true) {
+		unsigned long ops;
+
+		/* Order bit clearing and data access. */
+		mb();
+
+		ops = xchg(pending_ipis, 0);
+		if (ops == 0)
+			return IRQ_HANDLED;
+
+		if (ops & (1 << IPI_RESCHEDULE))
+			scheduler_ipi();
+
+		if (ops & (1 << IPI_CALL_FUNC))
+			generic_smp_call_function_interrupt();
+
+		BUG_ON((ops >> IPI_MAX) != 0);
+
+		/* Order data access and bit testing. */
+		mb();
+	}
+
+	return IRQ_HANDLED;
+}
+
+static void
+send_ipi_message(const struct cpumask *to_whom, enum ipi_message_type operation)
+{
+	int i;
+
+	mb();
+	for_each_cpu(i, to_whom)
+		set_bit(operation, &ipi_data[i].bits);
+
+	mb();
+	sbi_send_ipi(cpumask_bits(to_whom));
+}
+
+void arch_send_call_function_ipi_mask(struct cpumask *mask)
+{
+	send_ipi_message(mask, IPI_CALL_FUNC);
+}
+
+void arch_send_call_function_single_ipi(int cpu)
+{
+	send_ipi_message(cpumask_of(cpu), IPI_CALL_FUNC);
+}
+
+static void ipi_stop(void *unused)
+{
+	while (1)
+		wait_for_interrupt();
+}
+
+void smp_send_stop(void)
+{
+	on_each_cpu(ipi_stop, NULL, 1);
+}
+
+void smp_send_reschedule(int cpu)
+{
+	send_ipi_message(cpumask_of(cpu), IPI_RESCHEDULE);
+}
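
The IPI path above is a per-CPU bit mailbox: the sender sets a message bit
and kicks the target hart through the SBI, and the receiver drains the whole
word with xchg() so no message is lost even if more bits arrive while it is
handling the previous batch.  A user-space model of the same idea with C11
atomics (illustrative only; the kernel uses set_bit()/xchg() plus
sbi_send_ipi() to actually interrupt the remote hart):

#include <stdatomic.h>
#include <stdio.h>

enum ipi_message_type { IPI_RESCHEDULE, IPI_CALL_FUNC, IPI_MAX };

static _Atomic unsigned long mailbox;	/* one per CPU in the kernel */

static void send(enum ipi_message_type op)
{
	atomic_fetch_or(&mailbox, 1UL << op);
	/* kernel: sbi_send_ipi() raises the software interrupt here */
}

static void handle(void)
{
	unsigned long ops;

	while ((ops = atomic_exchange(&mailbox, 0)) != 0) {
		if (ops & (1UL << IPI_RESCHEDULE))
			printf("reschedule\n");
		if (ops & (1UL << IPI_CALL_FUNC))
			printf("call function\n");
	}
}

int main(void)
{
	send(IPI_RESCHEDULE);
	send(IPI_CALL_FUNC);
	handle();
	return 0;
}
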
diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
new file mode 100644
index 000000000000..7043bbbfbc1e
--- /dev/null
+++ b/arch/riscv/kernel/smpboot.c
@@ -0,0 +1,103 @@
+/*
+ * SMP initialisation and IPI support
+ * Based on arch/arm64/kernel/smp.c
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/kernel_stat.h>
+#include <linux/notifier.h>
+#include <linux/cpu.h>
+#include <linux/percpu.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/irq.h>
+#include <linux/of.h>
+#include <linux/sched/task_stack.h>
+#include <asm/mmu_context.h>
+#include <asm/tlbflush.h>
+#include <asm/sections.h>
+#include <asm/sbi.h>
+
+void *__cpu_up_stack_pointer[NR_CPUS];
+
+void __init smp_prepare_boot_cpu(void)
+{
+}
+
+void __init smp_prepare_cpus(unsigned int max_cpus)
+{
+}
+
+void __init setup_smp(void)
+{
+	struct device_node *dn = NULL;
+	int hart, im_okay_therefore_i_am = 0;
+
+	while ((dn = of_find_node_by_type(dn, "cpu"))) {
+		hart = riscv_of_processor_hart(dn);
+		if (hart >= 0) {
+			set_cpu_possible(hart, true);
+			set_cpu_present(hart, true);
+			if (hart == smp_processor_id()) {
+				BUG_ON(im_okay_therefore_i_am);
+				im_okay_therefore_i_am = 1;
+			}
+		}
+	}
+
+	BUG_ON(!im_okay_therefore_i_am);
+}
+
+int __cpu_up(unsigned int cpu, struct task_struct *tidle)
+{
+	/* Signal cpu to start */
+	mb();
+	__cpu_up_stack_pointer[cpu] = task_stack_page(tidle) + THREAD_SIZE;
+
+	while (!cpu_online(cpu))
+		cpu_relax();
+
+	return 0;
+}
+
+void __init smp_cpus_done(unsigned int max_cpus)
+{
+}
+
+/*
+ * C entry point for a secondary processor.
+ */
+asmlinkage void __init smp_callin(void)
+{
+	struct mm_struct *mm = &init_mm;
+
+	/* All kernel threads share the same mm context.  */
+	atomic_inc(&mm->mm_count);
+	current->active_mm = mm;
+
+	trap_init();
+	init_clockevent();
+	notify_cpu_starting(smp_processor_id());
+	set_cpu_online(smp_processor_id(), 1);
+	local_flush_tlb_all();
+	local_irq_enable();
+	preempt_disable();
+	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+}
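
__cpu_up() above is one half of a simple handshake: the boot hart publishes
a stack pointer for the secondary and then spins until the secondary marks
itself online from smp_callin().  A user-space model of that handshake, with
the secondary played by a thread (illustrative only):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static void *_Atomic cpu_up_stack_pointer;
static atomic_bool cpu_online_flag;

static void *secondary(void *unused)
{
	void *sp;

	(void)unused;
	while (!(sp = atomic_load(&cpu_up_stack_pointer)))
		;				/* wait until a stack is published */
	printf("secondary got stack %p\n", sp);
	atomic_store(&cpu_online_flag, 1);	/* smp_callin(): set_cpu_online() */
	return NULL;
}

int main(void)
{
	static char stack[4096];
	pthread_t t;

	pthread_create(&t, NULL, secondary, NULL);
	atomic_store(&cpu_up_stack_pointer, (void *)(stack + sizeof(stack)));
	while (!atomic_load(&cpu_online_flag))
		;				/* __cpu_up(): wait for online */
	puts("cpu online");
	pthread_join(t, NULL);
	return 0;
}
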
diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
new file mode 100644
index 000000000000..109f5120d5c7
--- /dev/null
+++ b/arch/riscv/kernel/stacktrace.c
@@ -0,0 +1,177 @@
+/*
+ * Copyright (C) 2008 ARM Limited
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/export.h>
+#include <linux/kallsyms.h>
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
+#include <linux/sched/task_stack.h>
+#include <linux/stacktrace.h>
+
+#ifdef CONFIG_FRAME_POINTER
+
+struct stackframe {
+	unsigned long fp;
+	unsigned long ra;
+};
+
+static void notrace walk_stackframe(struct task_struct *task,
+	struct pt_regs *regs, bool (*fn)(unsigned long, void *), void *arg)
+{
+	unsigned long fp, sp, pc;
+
+	if (regs) {
+		fp = GET_FP(regs);
+		sp = GET_USP(regs);
+		pc = GET_IP(regs);
+	} else if (task == NULL || task == current) {
+		const register unsigned long current_sp __asm__ ("sp");
+		fp = (unsigned long)__builtin_frame_address(0);
+		sp = current_sp;
+		pc = (unsigned long)walk_stackframe;
+	} else {
+		/* task blocked in __switch_to */
+		fp = task->thread.s[0];
+		sp = task->thread.sp;
+		pc = task->thread.ra;
+	}
+
+	for (;;) {
+		unsigned long low, high;
+		struct stackframe *frame;
+
+		if (unlikely(!__kernel_text_address(pc) || fn(pc, arg)))
+			break;
+
+		/* Validate frame pointer */
+		low = sp + sizeof(struct stackframe);
+		high = ALIGN(sp, THREAD_SIZE);
+		if (unlikely(fp < low || fp > high || fp & 0x7))
+			break;
+		/* Unwind stack frame */
+		frame = (struct stackframe *)fp - 1;
+		sp = fp;
+		fp = frame->fp;
+		pc = frame->ra - 0x4;
+	}
+}
+
+#else /* !CONFIG_FRAME_POINTER */
+
+static void notrace walk_stackframe(struct task_struct *task,
+	struct pt_regs *regs, bool (*fn)(unsigned long, void *), void *arg)
+{
+	unsigned long sp, pc;
+	unsigned long *ksp;
+
+	if (regs) {
+		sp = GET_USP(regs);
+		pc = GET_IP(regs);
+	} else if (task == NULL || task == current) {
+		const register unsigned long current_sp __asm__ ("sp");
+		sp = current_sp;
+		pc = (unsigned long)walk_stackframe;
+	} else {
+		/* task blocked in __switch_to */
+		sp = task->thread.sp;
+		pc = task->thread.ra;
+	}
+
+	if (unlikely(sp & 0x7))
+		return;
+
+	ksp = (unsigned long *)sp;
+	while (!kstack_end(ksp)) {
+		if (__kernel_text_address(pc) && unlikely(fn(pc, arg)))
+			break;
+		pc = (*ksp++) - 0x4;
+	}
+}
+
+#endif /* CONFIG_FRAME_POINTER */
+
+
+static bool print_trace_address(unsigned long pc, void *arg)
+{
+	print_ip_sym(pc);
+	return false;
+}
+
+void show_stack(struct task_struct *task, unsigned long *sp)
+{
+	printk("Call Trace:\n");
+	walk_stackframe(task, NULL, print_trace_address, NULL);
+}
+
+
+static bool save_wchan(unsigned long pc, void *arg)
+{
+	if (!in_sched_functions(pc)) {
+		unsigned long *p = arg;
+		*p = pc;
+		return true;
+	}
+	return false;
+}
+
+unsigned long get_wchan(struct task_struct *task)
+{
+	unsigned long pc = 0;
+
+	if (likely(task && task != current && task->state != TASK_RUNNING))
+		walk_stackframe(task, NULL, save_wchan, &pc);
+	return pc;
+}
+
+
+#ifdef CONFIG_STACKTRACE
+
+static bool __save_trace(unsigned long pc, void *arg, bool nosched)
+{
+	struct stack_trace *trace = arg;
+
+	if (unlikely(nosched && in_sched_functions(pc)))
+		return false;
+	if (unlikely(trace->skip > 0)) {
+		trace->skip--;
+		return false;
+	}
+
+	trace->entries[trace->nr_entries++] = pc;
+	return (trace->nr_entries >= trace->max_entries);
+}
+
+static bool save_trace(unsigned long pc, void *arg)
+{
+	return __save_trace(pc, arg, false);
+}
+
+/*
+ * Save stack-backtrace addresses into a stack_trace buffer.
+ */
+void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
+{
+	walk_stackframe(tsk, NULL, save_trace, trace);
+	if (trace->nr_entries < trace->max_entries)
+		trace->entries[trace->nr_entries++] = ULONG_MAX;
+}
+EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
+
+void save_stack_trace(struct stack_trace *trace)
+{
+	save_stack_trace_tsk(NULL, trace);
+}
+EXPORT_SYMBOL_GPL(save_stack_trace);
+
+#endif /* CONFIG_STACKTRACE */
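
The frame-pointer unwinder above relies on the standard RISC-V prologue,
which saves the caller's s0 and ra immediately below the new frame pointer;
that is why the saved pair can be read as ((struct stackframe *)fp - 1) and
why frame->ra - 4 lands on the call instruction.  On a RISC-V machine this
can be checked from user space when built with -fno-omit-frame-pointer (the
layout is of course compiler- and ABI-dependent):

#include <stdio.h>

struct stackframe { unsigned long fp, ra; };

static void __attribute__((noinline)) show(void)
{
	/* With frame pointers, the saved {fp, ra} pair sits just below the
	 * current frame pointer -- the layout walk_stackframe() assumes. */
	struct stackframe *frame =
		(struct stackframe *)__builtin_frame_address(0) - 1;

	printf("saved ra %p, __builtin_return_address %p\n",
	       (void *)frame->ra, __builtin_return_address(0));
}

int main(void)
{
	show();
	return 0;
}
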
diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
new file mode 100644
index 000000000000..33d40a5da4a1
--- /dev/null
+++ b/arch/riscv/kernel/sys_riscv.c
@@ -0,0 +1,85 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2014 Darius Rad <darius@bluespec.com>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/syscalls.h>
+#include <asm/unistd.h>
+
+#ifdef CONFIG_64BIT
+SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
+	unsigned long, prot, unsigned long, flags,
+	unsigned long, fd, off_t, offset)
+{
+	if (unlikely(offset & (~PAGE_MASK)))
+		return -EINVAL;
+	return sys_mmap_pgoff(addr, len, prot, flags, fd, offset >> PAGE_SHIFT);
+}
+#else
+SYSCALL_DEFINE6(mmap2, unsigned long, addr, unsigned long, len,
+	unsigned long, prot, unsigned long, flags,
+	unsigned long, fd, off_t, offset)
+{
+	/* Note that the shift for mmap2 is constant (12),
+	 * regardless of PAGE_SIZE
+	 */
+	if (unlikely(offset & (~PAGE_MASK >> 12)))
+		return -EINVAL;
+	return sys_mmap_pgoff(addr, len, prot, flags, fd,
+		offset >> (PAGE_SHIFT - 12));
+}
+#endif /* !CONFIG_64BIT */
+
+#ifdef CONFIG_SYSRISCV_ATOMIC
+SYSCALL_DEFINE4(sysriscv, unsigned long, cmd, unsigned long, arg1,
+	unsigned long, arg2, unsigned long, arg3)
+{
+	unsigned long flags;
+	unsigned long prev;
+	unsigned int *ptr;
+	unsigned int err;
+
+	switch (cmd) {
+	case RISCV_ATOMIC_CMPXCHG:
+		ptr = (unsigned int *)arg1;
+		if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned int)))
+			return -EFAULT;
+
+		preempt_disable();
+		raw_local_irq_save(flags);
+		err = __get_user(prev, ptr);
+		if (likely(!err && prev == arg2))
+			err = __put_user(arg3, ptr);
+		raw_local_irq_restore(flags);
+		preempt_enable();
+
+		return unlikely(err) ? err : prev;
+
+	case RISCV_ATOMIC_CMPXCHG64:
+		ptr = (unsigned int *)arg1;
+		if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned long)))
+			return -EFAULT;
+
+		preempt_disable();
+		raw_local_irq_save(flags);
+		err = __get_user(prev, ptr);
+		if (likely(!err && prev == arg2))
+			err = __put_user(arg3, ptr);
+		raw_local_irq_restore(flags);
+		preempt_enable();
+
+		return unlikely(err) ? err : prev;
+	}
+
+	return -EINVAL;
+}
+#endif /* CONFIG_SYSRISCV_ATOMIC */
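
A worked example of the mmap2 unit conversion above: the offset argument is
always in 4096-byte units, the mask check rejects offsets that are not a
whole number of pages, and the shift converts 4096-byte units into PAGE_SIZE
units.  PAGE_SHIFT below is a stand-in to experiment with (12 for 4 KiB
pages makes both the mask and the shift collapse to zero; 16 models 64 KiB
pages):

#include <stdio.h>

#define PAGE_SHIFT	16			/* stand-in, try 12 or 16 */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))

int main(void)
{
	unsigned long offset = 32;	/* 32 * 4096 bytes = 128 KiB */

	if (offset & (~PAGE_MASK >> 12)) {
		printf("-EINVAL: not a whole page\n");
		return 1;
	}
	printf("pgoff = %lu pages of %lu bytes\n",
	       offset >> (PAGE_SHIFT - 12), PAGE_SIZE);
	return 0;
}
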
diff --git a/arch/riscv/kernel/syscall_table.c b/arch/riscv/kernel/syscall_table.c
new file mode 100644
index 000000000000..8fa239b67cbc
--- /dev/null
+++ b/arch/riscv/kernel/syscall_table.c
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2009 Arnd Bergmann <arnd@arndb.de>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/syscalls.h>
+
+#include <asm/syscalls.h>
+
+#undef __SYSCALL
+#define __SYSCALL(nr, call)	[nr] = (call),
+
+void *sys_call_table[__NR_syscalls] = {
+	[0 ... __NR_syscalls - 1] = sys_ni_syscall,
+#include <asm/unistd.h>
+};
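
The table above is built with the GNU C range-designator trick: every slot
defaults to sys_ni_syscall, and the __SYSCALL() expansions pulled in from
<asm/unistd.h> then override the implemented entries.  A stand-alone sketch
of the same pattern (NR_CALLS, do_nothing and do_hello are made-up names):

#include <stdio.h>

#define NR_CALLS 8

static void do_nothing(void) { puts("ENOSYS"); }
static void do_hello(void)   { puts("hello"); }

#define __SYSCALL(nr, call)	[nr] = (call),

static void (*call_table[NR_CALLS])(void) = {
	[0 ... NR_CALLS - 1] = do_nothing,	/* GNU range designator */
	__SYSCALL(3, do_hello)			/* normally pulled in via #include */
};

int main(void)
{
	call_table[0]();	/* ENOSYS */
	call_table[3]();	/* hello  */
	return 0;
}
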
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
new file mode 100644
index 000000000000..3a4698199ecf
--- /dev/null
+++ b/arch/riscv/kernel/traps.c
@@ -0,0 +1,183 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
+#include <linux/sched/signal.h>
+#include <linux/signal.h>
+#include <linux/kdebug.h>
+#include <linux/uaccess.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/irq.h>
+
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/csr.h>
+
+int show_unhandled_signals = 1;
+
+extern asmlinkage void handle_exception(void);
+
+static DEFINE_SPINLOCK(die_lock);
+
+void die(struct pt_regs *regs, const char *str)
+{
+	static int die_counter;
+	int ret;
+
+	oops_enter();
+
+	spin_lock_irq(&die_lock);
+	console_verbose();
+	bust_spinlocks(1);
+
+	pr_emerg("%s [#%d]\n", str, ++die_counter);
+	print_modules();
+	show_regs(regs);
+
+	ret = notify_die(DIE_OOPS, str, regs, 0, regs->scause, SIGSEGV);
+
+	bust_spinlocks(0);
+	add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
+	spin_unlock_irq(&die_lock);
+	oops_exit();
+
+	if (in_interrupt())
+		panic("Fatal exception in interrupt");
+	if (panic_on_oops)
+		panic("Fatal exception");
+	if (ret != NOTIFY_STOP)
+		do_exit(SIGSEGV);
+}
+
+static inline void do_trap_siginfo(int signo, int code,
+	unsigned long addr, struct task_struct *tsk)
+{
+	siginfo_t info;
+
+	info.si_signo = signo;
+	info.si_errno = 0;
+	info.si_code = code;
+	info.si_addr = (void __user *)addr;
+	force_sig_info(signo, &info, tsk);
+}
+
+void do_trap(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, struct task_struct *tsk)
+{
+	if (show_unhandled_signals && unhandled_signal(tsk, signo)
+	    && printk_ratelimit()) {
+		pr_info("%s[%d]: unhandled signal %d code 0x%x at 0x" REG_FMT,
+			tsk->comm, task_pid_nr(tsk), signo, code, addr);
+		print_vma_addr(KERN_CONT " in ", GET_IP(regs));
+		pr_cont("\n");
+		show_regs(regs);
+	}
+
+	do_trap_siginfo(signo, code, addr, tsk);
+}
+
+static void do_trap_error(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, const char *str)
+{
+	if (user_mode(regs)) {
+		do_trap(regs, signo, code, addr, current);
+	} else {
+		if (!fixup_exception(regs))
+			die(regs, str);
+	}
+}
+
+#define DO_ERROR_INFO(name, signo, code, str)				\
+asmlinkage void name(struct pt_regs *regs)				\
+{									\
+	do_trap_error(regs, signo, code, regs->sepc, "Oops - " str);	\
+}
+
+DO_ERROR_INFO(do_trap_unknown,
+	SIGILL, ILL_ILLTRP, "unknown exception");
+DO_ERROR_INFO(do_trap_insn_misaligned,
+	SIGBUS, BUS_ADRALN, "instruction address misaligned");
+DO_ERROR_INFO(do_trap_insn_fault,
+	SIGBUS, BUS_ADRALN, "instruction access fault");
+DO_ERROR_INFO(do_trap_insn_illegal,
+	SIGILL, ILL_ILLOPC, "illegal instruction");
+DO_ERROR_INFO(do_trap_load_misaligned,
+	SIGBUS, BUS_ADRALN, "load address misaligned");
+DO_ERROR_INFO(do_trap_load_fault,
+	SIGSEGV, SEGV_ACCERR, "load access fault");
+DO_ERROR_INFO(do_trap_store_misaligned,
+	SIGBUS, BUS_ADRALN, "store (or AMO) address misaligned");
+DO_ERROR_INFO(do_trap_store_fault,
+	SIGSEGV, SEGV_ACCERR, "store (or AMO) access fault");
+DO_ERROR_INFO(do_trap_ecall_u,
+	SIGILL, ILL_ILLTRP, "environment call from U-mode");
+DO_ERROR_INFO(do_trap_ecall_s,
+	SIGILL, ILL_ILLTRP, "environment call from S-mode");
+DO_ERROR_INFO(do_trap_ecall_m,
+	SIGILL, ILL_ILLTRP, "environment call from M-mode");
+
+asmlinkage void do_trap_break(struct pt_regs *regs)
+{
+#ifdef CONFIG_GENERIC_BUG
+	if (!user_mode(regs)) {
+		enum bug_trap_type type;
+
+		type = report_bug(regs->sepc, regs);
+		switch (type) {
+		case BUG_TRAP_TYPE_NONE:
+			break;
+		case BUG_TRAP_TYPE_WARN:
+			regs->sepc += sizeof(bug_insn_t);
+			return;
+		case BUG_TRAP_TYPE_BUG:
+			die(regs, "Kernel BUG");
+		}
+	}
+#endif /* CONFIG_GENERIC_BUG */
+
+	do_trap_siginfo(SIGTRAP, TRAP_BRKPT, regs->sepc, current);
+	regs->sepc += 0x4;
+}
+
+#ifdef CONFIG_GENERIC_BUG
+int is_valid_bugaddr(unsigned long pc)
+{
+	bug_insn_t insn;
+
+	if (pc < PAGE_OFFSET)
+		return 0;
+	if (probe_kernel_address((bug_insn_t __user *)pc, insn))
+		return 0;
+	return (insn == __BUG_INSN);
+}
+#endif /* CONFIG_GENERIC_BUG */
+
+void __init trap_init(void)
+{
+	int hart = smp_processor_id();
+
+	/* Set sup0 scratch register to 0, indicating to exception vector
+	 * that we are presently executing in the kernel
+	 */
+	csr_write(sscratch, 0);
+	/* Set the exception vector address */
+	csr_write(stvec, &handle_exception);
+	/* Enable software interrupts and setup initial mask */
+	csr_write(sie,
+		  SIE_SSIE | atomic_long_read(&per_cpu(riscv_early_sie, hart))
+		);
+}
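
For readers skimming the DO_ERROR_INFO() block above, one expansion written
out by hand (with the string concatenation folded in) looks like this:

asmlinkage void do_trap_insn_illegal(struct pt_regs *regs)
{
	do_trap_error(regs, SIGILL, ILL_ILLOPC, regs->sepc,
		      "Oops - illegal instruction");
}
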
diff --git a/arch/riscv/kernel/vdso.c b/arch/riscv/kernel/vdso.c
new file mode 100644
index 000000000000..e8a178df8144
--- /dev/null
+++ b/arch/riscv/kernel/vdso.c
@@ -0,0 +1,125 @@
+/*
+ * Copyright (C) 2004 Benjamin Herrenschmidt, IBM Corp.
+ *                    <benh@kernel.crashing.org>
+ * Copyright (C) 2012 ARM Limited
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/binfmts.h>
+#include <linux/err.h>
+
+#include <asm/vdso.h>
+
+extern char vdso_start[], vdso_end[];
+
+static unsigned int vdso_pages;
+static struct page **vdso_pagelist;
+
+/*
+ * The vDSO data page.
+ */
+static union {
+	struct vdso_data	data;
+	u8			page[PAGE_SIZE];
+} vdso_data_store __page_aligned_data;
+struct vdso_data *vdso_data = &vdso_data_store.data;
+
+static int __init vdso_init(void)
+{
+	unsigned int i;
+
+	vdso_pages = (vdso_end - vdso_start) >> PAGE_SHIFT;
+	vdso_pagelist =
+		kcalloc(vdso_pages + 1, sizeof(struct page *), GFP_KERNEL);
+	if (unlikely(vdso_pagelist == NULL)) {
+		pr_err("vdso: pagelist allocation failed\n");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < vdso_pages; i++) {
+		struct page *pg;
+
+		pg = virt_to_page(vdso_start + (i << PAGE_SHIFT));
+		ClearPageReserved(pg);
+		vdso_pagelist[i] = pg;
+	}
+	vdso_pagelist[i] = virt_to_page(vdso_data);
+
+	return 0;
+}
+arch_initcall(vdso_init);
+
+int arch_setup_additional_pages(struct linux_binprm *bprm,
+	int uses_interp)
+{
+	struct mm_struct *mm = current->mm;
+	unsigned long vdso_base, vdso_len;
+	int ret;
+
+	vdso_len = (vdso_pages + 1) << PAGE_SHIFT;
+
+	down_write(&mm->mmap_sem);
+	vdso_base = get_unmapped_area(NULL, 0, vdso_len, 0, 0);
+	if (unlikely(IS_ERR_VALUE(vdso_base))) {
+		ret = vdso_base;
+		goto end;
+	}
+
+	/*
+	 * Put vDSO base into mm struct. We need to do this before calling
+	 * install_special_mapping or the perf counter mmap tracking code
+	 * will fail to recognise it as a vDSO (since arch_vma_name fails).
+	 */
+	mm->context.vdso = (void *)vdso_base;
+
+	ret = install_special_mapping(mm, vdso_base, vdso_len,
+		(VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC),
+		vdso_pagelist);
+
+	if (unlikely(ret))
+		mm->context.vdso = NULL;
+
+end:
+	up_write(&mm->mmap_sem);
+	return ret;
+}
+
+const char *arch_vma_name(struct vm_area_struct *vma)
+{
+	if (vma->vm_mm && (vma->vm_start == (long)vma->vm_mm->context.vdso))
+		return "[vdso]";
+	return NULL;
+}
+
+/*
+ * Function stubs to prevent linker errors when AT_SYSINFO_EHDR is defined
+ */
+
+int in_gate_area_no_mm(unsigned long addr)
+{
+	return 0;
+}
+
+int in_gate_area(struct mm_struct *mm, unsigned long addr)
+{
+	return 0;
+}
+
+struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
+{
+	return NULL;
+}
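
Since arch_vma_name() above reports the special mapping as "[vdso]", a quick
user-space check that arch_setup_additional_pages() did its job is to look
for that name in /proc/self/maps:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/self/maps", "r");

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		if (strstr(line, "[vdso]"))
			fputs(line, stdout);
	fclose(f);
	return 0;
}
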
diff --git a/arch/riscv/kernel/vdso/.gitignore b/arch/riscv/kernel/vdso/.gitignore
new file mode 100644
index 000000000000..f8b69d84238e
--- /dev/null
+++ b/arch/riscv/kernel/vdso/.gitignore
@@ -0,0 +1 @@
+vdso.lds
diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
new file mode 100644
index 000000000000..04f3ec75b217
--- /dev/null
+++ b/arch/riscv/kernel/vdso/Makefile
@@ -0,0 +1,61 @@
+# Derived from arch/{arm64,tile}/kernel/vdso/Makefile
+
+obj-vdso := sigreturn.o
+
+# Build rules
+targets := $(obj-vdso) vdso.so vdso.so.dbg
+obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
+
+#ccflags-y := -shared -fno-common -fno-builtin
+#ccflags-y += -nostdlib -Wl,-soname=linux-vdso.so.1 \
+		$(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
+
+CFLAGS_vdso.so = $(c_flags)
+CFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \
+	$(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
+CFLAGS_vdso_syms.o = -r
+
+obj-y += vdso.o
+
+# We also create a special relocatable object that should mirror the symbol
+# table and layout of the linked DSO.  With ld -R we can then refer to
+# these symbols in the kernel code rather than hand-coded addresses.
+extra-y += vdso.lds vdso-syms.o
+$(obj)/built-in.o: $(obj)/vdso-syms.o
+$(obj)/built-in.o: ld_flags += -R $(obj)/vdso-syms.o
+
+CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
+
+# Force dependency
+$(obj)/vdso.o : $(obj)/vdso.so
+
+# Link rule for the *.so file; *.lds must be first
+$(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso)
+	$(call if_changed,vdsold)
+$(obj)/vdso-syms.o: $(src)/vdso.lds $(obj-vdso)
+	$(call if_changed,vdsold)
+
+# Strip rule for the *.so file
+$(obj)/%.so: OBJCOPYFLAGS := -S
+$(obj)/%.so: $(obj)/%.so.dbg FORCE
+	$(call if_changed,objcopy)
+
+# Assembly rules for the *.S files
+$(obj-vdso): %.o: %.S
+	$(call if_changed_dep,vdsoas)
+
+# Actual build commands
+quiet_cmd_vdsold = VDSOLD  $@
+      cmd_vdsold = $(CC) $(c_flags) -nostdlib $(CFLAGS_$(@F)) -Wl,-n -Wl,-T $^ -o $@
+quiet_cmd_vdsoas = VDSOAS  $@
+      cmd_vdsoas = $(CC) $(a_flags) -c -o $@ $<
+
+# Install commands for the unstripped file
+quiet_cmd_vdso_install = INSTALL $@
+      cmd_vdso_install = cp $(obj)/$@.dbg $(MODLIB)/vdso/$@
+
+vdso.so: $(obj)/vdso.so.dbg
+	@mkdir -p $(MODLIB)/vdso
+	$(call cmd,vdso_install)
+
+vdso_install: vdso.so
diff --git a/arch/riscv/kernel/vdso/sigreturn.S b/arch/riscv/kernel/vdso/sigreturn.S
new file mode 100644
index 000000000000..f5aa3d72acfb
--- /dev/null
+++ b/arch/riscv/kernel/vdso/sigreturn.S
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/linkage.h>
+#include <asm/unistd.h>
+
+	.text
+ENTRY(__vdso_rt_sigreturn)
+	.cfi_startproc
+	.cfi_signal_frame
+	li a7, __NR_rt_sigreturn
+	scall
+	.cfi_endproc
+ENDPROC(__vdso_rt_sigreturn)
diff --git a/arch/riscv/kernel/vdso/vdso.S b/arch/riscv/kernel/vdso/vdso.S
new file mode 100644
index 000000000000..7055de5f9174
--- /dev/null
+++ b/arch/riscv/kernel/vdso/vdso.S
@@ -0,0 +1,27 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <asm/page.h>
+
+	__PAGE_ALIGNED_DATA
+
+	.globl vdso_start, vdso_end
+	.balign PAGE_SIZE
+vdso_start:
+	.incbin "arch/riscv/kernel/vdso/vdso.so"
+	.balign PAGE_SIZE
+vdso_end:
+
+	.previous
diff --git a/arch/riscv/kernel/vdso/vdso.lds.S b/arch/riscv/kernel/vdso/vdso.lds.S
new file mode 100644
index 000000000000..24942cb25694
--- /dev/null
+++ b/arch/riscv/kernel/vdso/vdso.lds.S
@@ -0,0 +1,76 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+OUTPUT_ARCH(riscv)
+
+SECTIONS
+{
+	. = SIZEOF_HEADERS;
+
+	.hash		: { *(.hash) }			:text
+	.gnu.hash	: { *(.gnu.hash) }
+	.dynsym		: { *(.dynsym) }
+	.dynstr		: { *(.dynstr) }
+	.gnu.version	: { *(.gnu.version) }
+	.gnu.version_d	: { *(.gnu.version_d) }
+	.gnu.version_r	: { *(.gnu.version_r) }
+
+	.note		: { *(.note.*) }		:text	:note
+	.dynamic	: { *(.dynamic) }		:text	:dynamic
+
+	.eh_frame_hdr	: { *(.eh_frame_hdr) }		:text	:eh_frame_hdr
+	.eh_frame	: { KEEP (*(.eh_frame)) }	:text
+
+	.rodata		: { *(.rodata .rodata.* .gnu.linkonce.r.*) }
+
+	/*
+	 * This linker script is used both with -r and with -shared.
+	 * For the layouts to match, we need to skip more than enough
+	 * space for the dynamic symbol table, etc. If this amount is
+	 * insufficient, ld -shared will error; simply increase it here.
+	 */
+	. = 0x800;
+	.text		: { *(.text .text.*) }		:text
+
+	.data		: {
+		*(.got.plt) *(.got)
+		*(.data .data.* .gnu.linkonce.d.*)
+		*(.dynbss)
+		*(.bss .bss.* .gnu.linkonce.b.*)
+	}
+}
+
+/*
+ * We must supply the ELF program headers explicitly to get just one
+ * PT_LOAD segment, and set the flags explicitly to make segments read-only.
+ */
+PHDRS
+{
+	text		PT_LOAD		FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
+	dynamic		PT_DYNAMIC	FLAGS(4);		/* PF_R */
+	note		PT_NOTE		FLAGS(4);		/* PF_R */
+	eh_frame_hdr	PT_GNU_EH_FRAME;
+}
+
+/*
+ * This controls what symbols we export from the DSO.
+ */
+VERSION
+{
+	LINUX_2.6 {
+	global:
+		__vdso_rt_sigreturn;
+	local: *;
+	};
+}
+
diff --git a/arch/riscv/kernel/vmlinux.lds.S b/arch/riscv/kernel/vmlinux.lds.S
new file mode 100644
index 000000000000..ece84991609c
--- /dev/null
+++ b/arch/riscv/kernel/vmlinux.lds.S
@@ -0,0 +1,92 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#define LOAD_OFFSET PAGE_OFFSET
+#include <asm/vmlinux.lds.h>
+#include <asm/page.h>
+#include <asm/cache.h>
+#include <asm/thread_info.h>
+
+OUTPUT_ARCH(riscv)
+ENTRY(_start)
+
+jiffies = jiffies_64;
+
+SECTIONS
+{
+	/* Beginning of code and text segment */
+	. = LOAD_OFFSET;
+	_start = .;
+	__init_begin = .;
+	HEAD_TEXT_SECTION
+	INIT_TEXT_SECTION(PAGE_SIZE)
+	INIT_DATA_SECTION(16)
+	/* we have to discard exit text and such at runtime, not link time */
+	.exit.text :
+	{
+		EXIT_TEXT
+	}
+	.exit.data :
+	{
+		EXIT_DATA
+	}
+	PERCPU_SECTION(L1_CACHE_BYTES)
+	__init_end = .;
+
+	.text : {
+		_text = .;
+		_stext = .;
+		TEXT_TEXT
+		SCHED_TEXT
+		CPUIDLE_TEXT
+		LOCK_TEXT
+		KPROBES_TEXT
+		ENTRY_TEXT
+		IRQENTRY_TEXT
+		*(.fixup)
+		_etext = .;
+	}
+
+	/* Start of data section */
+	_sdata = .;
+	RO_DATA_SECTION(L1_CACHE_BYTES)
+	.srodata : {
+		*(.srodata*)
+	}
+
+	RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+	.sdata : {
+		__global_pointer$ = . + 0x800;
+		*(.sdata*)
+		/* End of data section */
+		_edata = .;
+		*(.sbss*)
+	}
+
+	BSS_SECTION(0, 0, 0)
+
+	EXCEPTION_TABLE(0x10)
+	NOTES
+
+	.rel.dyn : {
+		*(.rel.dyn*)
+	}
+
+	_end = .;
+
+	STABS_DEBUG
+	DWARF_DEBUG
+
+	DISCARDS
+}
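
The "__global_pointer$ = . + 0x800" line above biases gp 2 KiB into the
small-data area: gp-relative accesses use a signed 12-bit immediate, so
placing gp in the middle lets linker relaxation reach a full 4 KiB window
around it.  A trivial arithmetic check:

#include <stdio.h>

int main(void)
{
	long sdata_start = 0;			/* illustrative base */
	long gp = sdata_start + 0x800;		/* as in the linker script */
	long lo = gp - 2048, hi = gp + 2047;	/* signed 12-bit immediate */

	printf("gp-relative reach: [%ld, %ld] = %ld bytes\n",
	       lo, hi, hi - lo + 1);
	return 0;
}
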
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 16/17] RISC-V: Add kernel subdirectory
  2017-06-06 22:59   ` Palmer Dabbelt
                     ` (22 preceding siblings ...)
  (?)
@ 2017-06-06 23:00   ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 23:00 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Palmer Dabbelt

These files were mostly based on the score port, but many of them are
very ISA specific.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/kernel/.gitignore       |   1 +
 arch/riscv/kernel/Makefile         |  16 ++
 arch/riscv/kernel/asm-offsets.c    | 316 +++++++++++++++++++++++++++
 arch/riscv/kernel/cacheinfo.c      | 103 +++++++++
 arch/riscv/kernel/cpu.c            |  89 ++++++++
 arch/riscv/kernel/entry.S          | 437 +++++++++++++++++++++++++++++++++++++
 arch/riscv/kernel/head.S           | 147 +++++++++++++
 arch/riscv/kernel/irq.c            |  20 ++
 arch/riscv/kernel/module.c         | 215 ++++++++++++++++++
 arch/riscv/kernel/process.c        | 132 +++++++++++
 arch/riscv/kernel/ptrace.c         | 147 +++++++++++++
 arch/riscv/kernel/reset.c          |  36 +++
 arch/riscv/kernel/riscv_ksyms.c    |  16 ++
 arch/riscv/kernel/setup.c          | 240 ++++++++++++++++++++
 arch/riscv/kernel/signal.c         | 257 ++++++++++++++++++++++
 arch/riscv/kernel/smp.c            | 110 ++++++++++
 arch/riscv/kernel/smpboot.c        | 103 +++++++++
 arch/riscv/kernel/stacktrace.c     | 177 +++++++++++++++
 arch/riscv/kernel/sys_riscv.c      |  85 ++++++++
 arch/riscv/kernel/syscall_table.c  |  25 +++
 arch/riscv/kernel/traps.c          | 183 ++++++++++++++++
 arch/riscv/kernel/vdso.c           | 125 +++++++++++
 arch/riscv/kernel/vdso/.gitignore  |   1 +
 arch/riscv/kernel/vdso/Makefile    |  61 ++++++
 arch/riscv/kernel/vdso/sigreturn.S |  24 ++
 arch/riscv/kernel/vdso/vdso.S      |  27 +++
 arch/riscv/kernel/vdso/vdso.lds.S  |  76 +++++++
 arch/riscv/kernel/vmlinux.lds.S    |  92 ++++++++
 28 files changed, 3261 insertions(+)
 create mode 100644 arch/riscv/kernel/.gitignore
 create mode 100644 arch/riscv/kernel/Makefile
 create mode 100644 arch/riscv/kernel/asm-offsets.c
 create mode 100644 arch/riscv/kernel/cacheinfo.c
 create mode 100644 arch/riscv/kernel/cpu.c
 create mode 100644 arch/riscv/kernel/entry.S
 create mode 100644 arch/riscv/kernel/head.S
 create mode 100644 arch/riscv/kernel/irq.c
 create mode 100644 arch/riscv/kernel/module.c
 create mode 100644 arch/riscv/kernel/process.c
 create mode 100644 arch/riscv/kernel/ptrace.c
 create mode 100644 arch/riscv/kernel/reset.c
 create mode 100644 arch/riscv/kernel/riscv_ksyms.c
 create mode 100644 arch/riscv/kernel/setup.c
 create mode 100644 arch/riscv/kernel/signal.c
 create mode 100644 arch/riscv/kernel/smp.c
 create mode 100644 arch/riscv/kernel/smpboot.c
 create mode 100644 arch/riscv/kernel/stacktrace.c
 create mode 100644 arch/riscv/kernel/sys_riscv.c
 create mode 100644 arch/riscv/kernel/syscall_table.c
 create mode 100644 arch/riscv/kernel/traps.c
 create mode 100644 arch/riscv/kernel/vdso.c
 create mode 100644 arch/riscv/kernel/vdso/.gitignore
 create mode 100644 arch/riscv/kernel/vdso/Makefile
 create mode 100644 arch/riscv/kernel/vdso/sigreturn.S
 create mode 100644 arch/riscv/kernel/vdso/vdso.S
 create mode 100644 arch/riscv/kernel/vdso/vdso.lds.S
 create mode 100644 arch/riscv/kernel/vmlinux.lds.S

diff --git a/arch/riscv/kernel/.gitignore b/arch/riscv/kernel/.gitignore
new file mode 100644
index 000000000000..b51634f6a7cd
--- /dev/null
+++ b/arch/riscv/kernel/.gitignore
@@ -0,0 +1 @@
+/vmlinux.lds
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
new file mode 100644
index 000000000000..b6de129d4a23
--- /dev/null
+++ b/arch/riscv/kernel/Makefile
@@ -0,0 +1,16 @@
+#
+# Makefile for the RISC-V Linux kernel
+#
+
+extra-y := head.o vmlinux.lds
+
+obj-y	:= cpu.o entry.o irq.o process.o ptrace.o reset.o setup.o \
+	   signal.o syscall_table.o sys_riscv.o traps.o \
+	   riscv_ksyms.o stacktrace.o vdso.o cacheinfo.o vdso/
+
+CFLAGS_setup.o := -mcmodel=medany
+
+obj-$(CONFIG_SMP)		+= smpboot.o smp.o
+obj-$(CONFIG_MODULES)		+= module.o
+
+clean:
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
new file mode 100644
index 000000000000..2ead5037528c
--- /dev/null
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -0,0 +1,316 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/kbuild.h>
+#include <linux/sched.h>
+#include <asm/thread_info.h>
+#include <asm/ptrace.h>
+
+void asm_offsets(void)
+{
+	OFFSET(TASK_THREAD_RA, task_struct, thread.ra);
+	OFFSET(TASK_THREAD_SP, task_struct, thread.sp);
+	OFFSET(TASK_THREAD_S0, task_struct, thread.s[0]);
+	OFFSET(TASK_THREAD_S1, task_struct, thread.s[1]);
+	OFFSET(TASK_THREAD_S2, task_struct, thread.s[2]);
+	OFFSET(TASK_THREAD_S3, task_struct, thread.s[3]);
+	OFFSET(TASK_THREAD_S4, task_struct, thread.s[4]);
+	OFFSET(TASK_THREAD_S5, task_struct, thread.s[5]);
+	OFFSET(TASK_THREAD_S6, task_struct, thread.s[6]);
+	OFFSET(TASK_THREAD_S7, task_struct, thread.s[7]);
+	OFFSET(TASK_THREAD_S8, task_struct, thread.s[8]);
+	OFFSET(TASK_THREAD_S9, task_struct, thread.s[9]);
+	OFFSET(TASK_THREAD_S10, task_struct, thread.s[10]);
+	OFFSET(TASK_THREAD_S11, task_struct, thread.s[11]);
+	OFFSET(TASK_THREAD_SP, task_struct, thread.sp);
+	OFFSET(TASK_STACK, task_struct, stack);
+	OFFSET(TASK_TI, task_struct, thread_info);
+	OFFSET(TASK_TI_FLAGS, task_struct, thread_info.flags);
+	OFFSET(TASK_TI_KERNEL_SP, task_struct, thread_info.kernel_sp);
+	OFFSET(TASK_TI_USER_SP, task_struct, thread_info.user_sp);
+
+	OFFSET(TASK_THREAD_F0,  task_struct, thread.fstate.f[0]);
+	OFFSET(TASK_THREAD_F1,  task_struct, thread.fstate.f[1]);
+	OFFSET(TASK_THREAD_F2,  task_struct, thread.fstate.f[2]);
+	OFFSET(TASK_THREAD_F3,  task_struct, thread.fstate.f[3]);
+	OFFSET(TASK_THREAD_F4,  task_struct, thread.fstate.f[4]);
+	OFFSET(TASK_THREAD_F5,  task_struct, thread.fstate.f[5]);
+	OFFSET(TASK_THREAD_F6,  task_struct, thread.fstate.f[6]);
+	OFFSET(TASK_THREAD_F7,  task_struct, thread.fstate.f[7]);
+	OFFSET(TASK_THREAD_F8,  task_struct, thread.fstate.f[8]);
+	OFFSET(TASK_THREAD_F9,  task_struct, thread.fstate.f[9]);
+	OFFSET(TASK_THREAD_F10, task_struct, thread.fstate.f[10]);
+	OFFSET(TASK_THREAD_F11, task_struct, thread.fstate.f[11]);
+	OFFSET(TASK_THREAD_F12, task_struct, thread.fstate.f[12]);
+	OFFSET(TASK_THREAD_F13, task_struct, thread.fstate.f[13]);
+	OFFSET(TASK_THREAD_F14, task_struct, thread.fstate.f[14]);
+	OFFSET(TASK_THREAD_F15, task_struct, thread.fstate.f[15]);
+	OFFSET(TASK_THREAD_F16, task_struct, thread.fstate.f[16]);
+	OFFSET(TASK_THREAD_F17, task_struct, thread.fstate.f[17]);
+	OFFSET(TASK_THREAD_F18, task_struct, thread.fstate.f[18]);
+	OFFSET(TASK_THREAD_F19, task_struct, thread.fstate.f[19]);
+	OFFSET(TASK_THREAD_F20, task_struct, thread.fstate.f[20]);
+	OFFSET(TASK_THREAD_F21, task_struct, thread.fstate.f[21]);
+	OFFSET(TASK_THREAD_F22, task_struct, thread.fstate.f[22]);
+	OFFSET(TASK_THREAD_F23, task_struct, thread.fstate.f[23]);
+	OFFSET(TASK_THREAD_F24, task_struct, thread.fstate.f[24]);
+	OFFSET(TASK_THREAD_F25, task_struct, thread.fstate.f[25]);
+	OFFSET(TASK_THREAD_F26, task_struct, thread.fstate.f[26]);
+	OFFSET(TASK_THREAD_F27, task_struct, thread.fstate.f[27]);
+	OFFSET(TASK_THREAD_F28, task_struct, thread.fstate.f[28]);
+	OFFSET(TASK_THREAD_F29, task_struct, thread.fstate.f[29]);
+	OFFSET(TASK_THREAD_F30, task_struct, thread.fstate.f[30]);
+	OFFSET(TASK_THREAD_F31, task_struct, thread.fstate.f[31]);
+	OFFSET(TASK_THREAD_FCSR, task_struct, thread.fstate.fcsr);
+
+	DEFINE(PT_SIZE, sizeof(struct pt_regs));
+	OFFSET(PT_SEPC, pt_regs, sepc);
+	OFFSET(PT_RA, pt_regs, ra);
+	OFFSET(PT_FP, pt_regs, s0);
+	OFFSET(PT_S0, pt_regs, s0);
+	OFFSET(PT_S1, pt_regs, s1);
+	OFFSET(PT_S2, pt_regs, s2);
+	OFFSET(PT_S3, pt_regs, s3);
+	OFFSET(PT_S4, pt_regs, s4);
+	OFFSET(PT_S5, pt_regs, s5);
+	OFFSET(PT_S6, pt_regs, s6);
+	OFFSET(PT_S7, pt_regs, s7);
+	OFFSET(PT_S8, pt_regs, s8);
+	OFFSET(PT_S9, pt_regs, s9);
+	OFFSET(PT_S10, pt_regs, s10);
+	OFFSET(PT_S11, pt_regs, s11);
+	OFFSET(PT_SP, pt_regs, sp);
+	OFFSET(PT_TP, pt_regs, tp);
+	OFFSET(PT_A0, pt_regs, a0);
+	OFFSET(PT_A1, pt_regs, a1);
+	OFFSET(PT_A2, pt_regs, a2);
+	OFFSET(PT_A3, pt_regs, a3);
+	OFFSET(PT_A4, pt_regs, a4);
+	OFFSET(PT_A5, pt_regs, a5);
+	OFFSET(PT_A6, pt_regs, a6);
+	OFFSET(PT_A7, pt_regs, a7);
+	OFFSET(PT_T0, pt_regs, t0);
+	OFFSET(PT_T1, pt_regs, t1);
+	OFFSET(PT_T2, pt_regs, t2);
+	OFFSET(PT_T3, pt_regs, t3);
+	OFFSET(PT_T4, pt_regs, t4);
+	OFFSET(PT_T5, pt_regs, t5);
+	OFFSET(PT_T6, pt_regs, t6);
+	OFFSET(PT_GP, pt_regs, gp);
+	OFFSET(PT_SSTATUS, pt_regs, sstatus);
+	OFFSET(PT_SBADADDR, pt_regs, sbadaddr);
+	OFFSET(PT_SCAUSE, pt_regs, scause);
+
+	/* THREAD_{F,X}* might be larger than a S-type offset can handle, but
+	 * these are used in performance-sensitive assembly so we can't resort
+	 * to loading the long immediate every time.
+	 */
+	DEFINE(TASK_THREAD_RA_RA,
+		  offsetof(struct task_struct, thread.ra)
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_SP_RA,
+		  offsetof(struct task_struct, thread.sp)
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S0_RA,
+		  offsetof(struct task_struct, thread.s[0])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S1_RA,
+		  offsetof(struct task_struct, thread.s[1])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S2_RA,
+		  offsetof(struct task_struct, thread.s[2])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S3_RA,
+		  offsetof(struct task_struct, thread.s[3])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S4_RA,
+		  offsetof(struct task_struct, thread.s[4])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S5_RA,
+		  offsetof(struct task_struct, thread.s[5])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S6_RA,
+		  offsetof(struct task_struct, thread.s[6])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S7_RA,
+		  offsetof(struct task_struct, thread.s[7])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S8_RA,
+		  offsetof(struct task_struct, thread.s[8])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S9_RA,
+		  offsetof(struct task_struct, thread.s[9])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S10_RA,
+		  offsetof(struct task_struct, thread.s[10])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S11_RA,
+		  offsetof(struct task_struct, thread.s[11])
+		- offsetof(struct task_struct, thread.ra)
+	);
+
+	DEFINE(TASK_THREAD_F0_F0,
+		  offsetof(struct task_struct, thread.fstate.f[0])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F1_F0,
+		  offsetof(struct task_struct, thread.fstate.f[1])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F2_F0,
+		  offsetof(struct task_struct, thread.fstate.f[2])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F3_F0,
+		  offsetof(struct task_struct, thread.fstate.f[3])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F4_F0,
+		  offsetof(struct task_struct, thread.fstate.f[4])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F5_F0,
+		  offsetof(struct task_struct, thread.fstate.f[5])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F6_F0,
+		  offsetof(struct task_struct, thread.fstate.f[6])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F7_F0,
+		  offsetof(struct task_struct, thread.fstate.f[7])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F8_F0,
+		  offsetof(struct task_struct, thread.fstate.f[8])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F9_F0,
+		  offsetof(struct task_struct, thread.fstate.f[9])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F10_F0,
+		  offsetof(struct task_struct, thread.fstate.f[10])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F11_F0,
+		  offsetof(struct task_struct, thread.fstate.f[11])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F12_F0,
+		  offsetof(struct task_struct, thread.fstate.f[12])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F13_F0,
+		  offsetof(struct task_struct, thread.fstate.f[13])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F14_F0,
+		  offsetof(struct task_struct, thread.fstate.f[14])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F15_F0,
+		  offsetof(struct task_struct, thread.fstate.f[15])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F16_F0,
+		  offsetof(struct task_struct, thread.fstate.f[16])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F17_F0,
+		  offsetof(struct task_struct, thread.fstate.f[17])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F18_F0,
+		  offsetof(struct task_struct, thread.fstate.f[18])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F19_F0,
+		  offsetof(struct task_struct, thread.fstate.f[19])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F20_F0,
+		  offsetof(struct task_struct, thread.fstate.f[20])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F21_F0,
+		  offsetof(struct task_struct, thread.fstate.f[21])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F22_F0,
+		  offsetof(struct task_struct, thread.fstate.f[22])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F23_F0,
+		  offsetof(struct task_struct, thread.fstate.f[23])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F24_F0,
+		  offsetof(struct task_struct, thread.fstate.f[24])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F25_F0,
+		  offsetof(struct task_struct, thread.fstate.f[25])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F26_F0,
+		  offsetof(struct task_struct, thread.fstate.f[26])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F27_F0,
+		  offsetof(struct task_struct, thread.fstate.f[27])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F28_F0,
+		  offsetof(struct task_struct, thread.fstate.f[28])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F29_F0,
+		  offsetof(struct task_struct, thread.fstate.f[29])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F30_F0,
+		  offsetof(struct task_struct, thread.fstate.f[30])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F31_F0,
+		  offsetof(struct task_struct, thread.fstate.f[31])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_FCSR_F0,
+		  offsetof(struct task_struct, thread.fstate.fcsr)
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+
+	/* The assembler needs access to THREAD_SIZE as well. */
+	DEFINE(ASM_THREAD_SIZE, THREAD_SIZE);
+
+	/* We allocate a pt_regs on the stack when entering the kernel.  This
+	 * ensures the alignment is sane.
+	 */
+	DEFINE(PT_SIZE_ON_STACK, ALIGN(sizeof(struct pt_regs), STACK_ALIGN));
+}
diff --git a/arch/riscv/kernel/cacheinfo.c b/arch/riscv/kernel/cacheinfo.c
new file mode 100644
index 000000000000..76ed95850a22
--- /dev/null
+++ b/arch/riscv/kernel/cacheinfo.c
@@ -0,0 +1,103 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/cacheinfo.h>
+#include <linux/cpu.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+
+static void ci_leaf_init(struct cacheinfo *this_leaf,
+			 struct device_node *node,
+			 enum cache_type type, unsigned int level)
+{
+	this_leaf->of_node = node;
+	this_leaf->level = level;
+	this_leaf->type = type;
+	this_leaf->physical_line_partition = 1; /* not a sector cache */
+	this_leaf->attributes =
+		CACHE_WRITE_BACK
+		| CACHE_READ_ALLOCATE
+		| CACHE_WRITE_ALLOCATE; /* TODO: add to DTS */
+}
+
+static int __init_cache_level(unsigned int cpu)
+{
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+	struct device_node *np = of_cpu_device_node_get(cpu);
+	int levels = 0, leaves = 0, level;
+
+	if (of_property_read_bool(np, "cache-size"))
+		++leaves;
+	if (of_property_read_bool(np, "i-cache-size"))
+		++leaves;
+	if (of_property_read_bool(np, "d-cache-size"))
+		++leaves;
+	if (leaves > 0)
+		levels = 1;
+
+	while ((np = of_find_next_cache_node(np))) {
+		if (!of_device_is_compatible(np, "cache"))
+			break;
+		if (of_property_read_u32(np, "cache-level", &level))
+			break;
+		if (level <= levels)
+			break;
+		if (of_property_read_bool(np, "cache-size"))
+			++leaves;
+		if (of_property_read_bool(np, "i-cache-size"))
+			++leaves;
+		if (of_property_read_bool(np, "d-cache-size"))
+			++leaves;
+		levels = level;
+	}
+
+	this_cpu_ci->num_levels = levels;
+	this_cpu_ci->num_leaves = leaves;
+	return 0;
+}
+
+static int __populate_cache_leaves(unsigned int cpu)
+{
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+	struct cacheinfo *this_leaf = this_cpu_ci->info_list;
+	struct device_node *np = of_cpu_device_node_get(cpu);
+	int levels = 1, level = 1;
+
+	if (of_property_read_bool(np, "cache-size"))
+		ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
+	if (of_property_read_bool(np, "i-cache-size"))
+		ci_leaf_init(this_leaf++, np, CACHE_TYPE_INST, level);
+	if (of_property_read_bool(np, "d-cache-size"))
+		ci_leaf_init(this_leaf++, np, CACHE_TYPE_DATA, level);
+
+	while ((np = of_find_next_cache_node(np))) {
+		if (!of_device_is_compatible(np, "cache"))
+			break;
+		if (of_property_read_u32(np, "cache-level", &level))
+			break;
+		if (level <= levels)
+			break;
+		if (of_property_read_bool(np, "cache-size"))
+			ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
+		if (of_property_read_bool(np, "i-cache-size"))
+			ci_leaf_init(this_leaf++, np, CACHE_TYPE_INST, level);
+		if (of_property_read_bool(np, "d-cache-size"))
+			ci_leaf_init(this_leaf++, np, CACHE_TYPE_DATA, level);
+		levels = level;
+	}
+
+	return 0;
+}
+
+DEFINE_SMP_CALL_CACHE_FUNCTION(init_cache_level)
+DEFINE_SMP_CALL_CACHE_FUNCTION(populate_cache_leaves)
diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
new file mode 100644
index 000000000000..20004bd7a216
--- /dev/null
+++ b/arch/riscv/kernel/cpu.c
@@ -0,0 +1,89 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/seq_file.h>
+#include <linux/of.h>
+
+/*
+ * Returns the hart ID of the given device tree node, or -ENODEV if the node
+ * isn't a valid hart.
+ */
+int riscv_of_processor_hart(struct device_node *node)
+{
+	const char *isa, *status;
+	u32 hart;
+
+	if (!of_device_is_compatible(node, "riscv"))
+		return -(ENODEV);
+	if (of_property_read_u32(node, "reg", &hart)
+	    || hart >= NR_CPUS)
+		return -(ENODEV);
+	if (of_property_read_string(node, "status", &status)
+	    || strcmp(status, "okay"))
+		return -(ENODEV);
+	if (of_property_read_string(node, "riscv,isa", &isa)
+	    || isa[0] != 'r'
+	    || isa[1] != 'v')
+		return -(ENODEV);
+
+	return hart;
+}
+
+#ifdef CONFIG_PROC_FS
+
+static void *c_start(struct seq_file *m, loff_t *pos)
+{
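+	/*
+	 * Bias the iterator cookie by one so that hart 0 is not confused with
+	 * the NULL end-of-sequence marker; c_show() subtracts the bias again.
+	 */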
+	*pos = cpumask_next(*pos - 1, cpu_online_mask);
+	if ((*pos) < nr_cpu_ids)
+		return (void *)(uintptr_t)(1 + *pos);
+	return NULL;
+}
+
+static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+{
+	(*pos)++;
+	return c_start(m, pos);
+}
+
+static void c_stop(struct seq_file *m, void *v)
+{
+}
+
+static int c_show(struct seq_file *m, void *v)
+{
+	unsigned long hart_id = (unsigned long)v - 1;
+	struct device_node *node = of_get_cpu_node(hart_id, NULL);
+	const char *compat, *isa, *mmu;
+
+	seq_printf(m, "hart\t: %lu\n", hart_id);
+	if (!of_property_read_string(node, "riscv,isa", &isa)
+	    && isa[0] == 'r'
+	    && isa[1] == 'v')
+		seq_printf(m, "isa\t: %s\n", isa);
+	if (!of_property_read_string(node, "mmu-type", &mmu)
+	    && !strncmp(mmu, "riscv,", 6))
+		seq_printf(m, "mmu\t: %s\n", mmu+6);
+	if (!of_property_read_string(node, "compatible", &compat)
+	    && strcmp(compat, "riscv"))
+		seq_printf(m, "uarch\t: %s\n", compat);
+	seq_puts(m, "\n");
+
+	return 0;
+}
+
+const struct seq_operations cpuinfo_op = {
+	.start	= c_start,
+	.next	= c_next,
+	.stop	= c_stop,
+	.show	= c_show
+};
+
+#endif /* CONFIG_PROC_FS */
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
new file mode 100644
index 000000000000..0be9ca33e8fc
--- /dev/null
+++ b/arch/riscv/kernel/entry.S
@@ -0,0 +1,437 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+
+#include <asm/asm.h>
+#include <asm/csr.h>
+#include <asm/unistd.h>
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+
+	.text
+	.altmacro
+
+/* Prepares to enter a system call or exception by saving all registers to the
+ * stack.
+ */
+	.macro SAVE_ALL
+	LOCAL _restore_kernel_tpsp
+	LOCAL _save_context
+
+	/* If coming from userspace, preserve the user thread pointer and load
+	   the kernel thread pointer.  If we came from the kernel, sscratch
+	   will contain 0, and we should continue on the current TP. */
+	csrrw tp, sscratch, tp
+	bnez tp, _save_context
+
+_restore_kernel_tpsp:
+	csrr tp, sscratch
+	REG_S sp, TASK_TI_KERNEL_SP(tp)
+_save_context:
+	REG_S sp, TASK_TI_USER_SP(tp)
+	REG_L sp, TASK_TI_KERNEL_SP(tp)
+	addi sp, sp, -(PT_SIZE_ON_STACK)
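+	/* x2 (sp) and x4 (tp) are skipped here: the pre-trap sp is stored to
+	   PT_SP from thread_info below, and the pre-trap tp is read back from
+	   sscratch into PT_TP. */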
+	REG_S x1,  PT_RA(sp)
+	REG_S x3,  PT_GP(sp)
+	REG_S x5,  PT_T0(sp)
+	REG_S x6,  PT_T1(sp)
+	REG_S x7,  PT_T2(sp)
+	REG_S x8,  PT_S0(sp)
+	REG_S x9,  PT_S1(sp)
+	REG_S x10, PT_A0(sp)
+	REG_S x11, PT_A1(sp)
+	REG_S x12, PT_A2(sp)
+	REG_S x13, PT_A3(sp)
+	REG_S x14, PT_A4(sp)
+	REG_S x15, PT_A5(sp)
+	REG_S x16, PT_A6(sp)
+	REG_S x17, PT_A7(sp)
+	REG_S x18, PT_S2(sp)
+	REG_S x19, PT_S3(sp)
+	REG_S x20, PT_S4(sp)
+	REG_S x21, PT_S5(sp)
+	REG_S x22, PT_S6(sp)
+	REG_S x23, PT_S7(sp)
+	REG_S x24, PT_S8(sp)
+	REG_S x25, PT_S9(sp)
+	REG_S x26, PT_S10(sp)
+	REG_S x27, PT_S11(sp)
+	REG_S x28, PT_T3(sp)
+	REG_S x29, PT_T4(sp)
+	REG_S x30, PT_T5(sp)
+	REG_S x31, PT_T6(sp)
+
+	/* Disable FPU to detect illegal usage of
+	   floating point in kernel space */
+	li t0, SR_FS
+
+	REG_L s0, TASK_TI_USER_SP(tp)
+	csrrc s1, sstatus, t0
+	csrr s2, sepc
+	csrr s3, sbadaddr
+	csrr s4, scause
+	csrr s5, sscratch
+	REG_S s0, PT_SP(sp)
+	REG_S s1, PT_SSTATUS(sp)
+	REG_S s2, PT_SEPC(sp)
+	REG_S s3, PT_SBADADDR(sp)
+	REG_S s4, PT_SCAUSE(sp)
+	REG_S s5, PT_TP(sp)
+	.endm
+
+/* Prepares to return from a system call or exception by restoring all
+ * registers from the stack.
+ */
+	.macro RESTORE_ALL
+	REG_L a0, PT_SSTATUS(sp)
+	REG_L a2, PT_SEPC(sp)
+	csrw sstatus, a0
+	csrw sepc, a2
+
+	REG_L x1,  PT_RA(sp)
+	REG_L x3,  PT_GP(sp)
+	REG_L x4,  PT_TP(sp)
+	REG_L x5,  PT_T0(sp)
+	REG_L x6,  PT_T1(sp)
+	REG_L x7,  PT_T2(sp)
+	REG_L x8,  PT_S0(sp)
+	REG_L x9,  PT_S1(sp)
+	REG_L x10, PT_A0(sp)
+	REG_L x11, PT_A1(sp)
+	REG_L x12, PT_A2(sp)
+	REG_L x13, PT_A3(sp)
+	REG_L x14, PT_A4(sp)
+	REG_L x15, PT_A5(sp)
+	REG_L x16, PT_A6(sp)
+	REG_L x17, PT_A7(sp)
+	REG_L x18, PT_S2(sp)
+	REG_L x19, PT_S3(sp)
+	REG_L x20, PT_S4(sp)
+	REG_L x21, PT_S5(sp)
+	REG_L x22, PT_S6(sp)
+	REG_L x23, PT_S7(sp)
+	REG_L x24, PT_S8(sp)
+	REG_L x25, PT_S9(sp)
+	REG_L x26, PT_S10(sp)
+	REG_L x27, PT_S11(sp)
+	REG_L x28, PT_T3(sp)
+	REG_L x29, PT_T4(sp)
+	REG_L x30, PT_T5(sp)
+	REG_L x31, PT_T6(sp)
+
+	REG_L x2,  PT_SP(sp)
+	.endm
+
+ENTRY(handle_exception)
+	SAVE_ALL
+
+	/* Set sscratch register to 0, so that if a recursive exception
+	   occurs, the exception vector knows it came from the kernel */
+	csrw sscratch, x0
+
+	la gp, __global_pointer$
+
+	la ra, ret_from_exception
+	/* MSB of cause differentiates between
+	   interrupts and exceptions */
+	bge s4, zero, 1f
+
+	/* Handle interrupts */
+	slli a0, s4, 1
+	srli a0, a0, 1
+	move a1, sp /* pt_regs */
+	tail do_IRQ
+1:
+	/* Handle syscalls */
+	li t0, EXC_SYSCALL
+	beq s4, t0, handle_syscall
+
+	/* Handle other exceptions */
+	slli t0, s4, RISCV_LGPTR
+	la t1, excp_vect_table
+	la t2, excp_vect_table_end
+	move a0, sp /* pt_regs */
+	add t0, t1, t0
+	/* Check if exception code lies within bounds */
+	bgeu t0, t2, 1f
+	REG_L t0, 0(t0)
+	jr t0
+1:
+	tail do_trap_unknown
+
+handle_syscall:
+	/* Advance SEPC to avoid executing the original
+	   scall instruction on sret */
+	addi s2, s2, 0x4
+	REG_S s2, PT_SEPC(sp)
+	/* System calls run with interrupts enabled */
+	csrs sstatus, SR_IE
+	/* Trace syscalls, but only if requested by the user. */
+	REG_L t0, TASK_TI_FLAGS(tp)
+	andi t0, t0, _TIF_SYSCALL_TRACE
+	bnez t0, handle_syscall_trace_enter
+check_syscall_nr:
+	/* Check to make sure we don't jump to a bogus syscall number. */
+	li t0, __NR_syscalls
+	la s0, sys_ni_syscall
+	/* Syscall number held in a7 */
+	bgeu a7, t0, 1f
+	la s0, sys_call_table
+	slli t0, a7, RISCV_LGPTR
+	add s0, s0, t0
+	REG_L s0, 0(s0)
+1:
+	jalr s0
+
+ret_from_syscall:
+	/* Store the syscall return value back into the saved user a0 */
+	REG_S a0, PT_A0(sp)
+	/* Trace syscalls, but only if requested by the user. */
+	REG_L t0, TASK_TI_FLAGS(tp)
+	andi t0, t0, _TIF_SYSCALL_TRACE
+	bnez t0, handle_syscall_trace_exit
+
+ret_from_exception:
+	REG_L s0, PT_SSTATUS(sp)
+	csrc sstatus, SR_IE
+	andi s0, s0, SR_PS
+	bnez s0, restore_all
+
+resume_userspace:
+	/* Interrupts must be disabled here so flags are checked atomically */
+	REG_L s0, TASK_TI_FLAGS(tp) /* current_thread_info->flags */
+	andi s1, s0, _TIF_WORK_MASK
+	bnez s1, work_pending
+
+	/* Save unwound kernel stack pointer in thread_info */
+	addi s0, sp, PT_SIZE_ON_STACK
+	REG_S s0, TASK_TI_KERNEL_SP(tp)
+
+	/* Save TP into sscratch, so we can find the kernel data structures
+	 * again. */
+	csrw sscratch, tp
+
+restore_all:
+	RESTORE_ALL
+	sret
+
+work_pending:
+	/* Enter slow path for supplementary processing */
+	la ra, ret_from_exception
+	andi s1, s0, _TIF_NEED_RESCHED
+	bnez s1, work_resched
+work_notifysig:
+	/* Handle pending signals and notify-resume requests */
+	csrs sstatus, SR_IE /* Enable interrupts for do_notify_resume() */
+	move a0, sp /* pt_regs */
+	move a1, s0 /* current_thread_info->flags */
+	tail do_notify_resume
+work_resched:
+	tail schedule
+
+/* Slow paths for ptrace. */
+handle_syscall_trace_enter:
+	move a0, sp
+	call do_syscall_trace_enter
+	REG_L a0, PT_A0(sp)
+	REG_L a1, PT_A1(sp)
+	REG_L a2, PT_A2(sp)
+	REG_L a3, PT_A3(sp)
+	REG_L a4, PT_A4(sp)
+	REG_L a5, PT_A5(sp)
+	REG_L a6, PT_A6(sp)
+	REG_L a7, PT_A7(sp)
+	j check_syscall_nr
+handle_syscall_trace_exit:
+	move a0, sp
+	call do_syscall_trace_exit
+	j ret_from_exception
+
+END(handle_exception)
+
+ENTRY(ret_from_fork)
+	la ra, ret_from_exception
+	tail schedule_tail
+ENDPROC(ret_from_fork)
+
+ENTRY(ret_from_kernel_thread)
+	call schedule_tail
+	/* Call fn(arg) */
+	la ra, ret_from_exception
+	move a0, s1
+	jr s0
+ENDPROC(ret_from_kernel_thread)
+
+
+/*
+ * Integer register context switch
+ * The callee-saved registers must be saved and restored.
+ *
+ *   a0: previous task_struct (must be preserved across the switch)
+ *   a1: next task_struct
+ */
+ENTRY(__switch_to)
+	/* Save context into prev->thread */
+	li    a2,  TASK_THREAD_RA
+	add   a0, a0, a2
+	add   a2, a1, a2
+	REG_S ra,  TASK_THREAD_RA_RA(a0)
+	REG_S sp,  TASK_THREAD_SP_RA(a0)
+	REG_S s0,  TASK_THREAD_S0_RA(a0)
+	REG_S s1,  TASK_THREAD_S1_RA(a0)
+	REG_S s2,  TASK_THREAD_S2_RA(a0)
+	REG_S s3,  TASK_THREAD_S3_RA(a0)
+	REG_S s4,  TASK_THREAD_S4_RA(a0)
+	REG_S s5,  TASK_THREAD_S5_RA(a0)
+	REG_S s6,  TASK_THREAD_S6_RA(a0)
+	REG_S s7,  TASK_THREAD_S7_RA(a0)
+	REG_S s8,  TASK_THREAD_S8_RA(a0)
+	REG_S s9,  TASK_THREAD_S9_RA(a0)
+	REG_S s10, TASK_THREAD_S10_RA(a0)
+	REG_S s11, TASK_THREAD_S11_RA(a0)
+	/* Restore context from next->thread */
+	REG_L ra,  TASK_THREAD_RA_RA(a2)
+	REG_L sp,  TASK_THREAD_SP_RA(a2)
+	REG_L s0,  TASK_THREAD_S0_RA(a2)
+	REG_L s1,  TASK_THREAD_S1_RA(a2)
+	REG_L s2,  TASK_THREAD_S2_RA(a2)
+	REG_L s3,  TASK_THREAD_S3_RA(a2)
+	REG_L s4,  TASK_THREAD_S4_RA(a2)
+	REG_L s5,  TASK_THREAD_S5_RA(a2)
+	REG_L s6,  TASK_THREAD_S6_RA(a2)
+	REG_L s7,  TASK_THREAD_S7_RA(a2)
+	REG_L s8,  TASK_THREAD_S8_RA(a2)
+	REG_L s9,  TASK_THREAD_S9_RA(a2)
+	REG_L s10, TASK_THREAD_S10_RA(a2)
+	REG_L s11, TASK_THREAD_S11_RA(a2)
+#if TASK_TI != 0
+#error "TASK_TI != 0: tp will contain a 'struct thread_info', not a 'struct task_struct' so get_current() won't work."
+	addi tp, a1, TASK_TI
+#else
+	move tp, a1
+#endif
+	ret
+ENDPROC(__switch_to)
+
+ENTRY(__fstate_save)
+	li  a2,  TASK_THREAD_F0
+	add a0, a0, a2
+	li t1, SR_FS
+	csrs sstatus, t1
+	frcsr t0
+	fsd f0,  TASK_THREAD_F0_F0(a0)
+	fsd f1,  TASK_THREAD_F1_F0(a0)
+	fsd f2,  TASK_THREAD_F2_F0(a0)
+	fsd f3,  TASK_THREAD_F3_F0(a0)
+	fsd f4,  TASK_THREAD_F4_F0(a0)
+	fsd f5,  TASK_THREAD_F5_F0(a0)
+	fsd f6,  TASK_THREAD_F6_F0(a0)
+	fsd f7,  TASK_THREAD_F7_F0(a0)
+	fsd f8,  TASK_THREAD_F8_F0(a0)
+	fsd f9,  TASK_THREAD_F9_F0(a0)
+	fsd f10, TASK_THREAD_F10_F0(a0)
+	fsd f11, TASK_THREAD_F11_F0(a0)
+	fsd f12, TASK_THREAD_F12_F0(a0)
+	fsd f13, TASK_THREAD_F13_F0(a0)
+	fsd f14, TASK_THREAD_F14_F0(a0)
+	fsd f15, TASK_THREAD_F15_F0(a0)
+	fsd f16, TASK_THREAD_F16_F0(a0)
+	fsd f17, TASK_THREAD_F17_F0(a0)
+	fsd f18, TASK_THREAD_F18_F0(a0)
+	fsd f19, TASK_THREAD_F19_F0(a0)
+	fsd f20, TASK_THREAD_F20_F0(a0)
+	fsd f21, TASK_THREAD_F21_F0(a0)
+	fsd f22, TASK_THREAD_F22_F0(a0)
+	fsd f23, TASK_THREAD_F23_F0(a0)
+	fsd f24, TASK_THREAD_F24_F0(a0)
+	fsd f25, TASK_THREAD_F25_F0(a0)
+	fsd f26, TASK_THREAD_F26_F0(a0)
+	fsd f27, TASK_THREAD_F27_F0(a0)
+	fsd f28, TASK_THREAD_F28_F0(a0)
+	fsd f29, TASK_THREAD_F29_F0(a0)
+	fsd f30, TASK_THREAD_F30_F0(a0)
+	fsd f31, TASK_THREAD_F31_F0(a0)
+	sw t0, TASK_THREAD_FCSR_F0(a0)
+	csrc sstatus, t1
+	ret
+ENDPROC(__fstate_save)
+
+ENTRY(__fstate_restore)
+	li  a2,  TASK_THREAD_F0
+	add a0, a0, a2
+	li t1, SR_FS
+	lw t0, TASK_THREAD_FCSR_F0(a0)
+	csrs sstatus, t1
+	fld f0,  TASK_THREAD_F0_F0(a0)
+	fld f1,  TASK_THREAD_F1_F0(a0)
+	fld f2,  TASK_THREAD_F2_F0(a0)
+	fld f3,  TASK_THREAD_F3_F0(a0)
+	fld f4,  TASK_THREAD_F4_F0(a0)
+	fld f5,  TASK_THREAD_F5_F0(a0)
+	fld f6,  TASK_THREAD_F6_F0(a0)
+	fld f7,  TASK_THREAD_F7_F0(a0)
+	fld f8,  TASK_THREAD_F8_F0(a0)
+	fld f9,  TASK_THREAD_F9_F0(a0)
+	fld f10, TASK_THREAD_F10_F0(a0)
+	fld f11, TASK_THREAD_F11_F0(a0)
+	fld f12, TASK_THREAD_F12_F0(a0)
+	fld f13, TASK_THREAD_F13_F0(a0)
+	fld f14, TASK_THREAD_F14_F0(a0)
+	fld f15, TASK_THREAD_F15_F0(a0)
+	fld f16, TASK_THREAD_F16_F0(a0)
+	fld f17, TASK_THREAD_F17_F0(a0)
+	fld f18, TASK_THREAD_F18_F0(a0)
+	fld f19, TASK_THREAD_F19_F0(a0)
+	fld f20, TASK_THREAD_F20_F0(a0)
+	fld f21, TASK_THREAD_F21_F0(a0)
+	fld f22, TASK_THREAD_F22_F0(a0)
+	fld f23, TASK_THREAD_F23_F0(a0)
+	fld f24, TASK_THREAD_F24_F0(a0)
+	fld f25, TASK_THREAD_F25_F0(a0)
+	fld f26, TASK_THREAD_F26_F0(a0)
+	fld f27, TASK_THREAD_F27_F0(a0)
+	fld f28, TASK_THREAD_F28_F0(a0)
+	fld f29, TASK_THREAD_F29_F0(a0)
+	fld f30, TASK_THREAD_F30_F0(a0)
+	fld f31, TASK_THREAD_F31_F0(a0)
+	fscsr t0
+	csrc sstatus, t1
+	ret
+ENDPROC(__fstate_restore)
+
+
+	.section ".rodata"
+	/* Exception vector table */
+ENTRY(excp_vect_table)
+	RISCV_PTR do_trap_insn_misaligned
+	RISCV_PTR do_trap_insn_fault
+	RISCV_PTR do_trap_insn_illegal
+	RISCV_PTR do_trap_break
+	RISCV_PTR do_trap_load_misaligned
+	RISCV_PTR do_trap_load_fault
+	RISCV_PTR do_trap_store_misaligned
+	RISCV_PTR do_trap_store_fault
+	RISCV_PTR do_trap_ecall_u /* system call, gets intercepted */
+	RISCV_PTR do_trap_ecall_s
+	RISCV_PTR do_trap_unknown
+	RISCV_PTR do_trap_ecall_m
+	RISCV_PTR do_page_fault   /* instruction page fault */
+	RISCV_PTR do_page_fault   /* load page fault */
+	RISCV_PTR do_trap_unknown
+	RISCV_PTR do_page_fault   /* store page fault */
+excp_vect_table_end:
+END(excp_vect_table)
+
diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
new file mode 100644
index 000000000000..608e57d4531f
--- /dev/null
+++ b/arch/riscv/kernel/head.S
@@ -0,0 +1,147 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+#include <asm/asm.h>
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <asm/thread_info.h>
+#include <asm/page.h>
+#include <asm/csr.h>
+
+__INIT
+ENTRY(_start)
+	/* Mask all interrupts */
+	csrw sie, zero
+
+	/* Disable FPU to detect illegal usage of
+	   floating point in kernel space */
+	li t0, SR_FS
+	csrc sstatus, t0
+
+#ifndef CONFIG_RV_PUM
+	/* Allow access to user memory */
+	li t0, SR_SUM
+	csrs sstatus, t0
+#endif
+
+#ifdef CONFIG_ISA_A
+	/* Pick one hart to run the main boot sequence */
+	la a3, hart_lottery
+	li a2, 1
+	amoadd.w a3, a2, (a3)
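+	/* amoadd.w returns the old value, so the first hart to get here reads
+	   zero and carries on as the boot hart; all other harts take the
+	   secondary path. */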
+	bnez a3, .Lsecondary_start
+#else
+	/* We don't have atomic support, so the boot hart must be picked
+	 * statically.  Hart 0 is the only sane choice.
+	 */
+	bnez a0, .Lsecondary_park
+#endif
+
+	/* Save hart ID and DTB physical address */
+	mv s0, a0
+	mv s1, a1
+
+	/* Initialize page tables and relocate to virtual addresses */
+	la sp, init_thread_union + THREAD_SIZE
+	call setup_vm
+	call relocate
+
+	/* Restore C environment */
+	la tp, init_task
+
+	la sp, init_thread_union
+	li a0, ASM_THREAD_SIZE
+	add sp, sp, a0
+
+	/* Start the kernel */
+	mv a0, s0
+	mv a1, s1
+	call sbi_save
+	tail start_kernel
+
+relocate:
+	/* Relocate return address */
+	li a1, PAGE_OFFSET
+	la a0, _start
+	sub a1, a1, a0
+	add ra, ra, a1
+
+	/* Point stvec to virtual address of instruction after sptbr write */
+	la a0, 1f
+	add a0, a0, a1
+	csrw stvec, a0
+
+	/* Compute sptbr for kernel page tables, but don't load it yet */
+	la a2, swapper_pg_dir
+	srl a2, a2, PAGE_SHIFT
+	li a1, SPTBR_MODE
+	or a2, a2, a1
+
+	/* Load trampoline page directory, which will cause us to trap to
+	   stvec if VA != PA, or simply fall through if VA == PA */
+	la a0, trampoline_pg_dir
+	srl a0, a0, PAGE_SHIFT
+	or a0, a0, a1
+	sfence.vma
+	csrw sptbr, a0
+1:
+	/* Set trap vector to spin forever to help debug */
+	la a0, .Lsecondary_park
+	csrw stvec, a0
+
+	/* Load the global pointer */
+	la gp, __global_pointer$
+
+	/* Switch to kernel page tables */
+	csrw sptbr, a2
+
+	ret
+
+.Lsecondary_start:
+#ifdef CONFIG_SMP
+	li a1, CONFIG_NR_CPUS
+	bgeu a0, a1, .Lsecondary_park
+
+	la a1, __cpu_up_stack_pointer
+	slli a0, a0, LGREG
+	add a0, a0, a1
+
+.Lwait_for_cpu_up:
+	REG_L sp, (a0)
+	beqz sp, .Lwait_for_cpu_up
+	fence
+
+	/* Enable virtual memory and relocate to virtual address */
+	call relocate
+
+	/* Initialize task_struct pointer */
+	li tp, -THREAD_SIZE
+	add tp, tp, sp
+
+	tail smp_callin
+#endif
+
+.Lsecondary_park:
+	/* We lack SMP support or have too many harts, so park this hart */
+	wfi
+	j .Lsecondary_park
+END(_start)
+
+__PAGE_ALIGNED_BSS
+	/* Empty zero page */
+	.balign PAGE_SIZE
+ENTRY(empty_zero_page)
+	.fill (empty_zero_page + PAGE_SIZE) - ., 1, 0x00
+END(empty_zero_page)
diff --git a/arch/riscv/kernel/irq.c b/arch/riscv/kernel/irq.c
new file mode 100644
index 000000000000..737d7cce2c6d
--- /dev/null
+++ b/arch/riscv/kernel/irq.c
@@ -0,0 +1,20 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/irqchip.h>
+
+void __init init_IRQ(void)
+{
+	irqchip_init();
+}
diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
new file mode 100644
index 000000000000..753cb9894feb
--- /dev/null
+++ b/arch/riscv/kernel/module.c
@@ -0,0 +1,215 @@
+/*
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  Copyright (C) 2017 Zihao Yu
+ */
+
+#include <linux/elf.h>
+#include <linux/err.h>
+#include <linux/errno.h>
+#include <linux/moduleloader.h>
+
+static int apply_r_riscv_64_rela(struct module *me, u32 *location, Elf_Addr v)
+{
+	*(u64 *)location = v;
+	return 0;
+}
+
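+/*
+ * R_RISCV_BRANCH patches the immediate of a B-type instruction, which is
+ * scattered over the instruction word: imm[12|10:5] live in bits 31:25 and
+ * imm[4:1|11] in bits 11:7.
+ */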
+static int apply_r_riscv_branch_rela(struct module *me, u32 *location,
+				     Elf_Addr v)
+{
+	s64 offset = (void *)v - (void *)location;
+	u32 imm12 = (offset & 0x1000) << (31 - 12);
+	u32 imm11 = (offset & 0x800) >> (11 - 7);
+	u32 imm10_5 = (offset & 0x7e0) << (30 - 10);
+	u32 imm4_1 = (offset & 0x1e) << (11 - 4);
+
+	*location = (*location & 0x1fff07f) | imm12 | imm11 | imm10_5 | imm4_1;
+	return 0;
+}
+
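+/*
+ * R_RISCV_JAL patches the immediate of a J-type instruction: bits 31:12 hold
+ * imm[20|10:1|11|19:12], so only the low 12 bits (rd and opcode) are kept.
+ */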
+static int apply_r_riscv_jal_rela(struct module *me, u32 *location,
+				  Elf_Addr v)
+{
+	s64 offset = (void *)v - (void *)location;
+	u32 imm20 = (offset & 0x100000) << (31 - 20);
+	u32 imm19_12 = (offset & 0xff000);
+	u32 imm11 = (offset & 0x800) << (20 - 11);
+	u32 imm10_1 = (offset & 0x7fe) << (30 - 10);
+
+	*location = (*location & 0xfff) | imm20 | imm19_12 | imm11 | imm10_1;
+	return 0;
+}
+
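+/*
+ * R_RISCV_PCREL_HI20 fills the U-type immediate of an auipc.  The paired
+ * lo12 value is sign-extended when it is consumed, so 0x800 is added first;
+ * hi20 plus the sign-extended lo12 then reproduces the full offset.
+ */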
+static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location,
+					 Elf_Addr v)
+{
+	s64 offset = (void *)v - (void *)location;
+	s32 hi20;
+
+	if (offset != (s32)offset) {
+		pr_err(
+		  "%s: target %016llx cannot be addressed by the 32-bit offset from PC = %p\n",
+		  me->name, (unsigned long long)v, location);
+		return -EINVAL;
+	}
+
+	hi20 = (offset + 0x800) & 0xfffff000;
+	*location = (*location & 0xfff) | hi20;
+	return 0;
+}
+
+static int apply_r_riscv_pcrel_lo12_i_rela(struct module *me, u32 *location,
+					   Elf_Addr v)
+{
+	/* v is the lo12 value to fill. It is calculated before calling this
+	 * handler.
+	 */
+	*location = (*location & 0xfffff) | ((v & 0xfff) << 20);
+	return 0;
+}
+
+static int apply_r_riscv_pcrel_lo12_s_rela(struct module *me, u32 *location,
+					   Elf_Addr v)
+{
+	/* v is the lo12 value to fill. It is calculated before calling this
+	 * handler.
+	 */
+	u32 imm11_5 = (v & 0xfe0) << (31 - 11);
+	u32 imm4_0 = (v & 0x1f) << (11 - 4);
+
+	*location = (*location & 0x1fff07f) | imm11_5 | imm4_0;
+	return 0;
+}
+
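+/*
+ * R_RISCV_CALL_PLT patches an auipc+jalr pair: the upper 20 bits of the
+ * offset go into the auipc at *location and the low 12 bits into the
+ * following jalr.
+ */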
+static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location,
+				       Elf_Addr v)
+{
+	s64 offset = (void *)v - (void *)location;
+	s32 fill_v = offset;
+	u32 hi20, lo12;
+
+	if (offset != fill_v) {
+		pr_err(
+		  "%s: target %016llx cannot be addressed by the 32-bit offset from PC = %p\n",
+		  me->name, (unsigned long long)v, location);
+		return -EINVAL;
+	}
+
+	hi20 = (offset + 0x800) & 0xfffff000;
+	lo12 = (offset - hi20) & 0xfff;
+	*location = (*location & 0xfff) | hi20;
+	*(location + 1) = (*(location + 1) & 0xfffff) | (lo12 << 20);
+	return 0;
+}
+
+static int apply_r_riscv_relax_rela(struct module *me, u32 *location,
+				    Elf_Addr v)
+{
+	return 0;
+}
+
+static int (*reloc_handlers_rela[]) (struct module *me, u32 *location,
+				Elf_Addr v) = {
+	[R_RISCV_64]			= apply_r_riscv_64_rela,
+	[R_RISCV_BRANCH]		= apply_r_riscv_branch_rela,
+	[R_RISCV_JAL]			= apply_r_riscv_jal_rela,
+	[R_RISCV_PCREL_HI20]		= apply_r_riscv_pcrel_hi20_rela,
+	[R_RISCV_PCREL_LO12_I]		= apply_r_riscv_pcrel_lo12_i_rela,
+	[R_RISCV_PCREL_LO12_S]		= apply_r_riscv_pcrel_lo12_s_rela,
+	[R_RISCV_CALL_PLT]		= apply_r_riscv_call_plt_rela,
+	[R_RISCV_RELAX]			= apply_r_riscv_relax_rela,
+};
+
+int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
+		       unsigned int symindex, unsigned int relsec,
+		       struct module *me)
+{
+	Elf_Rela *rel = (void *) sechdrs[relsec].sh_addr;
+	int (*handler)(struct module *me, u32 *location, Elf_Addr v);
+	Elf_Sym *sym;
+	u32 *location;
+	unsigned int i, type;
+	Elf_Addr v;
+	int res;
+
+	pr_debug("Applying relocate section %u to %u\n", relsec,
+	       sechdrs[relsec].sh_info);
+
+	for (i = 0; i < sechdrs[relsec].sh_size / sizeof(*rel); i++) {
+		/* This is where to make the change */
+		location = (void *)sechdrs[sechdrs[relsec].sh_info].sh_addr
+			+ rel[i].r_offset;
+		/* This is the symbol it is referring to */
+		sym = (Elf_Sym *)sechdrs[symindex].sh_addr
+			+ ELF_RISCV_R_SYM(rel[i].r_info);
+		if (IS_ERR_VALUE(sym->st_value)) {
+			/* Ignore unresolved weak symbol */
+			if (ELF_ST_BIND(sym->st_info) == STB_WEAK)
+				continue;
+			printk(KERN_WARNING "%s: Unknown symbol %s\n",
+			       me->name, strtab + sym->st_name);
+			return -ENOENT;
+		}
+
+		type = ELF_RISCV_R_TYPE(rel[i].r_info);
+
+		if (type < ARRAY_SIZE(reloc_handlers_rela))
+			handler = reloc_handlers_rela[type];
+		else
+			handler = NULL;
+
+		if (!handler) {
+			pr_err("%s: Unknown relocation type %u\n",
+			       me->name, type);
+			return -EINVAL;
+		}
+
+		v = sym->st_value + rel[i].r_addend;
+
+		if (type == R_RISCV_PCREL_LO12_I || type == R_RISCV_PCREL_LO12_S) {
+			unsigned int j;
+
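+			/* A PCREL_LO12 relocation refers to the auipc that
+			 * carries the matching PCREL_HI20, so the real target
+			 * has to be recovered from that entry before the low
+			 * 12 bits can be computed.
+			 */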
+			for (j = 0; j < sechdrs[relsec].sh_size / sizeof(*rel); j++) {
+				u64 hi20_loc =
+					sechdrs[sechdrs[relsec].sh_info].sh_addr
+					+ rel[j].r_offset;
+				/* Find the corresponding HI20 PC-relative relocation entry */
+				if (hi20_loc == sym->st_value) {
+					Elf_Sym *hi20_sym =
+						(Elf_Sym *)sechdrs[symindex].sh_addr
+						+ ELF_RISCV_R_SYM(rel[j].r_info);
+					u64 hi20_sym_val =
+						hi20_sym->st_value
+						+ rel[j].r_addend;
+					/* Calculate lo12 */
+					s64 offset = hi20_sym_val - hi20_loc;
+					s32 hi20 = (offset + 0x800) & 0xfffff000;
+					s32 lo12 = offset - hi20;
+					v = lo12;
+					break;
+				}
+			}
+			if (j == sechdrs[relsec].sh_size / sizeof(*rel)) {
+				pr_err(
+				  "%s: Can not find HI20 PC-relative relocation information\n",
+				  me->name);
+				return -EINVAL;
+			}
+		}
+
+		res = handler(me, location, v);
+		if (res)
+			return res;
+	}
+
+	return 0;
+}
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
new file mode 100644
index 000000000000..c10146ae317d
--- /dev/null
+++ b/arch/riscv/kernel/process.c
@@ -0,0 +1,132 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/sched/task_stack.h>
+#include <linux/tick.h>
+#include <linux/ptrace.h>
+
+#include <asm/unistd.h>
+#include <asm/uaccess.h>
+#include <asm/processor.h>
+#include <asm/csr.h>
+#include <asm/string.h>
+#include <asm/switch_to.h>
+
+extern asmlinkage void ret_from_fork(void);
+extern asmlinkage void ret_from_kernel_thread(void);
+
+void arch_cpu_idle(void)
+{
+	wait_for_interrupt();
+	local_irq_enable();
+}
+
+void show_regs(struct pt_regs *regs)
+{
+	show_regs_print_info(KERN_DEFAULT);
+
+	printk(KERN_CONT "sepc: " REG_FMT " ra : " REG_FMT " sp : " REG_FMT "\n",
+		regs->sepc, regs->ra, regs->sp);
+	printk(KERN_CONT " gp : " REG_FMT " tp : " REG_FMT " t0 : " REG_FMT "\n",
+		regs->gp, regs->tp, regs->t0);
+	printk(KERN_CONT " t1 : " REG_FMT " t2 : " REG_FMT " s0 : " REG_FMT "\n",
+		regs->t1, regs->t2, regs->s0);
+	printk(KERN_CONT " s1 : " REG_FMT " a0 : " REG_FMT " a1 : " REG_FMT "\n",
+		regs->s1, regs->a0, regs->a1);
+	printk(KERN_CONT " a2 : " REG_FMT " a3 : " REG_FMT " a4 : " REG_FMT "\n",
+		regs->a2, regs->a3, regs->a4);
+	printk(KERN_CONT " a5 : " REG_FMT " a6 : " REG_FMT " a7 : " REG_FMT "\n",
+		regs->a5, regs->a6, regs->a7);
+	printk(KERN_CONT " s2 : " REG_FMT " s3 : " REG_FMT " s4 : " REG_FMT "\n",
+		regs->s2, regs->s3, regs->s4);
+	printk(KERN_CONT " s5 : " REG_FMT " s6 : " REG_FMT " s7 : " REG_FMT "\n",
+		regs->s5, regs->s6, regs->s7);
+	printk(KERN_CONT " s8 : " REG_FMT " s9 : " REG_FMT " s10: " REG_FMT "\n",
+		regs->s8, regs->s9, regs->s10);
+	printk(KERN_CONT " s11: " REG_FMT " t3 : " REG_FMT " t4 : " REG_FMT "\n",
+		regs->s11, regs->t3, regs->t4);
+	printk(KERN_CONT " t5 : " REG_FMT " t6 : " REG_FMT "\n",
+		regs->t5, regs->t6);
+
+	printk(KERN_CONT "sstatus: " REG_FMT " sbadaddr: " REG_FMT " scause: " REG_FMT "\n",
+		regs->sstatus, regs->sbadaddr, regs->scause);
+}
+
+void start_thread(struct pt_regs *regs, unsigned long pc,
+	unsigned long sp)
+{
+	regs->sstatus = SR_PIE /* User mode, irqs on */ | SR_FS_INITIAL;
+#ifndef CONFIG_RV_PUM
+	regs->sstatus |= SR_SUM;
+#endif
+	regs->sepc = pc;
+	regs->sp = sp;
+	set_fs(USER_DS);
+}
+
+void flush_thread(void)
+{
+	/* Reset FPU context
+	 *	frm: round to nearest, ties to even (IEEE default)
+	 *	fflags: accrued exceptions cleared
+	 */
+	memset(&current->thread.fstate, 0,
+		sizeof(struct user_fpregs_struct));
+}
+
+int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+{
+	fstate_save(src, task_pt_regs(src));
+	*dst = *src;
+	return 0;
+}
+
+int copy_thread(unsigned long clone_flags, unsigned long usp,
+	unsigned long arg, struct task_struct *p)
+{
+	struct pt_regs *childregs = task_pt_regs(p);
+
+	/* p->thread holds context to be restored by __switch_to() */
+	if (unlikely(p->flags & PF_KTHREAD)) {
+		/* Kernel thread */
+		const register unsigned long gp __asm__ ("gp");
+		memset(childregs, 0, sizeof(struct pt_regs));
+		childregs->gp = gp;
+		childregs->sstatus = SR_PS | SR_PIE; /* Supervisor, irqs on */
+
+		p->thread.ra = (unsigned long)ret_from_kernel_thread;
+		p->thread.s[0] = usp; /* fn */
+		p->thread.s[1] = arg;
+	} else {
+		*childregs = *(current_pt_regs());
+		if (usp) /* User fork */
+			childregs->sp = usp;
+		if (clone_flags & CLONE_SETTLS)
+			childregs->tp = childregs->a5;
+		childregs->a0 = 0; /* Return value of fork() */
+		p->thread.ra = (unsigned long)ret_from_fork;
+	}
+	p->thread.sp = (unsigned long)childregs; /* kernel sp */
+	return 0;
+}
diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
new file mode 100644
index 000000000000..69b3b2d10664
--- /dev/null
+++ b/arch/riscv/kernel/ptrace.c
@@ -0,0 +1,147 @@
+/*
+ * Copyright 2010 Tilera Corporation. All Rights Reserved.
+ * Copyright 2015 Regents of the University of California
+ * Copyright 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ *
+ * Copied from arch/tile/kernel/ptrace.c
+ */
+
+#include <asm/ptrace.h>
+#include <asm/syscall.h>
+#include <asm/thread_info.h>
+#include <linux/ptrace.h>
+#include <linux/elf.h>
+#include <linux/regset.h>
+#include <linux/sched.h>
+#include <linux/sched/task_stack.h>
+#include <linux/tracehook.h>
+#include <trace/events/syscalls.h>
+
+enum riscv_regset {
+	REGSET_X,
+};
+
+/*
+ * Get registers from task and ready the result for userspace.
+ */
+static char *getregs(struct task_struct *child, struct pt_regs *uregs)
+{
+	*uregs = *task_pt_regs(child);
+	return (char *)uregs;
+}
+
+/* Put registers back to task. */
+static void putregs(struct task_struct *child, struct pt_regs *uregs)
+{
+	struct pt_regs *regs = task_pt_regs(child);
+	*regs = *uregs;
+}
+
+static int riscv_gpr_get(struct task_struct *target,
+			 const struct user_regset *regset,
+			 unsigned int pos, unsigned int count,
+			 void *kbuf, void __user *ubuf)
+{
+	struct pt_regs regs;
+
+	getregs(target, &regs);
+
+	return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &regs, 0,
+				   sizeof(regs));
+}
+
+static int riscv_gpr_set(struct task_struct *target,
+			 const struct user_regset *regset,
+			 unsigned int pos, unsigned int count,
+			 const void *kbuf, const void __user *ubuf)
+{
+	int ret;
+	struct pt_regs regs;
+
+	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &regs, 0,
+				 sizeof(regs));
+	if (ret)
+		return ret;
+
+	putregs(target, &regs);
+
+	return 0;
+}
+
+
+static const struct user_regset riscv_user_regset[] = {
+	[REGSET_X] = {
+		.core_note_type = NT_PRSTATUS,
+		.n = ELF_NGREG,
+		.size = sizeof(elf_greg_t),
+		.align = sizeof(elf_greg_t),
+		.get = &riscv_gpr_get,
+		.set = &riscv_gpr_set,
+	},
+};
+
+static const struct user_regset_view riscv_user_native_view = {
+	.name = "riscv",
+	.e_machine = EM_RISCV,
+	.regsets = riscv_user_regset,
+	.n = ARRAY_SIZE(riscv_user_regset),
+};
+
+const struct user_regset_view *task_user_regset_view(struct task_struct *task)
+{
+	return &riscv_user_native_view;
+}
+
+void ptrace_disable(struct task_struct *child)
+{
+	clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+}
+
+long arch_ptrace(struct task_struct *child, long request,
+		 unsigned long addr, unsigned long data)
+{
+	long ret = -EIO;
+
+	switch (request) {
+	default:
+		ret = ptrace_request(child, request, addr, data);
+		break;
+	}
+
+	return ret;
+}
+
+/* Allows PTRACE_SYSCALL to work.  These are called from entry.S in
+ * {handle,ret_from}_syscall.
+ */
+void do_syscall_trace_enter(struct pt_regs *regs)
+{
+	if (test_thread_flag(TIF_SYSCALL_TRACE))
+		if (tracehook_report_syscall_entry(regs))
+			syscall_set_nr(current, regs, -1);
+
+#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
+	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
+		trace_sys_enter(regs, syscall_get_nr(current, regs));
+#endif
+}
+
+void do_syscall_trace_exit(struct pt_regs *regs)
+{
+	if (test_thread_flag(TIF_SYSCALL_TRACE))
+		tracehook_report_syscall_exit(regs, 0);
+
+#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
+	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
+		trace_sys_exit(regs, regs->a0);
+#endif
+}
diff --git a/arch/riscv/kernel/reset.c b/arch/riscv/kernel/reset.c
new file mode 100644
index 000000000000..2a53d26ffdd6
--- /dev/null
+++ b/arch/riscv/kernel/reset.c
@@ -0,0 +1,36 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/reboot.h>
+#include <linux/export.h>
+#include <asm/sbi.h>
+
+void (*pm_power_off)(void) = machine_power_off;
+EXPORT_SYMBOL(pm_power_off);
+
+void machine_restart(char *cmd)
+{
+	do_kernel_restart(cmd);
+	while (1);
+}
+
+void machine_halt(void)
+{
+	machine_power_off();
+}
+
+void machine_power_off(void)
+{
+	sbi_shutdown();
+	while (1);
+}
diff --git a/arch/riscv/kernel/riscv_ksyms.c b/arch/riscv/kernel/riscv_ksyms.c
new file mode 100644
index 000000000000..ab0db6d48101
--- /dev/null
+++ b/arch/riscv/kernel/riscv_ksyms.c
@@ -0,0 +1,16 @@
+/*
+ * Copyright (C) 2017 Zihao Yu
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/export.h>
+#include <linux/uaccess.h>
+
+/*
+ * Assembly functions that may be used (directly or indirectly) by modules
+ */
+EXPORT_SYMBOL(__copy_user);
+
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
new file mode 100644
index 000000000000..9ed70e84d74e
--- /dev/null
+++ b/arch/riscv/kernel/setup.c
@@ -0,0 +1,240 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/memblock.h>
+#include <linux/sched.h>
+#include <linux/initrd.h>
+#include <linux/console.h>
+#include <linux/screen_info.h>
+#include <linux/of_fdt.h>
+#include <linux/of_platform.h>
+#include <linux/sched/task.h>
+
+#include <asm/setup.h>
+#include <asm/sections.h>
+#include <asm/pgtable.h>
+#include <asm/smp.h>
+#include <asm/sbi.h>
+#include <asm/tlbflush.h>
+#include <asm/thread_info.h>
+
+#ifdef CONFIG_DUMMY_CONSOLE
+struct screen_info screen_info = {
+	.orig_video_lines	= 30,
+	.orig_video_cols	= 80,
+	.orig_video_mode	= 0,
+	.orig_video_ega_bx	= 0,
+	.orig_video_isVGA	= 1,
+	.orig_video_points	= 8
+};
+#endif
+
+#ifdef CONFIG_CMDLINE_BOOL
+static char __initdata builtin_cmdline[COMMAND_LINE_SIZE] = CONFIG_CMDLINE;
+#endif /* CONFIG_CMDLINE_BOOL */
+
+unsigned long va_pa_offset;
+unsigned long pfn_base;
+
+/* The lucky hart to first increment this variable will boot the other cores */
+atomic_t hart_lottery;
+
+#ifdef CONFIG_BLK_DEV_INITRD
+static void __init setup_initrd(void)
+{
+	extern char __initramfs_start[];
+	extern unsigned long __initramfs_size;
+	unsigned long size;
+
+	if (__initramfs_size > 0) {
+		initrd_start = (unsigned long)(&__initramfs_start);
+		initrd_end = initrd_start + __initramfs_size;
+	}
+
+	if (initrd_start >= initrd_end) {
+		printk(KERN_INFO "initrd not found or empty");
+		goto disable;
+	}
+	if (__pa(initrd_end) > PFN_PHYS(max_low_pfn)) {
+		printk(KERN_ERR "initrd extends beyond end of memory");
+		goto disable;
+	}
+
+	size =  initrd_end - initrd_start;
+	memblock_reserve(__pa(initrd_start), size);
+	initrd_below_start_ok = 1;
+
+	printk(KERN_INFO "Initial ramdisk at: 0x%p (%lu bytes)\n",
+		(void *)(initrd_start), size);
+	return;
+disable:
+	printk(KERN_CONT " - disabling initrd\n");
+	initrd_start = 0;
+	initrd_end = 0;
+}
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+pgd_t trampoline_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
+
+#ifndef __PAGETABLE_PMD_FOLDED
+#define NUM_SWAPPER_PMDS ((uintptr_t)-PAGE_OFFSET >> PGDIR_SHIFT)
+pmd_t swapper_pmd[PTRS_PER_PMD*((-PAGE_OFFSET)/PGDIR_SIZE)] __page_aligned_bss;
+pmd_t trampoline_pmd[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
+#endif
+
+asmlinkage void __init setup_vm(void)
+{
+	extern char _start;
+	uintptr_t i;
+	uintptr_t pa = (uintptr_t) &_start;
+	pgprot_t prot = __pgprot(pgprot_val(PAGE_KERNEL) | _PAGE_EXEC);
+
+	va_pa_offset = PAGE_OFFSET - pa;
+	pfn_base = PFN_DOWN(pa);
+
+	/* Sanity check alignment and size */
+	BUG_ON((PAGE_OFFSET % PGDIR_SIZE) != 0);
+	BUG_ON((pa % (PAGE_SIZE * PTRS_PER_PTE)) != 0);
+
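+	/* Map the kernel linearly at PAGE_OFFSET using the largest leaf
+	 * entries available: PMD-sized mappings when a PMD level exists,
+	 * PGD-sized mappings when it is folded.
+	 */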
+#ifndef __PAGETABLE_PMD_FOLDED
+	trampoline_pg_dir[(PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD] =
+		pfn_pgd(PFN_DOWN((uintptr_t)trampoline_pmd),
+			__pgprot(_PAGE_TABLE));
+	trampoline_pmd[0] = pfn_pmd(PFN_DOWN(pa), prot);
+
+	for (i = 0; i < (-PAGE_OFFSET)/PGDIR_SIZE; ++i) {
+		size_t o = (PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD + i;
+		swapper_pg_dir[o] =
+			pfn_pgd(PFN_DOWN((uintptr_t)swapper_pmd) + i,
+				__pgprot(_PAGE_TABLE));
+	}
+	for (i = 0; i < ARRAY_SIZE(swapper_pmd); i++)
+		swapper_pmd[i] = pfn_pmd(PFN_DOWN(pa + i * PMD_SIZE), prot);
+#else
+	trampoline_pg_dir[(PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD] =
+		pfn_pgd(PFN_DOWN(pa), prot);
+
+	for (i = 0; i < (-PAGE_OFFSET)/PGDIR_SIZE; ++i) {
+		size_t o = (PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD + i;
+		swapper_pg_dir[o] =
+			pfn_pgd(PFN_DOWN(pa + i * PGDIR_SIZE), prot);
+	}
+#endif
+}
+
+void __init sbi_save(unsigned int hartid, void *dtb)
+{
+	early_init_dt_scan(__va(dtb));
+}
+
+/* Allow the user to manually add a memory region (in case DTS is broken); "mem_end=nn[KkMmGg]" */
+static int __init mem_end_override(char *p)
+{
+	resource_size_t base, end;
+
+	if (!p)
+		return -EINVAL;
+	base = (uintptr_t) __pa(PAGE_OFFSET);
+	end = memparse(p, &p) & PMD_MASK;
+	if (end == 0)
+		return -EINVAL;
+	memblock_add(base, end - base);
+	return 0;
+}
+early_param("mem_end", mem_end_override);
+
+static void __init setup_bootmem(void)
+{
+	struct memblock_region *reg;
+	phys_addr_t mem_size = 0;
+
+	/* Find the memory region containing the kernel */
+	for_each_memblock(memory, reg) {
+		phys_addr_t vmlinux_end = __pa(_end);
+		phys_addr_t end = reg->base + reg->size;
+
+		if (reg->base <= vmlinux_end && vmlinux_end <= end) {
+			/* Reserve from the start of the region to the end of
+			 * the kernel
+			 */
+			memblock_reserve(reg->base, vmlinux_end - reg->base);
+			mem_size = min(reg->size, (phys_addr_t)-PAGE_OFFSET);
+		}
+	}
+	BUG_ON(mem_size == 0);
+
+	set_max_mapnr(PFN_DOWN(mem_size));
+	max_low_pfn = pfn_base + PFN_DOWN(mem_size);
+
+#ifdef CONFIG_BLK_DEV_INITRD
+	setup_initrd();
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+	early_init_fdt_reserve_self();
+	early_init_fdt_scan_reserved_mem();
+	memblock_allow_resize();
+	memblock_dump_all();
+}
+
+void __init setup_arch(char **cmdline_p)
+{
+#ifdef CONFIG_CMDLINE_BOOL
+#ifdef CONFIG_CMDLINE_OVERRIDE
+	strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+#else
+	if (builtin_cmdline[0] != '\0') {
+		/* Append bootloader command line to built-in */
+		strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
+		strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+		strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+	}
+#endif /* CONFIG_CMDLINE_OVERRIDE */
+#endif /* CONFIG_CMDLINE_BOOL */
+	*cmdline_p = boot_command_line;
+
+	parse_early_param();
+
+	init_mm.start_code = (unsigned long) _stext;
+	init_mm.end_code   = (unsigned long) _etext;
+	init_mm.end_data   = (unsigned long) _edata;
+	init_mm.brk        = (unsigned long) _end;
+
+	setup_bootmem();
+	paging_init();
+	unflatten_device_tree();
+
+#ifdef CONFIG_SMP
+	setup_smp();
+#endif
+
+#ifdef CONFIG_DUMMY_CONSOLE
+	conswitchp = &dummy_con;
+#endif
+}
+
+static int __init riscv_device_init(void)
+{
+	return of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
+}
+subsys_initcall_sync(riscv_device_init);
diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
new file mode 100644
index 000000000000..ea14c47f28ac
--- /dev/null
+++ b/arch/riscv/kernel/signal.c
@@ -0,0 +1,257 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+#include <linux/signal.h>
+#include <linux/uaccess.h>
+#include <linux/syscalls.h>
+#include <linux/tracehook.h>
+#include <linux/linkage.h>
+
+#include <asm/ucontext.h>
+#include <asm/vdso.h>
+#include <asm/switch_to.h>
+#include <asm/csr.h>
+
+#define DEBUG_SIG 0
+
+struct rt_sigframe {
+	struct siginfo info;
+	struct ucontext uc;
+};
+
+static long restore_sigcontext(struct pt_regs *regs,
+	struct sigcontext __user *sc)
+{
+	struct task_struct *task = current;
+	long err;
+	/* sc_regs is structured the same as the start of pt_regs */
+	err = __copy_from_user(regs, &sc->sc_regs, sizeof(sc->sc_regs));
+	err |= __copy_from_user(&task->thread.fstate, &sc->sc_fpregs,
+		sizeof(sc->sc_fpregs));
+	if (likely(!err))
+		fstate_restore(task, regs);
+	return err;
+}
+
+SYSCALL_DEFINE0(rt_sigreturn)
+{
+	struct pt_regs *regs = current_pt_regs();
+	struct rt_sigframe __user *frame;
+	struct task_struct *task;
+	sigset_t set;
+
+	/* Always make any pending restarted system calls return -EINTR */
+	current->restart_block.fn = do_no_restart_syscall;
+
+	frame = (struct rt_sigframe __user *)regs->sp;
+
+	if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
+		goto badframe;
+
+	if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
+		goto badframe;
+
+	set_current_blocked(&set);
+
+	if (restore_sigcontext(regs, &frame->uc.uc_mcontext))
+		goto badframe;
+
+	if (restore_altstack(&frame->uc.uc_stack))
+		goto badframe;
+
+	return regs->a0;
+
+badframe:
+	task = current;
+	if (show_unhandled_signals) {
+		pr_info_ratelimited(
+			"%s[%d]: bad frame in %s: frame=%p pc=%p sp=%p\n",
+			task->comm, task_pid_nr(task), __func__,
+			frame, (void *)regs->sepc, (void *)regs->sp);
+	}
+	force_sig(SIGSEGV, task);
+	return 0;
+}
+
+static long setup_sigcontext(struct sigcontext __user *sc,
+	struct pt_regs *regs)
+{
+	struct task_struct *task = current;
+	long err;
+	/* sc_regs is structured the same as the start of pt_regs */
+	err = __copy_to_user(&sc->sc_regs, regs, sizeof(sc->sc_regs));
+	fstate_save(task, regs);
+	err |= __copy_to_user(&sc->sc_fpregs, &task->thread.fstate,
+		sizeof(sc->sc_fpregs));
+	return err;
+}
+
+static inline void __user *get_sigframe(struct ksignal *ksig,
+	struct pt_regs *regs, size_t framesize)
+{
+	unsigned long sp;
+	/* Default to using normal stack */
+	sp = regs->sp;
+
+	/*
+	 * If we are on the alternate signal stack and would overflow it, don't.
+	 * Return an always-bogus address instead so we will die with SIGSEGV.
+	 */
+	if (on_sig_stack(sp) && !likely(on_sig_stack(sp - framesize)))
+		return (void __user __force *)(-1UL);
+
+	/* This is the X/Open sanctioned signal stack switching. */
+	sp = sigsp(sp, ksig) - framesize;
+
+	/* Align the stack frame. */
+	sp &= ~0xfUL;
+
+	return (void __user *)sp;
+}
+
+
+static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
+	struct pt_regs *regs)
+{
+	struct rt_sigframe __user *frame;
+	long err = 0;
+
+	frame = get_sigframe(ksig, regs, sizeof(*frame));
+	if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+		return -EFAULT;
+
+	err |= copy_siginfo_to_user(&frame->info, &ksig->info);
+
+	/* Create the ucontext. */
+	err |= __put_user(0, &frame->uc.uc_flags);
+	err |= __put_user(NULL, &frame->uc.uc_link);
+	err |= __save_altstack(&frame->uc.uc_stack, regs->sp);
+	err |= setup_sigcontext(&frame->uc.uc_mcontext, regs);
+	err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
+	if (err)
+		return -EFAULT;
+
+	/* Set up to return from userspace. */
+	regs->ra = (unsigned long)VDSO_SYMBOL(
+		current->mm->context.vdso, rt_sigreturn);
+
+	/*
+	 * Set up registers for signal handler.
+	 * Registers that we don't modify keep the value they had from
+	 * user-space at the time we took the signal.
+	 * We always pass siginfo and mcontext, regardless of SA_SIGINFO,
+	 * since some things rely on this (e.g. glibc's debug/segfault.c).
+	 */
+	regs->sepc = (unsigned long)ksig->ka.sa.sa_handler;
+	regs->sp = (unsigned long)frame;
+	regs->a0 = ksig->sig;                     /* a0: signal number */
+	regs->a1 = (unsigned long)(&frame->info); /* a1: siginfo pointer */
+	regs->a2 = (unsigned long)(&frame->uc);   /* a2: ucontext pointer */
+
+#if DEBUG_SIG
+	pr_info("SIG deliver (%s:%d): sig=%d pc=%p ra=%p sp=%p\n",
+		current->comm, task_pid_nr(current), ksig->sig,
+		(void *)regs->sepc, (void *)regs->ra, frame);
+#endif
+
+	return 0;
+}
+
+static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
+{
+	sigset_t *oldset = sigmask_to_save();
+	int ret;
+
+	/* Are we from a system call? */
+	if (regs->scause == EXC_SYSCALL) {
+		/* If so, check system call restarting.. */
+		switch (regs->a0) {
+		case -ERESTART_RESTARTBLOCK:
+		case -ERESTARTNOHAND:
+			regs->a0 = -EINTR;
+			break;
+
+		case -ERESTARTSYS:
+			if (!(ksig->ka.sa.sa_flags & SA_RESTART)) {
+				regs->a0 = -EINTR;
+				break;
+			}
+			/* fallthrough */
+		case -ERESTARTNOINTR:
+			regs->sepc -= 0x4;
+			break;
+		}
+	}
+
+	/* Set up the stack frame */
+	ret = setup_rt_frame(ksig, oldset, regs);
+
+	signal_setup_done(ret, ksig, 0);
+}
+
+static void do_signal(struct pt_regs *regs)
+{
+	struct ksignal ksig;
+
+	if (get_signal(&ksig)) {
+		/* Actually deliver the signal */
+		handle_signal(&ksig, regs);
+		return;
+	}
+
+	/* Did we come from a system call? */
+	if (regs->scause == EXC_SYSCALL) {
+		/* Restart the system call - no handlers present */
+		switch (regs->a0) {
+		case -ERESTARTNOHAND:
+		case -ERESTARTSYS:
+		case -ERESTARTNOINTR:
+			regs->sepc -= 0x4;
+			break;
+		case -ERESTART_RESTARTBLOCK:
+			regs->a7 = __NR_restart_syscall;
+			regs->sepc -= 0x4;
+			break;
+		}
+	}
+
+	/* If there is no signal to deliver, we just put the saved
+	 * sigmask back.
+	 */
+	restore_saved_sigmask();
+}
+
+/*
+ * notification of userspace execution resumption
+ * - triggered by the _TIF_WORK_MASK flags
+ */
+asmlinkage void do_notify_resume(struct pt_regs *regs,
+	unsigned long thread_info_flags)
+{
+	/* Handle pending signal delivery */
+	if (thread_info_flags & _TIF_SIGPENDING)
+		do_signal(regs);
+
+	if (thread_info_flags & _TIF_NOTIFY_RESUME) {
+		clear_thread_flag(TIF_NOTIFY_RESUME);
+		tracehook_notify_resume(regs);
+	}
+}
diff --git a/arch/riscv/kernel/smp.c b/arch/riscv/kernel/smp.c
new file mode 100644
index 000000000000..b65c0e1020e3
--- /dev/null
+++ b/arch/riscv/kernel/smp.c
@@ -0,0 +1,110 @@
+/*
+ * SMP initialisation and IPI support
+ * Based on arch/arm64/kernel/smp.c
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/interrupt.h>
+#include <linux/smp.h>
+#include <linux/sched.h>
+
+#include <asm/sbi.h>
+#include <asm/tlbflush.h>
+#include <asm/cacheflush.h>
+
+/* A collection of single bit ipi messages.  */
+static struct {
+	unsigned long bits ____cacheline_aligned;
+} ipi_data[NR_CPUS] __cacheline_aligned;
+
+enum ipi_message_type {
+	IPI_RESCHEDULE,
+	IPI_CALL_FUNC,
+	IPI_MAX
+};
+
+irqreturn_t handle_ipi(void)
+{
+	unsigned long *pending_ipis = &ipi_data[smp_processor_id()].bits;
+
+	/* Clear pending IPI */
+	csr_clear(sip, SIE_SSIE);
+
+	while (true) {
+		unsigned long ops;
+
+		/* Order bit clearing and data access. */
+		mb();
+
+		ops = xchg(pending_ipis, 0);
+		if (ops == 0)
+			return IRQ_HANDLED;
+
+		if (ops & (1 << IPI_RESCHEDULE))
+			scheduler_ipi();
+
+		if (ops & (1 << IPI_CALL_FUNC))
+			generic_smp_call_function_interrupt();
+
+		BUG_ON((ops >> IPI_MAX) != 0);
+
+		/* Order data access and bit testing. */
+		mb();
+	}
+
+	return IRQ_HANDLED;
+}
+
+static void
+send_ipi_message(const struct cpumask *to_whom, enum ipi_message_type operation)
+{
+	int i;
+
+	mb();
+	for_each_cpu(i, to_whom)
+		set_bit(operation, &ipi_data[i].bits);
+
+	mb();
+	sbi_send_ipi(cpumask_bits(to_whom));
+}
+
+void arch_send_call_function_ipi_mask(struct cpumask *mask)
+{
+	send_ipi_message(mask, IPI_CALL_FUNC);
+}
+
+void arch_send_call_function_single_ipi(int cpu)
+{
+	send_ipi_message(cpumask_of(cpu), IPI_CALL_FUNC);
+}
+
+static void ipi_stop(void *unused)
+{
+	while (1)
+		wait_for_interrupt();
+}
+
+void smp_send_stop(void)
+{
+	on_each_cpu(ipi_stop, NULL, 1);
+}
+
+void smp_send_reschedule(int cpu)
+{
+	send_ipi_message(cpumask_of(cpu), IPI_RESCHEDULE);
+}
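
The pair above is a per-cpu single-bit mailbox: the sender publishes a bit
in ipi_data[cpu].bits, orders it with mb(), and asks the SBI to raise a
software interrupt; the receiver atomically swaps the word to zero and
dispatches each set bit.  A sketch of how one further message type could
plug in (IPI_CPU_STOP and the extra handler test are assumptions, shown
only to illustrate the protocol):

	enum ipi_message_type {
		IPI_RESCHEDULE,
		IPI_CALL_FUNC,
		IPI_CPU_STOP,		/* hypothetical new message */
		IPI_MAX
	};

	/* sender: publish the bit, then ring the doorbell via the SBI */
	send_ipi_message(cpu_online_mask, IPI_CPU_STOP);

	/* receiver: one more test inside the handle_ipi() loop */
	if (ops & (1 << IPI_CPU_STOP))
		ipi_stop(NULL);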
diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
new file mode 100644
index 000000000000..7043bbbfbc1e
--- /dev/null
+++ b/arch/riscv/kernel/smpboot.c
@@ -0,0 +1,103 @@
+/*
+ * SMP initialisation and IPI support
+ * Based on arch/arm64/kernel/smp.c
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/kernel_stat.h>
+#include <linux/notifier.h>
+#include <linux/cpu.h>
+#include <linux/percpu.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/irq.h>
+#include <linux/of.h>
+#include <linux/sched/task_stack.h>
+#include <asm/mmu_context.h>
+#include <asm/tlbflush.h>
+#include <asm/sections.h>
+#include <asm/sbi.h>
+
+void *__cpu_up_stack_pointer[NR_CPUS];
+
+void __init smp_prepare_boot_cpu(void)
+{
+}
+
+void __init smp_prepare_cpus(unsigned int max_cpus)
+{
+}
+
+void __init setup_smp(void)
+{
+	struct device_node *dn = NULL;
+	int hart, im_okay_therefore_i_am = 0;
+
+	while ((dn = of_find_node_by_type(dn, "cpu"))) {
+		hart = riscv_of_processor_hart(dn);
+		if (hart >= 0) {
+			set_cpu_possible(hart, true);
+			set_cpu_present(hart, true);
+			if (hart == smp_processor_id()) {
+				BUG_ON(im_okay_therefore_i_am);
+				im_okay_therefore_i_am = 1;
+			}
+		}
+	}
+
+	BUG_ON(!im_okay_therefore_i_am);
+}
+
+int __cpu_up(unsigned int cpu, struct task_struct *tidle)
+{
+	/* Signal cpu to start */
+	mb();
+	__cpu_up_stack_pointer[cpu] = task_stack_page(tidle) + THREAD_SIZE;
+
+	while (!cpu_online(cpu))
+		cpu_relax();
+
+	return 0;
+}
+
+void __init smp_cpus_done(unsigned int max_cpus)
+{
+}
+
+/*
+ * C entry point for a secondary processor.
+ */
+asmlinkage void __init smp_callin(void)
+{
+	struct mm_struct *mm = &init_mm;
+
+	/* All kernel threads share the same mm context.  */
+	atomic_inc(&mm->mm_count);
+	current->active_mm = mm;
+
+	trap_init();
+	init_clockevent();
+	notify_cpu_starting(smp_processor_id());
+	set_cpu_online(smp_processor_id(), 1);
+	local_flush_tlb_all();
+	local_irq_enable();
+	preempt_disable();
+	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+}
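
__cpu_up() and smp_callin() are the two ends of a simple handshake; the
secondary hart's spin loop lives in head.S (elsewhere in this patch), so
that side of the sketch below is an assumption about its behaviour:

	/*
	 * boot hart (__cpu_up)                 secondary hart
	 * --------------------                 -----------------------------
	 * mb();                                spin (head.S) until
	 * __cpu_up_stack_pointer[cpu] = sp;        __cpu_up_stack_pointer[hart]
	 * while (!cpu_online(cpu))                 is non-NULL, load it into sp
	 *         cpu_relax();                 smp_callin():
	 *                                          set_cpu_online(hart, 1), ...
	 * __cpu_up() returns 0                     cpu_startup_entry()
	 */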
diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
new file mode 100644
index 000000000000..109f5120d5c7
--- /dev/null
+++ b/arch/riscv/kernel/stacktrace.c
@@ -0,0 +1,177 @@
+/*
+ * Copyright (C) 2008 ARM Limited
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/export.h>
+#include <linux/kallsyms.h>
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
+#include <linux/sched/task_stack.h>
+#include <linux/stacktrace.h>
+
+#ifdef CONFIG_FRAME_POINTER
+
+struct stackframe {
+	unsigned long fp;
+	unsigned long ra;
+};
+
+static void notrace walk_stackframe(struct task_struct *task,
+	struct pt_regs *regs, bool (*fn)(unsigned long, void *), void *arg)
+{
+	unsigned long fp, sp, pc;
+
+	if (regs) {
+		fp = GET_FP(regs);
+		sp = GET_USP(regs);
+		pc = GET_IP(regs);
+	} else if (task == NULL || task == current) {
+		const register unsigned long current_sp __asm__ ("sp");
+		fp = (unsigned long)__builtin_frame_address(0);
+		sp = current_sp;
+		pc = (unsigned long)walk_stackframe;
+	} else {
+		/* task blocked in __switch_to */
+		fp = task->thread.s[0];
+		sp = task->thread.sp;
+		pc = task->thread.ra;
+	}
+
+	for (;;) {
+		unsigned long low, high;
+		struct stackframe *frame;
+
+		if (unlikely(!__kernel_text_address(pc) || fn(pc, arg)))
+			break;
+
+		/* Validate frame pointer */
+		low = sp + sizeof(struct stackframe);
+		high = ALIGN(sp, THREAD_SIZE);
+		if (unlikely(fp < low || fp > high || fp & 0x7))
+			break;
+		/* Unwind stack frame */
+		frame = (struct stackframe *)fp - 1;
+		sp = fp;
+		fp = frame->fp;
+		pc = frame->ra - 0x4;
+	}
+}
+
+#else /* !CONFIG_FRAME_POINTER */
+
+static void notrace walk_stackframe(struct task_struct *task,
+	struct pt_regs *regs, bool (*fn)(unsigned long, void *), void *arg)
+{
+	unsigned long sp, pc;
+	unsigned long *ksp;
+
+	if (regs) {
+		sp = GET_USP(regs);
+		pc = GET_IP(regs);
+	} else if (task == NULL || task == current) {
+		const register unsigned long current_sp __asm__ ("sp");
+		sp = current_sp;
+		pc = (unsigned long)walk_stackframe;
+	} else {
+		/* task blocked in __switch_to */
+		sp = task->thread.sp;
+		pc = task->thread.ra;
+	}
+
+	if (unlikely(sp & 0x7))
+		return;
+
+	ksp = (unsigned long *)sp;
+	while (!kstack_end(ksp)) {
+		if (__kernel_text_address(pc) && unlikely(fn(pc, arg)))
+			break;
+		pc = (*ksp++) - 0x4;
+	}
+}
+
+#endif /* CONFIG_FRAME_POINTER */
+
+
+static bool print_trace_address(unsigned long pc, void *arg)
+{
+	print_ip_sym(pc);
+	return false;
+}
+
+void show_stack(struct task_struct *task, unsigned long *sp)
+{
+	printk("Call Trace:\n");
+	walk_stackframe(task, NULL, print_trace_address, NULL);
+}
+
+
+static bool save_wchan(unsigned long pc, void *arg)
+{
+	if (!in_sched_functions(pc)) {
+		unsigned long *p = arg;
+		*p = pc;
+		return true;
+	}
+	return false;
+}
+
+unsigned long get_wchan(struct task_struct *task)
+{
+	unsigned long pc = 0;
+
+	if (likely(task && task != current && task->state != TASK_RUNNING))
+		walk_stackframe(task, NULL, save_wchan, &pc);
+	return pc;
+}
+
+
+#ifdef CONFIG_STACKTRACE
+
+static bool __save_trace(unsigned long pc, void *arg, bool nosched)
+{
+	struct stack_trace *trace = arg;
+
+	if (unlikely(nosched && in_sched_functions(pc)))
+		return false;
+	if (unlikely(trace->skip > 0)) {
+		trace->skip--;
+		return false;
+	}
+
+	trace->entries[trace->nr_entries++] = pc;
+	return (trace->nr_entries >= trace->max_entries);
+}
+
+static bool save_trace(unsigned long pc, void *arg)
+{
+	return __save_trace(pc, arg, false);
+}
+
+/*
+ * Save stack-backtrace addresses into a stack_trace buffer.
+ */
+void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
+{
+	walk_stackframe(tsk, NULL, save_trace, trace);
+	if (trace->nr_entries < trace->max_entries)
+		trace->entries[trace->nr_entries++] = ULONG_MAX;
+}
+EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
+
+void save_stack_trace(struct stack_trace *trace)
+{
+	save_stack_trace_tsk(NULL, trace);
+}
+EXPORT_SYMBOL_GPL(save_stack_trace);
+
+#endif /* CONFIG_STACKTRACE */
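
The CONFIG_FRAME_POINTER walker above assumes the standard RISC-V frame
layout, where s0/fp points just past a two-word {fp, ra} record saved by
the function prologue; hence "frame = (struct stackframe *)fp - 1" and the
return address is rewound by 4 to land on the call site.  Roughly (RV64
offsets, illustrative only):

	/*
	 * higher addresses
	 *     caller's frame ...
	 *     saved ra  at fp - 8   == frame->ra
	 *     saved fp  at fp - 16  == frame->fp  (next frame to visit)
	 *     callee locals ...
	 * lower addresses (sp)
	 *
	 * next iteration:  sp = fp;  fp = frame->fp;  pc = frame->ra - 4;
	 */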
diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
new file mode 100644
index 000000000000..33d40a5da4a1
--- /dev/null
+++ b/arch/riscv/kernel/sys_riscv.c
@@ -0,0 +1,85 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2014 Darius Rad <darius@bluespec.com>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/syscalls.h>
+#include <asm/unistd.h>
+
+#ifdef CONFIG_64BIT
+SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
+	unsigned long, prot, unsigned long, flags,
+	unsigned long, fd, off_t, offset)
+{
+	if (unlikely(offset & (~PAGE_MASK)))
+		return -EINVAL;
+	return sys_mmap_pgoff(addr, len, prot, flags, fd, offset >> PAGE_SHIFT);
+}
+#else
+SYSCALL_DEFINE6(mmap2, unsigned long, addr, unsigned long, len,
+	unsigned long, prot, unsigned long, flags,
+	unsigned long, fd, off_t, offset)
+{
+	/* Note that the shift for mmap2 is constant (12),
+	 * regardless of PAGE_SIZE
+	 */
+	if (unlikely(offset & (~PAGE_MASK >> 12)))
+		return -EINVAL;
+	return sys_mmap_pgoff(addr, len, prot, flags, fd,
+		offset >> (PAGE_SHIFT - 12));
+}
+#endif /* !CONFIG_64BIT */
+
+#ifdef CONFIG_SYSRISCV_ATOMIC
+SYSCALL_DEFINE4(sysriscv, unsigned long, cmd, unsigned long, arg1,
+	unsigned long, arg2, unsigned long, arg3)
+{
+	unsigned long flags;
+	unsigned long prev;
+	unsigned int *ptr;
+	unsigned int err;
+
+	switch (cmd) {
+	case RISCV_ATOMIC_CMPXCHG:
+		ptr = (unsigned int *)arg1;
+		if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned int)))
+			return -EFAULT;
+
+		preempt_disable();
+		raw_local_irq_save(flags);
+		err = __get_user(prev, ptr);
+		if (likely(!err && prev == arg2))
+			err = __put_user(arg3, ptr);
+		raw_local_irq_restore(flags);
+		preempt_enable();
+
+		return unlikely(err) ? err : prev;
+
+	case RISCV_ATOMIC_CMPXCHG64:
+		ptr = (unsigned int *)arg1;
+		if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned long)))
+			return -EFAULT;
+
+		preempt_disable();
+		raw_local_irq_save(flags);
+		err = __get_user(prev, ptr);
+		if (likely(!err && prev == arg2))
+			err = __put_user(arg3, ptr);
+		raw_local_irq_restore(flags);
+		preempt_enable();
+
+		return unlikely(err) ? err : prev;
+	}
+
+	return -EINVAL;
+}
+#endif /* CONFIG_SYSRISCV_ATOMIC */
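
A worked example of the mmap2 note above: the offset argument is always in
4096-byte units, so only PAGE_SHIFT decides how it maps onto a pgoff
(values below are illustrative):

	/*
	 * PAGE_SIZE =  4K (PAGE_SHIFT = 12):  pgoff = offset >> (12 - 12) = offset
	 * PAGE_SIZE = 64K (PAGE_SHIFT = 16):  pgoff = offset >> (16 - 12) = offset / 16
	 *
	 * (~PAGE_MASK >> 12) keeps exactly the low bits that must be zero for
	 * the 4K-unit offset to name a page-aligned file position, e.g. 0xf
	 * with 64K pages.
	 */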
diff --git a/arch/riscv/kernel/syscall_table.c b/arch/riscv/kernel/syscall_table.c
new file mode 100644
index 000000000000..8fa239b67cbc
--- /dev/null
+++ b/arch/riscv/kernel/syscall_table.c
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2009 Arnd Bergmann <arnd@arndb.de>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/syscalls.h>
+
+#include <asm/syscalls.h>
+
+#undef __SYSCALL
+#define __SYSCALL(nr, call)	[nr] = (call),
+
+void *sys_call_table[__NR_syscalls] = {
+	[0 ... __NR_syscalls - 1] = sys_ni_syscall,
+#include <asm/unistd.h>
+};
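
The table above leans on GNU C designated initializers: the range
initializer fills every slot with sys_ni_syscall, and each __SYSCALL(nr,
call) pulled in from <asm/unistd.h> then overrides its slot, because later
designated initializers win.  After preprocessing it is roughly (the two
entries shown are examples only):

	void *sys_call_table[__NR_syscalls] = {
		[0 ... __NR_syscalls - 1] = sys_ni_syscall,
		[__NR_io_setup]   = sys_io_setup,
		[__NR_io_destroy] = sys_io_destroy,
		/* ... one entry per __SYSCALL() line ... */
	};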
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
new file mode 100644
index 000000000000..3a4698199ecf
--- /dev/null
+++ b/arch/riscv/kernel/traps.c
@@ -0,0 +1,183 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
+#include <linux/sched/signal.h>
+#include <linux/signal.h>
+#include <linux/kdebug.h>
+#include <linux/uaccess.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/irq.h>
+
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/csr.h>
+
+int show_unhandled_signals = 1;
+
+extern asmlinkage void handle_exception(void);
+
+static DEFINE_SPINLOCK(die_lock);
+
+void die(struct pt_regs *regs, const char *str)
+{
+	static int die_counter;
+	int ret;
+
+	oops_enter();
+
+	spin_lock_irq(&die_lock);
+	console_verbose();
+	bust_spinlocks(1);
+
+	pr_emerg("%s [#%d]\n", str, ++die_counter);
+	print_modules();
+	show_regs(regs);
+
+	ret = notify_die(DIE_OOPS, str, regs, 0, regs->scause, SIGSEGV);
+
+	bust_spinlocks(0);
+	add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
+	spin_unlock_irq(&die_lock);
+	oops_exit();
+
+	if (in_interrupt())
+		panic("Fatal exception in interrupt");
+	if (panic_on_oops)
+		panic("Fatal exception");
+	if (ret != NOTIFY_STOP)
+		do_exit(SIGSEGV);
+}
+
+static inline void do_trap_siginfo(int signo, int code,
+	unsigned long addr, struct task_struct *tsk)
+{
+	siginfo_t info;
+
+	info.si_signo = signo;
+	info.si_errno = 0;
+	info.si_code = code;
+	info.si_addr = (void __user *)addr;
+	force_sig_info(signo, &info, tsk);
+}
+
+void do_trap(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, struct task_struct *tsk)
+{
+	if (show_unhandled_signals && unhandled_signal(tsk, signo)
+	    && printk_ratelimit()) {
+		pr_info("%s[%d]: unhandled signal %d code 0x%x at 0x" REG_FMT,
+			tsk->comm, task_pid_nr(tsk), signo, code, addr);
+		print_vma_addr(KERN_CONT " in ", GET_IP(regs));
+		pr_cont("\n");
+		show_regs(regs);
+	}
+
+	do_trap_siginfo(signo, code, addr, tsk);
+}
+
+static void do_trap_error(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, const char *str)
+{
+	if (user_mode(regs)) {
+		do_trap(regs, signo, code, addr, current);
+	} else {
+		if (!fixup_exception(regs))
+			die(regs, str);
+	}
+}
+
+#define DO_ERROR_INFO(name, signo, code, str)				\
+asmlinkage void name(struct pt_regs *regs)				\
+{									\
+	do_trap_error(regs, signo, code, regs->sepc, "Oops - " str);	\
+}
+
+DO_ERROR_INFO(do_trap_unknown,
+	SIGILL, ILL_ILLTRP, "unknown exception");
+DO_ERROR_INFO(do_trap_insn_misaligned,
+	SIGBUS, BUS_ADRALN, "instruction address misaligned");
+DO_ERROR_INFO(do_trap_insn_fault,
+	SIGBUS, BUS_ADRALN, "instruction access fault");
+DO_ERROR_INFO(do_trap_insn_illegal,
+	SIGILL, ILL_ILLOPC, "illegal instruction");
+DO_ERROR_INFO(do_trap_load_misaligned,
+	SIGBUS, BUS_ADRALN, "load address misaligned");
+DO_ERROR_INFO(do_trap_load_fault,
+	SIGSEGV, SEGV_ACCERR, "load access fault");
+DO_ERROR_INFO(do_trap_store_misaligned,
+	SIGBUS, BUS_ADRALN, "store (or AMO) address misaligned");
+DO_ERROR_INFO(do_trap_store_fault,
+	SIGSEGV, SEGV_ACCERR, "store (or AMO) access fault");
+DO_ERROR_INFO(do_trap_ecall_u,
+	SIGILL, ILL_ILLTRP, "environment call from U-mode");
+DO_ERROR_INFO(do_trap_ecall_s,
+	SIGILL, ILL_ILLTRP, "environment call from S-mode");
+DO_ERROR_INFO(do_trap_ecall_m,
+	SIGILL, ILL_ILLTRP, "environment call from M-mode");
+
+asmlinkage void do_trap_break(struct pt_regs *regs)
+{
+#ifdef CONFIG_GENERIC_BUG
+	if (!user_mode(regs)) {
+		enum bug_trap_type type;
+
+		type = report_bug(regs->sepc, regs);
+		switch (type) {
+		case BUG_TRAP_TYPE_NONE:
+			break;
+		case BUG_TRAP_TYPE_WARN:
+			regs->sepc += sizeof(bug_insn_t);
+			return;
+		case BUG_TRAP_TYPE_BUG:
+			die(regs, "Kernel BUG");
+		}
+	}
+#endif /* CONFIG_GENERIC_BUG */
+
+	do_trap_siginfo(SIGTRAP, TRAP_BRKPT, regs->sepc, current);
+	regs->sepc += 0x4;
+}
+
+#ifdef CONFIG_GENERIC_BUG
+int is_valid_bugaddr(unsigned long pc)
+{
+	bug_insn_t insn;
+
+	if (pc < PAGE_OFFSET)
+		return 0;
+	if (probe_kernel_address((bug_insn_t __user *)pc, insn))
+		return 0;
+	return (insn == __BUG_INSN);
+}
+#endif /* CONFIG_GENERIC_BUG */
+
+void __init trap_init(void)
+{
+	int hart = smp_processor_id();
+
+	/* Set the sscratch register to 0, indicating to the exception vector
+	 * that we are presently executing in the kernel
+	 */
+	csr_write(sscratch, 0);
+	/* Set the exception vector address */
+	csr_write(stvec, &handle_exception);
+	/* Enable software interrupts and setup initial mask */
+	csr_write(sie,
+		  SIE_SSIE | atomic_long_read(&per_cpu(riscv_early_sie, hart))
+		);
+}
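
Each DO_ERROR_INFO() use above expands to a thin asmlinkage wrapper; the
illegal-instruction entry, for example, becomes roughly:

	asmlinkage void do_trap_insn_illegal(struct pt_regs *regs)
	{
		do_trap_error(regs, SIGILL, ILL_ILLOPC, regs->sepc,
			      "Oops - illegal instruction");
	}

so user-mode faults turn into signals via do_trap(), while kernel-mode
faults either hit an exception-table fixup or die().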
diff --git a/arch/riscv/kernel/vdso.c b/arch/riscv/kernel/vdso.c
new file mode 100644
index 000000000000..e8a178df8144
--- /dev/null
+++ b/arch/riscv/kernel/vdso.c
@@ -0,0 +1,125 @@
+/*
+ * Copyright (C) 2004 Benjamin Herrenschmidt, IBM Corp.
+ *                    <benh@kernel.crashing.org>
+ * Copyright (C) 2012 ARM Limited
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/binfmts.h>
+#include <linux/err.h>
+
+#include <asm/vdso.h>
+
+extern char vdso_start[], vdso_end[];
+
+static unsigned int vdso_pages;
+static struct page **vdso_pagelist;
+
+/*
+ * The vDSO data page.
+ */
+static union {
+	struct vdso_data	data;
+	u8			page[PAGE_SIZE];
+} vdso_data_store __page_aligned_data;
+struct vdso_data *vdso_data = &vdso_data_store.data;
+
+static int __init vdso_init(void)
+{
+	unsigned int i;
+
+	vdso_pages = (vdso_end - vdso_start) >> PAGE_SHIFT;
+	vdso_pagelist =
+		kcalloc(vdso_pages + 1, sizeof(struct page *), GFP_KERNEL);
+	if (unlikely(vdso_pagelist == NULL)) {
+		pr_err("vdso: pagelist allocation failed\n");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < vdso_pages; i++) {
+		struct page *pg;
+
+		pg = virt_to_page(vdso_start + (i << PAGE_SHIFT));
+		ClearPageReserved(pg);
+		vdso_pagelist[i] = pg;
+	}
+	vdso_pagelist[i] = virt_to_page(vdso_data);
+
+	return 0;
+}
+arch_initcall(vdso_init);
+
+int arch_setup_additional_pages(struct linux_binprm *bprm,
+	int uses_interp)
+{
+	struct mm_struct *mm = current->mm;
+	unsigned long vdso_base, vdso_len;
+	int ret;
+
+	vdso_len = (vdso_pages + 1) << PAGE_SHIFT;
+
+	down_write(&mm->mmap_sem);
+	vdso_base = get_unmapped_area(NULL, 0, vdso_len, 0, 0);
+	if (unlikely(IS_ERR_VALUE(vdso_base))) {
+		ret = vdso_base;
+		goto end;
+	}
+
+	/*
+	 * Put vDSO base into mm struct. We need to do this before calling
+	 * install_special_mapping or the perf counter mmap tracking code
+	 * will fail to recognise it as a vDSO (since arch_vma_name fails).
+	 */
+	mm->context.vdso = (void *)vdso_base;
+
+	ret = install_special_mapping(mm, vdso_base, vdso_len,
+		(VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC),
+		vdso_pagelist);
+
+	if (unlikely(ret))
+		mm->context.vdso = NULL;
+
+end:
+	up_write(&mm->mmap_sem);
+	return ret;
+}
+
+const char *arch_vma_name(struct vm_area_struct *vma)
+{
+	if (vma->vm_mm && (vma->vm_start == (long)vma->vm_mm->context.vdso))
+		return "[vdso]";
+	return NULL;
+}
+
+/*
+ * Function stubs to prevent linker errors when AT_SYSINFO_EHDR is defined
+ */
+
+int in_gate_area_no_mm(unsigned long addr)
+{
+	return 0;
+}
+
+int in_gate_area(struct mm_struct *mm, unsigned long addr)
+{
+	return 0;
+}
+
+struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
+{
+	return NULL;
+}
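
The mapping installed by arch_setup_additional_pages() is the vdso.so
image followed by the single shared vdso_data page, so with vdso_pages = N
userspace sees roughly:

	/*
	 * [vdso_base, vdso_base + N*PAGE_SIZE)                     vdso.so pages
	 * [vdso_base + N*PAGE_SIZE, vdso_base + (N+1)*PAGE_SIZE)   vdso_data page
	 *
	 * arch_vma_name() matches vm_start against mm->context.vdso, which is
	 * why the base must be recorded before install_special_mapping() runs.
	 */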
diff --git a/arch/riscv/kernel/vdso/.gitignore b/arch/riscv/kernel/vdso/.gitignore
new file mode 100644
index 000000000000..f8b69d84238e
--- /dev/null
+++ b/arch/riscv/kernel/vdso/.gitignore
@@ -0,0 +1 @@
+vdso.lds
diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
new file mode 100644
index 000000000000..04f3ec75b217
--- /dev/null
+++ b/arch/riscv/kernel/vdso/Makefile
@@ -0,0 +1,61 @@
+# Derived from arch/{arm64,tile}/kernel/vdso/Makefile
+
+obj-vdso := sigreturn.o
+
+# Build rules
+targets := $(obj-vdso) vdso.so vdso.so.dbg
+obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
+
+#ccflags-y := -shared -fno-common -fno-builtin
+#ccflags-y += -nostdlib -Wl,-soname=linux-vdso.so.1 \
+		$(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
+
+CFLAGS_vdso.so = $(c_flags)
+CFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \
+	$(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
+CFLAGS_vdso_syms.o = -r
+
+obj-y += vdso.o
+
+# We also create a special relocatable object that should mirror the symbol
+# table and layout of the linked DSO.  With ld -R we can then refer to
+# these symbols in the kernel code rather than hand-coded addresses.
+extra-y += vdso.lds vdso-syms.o
+$(obj)/built-in.o: $(obj)/vdso-syms.o
+$(obj)/built-in.o: ld_flags += -R $(obj)/vdso-syms.o
+
+CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
+
+# Force dependency
+$(obj)/vdso.o : $(obj)/vdso.so
+
+# Link rule for the *.so file; *.lds must be first
+$(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso)
+	$(call if_changed,vdsold)
+$(obj)/vdso-syms.o: $(src)/vdso.lds $(obj-vdso)
+	$(call if_changed,vdsold)
+
+# Strip rule for the *.so file
+$(obj)/%.so: OBJCOPYFLAGS := -S
+$(obj)/%.so: $(obj)/%.so.dbg FORCE
+	$(call if_changed,objcopy)
+
+# Assembly rules for the *.S files
+$(obj-vdso): %.o: %.S
+	$(call if_changed_dep,vdsoas)
+
+# Actual build commands
+quiet_cmd_vdsold = VDSOLD  $@
+      cmd_vdsold = $(CC) $(c_flags) -nostdlib $(CFLAGS_$(@F)) -Wl,-n -Wl,-T $^ -o $@
+quiet_cmd_vdsoas = VDSOAS  $@
+      cmd_vdsoas = $(CC) $(a_flags) -c -o $@ $<
+
+# Install commands for the unstripped file
+quiet_cmd_vdso_install = INSTALL $@
+      cmd_vdso_install = cp $(obj)/$@.dbg $(MODLIB)/vdso/$@
+
+vdso.so: $(obj)/vdso.so.dbg
+	@mkdir -p $(MODLIB)/vdso
+	$(call cmd,vdso_install)
+
+vdso_install: vdso.so
diff --git a/arch/riscv/kernel/vdso/sigreturn.S b/arch/riscv/kernel/vdso/sigreturn.S
new file mode 100644
index 000000000000..f5aa3d72acfb
--- /dev/null
+++ b/arch/riscv/kernel/vdso/sigreturn.S
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/linkage.h>
+#include <asm/unistd.h>
+
+	.text
+ENTRY(__vdso_rt_sigreturn)
+	.cfi_startproc
+	.cfi_signal_frame
+	li a7, __NR_rt_sigreturn
+	scall
+	.cfi_endproc
+ENDPROC(__vdso_rt_sigreturn)
diff --git a/arch/riscv/kernel/vdso/vdso.S b/arch/riscv/kernel/vdso/vdso.S
new file mode 100644
index 000000000000..7055de5f9174
--- /dev/null
+++ b/arch/riscv/kernel/vdso/vdso.S
@@ -0,0 +1,27 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <asm/page.h>
+
+	__PAGE_ALIGNED_DATA
+
+	.globl vdso_start, vdso_end
+	.balign PAGE_SIZE
+vdso_start:
+	.incbin "arch/riscv/kernel/vdso/vdso.so"
+	.balign PAGE_SIZE
+vdso_end:
+
+	.previous
diff --git a/arch/riscv/kernel/vdso/vdso.lds.S b/arch/riscv/kernel/vdso/vdso.lds.S
new file mode 100644
index 000000000000..24942cb25694
--- /dev/null
+++ b/arch/riscv/kernel/vdso/vdso.lds.S
@@ -0,0 +1,76 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+OUTPUT_ARCH(riscv)
+
+SECTIONS
+{
+	. = SIZEOF_HEADERS;
+
+	.hash		: { *(.hash) }			:text
+	.gnu.hash	: { *(.gnu.hash) }
+	.dynsym		: { *(.dynsym) }
+	.dynstr		: { *(.dynstr) }
+	.gnu.version	: { *(.gnu.version) }
+	.gnu.version_d	: { *(.gnu.version_d) }
+	.gnu.version_r	: { *(.gnu.version_r) }
+
+	.note		: { *(.note.*) }		:text	:note
+	.dynamic	: { *(.dynamic) }		:text	:dynamic
+
+	.eh_frame_hdr	: { *(.eh_frame_hdr) }		:text	:eh_frame_hdr
+	.eh_frame	: { KEEP (*(.eh_frame)) }	:text
+
+	.rodata		: { *(.rodata .rodata.* .gnu.linkonce.r.*) }
+
+	/*
+	 * This linker script is used both with -r and with -shared.
+	 * For the layouts to match, we need to skip more than enough
+	 * space for the dynamic symbol table, etc. If this amount is
+	 * insufficient, ld -shared will error; simply increase it here.
+	 */
+	. = 0x800;
+	.text		: { *(.text .text.*) }		:text
+
+	.data		: {
+		*(.got.plt) *(.got)
+		*(.data .data.* .gnu.linkonce.d.*)
+		*(.dynbss)
+		*(.bss .bss.* .gnu.linkonce.b.*)
+	}
+}
+
+/*
+ * We must supply the ELF program headers explicitly to get just one
+ * PT_LOAD segment, and set the flags explicitly to make segments read-only.
+ */
+PHDRS
+{
+	text		PT_LOAD		FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
+	dynamic		PT_DYNAMIC	FLAGS(4);		/* PF_R */
+	note		PT_NOTE		FLAGS(4);		/* PF_R */
+	eh_frame_hdr	PT_GNU_EH_FRAME;
+}
+
+/*
+ * This controls what symbols we export from the DSO.
+ */
+VERSION
+{
+	LINUX_2.6 {
+	global:
+		__vdso_rt_sigreturn;
+	local: *;
+	};
+}
+
diff --git a/arch/riscv/kernel/vmlinux.lds.S b/arch/riscv/kernel/vmlinux.lds.S
new file mode 100644
index 000000000000..ece84991609c
--- /dev/null
+++ b/arch/riscv/kernel/vmlinux.lds.S
@@ -0,0 +1,92 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#define LOAD_OFFSET PAGE_OFFSET
+#include <asm/vmlinux.lds.h>
+#include <asm/page.h>
+#include <asm/cache.h>
+#include <asm/thread_info.h>
+
+OUTPUT_ARCH(riscv)
+ENTRY(_start)
+
+jiffies = jiffies_64;
+
+SECTIONS
+{
+	/* Beginning of code and text segment */
+	. = LOAD_OFFSET;
+	_start = .;
+	__init_begin = .;
+	HEAD_TEXT_SECTION
+	INIT_TEXT_SECTION(PAGE_SIZE)
+	INIT_DATA_SECTION(16)
+	/* we have to discard exit text and such at runtime, not link time */
+	.exit.text :
+	{
+		EXIT_TEXT
+	}
+	.exit.data :
+	{
+		EXIT_DATA
+	}
+	PERCPU_SECTION(L1_CACHE_BYTES)
+	__init_end = .;
+
+	.text : {
+		_text = .;
+		_stext = .;
+		TEXT_TEXT
+		SCHED_TEXT
+		CPUIDLE_TEXT
+		LOCK_TEXT
+		KPROBES_TEXT
+		ENTRY_TEXT
+		IRQENTRY_TEXT
+		*(.fixup)
+		_etext = .;
+	}
+
+	/* Start of data section */
+	_sdata = .;
+	RO_DATA_SECTION(L1_CACHE_BYTES)
+	.srodata : {
+		*(.srodata*)
+	}
+
+	RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+	.sdata : {
+		__global_pointer$ = . + 0x800;
+		*(.sdata*)
+		/* End of data section */
+		_edata = .;
+		*(.sbss*)
+	}
+
+	BSS_SECTION(0, 0, 0)
+
+	EXCEPTION_TABLE(0x10)
+	NOTES
+
+	.rel.dyn : {
+		*(.rel.dyn*)
+	}
+
+	_end = .;
+
+	STABS_DEBUG
+	DWARF_DEBUG
+
+	DISCARDS
+}
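
The ". + 0x800" bias above exists because gp-relative accesses use RISC-V's
12-bit signed immediates, reaching gp +/- 2 KiB; placing __global_pointer$
0x800 bytes into .sdata therefore covers a 4 KiB window starting at the
section (a sketch of the arithmetic, not additional linker script):

	/*
	 * __global_pointer$ = .sdata + 0x800
	 * reachable range   = [gp - 2048, gp + 2047] = [.sdata, .sdata + 0xfff]
	 *
	 * so gp-relative loads/stores produced by linker relaxation can reach
	 * the whole small-data area with a single instruction.
	 */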
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 16/17] RISC-V: Add kernel subdirectory
@ 2017-06-06 23:00     ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 23:00 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Palmer Dabbelt

These files were mostly based on the score port, but many of them are
very ISA specific.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/kernel/.gitignore       |   1 +
 arch/riscv/kernel/Makefile         |  16 ++
 arch/riscv/kernel/asm-offsets.c    | 316 +++++++++++++++++++++++++++
 arch/riscv/kernel/cacheinfo.c      | 103 +++++++++
 arch/riscv/kernel/cpu.c            |  89 ++++++++
 arch/riscv/kernel/entry.S          | 437 +++++++++++++++++++++++++++++++++++++
 arch/riscv/kernel/head.S           | 147 +++++++++++++
 arch/riscv/kernel/irq.c            |  20 ++
 arch/riscv/kernel/module.c         | 215 ++++++++++++++++++
 arch/riscv/kernel/process.c        | 132 +++++++++++
 arch/riscv/kernel/ptrace.c         | 147 +++++++++++++
 arch/riscv/kernel/reset.c          |  36 +++
 arch/riscv/kernel/riscv_ksyms.c    |  16 ++
 arch/riscv/kernel/setup.c          | 240 ++++++++++++++++++++
 arch/riscv/kernel/signal.c         | 257 ++++++++++++++++++++++
 arch/riscv/kernel/smp.c            | 110 ++++++++++
 arch/riscv/kernel/smpboot.c        | 103 +++++++++
 arch/riscv/kernel/stacktrace.c     | 177 +++++++++++++++
 arch/riscv/kernel/sys_riscv.c      |  85 ++++++++
 arch/riscv/kernel/syscall_table.c  |  25 +++
 arch/riscv/kernel/traps.c          | 183 ++++++++++++++++
 arch/riscv/kernel/vdso.c           | 125 +++++++++++
 arch/riscv/kernel/vdso/.gitignore  |   1 +
 arch/riscv/kernel/vdso/Makefile    |  61 ++++++
 arch/riscv/kernel/vdso/sigreturn.S |  24 ++
 arch/riscv/kernel/vdso/vdso.S      |  27 +++
 arch/riscv/kernel/vdso/vdso.lds.S  |  76 +++++++
 arch/riscv/kernel/vmlinux.lds.S    |  92 ++++++++
 28 files changed, 3261 insertions(+)
 create mode 100644 arch/riscv/kernel/.gitignore
 create mode 100644 arch/riscv/kernel/Makefile
 create mode 100644 arch/riscv/kernel/asm-offsets.c
 create mode 100644 arch/riscv/kernel/cacheinfo.c
 create mode 100644 arch/riscv/kernel/cpu.c
 create mode 100644 arch/riscv/kernel/entry.S
 create mode 100644 arch/riscv/kernel/head.S
 create mode 100644 arch/riscv/kernel/irq.c
 create mode 100644 arch/riscv/kernel/module.c
 create mode 100644 arch/riscv/kernel/process.c
 create mode 100644 arch/riscv/kernel/ptrace.c
 create mode 100644 arch/riscv/kernel/reset.c
 create mode 100644 arch/riscv/kernel/riscv_ksyms.c
 create mode 100644 arch/riscv/kernel/setup.c
 create mode 100644 arch/riscv/kernel/signal.c
 create mode 100644 arch/riscv/kernel/smp.c
 create mode 100644 arch/riscv/kernel/smpboot.c
 create mode 100644 arch/riscv/kernel/stacktrace.c
 create mode 100644 arch/riscv/kernel/sys_riscv.c
 create mode 100644 arch/riscv/kernel/syscall_table.c
 create mode 100644 arch/riscv/kernel/traps.c
 create mode 100644 arch/riscv/kernel/vdso.c
 create mode 100644 arch/riscv/kernel/vdso/.gitignore
 create mode 100644 arch/riscv/kernel/vdso/Makefile
 create mode 100644 arch/riscv/kernel/vdso/sigreturn.S
 create mode 100644 arch/riscv/kernel/vdso/vdso.S
 create mode 100644 arch/riscv/kernel/vdso/vdso.lds.S
 create mode 100644 arch/riscv/kernel/vmlinux.lds.S

diff --git a/arch/riscv/kernel/.gitignore b/arch/riscv/kernel/.gitignore
new file mode 100644
index 000000000000..b51634f6a7cd
--- /dev/null
+++ b/arch/riscv/kernel/.gitignore
@@ -0,0 +1 @@
+/vmlinux.lds
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
new file mode 100644
index 000000000000..b6de129d4a23
--- /dev/null
+++ b/arch/riscv/kernel/Makefile
@@ -0,0 +1,16 @@
+#
+# Makefile for the RISC-V Linux kernel
+#
+
+extra-y := head.o vmlinux.lds
+
+obj-y	:= cpu.o entry.o irq.o process.o ptrace.o reset.o setup.o \
+	   signal.o syscall_table.o sys_riscv.o traps.o \
+	   riscv_ksyms.o stacktrace.o vdso.o cacheinfo.o vdso/
+
+CFLAGS_setup.o := -mcmodel=medany
+
+obj-$(CONFIG_SMP)		+= smpboot.o smp.o
+obj-$(CONFIG_MODULES)		+= module.o
+
+clean:
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
new file mode 100644
index 000000000000..2ead5037528c
--- /dev/null
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -0,0 +1,316 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/kbuild.h>
+#include <linux/sched.h>
+#include <asm/thread_info.h>
+#include <asm/ptrace.h>
+
+void asm_offsets(void)
+{
+	OFFSET(TASK_THREAD_RA, task_struct, thread.ra);
+	OFFSET(TASK_THREAD_SP, task_struct, thread.sp);
+	OFFSET(TASK_THREAD_S0, task_struct, thread.s[0]);
+	OFFSET(TASK_THREAD_S1, task_struct, thread.s[1]);
+	OFFSET(TASK_THREAD_S2, task_struct, thread.s[2]);
+	OFFSET(TASK_THREAD_S3, task_struct, thread.s[3]);
+	OFFSET(TASK_THREAD_S4, task_struct, thread.s[4]);
+	OFFSET(TASK_THREAD_S5, task_struct, thread.s[5]);
+	OFFSET(TASK_THREAD_S6, task_struct, thread.s[6]);
+	OFFSET(TASK_THREAD_S7, task_struct, thread.s[7]);
+	OFFSET(TASK_THREAD_S8, task_struct, thread.s[8]);
+	OFFSET(TASK_THREAD_S9, task_struct, thread.s[9]);
+	OFFSET(TASK_THREAD_S10, task_struct, thread.s[10]);
+	OFFSET(TASK_THREAD_S11, task_struct, thread.s[11]);
+	OFFSET(TASK_THREAD_SP, task_struct, thread.sp);
+	OFFSET(TASK_STACK, task_struct, stack);
+	OFFSET(TASK_TI, task_struct, thread_info);
+	OFFSET(TASK_TI_FLAGS, task_struct, thread_info.flags);
+	OFFSET(TASK_TI_KERNEL_SP, task_struct, thread_info.kernel_sp);
+	OFFSET(TASK_TI_USER_SP, task_struct, thread_info.user_sp);
+
+	OFFSET(TASK_THREAD_F0,  task_struct, thread.fstate.f[0]);
+	OFFSET(TASK_THREAD_F1,  task_struct, thread.fstate.f[1]);
+	OFFSET(TASK_THREAD_F2,  task_struct, thread.fstate.f[2]);
+	OFFSET(TASK_THREAD_F3,  task_struct, thread.fstate.f[3]);
+	OFFSET(TASK_THREAD_F4,  task_struct, thread.fstate.f[4]);
+	OFFSET(TASK_THREAD_F5,  task_struct, thread.fstate.f[5]);
+	OFFSET(TASK_THREAD_F6,  task_struct, thread.fstate.f[6]);
+	OFFSET(TASK_THREAD_F7,  task_struct, thread.fstate.f[7]);
+	OFFSET(TASK_THREAD_F8,  task_struct, thread.fstate.f[8]);
+	OFFSET(TASK_THREAD_F9,  task_struct, thread.fstate.f[9]);
+	OFFSET(TASK_THREAD_F10, task_struct, thread.fstate.f[10]);
+	OFFSET(TASK_THREAD_F11, task_struct, thread.fstate.f[11]);
+	OFFSET(TASK_THREAD_F12, task_struct, thread.fstate.f[12]);
+	OFFSET(TASK_THREAD_F13, task_struct, thread.fstate.f[13]);
+	OFFSET(TASK_THREAD_F14, task_struct, thread.fstate.f[14]);
+	OFFSET(TASK_THREAD_F15, task_struct, thread.fstate.f[15]);
+	OFFSET(TASK_THREAD_F16, task_struct, thread.fstate.f[16]);
+	OFFSET(TASK_THREAD_F17, task_struct, thread.fstate.f[17]);
+	OFFSET(TASK_THREAD_F18, task_struct, thread.fstate.f[18]);
+	OFFSET(TASK_THREAD_F19, task_struct, thread.fstate.f[19]);
+	OFFSET(TASK_THREAD_F20, task_struct, thread.fstate.f[20]);
+	OFFSET(TASK_THREAD_F21, task_struct, thread.fstate.f[21]);
+	OFFSET(TASK_THREAD_F22, task_struct, thread.fstate.f[22]);
+	OFFSET(TASK_THREAD_F23, task_struct, thread.fstate.f[23]);
+	OFFSET(TASK_THREAD_F24, task_struct, thread.fstate.f[24]);
+	OFFSET(TASK_THREAD_F25, task_struct, thread.fstate.f[25]);
+	OFFSET(TASK_THREAD_F26, task_struct, thread.fstate.f[26]);
+	OFFSET(TASK_THREAD_F27, task_struct, thread.fstate.f[27]);
+	OFFSET(TASK_THREAD_F28, task_struct, thread.fstate.f[28]);
+	OFFSET(TASK_THREAD_F29, task_struct, thread.fstate.f[29]);
+	OFFSET(TASK_THREAD_F30, task_struct, thread.fstate.f[30]);
+	OFFSET(TASK_THREAD_F31, task_struct, thread.fstate.f[31]);
+	OFFSET(TASK_THREAD_FCSR, task_struct, thread.fstate.fcsr);
+
+	DEFINE(PT_SIZE, sizeof(struct pt_regs));
+	OFFSET(PT_SEPC, pt_regs, sepc);
+	OFFSET(PT_RA, pt_regs, ra);
+	OFFSET(PT_FP, pt_regs, s0);
+	OFFSET(PT_S0, pt_regs, s0);
+	OFFSET(PT_S1, pt_regs, s1);
+	OFFSET(PT_S2, pt_regs, s2);
+	OFFSET(PT_S3, pt_regs, s3);
+	OFFSET(PT_S4, pt_regs, s4);
+	OFFSET(PT_S5, pt_regs, s5);
+	OFFSET(PT_S6, pt_regs, s6);
+	OFFSET(PT_S7, pt_regs, s7);
+	OFFSET(PT_S8, pt_regs, s8);
+	OFFSET(PT_S9, pt_regs, s9);
+	OFFSET(PT_S10, pt_regs, s10);
+	OFFSET(PT_S11, pt_regs, s11);
+	OFFSET(PT_SP, pt_regs, sp);
+	OFFSET(PT_TP, pt_regs, tp);
+	OFFSET(PT_A0, pt_regs, a0);
+	OFFSET(PT_A1, pt_regs, a1);
+	OFFSET(PT_A2, pt_regs, a2);
+	OFFSET(PT_A3, pt_regs, a3);
+	OFFSET(PT_A4, pt_regs, a4);
+	OFFSET(PT_A5, pt_regs, a5);
+	OFFSET(PT_A6, pt_regs, a6);
+	OFFSET(PT_A7, pt_regs, a7);
+	OFFSET(PT_T0, pt_regs, t0);
+	OFFSET(PT_T1, pt_regs, t1);
+	OFFSET(PT_T2, pt_regs, t2);
+	OFFSET(PT_T3, pt_regs, t3);
+	OFFSET(PT_T4, pt_regs, t4);
+	OFFSET(PT_T5, pt_regs, t5);
+	OFFSET(PT_T6, pt_regs, t6);
+	OFFSET(PT_GP, pt_regs, gp);
+	OFFSET(PT_SSTATUS, pt_regs, sstatus);
+	OFFSET(PT_SBADADDR, pt_regs, sbadaddr);
+	OFFSET(PT_SCAUSE, pt_regs, scause);
+
+	/* THREAD_{F,X}* might be larger than a S-type offset can handle, but
+	 * these are used in performance-sensitive assembly so we can't resort
+	 * to loading the long immediate every time.
+	 */
+	DEFINE(TASK_THREAD_RA_RA,
+		  offsetof(struct task_struct, thread.ra)
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_SP_RA,
+		  offsetof(struct task_struct, thread.sp)
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S0_RA,
+		  offsetof(struct task_struct, thread.s[0])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S1_RA,
+		  offsetof(struct task_struct, thread.s[1])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S2_RA,
+		  offsetof(struct task_struct, thread.s[2])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S3_RA,
+		  offsetof(struct task_struct, thread.s[3])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S4_RA,
+		  offsetof(struct task_struct, thread.s[4])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S5_RA,
+		  offsetof(struct task_struct, thread.s[5])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S6_RA,
+		  offsetof(struct task_struct, thread.s[6])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S7_RA,
+		  offsetof(struct task_struct, thread.s[7])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S8_RA,
+		  offsetof(struct task_struct, thread.s[8])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S9_RA,
+		  offsetof(struct task_struct, thread.s[9])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S10_RA,
+		  offsetof(struct task_struct, thread.s[10])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S11_RA,
+		  offsetof(struct task_struct, thread.s[11])
+		- offsetof(struct task_struct, thread.ra)
+	);
+
+	DEFINE(TASK_THREAD_F0_F0,
+		  offsetof(struct task_struct, thread.fstate.f[0])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F1_F0,
+		  offsetof(struct task_struct, thread.fstate.f[1])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F2_F0,
+		  offsetof(struct task_struct, thread.fstate.f[2])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F3_F0,
+		  offsetof(struct task_struct, thread.fstate.f[3])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F4_F0,
+		  offsetof(struct task_struct, thread.fstate.f[4])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F5_F0,
+		  offsetof(struct task_struct, thread.fstate.f[5])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F6_F0,
+		  offsetof(struct task_struct, thread.fstate.f[6])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F7_F0,
+		  offsetof(struct task_struct, thread.fstate.f[7])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F8_F0,
+		  offsetof(struct task_struct, thread.fstate.f[8])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F9_F0,
+		  offsetof(struct task_struct, thread.fstate.f[9])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F10_F0,
+		  offsetof(struct task_struct, thread.fstate.f[10])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F11_F0,
+		  offsetof(struct task_struct, thread.fstate.f[11])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F12_F0,
+		  offsetof(struct task_struct, thread.fstate.f[12])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F13_F0,
+		  offsetof(struct task_struct, thread.fstate.f[13])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F14_F0,
+		  offsetof(struct task_struct, thread.fstate.f[14])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F15_F0,
+		  offsetof(struct task_struct, thread.fstate.f[15])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F16_F0,
+		  offsetof(struct task_struct, thread.fstate.f[16])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F17_F0,
+		  offsetof(struct task_struct, thread.fstate.f[17])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F18_F0,
+		  offsetof(struct task_struct, thread.fstate.f[18])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F19_F0,
+		  offsetof(struct task_struct, thread.fstate.f[19])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F20_F0,
+		  offsetof(struct task_struct, thread.fstate.f[20])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F21_F0,
+		  offsetof(struct task_struct, thread.fstate.f[21])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F22_F0,
+		  offsetof(struct task_struct, thread.fstate.f[22])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F23_F0,
+		  offsetof(struct task_struct, thread.fstate.f[23])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F24_F0,
+		  offsetof(struct task_struct, thread.fstate.f[24])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F25_F0,
+		  offsetof(struct task_struct, thread.fstate.f[25])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F26_F0,
+		  offsetof(struct task_struct, thread.fstate.f[26])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F27_F0,
+		  offsetof(struct task_struct, thread.fstate.f[27])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F28_F0,
+		  offsetof(struct task_struct, thread.fstate.f[28])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F29_F0,
+		  offsetof(struct task_struct, thread.fstate.f[29])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F30_F0,
+		  offsetof(struct task_struct, thread.fstate.f[30])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F31_F0,
+		  offsetof(struct task_struct, thread.fstate.f[31])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_FCSR_F0,
+		  offsetof(struct task_struct, thread.fstate.fcsr)
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+
+	/* The assembler needs access to THREAD_SIZE as well. */
+	DEFINE(ASM_THREAD_SIZE, THREAD_SIZE);
+
+	/* We allocate a pt_regs on the stack when entering the kernel.  This
+	 * ensures the alignment is sane.
+	 */
+	DEFINE(PT_SIZE_ON_STACK, ALIGN(sizeof(struct pt_regs), STACK_ALIGN));
+}
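
Kbuild turns each OFFSET()/DEFINE() above into a constant in the generated
asm-offsets.h; the *_RA and *_F0 deltas exist so entry.S can materialise
one large base offset once and then use small 12-bit load/store immediates.
Roughly (the numeric values below are placeholders, the real ones depend on
the struct layout):

	/* generated asm-offsets.h (illustrative values) */
	#define TASK_THREAD_RA     1234	/* offsetof(task_struct, thread.ra) */
	#define TASK_THREAD_SP_RA  8	/* thread.sp - thread.ra */

	/*
	 * entry.S (__switch_to) then adds the big offset once:
	 *	li    a2, TASK_THREAD_RA
	 *	add   a0, a0, a2
	 * and every save/restore uses a small S/I-type immediate:
	 *	REG_S sp, TASK_THREAD_SP_RA(a0)
	 */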
diff --git a/arch/riscv/kernel/cacheinfo.c b/arch/riscv/kernel/cacheinfo.c
new file mode 100644
index 000000000000..76ed95850a22
--- /dev/null
+++ b/arch/riscv/kernel/cacheinfo.c
@@ -0,0 +1,103 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/cacheinfo.h>
+#include <linux/cpu.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+
+static void ci_leaf_init(struct cacheinfo *this_leaf,
+			 struct device_node *node,
+			 enum cache_type type, unsigned int level)
+{
+	this_leaf->of_node = node;
+	this_leaf->level = level;
+	this_leaf->type = type;
+	this_leaf->physical_line_partition = 1; // not a sector cache
+	this_leaf->attributes =
+		CACHE_WRITE_BACK
+		| CACHE_READ_ALLOCATE
+		| CACHE_WRITE_ALLOCATE; // TODO: add to DTS
+}
+
+static int __init_cache_level(unsigned int cpu)
+{
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+	struct device_node *np = of_cpu_device_node_get(cpu);
+	int levels = 0, leaves = 0, level;
+
+	if (of_property_read_bool(np, "cache-size"))
+		++leaves;
+	if (of_property_read_bool(np, "i-cache-size"))
+		++leaves;
+	if (of_property_read_bool(np, "d-cache-size"))
+		++leaves;
+	if (leaves > 0)
+		levels = 1;
+
+	while ((np = of_find_next_cache_node(np))) {
+		if (!of_device_is_compatible(np, "cache"))
+			break;
+		if (of_property_read_u32(np, "cache-level", &level))
+			break;
+		if (level <= levels)
+			break;
+		if (of_property_read_bool(np, "cache-size"))
+			++leaves;
+		if (of_property_read_bool(np, "i-cache-size"))
+			++leaves;
+		if (of_property_read_bool(np, "d-cache-size"))
+			++leaves;
+		levels = level;
+	}
+
+	this_cpu_ci->num_levels = levels;
+	this_cpu_ci->num_leaves = leaves;
+	return 0;
+}
+
+static int __populate_cache_leaves(unsigned int cpu)
+{
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+	struct cacheinfo *this_leaf = this_cpu_ci->info_list;
+	struct device_node *np = of_cpu_device_node_get(cpu);
+	int levels = 1, level = 1;
+
+	if (of_property_read_bool(np, "cache-size"))
+		ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
+	if (of_property_read_bool(np, "i-cache-size"))
+		ci_leaf_init(this_leaf++, np, CACHE_TYPE_INST, level);
+	if (of_property_read_bool(np, "d-cache-size"))
+		ci_leaf_init(this_leaf++, np, CACHE_TYPE_DATA, level);
+
+	while ((np = of_find_next_cache_node(np))) {
+		if (!of_device_is_compatible(np, "cache"))
+			break;
+		if (of_property_read_u32(np, "cache-level", &level))
+			break;
+		if (level <= levels)
+			break;
+		if (of_property_read_bool(np, "cache-size"))
+			ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
+		if (of_property_read_bool(np, "i-cache-size"))
+			ci_leaf_init(this_leaf++, np, CACHE_TYPE_INST, level);
+		if (of_property_read_bool(np, "d-cache-size"))
+			ci_leaf_init(this_leaf++, np, CACHE_TYPE_DATA, level);
+		levels = level;
+	}
+
+	return 0;
+}
+
+DEFINE_SMP_CALL_CACHE_FUNCTION(init_cache_level)
+DEFINE_SMP_CALL_CACHE_FUNCTION(populate_cache_leaves)
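
As a worked example of the counting above (device-tree shape assumed for
illustration): a hart whose cpu node has i-cache-size and d-cache-size,
with a next-level cache node that is compatible = "cache" and carries
cache-size plus cache-level = 2, yields:

	/*
	 * __init_cache_level():      num_levels = 2, num_leaves = 3
	 * __populate_cache_leaves(): leaf 0 = L1 CACHE_TYPE_INST
	 *                            leaf 1 = L1 CACHE_TYPE_DATA
	 *                            leaf 2 = L2 CACHE_TYPE_UNIFIED
	 */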
diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
new file mode 100644
index 000000000000..20004bd7a216
--- /dev/null
+++ b/arch/riscv/kernel/cpu.c
@@ -0,0 +1,89 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/seq_file.h>
+#include <linux/of.h>
+
+/* Return -1 if not a valid hart */
+int riscv_of_processor_hart(struct device_node *node)
+{
+	const char *isa, *status;
+	u32 hart;
+
+	if (!of_device_is_compatible(node, "riscv"))
+		return -(ENODEV);
+	if (of_property_read_u32(node, "reg", &hart)
+	    || hart >= NR_CPUS)
+		return -(ENODEV);
+	if (of_property_read_string(node, "status", &status)
+	    || strcmp(status, "okay"))
+		return -(ENODEV);
+	if (of_property_read_string(node, "riscv,isa", &isa)
+	    || isa[0] != 'r'
+	    || isa[1] != 'v')
+		return -(ENODEV);
+
+	return hart;
+}
+
+#ifdef CONFIG_PROC_FS
+
+static void *c_start(struct seq_file *m, loff_t *pos)
+{
+	*pos = cpumask_next(*pos - 1, cpu_online_mask);
+	if ((*pos) < nr_cpu_ids)
+		return (void *)(uintptr_t)(1 + *pos);
+	return NULL;
+}
+
+static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+{
+	(*pos)++;
+	return c_start(m, pos);
+}
+
+static void c_stop(struct seq_file *m, void *v)
+{
+}
+
+static int c_show(struct seq_file *m, void *v)
+{
+	unsigned long hart_id = (unsigned long)v - 1;
+	struct device_node *node = of_get_cpu_node(hart_id, NULL);
+	const char *compat, *isa, *mmu;
+
+	seq_printf(m, "hart\t: %lu\n", hart_id);
+	if (!of_property_read_string(node, "riscv,isa", &isa)
+	    && isa[0] == 'r'
+	    && isa[1] == 'v')
+		seq_printf(m, "isa\t: %s\n", isa);
+	if (!of_property_read_string(node, "mmu-type", &mmu)
+	    && !strncmp(mmu, "riscv,", 6))
+		seq_printf(m, "mmu\t: %s\n", mmu+6);
+	if (!of_property_read_string(node, "compatible", &compat)
+	    && strcmp(compat, "riscv"))
+		seq_printf(m, "uarch\t: %s\n", compat);
+	seq_puts(m, "\n");
+
+	return 0;
+}
+
+const struct seq_operations cpuinfo_op = {
+	.start	= c_start,
+	.next	= c_next,
+	.stop	= c_stop,
+	.show	= c_show
+};
+
+#endif /* CONFIG_PROC_FS */
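
For illustration, with a cpu node carrying riscv,isa = "rv64imafdc",
mmu-type = "riscv,sv39" and compatible = "sifive,rocket0" (all example
values, not taken from this series), c_show() prints something like:

	hart	: 0
	isa	: rv64imafdc
	mmu	: sv39
	uarch	: sifive,rocket0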
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
new file mode 100644
index 000000000000..0be9ca33e8fc
--- /dev/null
+++ b/arch/riscv/kernel/entry.S
@@ -0,0 +1,437 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+
+#include <asm/asm.h>
+#include <asm/csr.h>
+#include <asm/unistd.h>
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+
+	.text
+	.altmacro
+
+/* Prepares to enter a system call or exception by saving all registers to the
+ * stack.
+ */
+	.macro SAVE_ALL
+	LOCAL _restore_kernel_tpsp
+	LOCAL _save_context
+
+	/* If coming from userspace, preserve the user thread pointer and load
+	   the kernel thread pointer.  If we came from the kernel, sscratch
+	   will contain 0, and we should continue on the current TP. */
+	csrrw tp, sscratch, tp
+	bnez tp, _save_context
+
+_restore_kernel_tpsp:
+	csrr tp, sscratch
+	REG_S sp, TASK_TI_KERNEL_SP(tp)
+_save_context:
+	REG_S sp, TASK_TI_USER_SP(tp)
+	REG_L sp, TASK_TI_KERNEL_SP(tp)
+	addi sp, sp, -(PT_SIZE_ON_STACK)
+	REG_S x1,  PT_RA(sp)
+	REG_S x3,  PT_GP(sp)
+	REG_S x5,  PT_T0(sp)
+	REG_S x6,  PT_T1(sp)
+	REG_S x7,  PT_T2(sp)
+	REG_S x8,  PT_S0(sp)
+	REG_S x9,  PT_S1(sp)
+	REG_S x10, PT_A0(sp)
+	REG_S x11, PT_A1(sp)
+	REG_S x12, PT_A2(sp)
+	REG_S x13, PT_A3(sp)
+	REG_S x14, PT_A4(sp)
+	REG_S x15, PT_A5(sp)
+	REG_S x16, PT_A6(sp)
+	REG_S x17, PT_A7(sp)
+	REG_S x18, PT_S2(sp)
+	REG_S x19, PT_S3(sp)
+	REG_S x20, PT_S4(sp)
+	REG_S x21, PT_S5(sp)
+	REG_S x22, PT_S6(sp)
+	REG_S x23, PT_S7(sp)
+	REG_S x24, PT_S8(sp)
+	REG_S x25, PT_S9(sp)
+	REG_S x26, PT_S10(sp)
+	REG_S x27, PT_S11(sp)
+	REG_S x28, PT_T3(sp)
+	REG_S x29, PT_T4(sp)
+	REG_S x30, PT_T5(sp)
+	REG_S x31, PT_T6(sp)
+
+	/* Disable FPU to detect illegal usage of
+	   floating point in kernel space */
+	li t0, SR_FS
+
+	REG_L s0, TASK_TI_USER_SP(tp)
+	csrrc s1, sstatus, t0
+	csrr s2, sepc
+	csrr s3, sbadaddr
+	csrr s4, scause
+	csrr s5, sscratch
+	REG_S s0, PT_SP(sp)
+	REG_S s1, PT_SSTATUS(sp)
+	REG_S s2, PT_SEPC(sp)
+	REG_S s3, PT_SBADADDR(sp)
+	REG_S s4, PT_SCAUSE(sp)
+	REG_S s5, PT_TP(sp)
+	.endm
+
+/* Prepares to return from a system call or exception by restoring all
+ * registers from the stack.
+ */
+	.macro RESTORE_ALL
+	REG_L a0, PT_SSTATUS(sp)
+	REG_L a2, PT_SEPC(sp)
+	csrw sstatus, a0
+	csrw sepc, a2
+
+	REG_L x1,  PT_RA(sp)
+	REG_L x3,  PT_GP(sp)
+	REG_L x4,  PT_TP(sp)
+	REG_L x5,  PT_T0(sp)
+	REG_L x6,  PT_T1(sp)
+	REG_L x7,  PT_T2(sp)
+	REG_L x8,  PT_S0(sp)
+	REG_L x9,  PT_S1(sp)
+	REG_L x10, PT_A0(sp)
+	REG_L x11, PT_A1(sp)
+	REG_L x12, PT_A2(sp)
+	REG_L x13, PT_A3(sp)
+	REG_L x14, PT_A4(sp)
+	REG_L x15, PT_A5(sp)
+	REG_L x16, PT_A6(sp)
+	REG_L x17, PT_A7(sp)
+	REG_L x18, PT_S2(sp)
+	REG_L x19, PT_S3(sp)
+	REG_L x20, PT_S4(sp)
+	REG_L x21, PT_S5(sp)
+	REG_L x22, PT_S6(sp)
+	REG_L x23, PT_S7(sp)
+	REG_L x24, PT_S8(sp)
+	REG_L x25, PT_S9(sp)
+	REG_L x26, PT_S10(sp)
+	REG_L x27, PT_S11(sp)
+	REG_L x28, PT_T3(sp)
+	REG_L x29, PT_T4(sp)
+	REG_L x30, PT_T5(sp)
+	REG_L x31, PT_T6(sp)
+
+	REG_L x2,  PT_SP(sp)
+	.endm
+
+ENTRY(handle_exception)
+	SAVE_ALL
+
+	/* Set sscratch register to 0, so that if a recursive exception
+	   occurs, the exception vector knows it came from the kernel */
+	csrw sscratch, x0
+
+	la gp, __global_pointer$
+
+	la ra, ret_from_exception
+	/* MSB of cause differentiates between
+	   interrupts and exceptions */
+	bge s4, zero, 1f
+
+	/* Handle interrupts */
+	slli a0, s4, 1
+	srli a0, a0, 1
+	move a1, sp /* pt_regs */
+	tail do_IRQ
+1:
+	/* Handle syscalls */
+	li t0, EXC_SYSCALL
+	beq s4, t0, handle_syscall
+
+	/* Handle other exceptions */
+	slli t0, s4, RISCV_LGPTR
+	la t1, excp_vect_table
+	la t2, excp_vect_table_end
+	move a0, sp /* pt_regs */
+	add t0, t1, t0
+	/* Check if exception code lies within bounds */
+	bgeu t0, t2, 1f
+	REG_L t0, 0(t0)
+	jr t0
+1:
+	tail do_trap_unknown
+
+handle_syscall:
+	/* Advance SEPC to avoid executing the original
+	   scall instruction on sret */
+	addi s2, s2, 0x4
+	REG_S s2, PT_SEPC(sp)
+	/* System calls run with interrupts enabled */
+	csrs sstatus, SR_IE
+	/* Trace syscalls, but only if requested by the user. */
+	REG_L t0, TASK_TI_FLAGS(tp)
+	andi t0, t0, _TIF_SYSCALL_TRACE
+	bnez t0, handle_syscall_trace_enter
+check_syscall_nr:
+	/* Check to make sure we don't jump to a bogus syscall number. */
+	li t0, __NR_syscalls
+	la s0, sys_ni_syscall
+	/* Syscall number held in a7 */
+	bgeu a7, t0, 1f
+	la s0, sys_call_table
+	slli t0, a7, RISCV_LGPTR
+	add s0, s0, t0
+	REG_L s0, 0(s0)
+1:
+	jalr s0
+
+ret_from_syscall:
+	/* Set user a0 to kernel a0 */
+	REG_S a0, PT_A0(sp)
+	/* Trace syscalls, but only if requested by the user. */
+	REG_L t0, TASK_TI_FLAGS(tp)
+	andi t0, t0, _TIF_SYSCALL_TRACE
+	bnez t0, handle_syscall_trace_exit
+
+ret_from_exception:
+	REG_L s0, PT_SSTATUS(sp)
+	csrc sstatus, SR_IE
+	andi s0, s0, SR_PS
+	bnez s0, restore_all
+
+resume_userspace:
+	/* Interrupts must be disabled here so flags are checked atomically */
+	REG_L s0, TASK_TI_FLAGS(tp) /* current_thread_info->flags */
+	andi s1, s0, _TIF_WORK_MASK
+	bnez s1, work_pending
+
+	/* Save unwound kernel stack pointer in thread_info */
+	addi s0, sp, PT_SIZE_ON_STACK
+	REG_S s0, TASK_TI_KERNEL_SP(tp)
+
+	/* Save TP into sscratch, so we can find the kernel data structures
+	 * again. */
+	csrw sscratch, tp
+
+restore_all:
+	RESTORE_ALL
+	sret
+
+work_pending:
+	/* Enter slow path for supplementary processing */
+	la ra, ret_from_exception
+	andi s1, s0, _TIF_NEED_RESCHED
+	bnez s1, work_resched
+work_notifysig:
+	/* Handle pending signals and notify-resume requests */
+	csrs sstatus, SR_IE /* Enable interrupts for do_notify_resume() */
+	move a0, sp /* pt_regs */
+	move a1, s0 /* current_thread_info->flags */
+	tail do_notify_resume
+work_resched:
+	tail schedule
+
+/* Slow paths for ptrace. */
+handle_syscall_trace_enter:
+	move a0, sp
+	call do_syscall_trace_enter
+	REG_L a0, PT_A0(sp)
+	REG_L a1, PT_A1(sp)
+	REG_L a2, PT_A2(sp)
+	REG_L a3, PT_A3(sp)
+	REG_L a4, PT_A4(sp)
+	REG_L a5, PT_A5(sp)
+	REG_L a6, PT_A6(sp)
+	REG_L a7, PT_A7(sp)
+	j check_syscall_nr
+handle_syscall_trace_exit:
+	move a0, sp
+	call do_syscall_trace_exit
+	j ret_from_exception
+
+END(handle_exception)
+
+ENTRY(ret_from_fork)
+	la ra, ret_from_exception
+	tail schedule_tail
+ENDPROC(ret_from_fork)
+
+ENTRY(ret_from_kernel_thread)
+	call schedule_tail
+	/* Call fn(arg) */
+	la ra, ret_from_exception
+	move a0, s1
+	jr s0
+ENDPROC(ret_from_kernel_thread)
+
+
+/*
+ * Integer register context switch
+ * The callee-saved registers must be saved and restored.
+ *
+ *   a0: previous task_struct (must be preserved across the switch)
+ *   a1: next task_struct
+ */
+ENTRY(__switch_to)
+	/* Save context into prev->thread */
+	li    a2,  TASK_THREAD_RA
+	add   a0, a0, a2
+	add   a2, a1, a2
+	REG_S ra,  TASK_THREAD_RA_RA(a0)
+	REG_S sp,  TASK_THREAD_SP_RA(a0)
+	REG_S s0,  TASK_THREAD_S0_RA(a0)
+	REG_S s1,  TASK_THREAD_S1_RA(a0)
+	REG_S s2,  TASK_THREAD_S2_RA(a0)
+	REG_S s3,  TASK_THREAD_S3_RA(a0)
+	REG_S s4,  TASK_THREAD_S4_RA(a0)
+	REG_S s5,  TASK_THREAD_S5_RA(a0)
+	REG_S s6,  TASK_THREAD_S6_RA(a0)
+	REG_S s7,  TASK_THREAD_S7_RA(a0)
+	REG_S s8,  TASK_THREAD_S8_RA(a0)
+	REG_S s9,  TASK_THREAD_S9_RA(a0)
+	REG_S s10, TASK_THREAD_S10_RA(a0)
+	REG_S s11, TASK_THREAD_S11_RA(a0)
+	/* Restore context from next->thread */
+	REG_L ra,  TASK_THREAD_RA_RA(a2)
+	REG_L sp,  TASK_THREAD_SP_RA(a2)
+	REG_L s0,  TASK_THREAD_S0_RA(a2)
+	REG_L s1,  TASK_THREAD_S1_RA(a2)
+	REG_L s2,  TASK_THREAD_S2_RA(a2)
+	REG_L s3,  TASK_THREAD_S3_RA(a2)
+	REG_L s4,  TASK_THREAD_S4_RA(a2)
+	REG_L s5,  TASK_THREAD_S5_RA(a2)
+	REG_L s6,  TASK_THREAD_S6_RA(a2)
+	REG_L s7,  TASK_THREAD_S7_RA(a2)
+	REG_L s8,  TASK_THREAD_S8_RA(a2)
+	REG_L s9,  TASK_THREAD_S9_RA(a2)
+	REG_L s10, TASK_THREAD_S10_RA(a2)
+	REG_L s11, TASK_THREAD_S11_RA(a2)
+#if TASK_TI != 0
+#error "TASK_TI != 0: tp will contain a 'struct thread_info', not a 'struct task_struct' so get_current() won't work."
+	addi tp, a1, TASK_TI
+#else
+	move tp, a1
+#endif
+	ret
+ENDPROC(__switch_to)
+
+ENTRY(__fstate_save)
+	li  a2,  TASK_THREAD_F0
+	add a0, a0, a2
+	li t1, SR_FS
+	csrs sstatus, t1
+	frcsr t0
+	fsd f0,  TASK_THREAD_F0_F0(a0)
+	fsd f1,  TASK_THREAD_F1_F0(a0)
+	fsd f2,  TASK_THREAD_F2_F0(a0)
+	fsd f3,  TASK_THREAD_F3_F0(a0)
+	fsd f4,  TASK_THREAD_F4_F0(a0)
+	fsd f5,  TASK_THREAD_F5_F0(a0)
+	fsd f6,  TASK_THREAD_F6_F0(a0)
+	fsd f7,  TASK_THREAD_F7_F0(a0)
+	fsd f8,  TASK_THREAD_F8_F0(a0)
+	fsd f9,  TASK_THREAD_F9_F0(a0)
+	fsd f10, TASK_THREAD_F10_F0(a0)
+	fsd f11, TASK_THREAD_F11_F0(a0)
+	fsd f12, TASK_THREAD_F12_F0(a0)
+	fsd f13, TASK_THREAD_F13_F0(a0)
+	fsd f14, TASK_THREAD_F14_F0(a0)
+	fsd f15, TASK_THREAD_F15_F0(a0)
+	fsd f16, TASK_THREAD_F16_F0(a0)
+	fsd f17, TASK_THREAD_F17_F0(a0)
+	fsd f18, TASK_THREAD_F18_F0(a0)
+	fsd f19, TASK_THREAD_F19_F0(a0)
+	fsd f20, TASK_THREAD_F20_F0(a0)
+	fsd f21, TASK_THREAD_F21_F0(a0)
+	fsd f22, TASK_THREAD_F22_F0(a0)
+	fsd f23, TASK_THREAD_F23_F0(a0)
+	fsd f24, TASK_THREAD_F24_F0(a0)
+	fsd f25, TASK_THREAD_F25_F0(a0)
+	fsd f26, TASK_THREAD_F26_F0(a0)
+	fsd f27, TASK_THREAD_F27_F0(a0)
+	fsd f28, TASK_THREAD_F28_F0(a0)
+	fsd f29, TASK_THREAD_F29_F0(a0)
+	fsd f30, TASK_THREAD_F30_F0(a0)
+	fsd f31, TASK_THREAD_F31_F0(a0)
+	sw t0, TASK_THREAD_FCSR_F0(a0)
+	csrc sstatus, t1
+	ret
+ENDPROC(__fstate_save)
+
+ENTRY(__fstate_restore)
+	li  a2,  TASK_THREAD_F0
+	add a0, a0, a2
+	li t1, SR_FS
+	lw t0, TASK_THREAD_FCSR_F0(a0)
+	csrs sstatus, t1
+	fld f0,  TASK_THREAD_F0_F0(a0)
+	fld f1,  TASK_THREAD_F1_F0(a0)
+	fld f2,  TASK_THREAD_F2_F0(a0)
+	fld f3,  TASK_THREAD_F3_F0(a0)
+	fld f4,  TASK_THREAD_F4_F0(a0)
+	fld f5,  TASK_THREAD_F5_F0(a0)
+	fld f6,  TASK_THREAD_F6_F0(a0)
+	fld f7,  TASK_THREAD_F7_F0(a0)
+	fld f8,  TASK_THREAD_F8_F0(a0)
+	fld f9,  TASK_THREAD_F9_F0(a0)
+	fld f10, TASK_THREAD_F10_F0(a0)
+	fld f11, TASK_THREAD_F11_F0(a0)
+	fld f12, TASK_THREAD_F12_F0(a0)
+	fld f13, TASK_THREAD_F13_F0(a0)
+	fld f14, TASK_THREAD_F14_F0(a0)
+	fld f15, TASK_THREAD_F15_F0(a0)
+	fld f16, TASK_THREAD_F16_F0(a0)
+	fld f17, TASK_THREAD_F17_F0(a0)
+	fld f18, TASK_THREAD_F18_F0(a0)
+	fld f19, TASK_THREAD_F19_F0(a0)
+	fld f20, TASK_THREAD_F20_F0(a0)
+	fld f21, TASK_THREAD_F21_F0(a0)
+	fld f22, TASK_THREAD_F22_F0(a0)
+	fld f23, TASK_THREAD_F23_F0(a0)
+	fld f24, TASK_THREAD_F24_F0(a0)
+	fld f25, TASK_THREAD_F25_F0(a0)
+	fld f26, TASK_THREAD_F26_F0(a0)
+	fld f27, TASK_THREAD_F27_F0(a0)
+	fld f28, TASK_THREAD_F28_F0(a0)
+	fld f29, TASK_THREAD_F29_F0(a0)
+	fld f30, TASK_THREAD_F30_F0(a0)
+	fld f31, TASK_THREAD_F31_F0(a0)
+	fscsr t0
+	csrc sstatus, t1
+	ret
+ENDPROC(__fstate_restore)
+
+
+	.section ".rodata"
+	/* Exception vector table */
+ENTRY(excp_vect_table)
+	RISCV_PTR do_trap_insn_misaligned
+	RISCV_PTR do_trap_insn_fault
+	RISCV_PTR do_trap_insn_illegal
+	RISCV_PTR do_trap_break
+	RISCV_PTR do_trap_load_misaligned
+	RISCV_PTR do_trap_load_fault
+	RISCV_PTR do_trap_store_misaligned
+	RISCV_PTR do_trap_store_fault
+	RISCV_PTR do_trap_ecall_u /* system call, gets intercepted */
+	RISCV_PTR do_trap_ecall_s
+	RISCV_PTR do_trap_unknown
+	RISCV_PTR do_trap_ecall_m
+	RISCV_PTR do_page_fault   /* instruction page fault */
+	RISCV_PTR do_page_fault   /* load page fault */
+	RISCV_PTR do_trap_unknown
+	RISCV_PTR do_page_fault   /* store page fault */
+excp_vect_table_end:
+END(excp_vect_table)
+
diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
new file mode 100644
index 000000000000..608e57d4531f
--- /dev/null
+++ b/arch/riscv/kernel/head.S
@@ -0,0 +1,147 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+#include <asm/asm.h>
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <asm/thread_info.h>
+#include <asm/page.h>
+#include <asm/csr.h>
+
+__INIT
+ENTRY(_start)
+	/* Mask all interrupts */
+	csrw sie, zero
+
+	/* Disable FPU to detect illegal usage of
+	   floating point in kernel space */
+	li t0, SR_FS
+	csrc sstatus, t0
+
+#ifndef CONFIG_RV_PUM
+	/* Allow access to user memory */
+	li t0, SR_SUM
+	csrs sstatus, t0
+#endif
+
+#ifdef CONFIG_ISA_A
+	/* Pick one hart to run the main boot sequence */
+	la a3, hart_lottery
+	li a2, 1
+	amoadd.w a3, a2, (a3)
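+	/* The first hart to get here reads zero from hart_lottery and wins;
+	   every other hart reads a non-zero value and takes the secondary path */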
+	bnez a3, .Lsecondary_start
+#else
+	/* We don't have atomic support, so the boot hart must be picked
+	 * statically.  Hart 0 is the only sane choice.
+	 */
+	bnez a0, .Lsecondary_park
+#endif
+
+	/* Save hart ID and DTB physical address */
+	mv s0, a0
+	mv s1, a1
+
+	/* Initialize page tables and relocate to virtual addresses */
+	la sp, init_thread_union + THREAD_SIZE
+	call setup_vm
+	call relocate
+
+	/* Restore C environment */
+	la tp, init_task
+
+	la sp, init_thread_union
+	li a0, ASM_THREAD_SIZE
+	add sp, sp, a0
+
+	/* Start the kernel */
+	mv a0, s0
+	mv a1, s1
+	call sbi_save
+	tail start_kernel
+
+relocate:
+	/* Relocate return address */
+	li a1, PAGE_OFFSET
+	la a0, _start
+	sub a1, a1, a0
+	add ra, ra, a1
+
+	/* Point stvec to virtual address of instruction after sptbr write */
+	la a0, 1f
+	add a0, a0, a1
+	csrw stvec, a0
+
+	/* Compute sptbr for kernel page tables, but don't load it yet */
+	la a2, swapper_pg_dir
+	srl a2, a2, PAGE_SHIFT
+	li a1, SPTBR_MODE
+	or a2, a2, a1
+
+	/* Load trampoline page directory, which will cause us to trap to
+	   stvec if VA != PA, or simply fall through if VA == PA */
+	la a0, trampoline_pg_dir
+	srl a0, a0, PAGE_SHIFT
+	or a0, a0, a1
+	sfence.vma
+	csrw sptbr, a0
+1:
+	/* Set trap vector to spin forever to help debug */
+	la a0, .Lsecondary_park
+	csrw stvec, a0
+
+	/* Load the global pointer */
+	la gp, __global_pointer$
+
+	/* Switch to kernel page tables */
+	csrw sptbr, a2
+
+	ret
+
+.Lsecondary_start:
+#ifdef CONFIG_SMP
+	li a1, CONFIG_NR_CPUS
+	bgeu a0, a1, .Lsecondary_park
+
+	la a1, __cpu_up_stack_pointer
+	slli a0, a0, LGREG
+	add a0, a0, a1
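+	/* a0 now points at this hart's slot in __cpu_up_stack_pointer;
+	   spin below until __cpu_up() publishes a stack for us */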
+
+.Lwait_for_cpu_up:
+	REG_L sp, (a0)
+	beqz sp, .Lwait_for_cpu_up
+	fence
+
+	/* Enable virtual memory and relocate to virtual address */
+	call relocate
+
+	/* Initialize task_struct pointer */
+	li tp, -THREAD_SIZE
+	add tp, tp, sp
+
+	tail smp_callin
+#endif
+
+.Lsecondary_park:
+	/* We lack SMP support or have too many harts, so park this hart */
+	wfi
+	j .Lsecondary_park
+END(_start)
+
+__PAGE_ALIGNED_BSS
+	/* Empty zero page */
+	.balign PAGE_SIZE
+ENTRY(empty_zero_page)
+	.fill (empty_zero_page + PAGE_SIZE) - ., 1, 0x00
+END(empty_zero_page)
diff --git a/arch/riscv/kernel/irq.c b/arch/riscv/kernel/irq.c
new file mode 100644
index 000000000000..737d7cce2c6d
--- /dev/null
+++ b/arch/riscv/kernel/irq.c
@@ -0,0 +1,20 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/irqchip.h>
+
+void __init init_IRQ(void)
+{
+	irqchip_init();
+}
diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
new file mode 100644
index 000000000000..753cb9894feb
--- /dev/null
+++ b/arch/riscv/kernel/module.c
@@ -0,0 +1,215 @@
+/*
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  Copyright (C) 2017 Zihao Yu
+ */
+
+#include <linux/elf.h>
+#include <linux/err.h>
+#include <linux/errno.h>
+#include <linux/moduleloader.h>
+
+static int apply_r_riscv_64_rela(struct module *me, u32 *location, Elf_Addr v)
+{
+	*(u64 *)location = v;
+	return 0;
+}
+
+static int apply_r_riscv_branch_rela(struct module *me, u32 *location,
+				     Elf_Addr v)
+{
+	s64 offset = (void *)v - (void *)location;
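+	/* The B-type immediate is scattered across the instruction word:
+	 * imm[12|10:5] live in bits 31:25 and imm[4:1|11] in bits 11:7.
+	 */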
+	u32 imm12 = (offset & 0x1000) << (31 - 12);
+	u32 imm11 = (offset & 0x800) >> (11 - 7);
+	u32 imm10_5 = (offset & 0x7e0) << (30 - 10);
+	u32 imm4_1 = (offset & 0x1e) << (11 - 4);
+
+	*location = (*location & 0x1fff07f) | imm12 | imm11 | imm10_5 | imm4_1;
+	return 0;
+}
+
+static int apply_r_riscv_jal_rela(struct module *me, u32 *location,
+				  Elf_Addr v)
+{
+	s64 offset = (void *)v - (void *)location;
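+	/* The J-type immediate occupies bits 31:12 as imm[20|10:1|11|19:12] */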
+	u32 imm20 = (offset & 0x100000) << (31 - 20);
+	u32 imm19_12 = (offset & 0xff000);
+	u32 imm11 = (offset & 0x800) << (20 - 11);
+	u32 imm10_1 = (offset & 0x7fe) << (30 - 10);
+
+	*location = (*location & 0xfff) | imm20 | imm19_12 | imm11 | imm10_1;
+	return 0;
+}
+
+static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location,
+					 Elf_Addr v)
+{
+	s64 offset = (void *)v - (void *)location;
+	s32 hi20;
+
+	if (offset != (s32)offset) {
+		pr_err(
+		  "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
+		  me->name, v, location);
+		return -EINVAL;
+	}
+
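+	/* Rounding by 0x800 compensates for the sign-extension of the low
+	 * 12 bits applied by the matching LO12 relocation.
+	 */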
+	hi20 = (offset + 0x800) & 0xfffff000;
+	*location = (*location & 0xfff) | hi20;
+	return 0;
+}
+
+static int apply_r_riscv_pcrel_lo12_i_rela(struct module *me, u32 *location,
+					   Elf_Addr v)
+{
+	/* v is the lo12 value to fill. It is calculated before calling this
+	 * handler.
+	 */
+	*location = (*location & 0xfffff) | ((v & 0xfff) << 20);
+	return 0;
+}
+
+static int apply_r_riscv_pcrel_lo12_s_rela(struct module *me, u32 *location,
+					   Elf_Addr v)
+{
+	/* v is the lo12 value to fill. It is calculated before calling this
+	 * handler.
+	 */
+	u32 imm11_5 = (v & 0xfe0) << (31 - 11);
+	u32 imm4_0 = (v & 0x1f) << (11 - 4);
+
+	*location = (*location & 0x1fff07f) | imm11_5 | imm4_0;
+	return 0;
+}
+
+static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location,
+				       Elf_Addr v)
+{
+	s64 offset = (void *)v - (void *)location;
+	s32 fill_v = offset;
+	u32 hi20, lo12;
+
+	if (offset != fill_v) {
+		pr_err(
+		  "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
+		  me->name, v, location);
+		return -EINVAL;
+	}
+
+	hi20 = (offset + 0x800) & 0xfffff000;
+	lo12 = (offset - hi20) & 0xfff;
+	*location = (*location & 0xfff) | hi20;
+	*(location + 1) = (*(location + 1) & 0xfffff) | (lo12 << 20);
+	return 0;
+}
+
+static int apply_r_riscv_relax_rela(struct module *me, u32 *location,
+				    Elf_Addr v)
+{
+	return 0;
+}
+
+static int (*reloc_handlers_rela[]) (struct module *me, u32 *location,
+				Elf_Addr v) = {
+	[R_RISCV_64]			= apply_r_riscv_64_rela,
+	[R_RISCV_BRANCH]		= apply_r_riscv_branch_rela,
+	[R_RISCV_JAL]			= apply_r_riscv_jal_rela,
+	[R_RISCV_PCREL_HI20]		= apply_r_riscv_pcrel_hi20_rela,
+	[R_RISCV_PCREL_LO12_I]		= apply_r_riscv_pcrel_lo12_i_rela,
+	[R_RISCV_PCREL_LO12_S]		= apply_r_riscv_pcrel_lo12_s_rela,
+	[R_RISCV_CALL_PLT]		= apply_r_riscv_call_plt_rela,
+	[R_RISCV_RELAX]			= apply_r_riscv_relax_rela,
+};
+
+int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
+		       unsigned int symindex, unsigned int relsec,
+		       struct module *me)
+{
+	Elf_Rela *rel = (void *) sechdrs[relsec].sh_addr;
+	int (*handler)(struct module *me, u32 *location, Elf_Addr v);
+	Elf_Sym *sym;
+	u32 *location;
+	unsigned int i, type;
+	Elf_Addr v;
+	int res;
+
+	pr_debug("Applying relocate section %u to %u\n", relsec,
+	       sechdrs[relsec].sh_info);
+
+	for (i = 0; i < sechdrs[relsec].sh_size / sizeof(*rel); i++) {
+		/* This is where to make the change */
+		location = (void *)sechdrs[sechdrs[relsec].sh_info].sh_addr
+			+ rel[i].r_offset;
+		/* This is the symbol it is referring to */
+		sym = (Elf_Sym *)sechdrs[symindex].sh_addr
+			+ ELF_RISCV_R_SYM(rel[i].r_info);
+		if (IS_ERR_VALUE(sym->st_value)) {
+			/* Ignore unresolved weak symbol */
+			if (ELF_ST_BIND(sym->st_info) == STB_WEAK)
+				continue;
+			printk(KERN_WARNING "%s: Unknown symbol %s\n",
+			       me->name, strtab + sym->st_name);
+			return -ENOENT;
+		}
+
+		type = ELF_RISCV_R_TYPE(rel[i].r_info);
+
+		if (type < ARRAY_SIZE(reloc_handlers_rela))
+			handler = reloc_handlers_rela[type];
+		else
+			handler = NULL;
+
+		if (!handler) {
+			pr_err("%s: Unknown relocation type %u\n",
+			       me->name, type);
+			return -EINVAL;
+		}
+
+		v = sym->st_value + rel[i].r_addend;
+
+		if (type == R_RISCV_PCREL_LO12_I || type == R_RISCV_PCREL_LO12_S) {
+			unsigned int j;
+
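+			/* The LO12 relocation's symbol points at the instruction
+			 * carrying the matching HI20, so redo the hi/lo split
+			 * relative to that location.
+			 */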
+			for (j = 0; j < sechdrs[relsec].sh_size / sizeof(*rel); j++) {
+				u64 hi20_loc =
+					sechdrs[sechdrs[relsec].sh_info].sh_addr
+					+ rel[j].r_offset;
+				/* Find the corresponding HI20 PC-relative relocation entry */
+				if (hi20_loc == sym->st_value) {
+					Elf_Sym *hi20_sym =
+						(Elf_Sym *)sechdrs[symindex].sh_addr
+						+ ELF_RISCV_R_SYM(rel[j].r_info);
+					u64 hi20_sym_val =
+						hi20_sym->st_value
+						+ rel[j].r_addend;
+					/* Calculate lo12 */
+					s64 offset = hi20_sym_val - hi20_loc;
+					s32 hi20 = (offset + 0x800) & 0xfffff000;
+					s32 lo12 = offset - hi20;
+					v = lo12;
+					break;
+				}
+			}
+			if (j == sechdrs[relsec].sh_size / sizeof(*rel)) {
+				pr_err(
+				  "%s: Can not find HI20 PC-relative relocation information\n",
+				  me->name);
+				return -EINVAL;
+			}
+		}
+
+		res = handler(me, location, v);
+		if (res)
+			return res;
+	}
+
+	return 0;
+}
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
new file mode 100644
index 000000000000..c10146ae317d
--- /dev/null
+++ b/arch/riscv/kernel/process.c
@@ -0,0 +1,132 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/sched/task_stack.h>
+#include <linux/tick.h>
+#include <linux/ptrace.h>
+
+#include <asm/unistd.h>
+#include <asm/uaccess.h>
+#include <asm/processor.h>
+#include <asm/csr.h>
+#include <asm/string.h>
+#include <asm/switch_to.h>
+
+extern asmlinkage void ret_from_fork(void);
+extern asmlinkage void ret_from_kernel_thread(void);
+
+void arch_cpu_idle(void)
+{
+	wait_for_interrupt();
+	local_irq_enable();
+}
+
+void show_regs(struct pt_regs *regs)
+{
+	show_regs_print_info(KERN_DEFAULT);
+
+	printk(KERN_CONT "sepc: " REG_FMT " ra : " REG_FMT " sp : " REG_FMT "\n",
+		regs->sepc, regs->ra, regs->sp);
+	printk(KERN_CONT " gp : " REG_FMT " tp : " REG_FMT " t0 : " REG_FMT "\n",
+		regs->gp, regs->tp, regs->t0);
+	printk(KERN_CONT " t1 : " REG_FMT " t2 : " REG_FMT " s0 : " REG_FMT "\n",
+		regs->t1, regs->t2, regs->s0);
+	printk(KERN_CONT " s1 : " REG_FMT " a0 : " REG_FMT " a1 : " REG_FMT "\n",
+		regs->s1, regs->a0, regs->a1);
+	printk(KERN_CONT " a2 : " REG_FMT " a3 : " REG_FMT " a4 : " REG_FMT "\n",
+		regs->a2, regs->a3, regs->a4);
+	printk(KERN_CONT " a5 : " REG_FMT " a6 : " REG_FMT " a7 : " REG_FMT "\n",
+		regs->a5, regs->a6, regs->a7);
+	printk(KERN_CONT " s2 : " REG_FMT " s3 : " REG_FMT " s4 : " REG_FMT "\n",
+		regs->s2, regs->s3, regs->s4);
+	printk(KERN_CONT " s5 : " REG_FMT " s6 : " REG_FMT " s7 : " REG_FMT "\n",
+		regs->s5, regs->s6, regs->s7);
+	printk(KERN_CONT " s8 : " REG_FMT " s9 : " REG_FMT " s10: " REG_FMT "\n",
+		regs->s8, regs->s9, regs->s10);
+	printk(KERN_CONT " s11: " REG_FMT " t3 : " REG_FMT " t4 : " REG_FMT "\n",
+		regs->s11, regs->t3, regs->t4);
+	printk(KERN_CONT " t5 : " REG_FMT " t6 : " REG_FMT "\n",
+		regs->t5, regs->t6);
+
+	printk(KERN_CONT "sstatus: " REG_FMT " sbadaddr: " REG_FMT " scause: " REG_FMT "\n",
+		regs->sstatus, regs->sbadaddr, regs->scause);
+}
+
+void start_thread(struct pt_regs *regs, unsigned long pc,
+	unsigned long sp)
+{
+	regs->sstatus = SR_PIE /* User mode, irqs on */ | SR_FS_INITIAL;
+#ifndef CONFIG_RV_PUM
+	regs->sstatus |= SR_SUM;
+#endif
+	regs->sepc = pc;
+	regs->sp = sp;
+	set_fs(USER_DS);
+}
+
+void flush_thread(void)
+{
+	/* Reset FPU context
+	 *	frm: round to nearest, ties to even (IEEE default)
+	 *	fflags: accrued exceptions cleared
+	 */
+	memset(&current->thread.fstate, 0,
+		sizeof(struct user_fpregs_struct));
+}
+
+int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+{
+	fstate_save(src, task_pt_regs(src));
+	*dst = *src;
+	return 0;
+}
+
+int copy_thread(unsigned long clone_flags, unsigned long usp,
+	unsigned long arg, struct task_struct *p)
+{
+	struct pt_regs *childregs = task_pt_regs(p);
+
+	/* p->thread holds context to be restored by __switch_to() */
+	if (unlikely(p->flags & PF_KTHREAD)) {
+		/* Kernel thread */
+		const register unsigned long gp __asm__ ("gp");
+		memset(childregs, 0, sizeof(struct pt_regs));
+		childregs->gp = gp;
+		childregs->sstatus = SR_PS | SR_PIE; /* Supervisor, irqs on */
+
+		p->thread.ra = (unsigned long)ret_from_kernel_thread;
+		p->thread.s[0] = usp; /* fn */
+		p->thread.s[1] = arg;
+	} else {
+		*childregs = *(current_pt_regs());
+		if (usp) /* User fork */
+			childregs->sp = usp;
+		if (clone_flags & CLONE_SETTLS)
+			childregs->tp = childregs->a5;
+		childregs->a0 = 0; /* Return value of fork() */
+		p->thread.ra = (unsigned long)ret_from_fork;
+	}
+	p->thread.sp = (unsigned long)childregs; /* kernel sp */
+	return 0;
+}
diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
new file mode 100644
index 000000000000..69b3b2d10664
--- /dev/null
+++ b/arch/riscv/kernel/ptrace.c
@@ -0,0 +1,147 @@
+/*
+ * Copyright 2010 Tilera Corporation. All Rights Reserved.
+ * Copyright 2015 Regents of the University of California
+ * Copyright 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ *
+ * Copied from arch/tile/kernel/ptrace.c
+ */
+
+#include <asm/ptrace.h>
+#include <asm/syscall.h>
+#include <asm/thread_info.h>
+#include <linux/ptrace.h>
+#include <linux/elf.h>
+#include <linux/regset.h>
+#include <linux/sched.h>
+#include <linux/sched/task_stack.h>
+#include <linux/tracehook.h>
+#include <trace/events/syscalls.h>
+
+enum riscv_regset {
+	REGSET_X,
+};
+
+/*
+ * Get registers from task and ready the result for userspace.
+ */
+static char *getregs(struct task_struct *child, struct pt_regs *uregs)
+{
+	*uregs = *task_pt_regs(child);
+	return (char *)uregs;
+}
+
+/* Put registers back to task. */
+static void putregs(struct task_struct *child, struct pt_regs *uregs)
+{
+	struct pt_regs *regs = task_pt_regs(child);
+	*regs = *uregs;
+}
+
+static int riscv_gpr_get(struct task_struct *target,
+			 const struct user_regset *regset,
+			 unsigned int pos, unsigned int count,
+			 void *kbuf, void __user *ubuf)
+{
+	struct pt_regs regs;
+
+	getregs(target, &regs);
+
+	return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &regs, 0,
+				   sizeof(regs));
+}
+
+static int riscv_gpr_set(struct task_struct *target,
+			 const struct user_regset *regset,
+			 unsigned int pos, unsigned int count,
+			 const void *kbuf, const void __user *ubuf)
+{
+	int ret;
+	struct pt_regs regs;
+
+	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &regs, 0,
+				 sizeof(regs));
+	if (ret)
+		return ret;
+
+	putregs(target, &regs);
+
+	return 0;
+}
+
+
+static const struct user_regset riscv_user_regset[] = {
+	[REGSET_X] = {
+		.core_note_type = NT_PRSTATUS,
+		.n = ELF_NGREG,
+		.size = sizeof(elf_greg_t),
+		.align = sizeof(elf_greg_t),
+		.get = &riscv_gpr_get,
+		.set = &riscv_gpr_set,
+	},
+};
+
+static const struct user_regset_view riscv_user_native_view = {
+	.name = "riscv",
+	.e_machine = EM_RISCV,
+	.regsets = riscv_user_regset,
+	.n = ARRAY_SIZE(riscv_user_regset),
+};
+
+const struct user_regset_view *task_user_regset_view(struct task_struct *task)
+{
+	return &riscv_user_native_view;
+}
+
+void ptrace_disable(struct task_struct *child)
+{
+	clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+}
+
+long arch_ptrace(struct task_struct *child, long request,
+		 unsigned long addr, unsigned long data)
+{
+	long ret = -EIO;
+
+	switch (request) {
+	default:
+		ret = ptrace_request(child, request, addr, data);
+		break;
+	}
+
+	return ret;
+}
+
+/* Allows PTRACE_SYSCALL to work.  These are called from entry.S in
+ * {handle,ret_from}_syscall.
+ */
+void do_syscall_trace_enter(struct pt_regs *regs)
+{
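+	/* The tracer may veto the call; setting the syscall number to -1
+	 * makes the dispatcher treat it as an invalid syscall.
+	 */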
+	if (test_thread_flag(TIF_SYSCALL_TRACE))
+		if (tracehook_report_syscall_entry(regs))
+			syscall_set_nr(current, regs, -1);
+
+#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
+	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
+		trace_sys_enter(regs, syscall_get_nr(current, regs));
+#endif
+}
+
+void do_syscall_trace_exit(struct pt_regs *regs)
+{
+	if (test_thread_flag(TIF_SYSCALL_TRACE))
+		tracehook_report_syscall_exit(regs, 0);
+
+#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
+	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
+		trace_sys_exit(regs, regs->a0);
+#endif
+}
diff --git a/arch/riscv/kernel/reset.c b/arch/riscv/kernel/reset.c
new file mode 100644
index 000000000000..2a53d26ffdd6
--- /dev/null
+++ b/arch/riscv/kernel/reset.c
@@ -0,0 +1,36 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/reboot.h>
+#include <linux/export.h>
+#include <asm/sbi.h>
+
+void (*pm_power_off)(void) = machine_power_off;
+EXPORT_SYMBOL(pm_power_off);
+
+void machine_restart(char *cmd)
+{
+	do_kernel_restart(cmd);
+	while (1);
+}
+
+void machine_halt(void)
+{
+	machine_power_off();
+}
+
+void machine_power_off(void)
+{
+	sbi_shutdown();
+	while (1);
+}
diff --git a/arch/riscv/kernel/riscv_ksyms.c b/arch/riscv/kernel/riscv_ksyms.c
new file mode 100644
index 000000000000..ab0db6d48101
--- /dev/null
+++ b/arch/riscv/kernel/riscv_ksyms.c
@@ -0,0 +1,16 @@
+/*
+ * Copyright (C) 2017 Zihao Yu
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/export.h>
+#include <linux/uaccess.h>
+
+/*
+ * Assembly functions that may be used (directly or indirectly) by modules
+ */
+EXPORT_SYMBOL(__copy_user);
+
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
new file mode 100644
index 000000000000..9ed70e84d74e
--- /dev/null
+++ b/arch/riscv/kernel/setup.c
@@ -0,0 +1,240 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/memblock.h>
+#include <linux/sched.h>
+#include <linux/initrd.h>
+#include <linux/console.h>
+#include <linux/screen_info.h>
+#include <linux/of_fdt.h>
+#include <linux/of_platform.h>
+#include <linux/sched/task.h>
+
+#include <asm/setup.h>
+#include <asm/sections.h>
+#include <asm/pgtable.h>
+#include <asm/smp.h>
+#include <asm/sbi.h>
+#include <asm/tlbflush.h>
+#include <asm/thread_info.h>
+
+#ifdef CONFIG_DUMMY_CONSOLE
+struct screen_info screen_info = {
+	.orig_video_lines	= 30,
+	.orig_video_cols	= 80,
+	.orig_video_mode	= 0,
+	.orig_video_ega_bx	= 0,
+	.orig_video_isVGA	= 1,
+	.orig_video_points	= 8
+};
+#endif
+
+#ifdef CONFIG_CMDLINE_BOOL
+static char __initdata builtin_cmdline[COMMAND_LINE_SIZE] = CONFIG_CMDLINE;
+#endif /* CONFIG_CMDLINE_BOOL */
+
+unsigned long va_pa_offset;
+unsigned long pfn_base;
+
+/* The lucky hart to first increment this variable will boot the other cores */
+atomic_t hart_lottery;
+
+#ifdef CONFIG_BLK_DEV_INITRD
+static void __init setup_initrd(void)
+{
+	extern char __initramfs_start[];
+	extern unsigned long __initramfs_size;
+	unsigned long size;
+
+	if (__initramfs_size > 0) {
+		initrd_start = (unsigned long)(&__initramfs_start);
+		initrd_end = initrd_start + __initramfs_size;
+	}
+
+	if (initrd_start >= initrd_end) {
+		printk(KERN_INFO "initrd not found or empty");
+		goto disable;
+	}
+	if (__pa(initrd_end) > PFN_PHYS(max_low_pfn)) {
+		printk(KERN_ERR "initrd extends beyond end of memory");
+		goto disable;
+	}
+
+	size =  initrd_end - initrd_start;
+	memblock_reserve(__pa(initrd_start), size);
+	initrd_below_start_ok = 1;
+
+	printk(KERN_INFO "Initial ramdisk at: 0x%p (%lu bytes)\n",
+		(void *)(initrd_start), size);
+	return;
+disable:
+	printk(KERN_CONT " - disabling initrd\n");
+	initrd_start = 0;
+	initrd_end = 0;
+}
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+pgd_t trampoline_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
+
+#ifndef __PAGETABLE_PMD_FOLDED
+#define NUM_SWAPPER_PMDS ((uintptr_t)-PAGE_OFFSET >> PGDIR_SHIFT)
+pmd_t swapper_pmd[PTRS_PER_PMD*((-PAGE_OFFSET)/PGDIR_SIZE)] __page_aligned_bss;
+pmd_t trampoline_pmd[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
+#endif
+
+asmlinkage void __init setup_vm(void)
+{
+	extern char _start;
+	uintptr_t i;
+	uintptr_t pa = (uintptr_t) &_start;
+	pgprot_t prot = __pgprot(pgprot_val(PAGE_KERNEL) | _PAGE_EXEC);
+
+	va_pa_offset = PAGE_OFFSET - pa;
+	pfn_base = PFN_DOWN(pa);
+
+	/* Sanity check alignment and size */
+	BUG_ON((PAGE_OFFSET % PGDIR_SIZE) != 0);
+	BUG_ON((pa % (PAGE_SIZE * PTRS_PER_PTE)) != 0);
+
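+	/* The trampoline directory maps just the region holding the kernel at
+	 * PAGE_OFFSET, enough for relocate() to enable paging, while
+	 * swapper_pg_dir linearly maps the whole PAGE_OFFSET..top range.
+	 */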
+#ifndef __PAGETABLE_PMD_FOLDED
+	trampoline_pg_dir[(PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD] =
+		pfn_pgd(PFN_DOWN((uintptr_t)trampoline_pmd),
+			__pgprot(_PAGE_TABLE));
+	trampoline_pmd[0] = pfn_pmd(PFN_DOWN(pa), prot);
+
+	for (i = 0; i < (-PAGE_OFFSET)/PGDIR_SIZE; ++i) {
+		size_t o = (PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD + i;
+		swapper_pg_dir[o] =
+			pfn_pgd(PFN_DOWN((uintptr_t)swapper_pmd) + i,
+				__pgprot(_PAGE_TABLE));
+	}
+	for (i = 0; i < ARRAY_SIZE(swapper_pmd); i++)
+		swapper_pmd[i] = pfn_pmd(PFN_DOWN(pa + i * PMD_SIZE), prot);
+#else
+	trampoline_pg_dir[(PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD] =
+		pfn_pgd(PFN_DOWN(pa), prot);
+
+	for (i = 0; i < (-PAGE_OFFSET)/PGDIR_SIZE; ++i) {
+		size_t o = (PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD + i;
+		swapper_pg_dir[o] =
+			pfn_pgd(PFN_DOWN(pa + i * PGDIR_SIZE), prot);
+	}
+#endif
+}
+
+void __init sbi_save(unsigned int hartid, void *dtb)
+{
+	early_init_dt_scan(__va(dtb));
+}
+
+/*
+ * Allow the user to manually add a memory region (in case the DTS is broken):
+ * "mem_end=nn[KkMmGg]"
+ */
+static int __init mem_end_override(char *p)
+{
+	resource_size_t base, end;
+
+	if (!p)
+		return -EINVAL;
+	base = (uintptr_t) __pa(PAGE_OFFSET);
+	end = memparse(p, &p) & PMD_MASK;
+	if (end == 0)
+		return -EINVAL;
+	memblock_add(base, end - base);
+	return 0;
+}
+early_param("mem_end", mem_end_override);
+
+static void __init setup_bootmem(void)
+{
+	struct memblock_region *reg;
+	phys_addr_t mem_size = 0;
+
+	/* Find the memory region containing the kernel */
+	for_each_memblock(memory, reg) {
+		phys_addr_t vmlinux_end = __pa(_end);
+		phys_addr_t end = reg->base + reg->size;
+
+		if (reg->base <= vmlinux_end && vmlinux_end <= end) {
+			/* Reserve from the start of the region to the end of
+			 * the kernel
+			 */
+			memblock_reserve(reg->base, vmlinux_end - reg->base);
+			mem_size = min(reg->size, (phys_addr_t)-PAGE_OFFSET);
+		}
+	}
+	BUG_ON(mem_size == 0);
+
+	set_max_mapnr(PFN_DOWN(mem_size));
+	max_low_pfn = pfn_base + PFN_DOWN(mem_size);
+
+#ifdef CONFIG_BLK_DEV_INITRD
+	setup_initrd();
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+	early_init_fdt_reserve_self();
+	early_init_fdt_scan_reserved_mem();
+	memblock_allow_resize();
+	memblock_dump_all();
+}
+
+void __init setup_arch(char **cmdline_p)
+{
+#ifdef CONFIG_CMDLINE_BOOL
+#ifdef CONFIG_CMDLINE_OVERRIDE
+	strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+#else
+	if (builtin_cmdline[0] != '\0') {
+		/* Append bootloader command line to built-in */
+		strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
+		strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+		strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+	}
+#endif /* CONFIG_CMDLINE_OVERRIDE */
+#endif /* CONFIG_CMDLINE_BOOL */
+	*cmdline_p = boot_command_line;
+
+	parse_early_param();
+
+	init_mm.start_code = (unsigned long) _stext;
+	init_mm.end_code   = (unsigned long) _etext;
+	init_mm.end_data   = (unsigned long) _edata;
+	init_mm.brk        = (unsigned long) _end;
+
+	setup_bootmem();
+	paging_init();
+	unflatten_device_tree();
+
+#ifdef CONFIG_SMP
+	setup_smp();
+#endif
+
+#ifdef CONFIG_DUMMY_CONSOLE
+	conswitchp = &dummy_con;
+#endif
+}
+
+static int __init riscv_device_init(void)
+{
+	return of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
+}
+subsys_initcall_sync(riscv_device_init);
diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
new file mode 100644
index 000000000000..ea14c47f28ac
--- /dev/null
+++ b/arch/riscv/kernel/signal.c
@@ -0,0 +1,257 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+#include <linux/signal.h>
+#include <linux/uaccess.h>
+#include <linux/syscalls.h>
+#include <linux/tracehook.h>
+#include <linux/linkage.h>
+
+#include <asm/ucontext.h>
+#include <asm/vdso.h>
+#include <asm/switch_to.h>
+#include <asm/csr.h>
+
+#define DEBUG_SIG 0
+
+struct rt_sigframe {
+	struct siginfo info;
+	struct ucontext uc;
+};
+
+static long restore_sigcontext(struct pt_regs *regs,
+	struct sigcontext __user *sc)
+{
+	struct task_struct *task = current;
+	long err;
+	/* sc_regs is structured the same as the start of pt_regs */
+	err = __copy_from_user(regs, &sc->sc_regs, sizeof(sc->sc_regs));
+	err |= __copy_from_user(&task->thread.fstate, &sc->sc_fpregs,
+		sizeof(sc->sc_fpregs));
+	if (likely(!err))
+		fstate_restore(task, regs);
+	return err;
+}
+
+SYSCALL_DEFINE0(rt_sigreturn)
+{
+	struct pt_regs *regs = current_pt_regs();
+	struct rt_sigframe __user *frame;
+	struct task_struct *task;
+	sigset_t set;
+
+	/* Always make any pending restarted system calls return -EINTR */
+	current->restart_block.fn = do_no_restart_syscall;
+
+	frame = (struct rt_sigframe __user *)regs->sp;
+
+	if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
+		goto badframe;
+
+	if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
+		goto badframe;
+
+	set_current_blocked(&set);
+
+	if (restore_sigcontext(regs, &frame->uc.uc_mcontext))
+		goto badframe;
+
+	if (restore_altstack(&frame->uc.uc_stack))
+		goto badframe;
+
+	return regs->a0;
+
+badframe:
+	task = current;
+	if (show_unhandled_signals) {
+		pr_info_ratelimited(
+			"%s[%d]: bad frame in %s: frame=%p pc=%p sp=%p\n",
+			task->comm, task_pid_nr(task), __func__,
+			frame, (void *)regs->sepc, (void *)regs->sp);
+	}
+	force_sig(SIGSEGV, task);
+	return 0;
+}
+
+static long setup_sigcontext(struct sigcontext __user *sc,
+	struct pt_regs *regs)
+{
+	struct task_struct *task = current;
+	long err;
+	/* sc_regs is structured the same as the start of pt_regs */
+	err = __copy_to_user(&sc->sc_regs, regs, sizeof(sc->sc_regs));
+	fstate_save(task, regs);
+	err |= __copy_to_user(&sc->sc_fpregs, &task->thread.fstate,
+		sizeof(sc->sc_fpregs));
+	return err;
+}
+
+static inline void __user *get_sigframe(struct ksignal *ksig,
+	struct pt_regs *regs, size_t framesize)
+{
+	unsigned long sp;
+	/* Default to using normal stack */
+	sp = regs->sp;
+
+	/*
+	 * If we are on the alternate signal stack and would overflow it, don't.
+	 * Return an always-bogus address instead so we will die with SIGSEGV.
+	 */
+	if (on_sig_stack(sp) && unlikely(!on_sig_stack(sp - framesize)))
+		return (void __user __force *)(-1UL);
+
+	/* This is the X/Open sanctioned signal stack switching. */
+	sp = sigsp(sp, ksig) - framesize;
+
+	/* Align the stack frame. */
+	sp &= ~0xfUL;
+
+	return (void __user *)sp;
+}
+
+
+static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
+	struct pt_regs *regs)
+{
+	struct rt_sigframe __user *frame;
+	long err = 0;
+
+	frame = get_sigframe(ksig, regs, sizeof(*frame));
+	if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+		return -EFAULT;
+
+	err |= copy_siginfo_to_user(&frame->info, &ksig->info);
+
+	/* Create the ucontext. */
+	err |= __put_user(0, &frame->uc.uc_flags);
+	err |= __put_user(NULL, &frame->uc.uc_link);
+	err |= __save_altstack(&frame->uc.uc_stack, regs->sp);
+	err |= setup_sigcontext(&frame->uc.uc_mcontext, regs);
+	err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
+	if (err)
+		return -EFAULT;
+
+	/* Set up to return from userspace. */
+	regs->ra = (unsigned long)VDSO_SYMBOL(
+		current->mm->context.vdso, rt_sigreturn);
+
+	/*
+	 * Set up registers for signal handler.
+	 * Registers that we don't modify keep the value they had from
+	 * user-space at the time we took the signal.
+	 * We always pass siginfo and mcontext, regardless of SA_SIGINFO,
+	 * since some things rely on this (e.g. glibc's debug/segfault.c).
+	 */
+	regs->sepc = (unsigned long)ksig->ka.sa.sa_handler;
+	regs->sp = (unsigned long)frame;
+	regs->a0 = ksig->sig;                     /* a0: signal number */
+	regs->a1 = (unsigned long)(&frame->info); /* a1: siginfo pointer */
+	regs->a2 = (unsigned long)(&frame->uc);   /* a2: ucontext pointer */
+
+#if DEBUG_SIG
+	pr_info("SIG deliver (%s:%d): sig=%d pc=%p ra=%p sp=%p\n",
+		current->comm, task_pid_nr(current), ksig->sig,
+		(void *)regs->sepc, (void *)regs->ra, frame);
+#endif
+
+	return 0;
+}
+
+static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
+{
+	sigset_t *oldset = sigmask_to_save();
+	int ret;
+
+	/* Are we from a system call? */
+	if (regs->scause == EXC_SYSCALL) {
+		/* If so, check system call restarting.. */
+		switch (regs->a0) {
+		case -ERESTART_RESTARTBLOCK:
+		case -ERESTARTNOHAND:
+			regs->a0 = -EINTR;
+			break;
+
+		case -ERESTARTSYS:
+			if (!(ksig->ka.sa.sa_flags & SA_RESTART)) {
+				regs->a0 = -EINTR;
+				break;
+			}
+			/* fallthrough */
+		case -ERESTARTNOINTR:
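+			/* Rewind sepc past the 4-byte ecall so the system call
+			 * is re-issued on return to userspace.
+			 */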
+			regs->sepc -= 0x4;
+			break;
+		}
+	}
+
+	/* Set up the stack frame */
+	ret = setup_rt_frame(ksig, oldset, regs);
+
+	signal_setup_done(ret, ksig, 0);
+}
+
+static void do_signal(struct pt_regs *regs)
+{
+	struct ksignal ksig;
+
+	if (get_signal(&ksig)) {
+		/* Actually deliver the signal */
+		handle_signal(&ksig, regs);
+		return;
+	}
+
+	/* Did we come from a system call? */
+	if (regs->scause == EXC_SYSCALL) {
+		/* Restart the system call - no handlers present */
+		switch (regs->a0) {
+		case -ERESTARTNOHAND:
+		case -ERESTARTSYS:
+		case -ERESTARTNOINTR:
+			regs->sepc -= 0x4;
+			break;
+		case -ERESTART_RESTARTBLOCK:
+			regs->a7 = __NR_restart_syscall;
+			regs->sepc -= 0x4;
+			break;
+		}
+	}
+
+	/* If there is no signal to deliver, we just put the saved
+	 * sigmask back.
+	 */
+	restore_saved_sigmask();
+}
+
+/*
+ * notification of userspace execution resumption
+ * - triggered by the _TIF_WORK_MASK flags
+ */
+asmlinkage void do_notify_resume(struct pt_regs *regs,
+	unsigned long thread_info_flags)
+{
+	/* Handle pending signal delivery */
+	if (thread_info_flags & _TIF_SIGPENDING)
+		do_signal(regs);
+
+	if (thread_info_flags & _TIF_NOTIFY_RESUME) {
+		clear_thread_flag(TIF_NOTIFY_RESUME);
+		tracehook_notify_resume(regs);
+	}
+}
diff --git a/arch/riscv/kernel/smp.c b/arch/riscv/kernel/smp.c
new file mode 100644
index 000000000000..b65c0e1020e3
--- /dev/null
+++ b/arch/riscv/kernel/smp.c
@@ -0,0 +1,110 @@
+/*
+ * SMP initialisation and IPI support
+ * Based on arch/arm64/kernel/smp.c
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/interrupt.h>
+#include <linux/smp.h>
+#include <linux/sched.h>
+
+#include <asm/sbi.h>
+#include <asm/tlbflush.h>
+#include <asm/cacheflush.h>
+
+/* A collection of single bit ipi messages.  */
+static struct {
+	unsigned long bits ____cacheline_aligned;
+} ipi_data[NR_CPUS] __cacheline_aligned;
+
+enum ipi_message_type {
+	IPI_RESCHEDULE,
+	IPI_CALL_FUNC,
+	IPI_MAX
+};
+
+irqreturn_t handle_ipi(void)
+{
+	unsigned long *pending_ipis = &ipi_data[smp_processor_id()].bits;
+
+	/* Clear pending IPI */
+	csr_clear(sip, SIE_SSIE);
+
+	while (true) {
+		unsigned long ops;
+
+		/* Order bit clearing and data access. */
+		mb();
+
+		ops = xchg(pending_ipis, 0);
+		if (ops == 0)
+			return IRQ_HANDLED;
+
+		if (ops & (1 << IPI_RESCHEDULE))
+			scheduler_ipi();
+
+		if (ops & (1 << IPI_CALL_FUNC))
+			generic_smp_call_function_interrupt();
+
+		BUG_ON((ops >> IPI_MAX) != 0);
+
+		/* Order data access and bit testing. */
+		mb();
+	}
+
+	return IRQ_HANDLED;
+}
+
+static void
+send_ipi_message(const struct cpumask *to_whom, enum ipi_message_type operation)
+{
+	int i;
+
+	mb();
+	for_each_cpu(i, to_whom)
+		set_bit(operation, &ipi_data[i].bits);
+
+	mb();
+	sbi_send_ipi(cpumask_bits(to_whom));
+}
+
+void arch_send_call_function_ipi_mask(struct cpumask *mask)
+{
+	send_ipi_message(mask, IPI_CALL_FUNC);
+}
+
+void arch_send_call_function_single_ipi(int cpu)
+{
+	send_ipi_message(cpumask_of(cpu), IPI_CALL_FUNC);
+}
+
+static void ipi_stop(void *unused)
+{
+	while (1)
+		wait_for_interrupt();
+}
+
+void smp_send_stop(void)
+{
+	on_each_cpu(ipi_stop, NULL, 1);
+}
+
+void smp_send_reschedule(int cpu)
+{
+	send_ipi_message(cpumask_of(cpu), IPI_RESCHEDULE);
+}
diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
new file mode 100644
index 000000000000..7043bbbfbc1e
--- /dev/null
+++ b/arch/riscv/kernel/smpboot.c
@@ -0,0 +1,103 @@
+/*
+ * SMP initialisation and IPI support
+ * Based on arch/arm64/kernel/smp.c
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/kernel_stat.h>
+#include <linux/notifier.h>
+#include <linux/cpu.h>
+#include <linux/percpu.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/irq.h>
+#include <linux/of.h>
+#include <linux/sched/task_stack.h>
+#include <asm/mmu_context.h>
+#include <asm/tlbflush.h>
+#include <asm/sections.h>
+#include <asm/sbi.h>
+
+void *__cpu_up_stack_pointer[NR_CPUS];
+
+void __init smp_prepare_boot_cpu(void)
+{
+}
+
+void __init smp_prepare_cpus(unsigned int max_cpus)
+{
+}
+
+void __init setup_smp(void)
+{
+	struct device_node *dn = NULL;
+	int hart, im_okay_therefore_i_am = 0;
+
+	while ((dn = of_find_node_by_type(dn, "cpu"))) {
+		hart = riscv_of_processor_hart(dn);
+		if (hart >= 0) {
+			set_cpu_possible(hart, true);
+			set_cpu_present(hart, true);
+			if (hart == smp_processor_id()) {
+				BUG_ON(im_okay_therefore_i_am);
+				im_okay_therefore_i_am = 1;
+			}
+		}
+	}
+
+	BUG_ON(!im_okay_therefore_i_am);
+}
+
+int __cpu_up(unsigned int cpu, struct task_struct *tidle)
+{
+	/* Signal cpu to start */
+	mb();
+	__cpu_up_stack_pointer[cpu] = task_stack_page(tidle) + THREAD_SIZE;
+
+	while (!cpu_online(cpu))
+		cpu_relax();
+
+	return 0;
+}
+
+void __init smp_cpus_done(unsigned int max_cpus)
+{
+}
+
+/*
+ * C entry point for a secondary processor.
+ */
+asmlinkage void __init smp_callin(void)
+{
+	struct mm_struct *mm = &init_mm;
+
+	/* All kernel threads share the same mm context.  */
+	atomic_inc(&mm->mm_count);
+	current->active_mm = mm;
+
+	trap_init();
+	init_clockevent();
+	notify_cpu_starting(smp_processor_id());
+	set_cpu_online(smp_processor_id(), 1);
+	local_flush_tlb_all();
+	local_irq_enable();
+	preempt_disable();
+	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+}
diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
new file mode 100644
index 000000000000..109f5120d5c7
--- /dev/null
+++ b/arch/riscv/kernel/stacktrace.c
@@ -0,0 +1,177 @@
+/*
+ * Copyright (C) 2008 ARM Limited
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/export.h>
+#include <linux/kallsyms.h>
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
+#include <linux/sched/task_stack.h>
+#include <linux/stacktrace.h>
+
+#ifdef CONFIG_FRAME_POINTER
+
+struct stackframe {
+	unsigned long fp;
+	unsigned long ra;
+};
+
+static void notrace walk_stackframe(struct task_struct *task,
+	struct pt_regs *regs, bool (*fn)(unsigned long, void *), void *arg)
+{
+	unsigned long fp, sp, pc;
+
+	if (regs) {
+		fp = GET_FP(regs);
+		sp = GET_USP(regs);
+		pc = GET_IP(regs);
+	} else if (task == NULL || task == current) {
+		const register unsigned long current_sp __asm__ ("sp");
+		fp = (unsigned long)__builtin_frame_address(0);
+		sp = current_sp;
+		pc = (unsigned long)walk_stackframe;
+	} else {
+		/* task blocked in __switch_to */
+		fp = task->thread.s[0];
+		sp = task->thread.sp;
+		pc = task->thread.ra;
+	}
+
+	for (;;) {
+		unsigned long low, high;
+		struct stackframe *frame;
+
+		if (unlikely(!__kernel_text_address(pc) || fn(pc, arg)))
+			break;
+
+		/* Validate frame pointer */
+		low = sp + sizeof(struct stackframe);
+		high = ALIGN(sp, THREAD_SIZE);
+		if (unlikely(fp < low || fp > high || fp & 0x7))
+			break;
+		/* Unwind stack frame */
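+		/* fp points just above the saved {fp, ra} pair; backing the
+		 * saved ra up by 4 points back into the call site.
+		 */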
+		frame = (struct stackframe *)fp - 1;
+		sp = fp;
+		fp = frame->fp;
+		pc = frame->ra - 0x4;
+	}
+}
+
+#else /* !CONFIG_FRAME_POINTER */
+
+static void notrace walk_stackframe(struct task_struct *task,
+	struct pt_regs *regs, bool (*fn)(unsigned long, void *), void *arg)
+{
+	unsigned long sp, pc;
+	unsigned long *ksp;
+
+	if (regs) {
+		sp = GET_USP(regs);
+		pc = GET_IP(regs);
+	} else if (task == NULL || task == current) {
+		const register unsigned long current_sp __asm__ ("sp");
+		sp = current_sp;
+		pc = (unsigned long)walk_stackframe;
+	} else {
+		/* task blocked in __switch_to */
+		sp = task->thread.sp;
+		pc = task->thread.ra;
+	}
+
+	if (unlikely(sp & 0x7))
+		return;
+
+	ksp = (unsigned long *)sp;
+	while (!kstack_end(ksp)) {
+		if (__kernel_text_address(pc) && unlikely(fn(pc, arg)))
+			break;
+		pc = (*ksp++) - 0x4;
+	}
+}
+
+#endif /* CONFIG_FRAME_POINTER */
+
+
+static bool print_trace_address(unsigned long pc, void *arg)
+{
+	print_ip_sym(pc);
+	return false;
+}
+
+void show_stack(struct task_struct *task, unsigned long *sp)
+{
+	printk("Call Trace:\n");
+	walk_stackframe(task, NULL, print_trace_address, NULL);
+}
+
+
+static bool save_wchan(unsigned long pc, void *arg)
+{
+	if (!in_sched_functions(pc)) {
+		unsigned long *p = arg;
+		*p = pc;
+		return true;
+	}
+	return false;
+}
+
+unsigned long get_wchan(struct task_struct *task)
+{
+	unsigned long pc = 0;
+
+	if (likely(task && task != current && task->state != TASK_RUNNING))
+		walk_stackframe(task, NULL, save_wchan, &pc);
+	return pc;
+}
+
+
+#ifdef CONFIG_STACKTRACE
+
+static bool __save_trace(unsigned long pc, void *arg, bool nosched)
+{
+	struct stack_trace *trace = arg;
+
+	if (unlikely(nosched && in_sched_functions(pc)))
+		return false;
+	if (unlikely(trace->skip > 0)) {
+		trace->skip--;
+		return false;
+	}
+
+	trace->entries[trace->nr_entries++] = pc;
+	return (trace->nr_entries >= trace->max_entries);
+}
+
+static bool save_trace(unsigned long pc, void *arg)
+{
+	return __save_trace(pc, arg, false);
+}
+
+/*
+ * Save stack-backtrace addresses into a stack_trace buffer.
+ */
+void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
+{
+	walk_stackframe(tsk, NULL, save_trace, trace);
+	if (trace->nr_entries < trace->max_entries)
+		trace->entries[trace->nr_entries++] = ULONG_MAX;
+}
+EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
+
+void save_stack_trace(struct stack_trace *trace)
+{
+	save_stack_trace_tsk(NULL, trace);
+}
+EXPORT_SYMBOL_GPL(save_stack_trace);
+
+#endif /* CONFIG_STACKTRACE */
diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
new file mode 100644
index 000000000000..33d40a5da4a1
--- /dev/null
+++ b/arch/riscv/kernel/sys_riscv.c
@@ -0,0 +1,85 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2014 Darius Rad <darius@bluespec.com>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/syscalls.h>
+#include <asm/unistd.h>
+
+#ifdef CONFIG_64BIT
+SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
+	unsigned long, prot, unsigned long, flags,
+	unsigned long, fd, off_t, offset)
+{
+	if (unlikely(offset & (~PAGE_MASK)))
+		return -EINVAL;
+	return sys_mmap_pgoff(addr, len, prot, flags, fd, offset >> PAGE_SHIFT);
+}
+#else
+SYSCALL_DEFINE6(mmap2, unsigned long, addr, unsigned long, len,
+	unsigned long, prot, unsigned long, flags,
+	unsigned long, fd, off_t, offset)
+{
+	/* Note that the shift for mmap2 is constant (12),
+	 * regardless of PAGE_SIZE
+	 */
+	if (unlikely(offset & (~PAGE_MASK >> 12)))
+		return -EINVAL;
+	return sys_mmap_pgoff(addr, len, prot, flags, fd,
+		offset >> (PAGE_SHIFT - 12));
+}
+#endif /* !CONFIG_64BIT */
+
+#ifdef CONFIG_SYSRISCV_ATOMIC
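+/*
+ * Userspace compare-and-exchange, for use when the ISA's atomic (A) extension
+ * is not available.  Disabling preemption and interrupts makes the
+ * read-compare-write sequence atomic with respect to this hart.
+ */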
+SYSCALL_DEFINE4(sysriscv, unsigned long, cmd, unsigned long, arg1,
+	unsigned long, arg2, unsigned long, arg3)
+{
+	unsigned long flags;
+	unsigned long prev;
+	unsigned int *ptr;
+	unsigned int err;
+
+	switch (cmd) {
+	case RISCV_ATOMIC_CMPXCHG:
+		ptr = (unsigned int *)arg1;
+		if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned int)))
+			return -EFAULT;
+
+		preempt_disable();
+		raw_local_irq_save(flags);
+		err = __get_user(prev, ptr);
+		if (likely(!err && prev == arg2))
+			err = __put_user(arg3, ptr);
+		raw_local_irq_restore(flags);
+		preempt_enable();
+
+		return unlikely(err) ? err : prev;
+
+	case RISCV_ATOMIC_CMPXCHG64:
+		ptr = (unsigned int *)arg1;
+		if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned long)))
+			return -EFAULT;
+
+		preempt_disable();
+		raw_local_irq_save(flags);
+		err = __get_user(prev, ptr);
+		if (likely(!err && prev == arg2))
+			err = __put_user(arg3, ptr);
+		raw_local_irq_restore(flags);
+		preempt_enable();
+
+		return unlikely(err) ? err : prev;
+	}
+
+	return -EINVAL;
+}
+#endif /* CONFIG_SYSRISCV_ATOMIC */
diff --git a/arch/riscv/kernel/syscall_table.c b/arch/riscv/kernel/syscall_table.c
new file mode 100644
index 000000000000..8fa239b67cbc
--- /dev/null
+++ b/arch/riscv/kernel/syscall_table.c
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2009 Arnd Bergmann <arnd@arndb.de>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/syscalls.h>
+
+#include <asm/syscalls.h>
+
+#undef __SYSCALL
+#define __SYSCALL(nr, call)	[nr] = (call),
+
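+/*
+ * Every slot defaults to sys_ni_syscall; including <asm/unistd.h> with the
+ * __SYSCALL() definition above then overrides the implemented entries.
+ */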
+void *sys_call_table[__NR_syscalls] = {
+	[0 ... __NR_syscalls - 1] = sys_ni_syscall,
+#include <asm/unistd.h>
+};
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
new file mode 100644
index 000000000000..3a4698199ecf
--- /dev/null
+++ b/arch/riscv/kernel/traps.c
@@ -0,0 +1,183 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
+#include <linux/sched/signal.h>
+#include <linux/signal.h>
+#include <linux/kdebug.h>
+#include <linux/uaccess.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/irq.h>
+
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/csr.h>
+
+int show_unhandled_signals = 1;
+
+extern asmlinkage void handle_exception(void);
+
+static DEFINE_SPINLOCK(die_lock);
+
+void die(struct pt_regs *regs, const char *str)
+{
+	static int die_counter;
+	int ret;
+
+	oops_enter();
+
+	spin_lock_irq(&die_lock);
+	console_verbose();
+	bust_spinlocks(1);
+
+	pr_emerg("%s [#%d]\n", str, ++die_counter);
+	print_modules();
+	show_regs(regs);
+
+	ret = notify_die(DIE_OOPS, str, regs, 0, regs->scause, SIGSEGV);
+
+	bust_spinlocks(0);
+	add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
+	spin_unlock_irq(&die_lock);
+	oops_exit();
+
+	if (in_interrupt())
+		panic("Fatal exception in interrupt");
+	if (panic_on_oops)
+		panic("Fatal exception");
+	if (ret != NOTIFY_STOP)
+		do_exit(SIGSEGV);
+}
+
+static inline void do_trap_siginfo(int signo, int code,
+	unsigned long addr, struct task_struct *tsk)
+{
+	siginfo_t info;
+
+	info.si_signo = signo;
+	info.si_errno = 0;
+	info.si_code = code;
+	info.si_addr = (void __user *)addr;
+	force_sig_info(signo, &info, tsk);
+}
+
+void do_trap(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, struct task_struct *tsk)
+{
+	if (show_unhandled_signals && unhandled_signal(tsk, signo)
+	    && printk_ratelimit()) {
+		pr_info("%s[%d]: unhandled signal %d code 0x%x at 0x" REG_FMT,
+			tsk->comm, task_pid_nr(tsk), signo, code, addr);
+		print_vma_addr(KERN_CONT " in ", GET_IP(regs));
+		pr_cont("\n");
+		show_regs(regs);
+	}
+
+	do_trap_siginfo(signo, code, addr, tsk);
+}
+
+static void do_trap_error(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, const char *str)
+{
+	if (user_mode(regs)) {
+		do_trap(regs, signo, code, addr, current);
+	} else {
+		if (!fixup_exception(regs))
+			die(regs, str);
+	}
+}
+
+#define DO_ERROR_INFO(name, signo, code, str)				\
+asmlinkage void name(struct pt_regs *regs)				\
+{									\
+	do_trap_error(regs, signo, code, regs->sepc, "Oops - " str);	\
+}
+
+DO_ERROR_INFO(do_trap_unknown,
+	SIGILL, ILL_ILLTRP, "unknown exception");
+DO_ERROR_INFO(do_trap_insn_misaligned,
+	SIGBUS, BUS_ADRALN, "instruction address misaligned");
+DO_ERROR_INFO(do_trap_insn_fault,
+	SIGBUS, BUS_ADRALN, "instruction access fault");
+DO_ERROR_INFO(do_trap_insn_illegal,
+	SIGILL, ILL_ILLOPC, "illegal instruction");
+DO_ERROR_INFO(do_trap_load_misaligned,
+	SIGBUS, BUS_ADRALN, "load address misaligned");
+DO_ERROR_INFO(do_trap_load_fault,
+	SIGSEGV, SEGV_ACCERR, "load access fault");
+DO_ERROR_INFO(do_trap_store_misaligned,
+	SIGBUS, BUS_ADRALN, "store (or AMO) address misaligned");
+DO_ERROR_INFO(do_trap_store_fault,
+	SIGSEGV, SEGV_ACCERR, "store (or AMO) access fault");
+DO_ERROR_INFO(do_trap_ecall_u,
+	SIGILL, ILL_ILLTRP, "environment call from U-mode");
+DO_ERROR_INFO(do_trap_ecall_s,
+	SIGILL, ILL_ILLTRP, "environment call from S-mode");
+DO_ERROR_INFO(do_trap_ecall_m,
+	SIGILL, ILL_ILLTRP, "environment call from M-mode");
+
+asmlinkage void do_trap_break(struct pt_regs *regs)
+{
+#ifdef CONFIG_GENERIC_BUG
+	if (!user_mode(regs)) {
+		enum bug_trap_type type;
+
+		type = report_bug(regs->sepc, regs);
+		switch (type) {
+		case BUG_TRAP_TYPE_NONE:
+			break;
+		case BUG_TRAP_TYPE_WARN:
+			regs->sepc += sizeof(bug_insn_t);
+			return;
+		case BUG_TRAP_TYPE_BUG:
+			die(regs, "Kernel BUG");
+		}
+	}
+#endif /* CONFIG_GENERIC_BUG */
+
+	do_trap_siginfo(SIGTRAP, TRAP_BRKPT, regs->sepc, current);
+	regs->sepc += 0x4;
+}
+
+#ifdef CONFIG_GENERIC_BUG
+int is_valid_bugaddr(unsigned long pc)
+{
+	bug_insn_t insn;
+
+	if (pc < PAGE_OFFSET)
+		return 0;
+	if (probe_kernel_address((bug_insn_t __user *)pc, insn))
+		return 0;
+	return (insn == __BUG_INSN);
+}
+#endif /* CONFIG_GENERIC_BUG */
+
+void __init trap_init(void)
+{
+	int hart = smp_processor_id();
+
+	/* Set the sscratch register to 0, indicating to the exception vector
+	 * that we are presently executing in the kernel
+	 */
+	csr_write(sscratch, 0);
+	/* Set the exception vector address */
+	csr_write(stvec, &handle_exception);
+	/* Enable software interrupts and setup initial mask */
+	csr_write(sie,
+		  SIE_SSIE | atomic_long_read(&per_cpu(riscv_early_sie, hart))
+		);
+}
diff --git a/arch/riscv/kernel/vdso.c b/arch/riscv/kernel/vdso.c
new file mode 100644
index 000000000000..e8a178df8144
--- /dev/null
+++ b/arch/riscv/kernel/vdso.c
@@ -0,0 +1,125 @@
+/*
+ * Copyright (C) 2004 Benjamin Herrenschmidt, IBM Corp.
+ *                    <benh@kernel.crashing.org>
+ * Copyright (C) 2012 ARM Limited
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/binfmts.h>
+#include <linux/err.h>
+
+#include <asm/vdso.h>
+
+extern char vdso_start[], vdso_end[];
+
+static unsigned int vdso_pages;
+static struct page **vdso_pagelist;
+
+/*
+ * The vDSO data page.
+ */
+static union {
+	struct vdso_data	data;
+	u8			page[PAGE_SIZE];
+} vdso_data_store __page_aligned_data;
+struct vdso_data *vdso_data = &vdso_data_store.data;
+
+static int __init vdso_init(void)
+{
+	unsigned int i;
+
+	vdso_pages = (vdso_end - vdso_start) >> PAGE_SHIFT;
+	vdso_pagelist =
+		kcalloc(vdso_pages + 1, sizeof(struct page *), GFP_KERNEL);
+	if (unlikely(vdso_pagelist == NULL)) {
+		pr_err("vdso: pagelist allocation failed\n");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < vdso_pages; i++) {
+		struct page *pg;
+
+		pg = virt_to_page(vdso_start + (i << PAGE_SHIFT));
+		ClearPageReserved(pg);
+		vdso_pagelist[i] = pg;
+	}
+	vdso_pagelist[i] = virt_to_page(vdso_data);
+
+	return 0;
+}
+arch_initcall(vdso_init);
+
+int arch_setup_additional_pages(struct linux_binprm *bprm,
+	int uses_interp)
+{
+	struct mm_struct *mm = current->mm;
+	unsigned long vdso_base, vdso_len;
+	int ret;
+
+	vdso_len = (vdso_pages + 1) << PAGE_SHIFT;
+
+	down_write(&mm->mmap_sem);
+	vdso_base = get_unmapped_area(NULL, 0, vdso_len, 0, 0);
+	if (unlikely(IS_ERR_VALUE(vdso_base))) {
+		ret = vdso_base;
+		goto end;
+	}
+
+	/*
+	 * Put vDSO base into mm struct. We need to do this before calling
+	 * install_special_mapping or the perf counter mmap tracking code
+	 * will fail to recognise it as a vDSO (since arch_vma_name fails).
+	 */
+	mm->context.vdso = (void *)vdso_base;
+
+	ret = install_special_mapping(mm, vdso_base, vdso_len,
+		(VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC),
+		vdso_pagelist);
+
+	if (unlikely(ret))
+		mm->context.vdso = NULL;
+
+end:
+	up_write(&mm->mmap_sem);
+	return ret;
+}
+
+const char *arch_vma_name(struct vm_area_struct *vma)
+{
+	if (vma->vm_mm && (vma->vm_start == (long)vma->vm_mm->context.vdso))
+		return "[vdso]";
+	return NULL;
+}
+
+/*
+ * Function stubs to prevent linker errors when AT_SYSINFO_EHDR is defined
+ */
+
+int in_gate_area_no_mm(unsigned long addr)
+{
+	return 0;
+}
+
+int in_gate_area(struct mm_struct *mm, unsigned long addr)
+{
+	return 0;
+}
+
+struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
+{
+	return NULL;
+}
diff --git a/arch/riscv/kernel/vdso/.gitignore b/arch/riscv/kernel/vdso/.gitignore
new file mode 100644
index 000000000000..f8b69d84238e
--- /dev/null
+++ b/arch/riscv/kernel/vdso/.gitignore
@@ -0,0 +1 @@
+vdso.lds
diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
new file mode 100644
index 000000000000..04f3ec75b217
--- /dev/null
+++ b/arch/riscv/kernel/vdso/Makefile
@@ -0,0 +1,61 @@
+# Derived from arch/{arm64,tile}/kernel/vdso/Makefile
+
+obj-vdso := sigreturn.o
+
+# Build rules
+targets := $(obj-vdso) vdso.so vdso.so.dbg
+obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
+
+#ccflags-y := -shared -fno-common -fno-builtin
+#ccflags-y += -nostdlib -Wl,-soname=linux-vdso.so.1 \
+		$(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
+
+CFLAGS_vdso.so = $(c_flags)
+CFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \
+	$(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
+CFLAGS_vdso_syms.o = -r
+
+obj-y += vdso.o
+
+# We also create a special relocatable object that should mirror the symbol
+# table and layout of the linked DSO.  With ld -R we can then refer to
+# these symbols in the kernel code rather than hand-coded addresses.
+extra-y += vdso.lds vdso-syms.o
+$(obj)/built-in.o: $(obj)/vdso-syms.o
+$(obj)/built-in.o: ld_flags += -R $(obj)/vdso-syms.o
+
+CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
+
+# Force dependency
+$(obj)/vdso.o : $(obj)/vdso.so
+
+# Link rule for the *.so file; *.lds must be first
+$(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso)
+	$(call if_changed,vdsold)
+$(obj)/vdso-syms.o: $(src)/vdso.lds $(obj-vdso)
+	$(call if_changed,vdsold)
+
+# Strip rule for the *.so file
+$(obj)/%.so: OBJCOPYFLAGS := -S
+$(obj)/%.so: $(obj)/%.so.dbg FORCE
+	$(call if_changed,objcopy)
+
+# Assembly rules for the *.S files
+$(obj-vdso): %.o: %.S
+	$(call if_changed_dep,vdsoas)
+
+# Actual build commands
+quiet_cmd_vdsold = VDSOLD  $@
+      cmd_vdsold = $(CC) $(c_flags) -nostdlib $(CFLAGS_$(@F)) -Wl,-n -Wl,-T $^ -o $@
+quiet_cmd_vdsoas = VDSOAS  $@
+      cmd_vdsoas = $(CC) $(a_flags) -c -o $@ $<
+
+# Install commands for the unstripped file
+quiet_cmd_vdso_install = INSTALL $@
+      cmd_vdso_install = cp $(obj)/$@.dbg $(MODLIB)/vdso/$@
+
+vdso.so: $(obj)/vdso.so.dbg
+	@mkdir -p $(MODLIB)/vdso
+	$(call cmd,vdso_install)
+
+vdso_install: vdso.so
diff --git a/arch/riscv/kernel/vdso/sigreturn.S b/arch/riscv/kernel/vdso/sigreturn.S
new file mode 100644
index 000000000000..f5aa3d72acfb
--- /dev/null
+++ b/arch/riscv/kernel/vdso/sigreturn.S
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/linkage.h>
+#include <asm/unistd.h>
+
+	.text
+ENTRY(__vdso_rt_sigreturn)
+	.cfi_startproc
+	.cfi_signal_frame
+	li a7, __NR_rt_sigreturn
+	scall
+	.cfi_endproc
+ENDPROC(__vdso_rt_sigreturn)
diff --git a/arch/riscv/kernel/vdso/vdso.S b/arch/riscv/kernel/vdso/vdso.S
new file mode 100644
index 000000000000..7055de5f9174
--- /dev/null
+++ b/arch/riscv/kernel/vdso/vdso.S
@@ -0,0 +1,27 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <asm/page.h>
+
+	__PAGE_ALIGNED_DATA
+
+	.globl vdso_start, vdso_end
+	.balign PAGE_SIZE
+vdso_start:
+	.incbin "arch/riscv/kernel/vdso/vdso.so"
+	.balign PAGE_SIZE
+vdso_end:
+
+	.previous
diff --git a/arch/riscv/kernel/vdso/vdso.lds.S b/arch/riscv/kernel/vdso/vdso.lds.S
new file mode 100644
index 000000000000..24942cb25694
--- /dev/null
+++ b/arch/riscv/kernel/vdso/vdso.lds.S
@@ -0,0 +1,76 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+OUTPUT_ARCH(riscv)
+
+SECTIONS
+{
+	. = SIZEOF_HEADERS;
+
+	.hash		: { *(.hash) }			:text
+	.gnu.hash	: { *(.gnu.hash) }
+	.dynsym		: { *(.dynsym) }
+	.dynstr		: { *(.dynstr) }
+	.gnu.version	: { *(.gnu.version) }
+	.gnu.version_d	: { *(.gnu.version_d) }
+	.gnu.version_r	: { *(.gnu.version_r) }
+
+	.note		: { *(.note.*) }		:text	:note
+	.dynamic	: { *(.dynamic) }		:text	:dynamic
+
+	.eh_frame_hdr	: { *(.eh_frame_hdr) }		:text	:eh_frame_hdr
+	.eh_frame	: { KEEP (*(.eh_frame)) }	:text
+
+	.rodata		: { *(.rodata .rodata.* .gnu.linkonce.r.*) }
+
+	/*
+	 * This linker script is used both with -r and with -shared.
+	 * For the layouts to match, we need to skip more than enough
+	 * space for the dynamic symbol table, etc. If this amount is
+	 * insufficient, ld -shared will error; simply increase it here.
+	 */
+	. = 0x800;
+	.text		: { *(.text .text.*) }		:text
+
+	.data		: {
+		*(.got.plt) *(.got)
+		*(.data .data.* .gnu.linkonce.d.*)
+		*(.dynbss)
+		*(.bss .bss.* .gnu.linkonce.b.*)
+	}
+}
+
+/*
+ * We must supply the ELF program headers explicitly to get just one
+ * PT_LOAD segment, and set the flags explicitly to make segments read-only.
+ */
+PHDRS
+{
+	text		PT_LOAD		FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
+	dynamic		PT_DYNAMIC	FLAGS(4);		/* PF_R */
+	note		PT_NOTE		FLAGS(4);		/* PF_R */
+	eh_frame_hdr	PT_GNU_EH_FRAME;
+}
+
+/*
+ * This controls what symbols we export from the DSO.
+ */
+VERSION
+{
+	LINUX_2.6 {
+	global:
+		__vdso_rt_sigreturn;
+	local: *;
+	};
+}
+
diff --git a/arch/riscv/kernel/vmlinux.lds.S b/arch/riscv/kernel/vmlinux.lds.S
new file mode 100644
index 000000000000..ece84991609c
--- /dev/null
+++ b/arch/riscv/kernel/vmlinux.lds.S
@@ -0,0 +1,92 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#define LOAD_OFFSET PAGE_OFFSET
+#include <asm/vmlinux.lds.h>
+#include <asm/page.h>
+#include <asm/cache.h>
+#include <asm/thread_info.h>
+
+OUTPUT_ARCH(riscv)
+ENTRY(_start)
+
+jiffies = jiffies_64;
+
+SECTIONS
+{
+	/* Beginning of code and text segment */
+	. = LOAD_OFFSET;
+	_start = .;
+	__init_begin = .;
+	HEAD_TEXT_SECTION
+	INIT_TEXT_SECTION(PAGE_SIZE)
+	INIT_DATA_SECTION(16)
+	/* we have to discard exit text and such at runtime, not link time */
+	.exit.text :
+	{
+		EXIT_TEXT
+	}
+	.exit.data :
+	{
+		EXIT_DATA
+	}
+	PERCPU_SECTION(L1_CACHE_BYTES)
+	__init_end = .;
+
+	.text : {
+		_text = .;
+		_stext = .;
+		TEXT_TEXT
+		SCHED_TEXT
+		CPUIDLE_TEXT
+		LOCK_TEXT
+		KPROBES_TEXT
+		ENTRY_TEXT
+		IRQENTRY_TEXT
+		*(.fixup)
+		_etext = .;
+	}
+
+	/* Start of data section */
+	_sdata = .;
+	RO_DATA_SECTION(L1_CACHE_BYTES)
+	.srodata : {
+		*(.srodata*)
+	}
+
+	RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+	.sdata : {
+		__global_pointer$ = . + 0x800;
+		*(.sdata*)
+		/* End of data section */
+		_edata = .;
+		*(.sbss*)
+	}
+
+	BSS_SECTION(0, 0, 0)
+
+	EXCEPTION_TABLE(0x10)
+	NOTES
+
+	.rel.dyn : {
+		*(.rel.dyn*)
+	}
+
+	_end = .;
+
+	STABS_DEBUG
+	DWARF_DEBUG
+
+	DISCARDS
+}
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread
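A side note on the sys_call_table construction in syscall_table.c above: it relies on designated array initializers. A GNU range designator first points every slot at sys_ni_syscall, and the __SYSCALL() entries pulled in by re-including asm/unistd.h are later initializers that override individual slots, since a later initializer for the same index takes precedence. A minimal stand-alone sketch of the same pattern follows; the table size and handler names are invented purely for illustration and compile with plain GCC:

  #include <stdio.h>

  /* illustrative handlers only -- not the kernel's table */
  static int sys_ni(void)    { return -1; }  /* default: "not implemented" */
  static int sys_hello(void) { return 42; }  /* stand-in for a real handler */

  #define NCALLS 8
  #define SYSCALL(nr, call) [nr] = (call),

  static int (*call_table[NCALLS])(void) = {
          [0 ... NCALLS - 1] = sys_ni,  /* GNU range designator fills every slot */
          SYSCALL(3, sys_hello)         /* a later initializer overrides slot 3 */
  };

  int main(void)
  {
          printf("%d %d\n", call_table[0](), call_table[3]());  /* prints: -1 42 */
          return 0;
  }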

* [PATCH 17/17] RISC-V: Makefile and Kconfig
  2017-06-06 22:59   ` Palmer Dabbelt
@ 2017-06-06 23:00     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-06 23:00 UTC (permalink / raw)
  To: linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Palmer Dabbelt

This patch adds RISC-V support to the build infrastructure.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 Makefile                                  |   3 +-
 arch/riscv/Kconfig                        | 318 ++++++++++++++++++++++++++++++
 arch/riscv/Makefile                       |  64 ++++++
 arch/riscv/configs/freedom-u500_defconfig |  53 +++++
 arch/riscv/configs/spike32_defconfig      |  50 +++++
 arch/riscv/configs/spike64_defconfig      |  46 +++++
 6 files changed, 533 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/Kconfig
 create mode 100644 arch/riscv/Makefile
 create mode 100644 arch/riscv/configs/freedom-u500_defconfig
 create mode 100644 arch/riscv/configs/spike32_defconfig
 create mode 100644 arch/riscv/configs/spike64_defconfig

diff --git a/Makefile b/Makefile
index 853ae9179af9..88711cbcc3ca 100644
--- a/Makefile
+++ b/Makefile
@@ -232,7 +232,8 @@ SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
 				  -e s/arm.*/arm/ -e s/sa110/arm/ \
 				  -e s/s390x/s390/ -e s/parisc64/parisc/ \
 				  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
-				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ )
+				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
+				  -e s/riscv.*/riscv/)
 
 # Cross compiling and selecting different set of gcc/bin-utils
 # ---------------------------------------------------------------------------
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
new file mode 100644
index 000000000000..64fcb17886c6
--- /dev/null
+++ b/arch/riscv/Kconfig
@@ -0,0 +1,318 @@
+#
+# For a description of the syntax of this configuration file,
+# see Documentation/kbuild/kconfig-language.txt.
+#
+
+config RISCV
+	def_bool y
+	select OF
+	select OF_EARLY_FLATTREE
+	select OF_IRQ
+	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
+	select ARCH_WANT_FRAME_POINTERS
+	select CLONE_BACKWARDS
+	select COMMON_CLK
+	select GENERIC_CLOCKEVENTS
+	select GENERIC_CPU_DEVICES
+	select GENERIC_IRQ_SHOW
+	select GENERIC_PCI_IOMAP
+	select GENERIC_STRNCPY_FROM_USER
+	select GENERIC_STRNLEN_USER
+	select GENERIC_SMP_IDLE_THREAD
+	select GENERIC_ATOMIC64 if !64BIT || !ISA_A
+	select ARCH_WANT_OPTIONAL_GPIOLIB
+	select HAVE_MEMBLOCK
+	select HAVE_DMA_API_DEBUG
+	select HAVE_DMA_CONTIGUOUS
+	select HAVE_GENERIC_DMA_COHERENT
+	select IRQ_DOMAIN
+	select NO_BOOTMEM
+	select ISA_A if SMP
+	select SYSRISCV_ATOMIC if !ISA_A
+	select SPARSE_IRQ
+	select SYSCTL_EXCEPTION_TRACE
+	select HAVE_ARCH_TRACEHOOK
+	select MODULES_USE_ELF_RELA if MODULES
+	select THREAD_INFO_IN_TASK
+	select RISCV_IRQ_INTC
+
+config MMU
+	def_bool y
+
+# even on 32-bit, physical (and DMA) addresses are > 32-bits
+config ARCH_PHYS_ADDR_T_64BIT
+	def_bool y
+
+config ARCH_DMA_ADDR_T_64BIT
+	def_bool y
+
+config STACKTRACE_SUPPORT
+	def_bool y
+
+config RWSEM_GENERIC_SPINLOCK
+	def_bool y
+
+config GENERIC_BUG
+	def_bool y
+	depends on BUG
+	select GENERIC_BUG_RELATIVE_POINTERS if 64BIT
+
+config GENERIC_BUG_RELATIVE_POINTERS
+	bool
+
+config GENERIC_CALIBRATE_DELAY
+	def_bool y
+
+config GENERIC_CSUM
+	def_bool y
+
+config GENERIC_HWEIGHT
+	def_bool y
+
+config PGTABLE_LEVELS
+	int
+	default 3 if 64BIT
+	default 2
+
+config HAVE_KPROBES
+	def_bool n
+
+config DMA_NOOP_OPS
+	def_bool y
+
+menu "Platform type"
+
+config SMP
+	bool "Symmetric Multi-Processing"
+	help
+	  This enables support for systems with more than one CPU.  If
+	  you say N here, the kernel will run on single and
+	  multiprocessor machines, but will use only one CPU of a
+	  multiprocessor machine. If you say Y here, the kernel will run
+	  on many, but not all, single processor machines. On a single
+	  processor machine, the kernel will run faster if you say N
+	  here.
+
+	  If you don't know what to do here, say N.
+
+config NR_CPUS
+	int "Maximum number of CPUs (2-32)"
+	range 2 32
+	depends on SMP
+	default "8"
+
+config CPU_SUPPORTS_32BIT_KERNEL
+	bool
+config CPU_SUPPORTS_64BIT_KERNEL
+	bool
+
+choice
+	prompt "Base ISA"
+	default ARCH_RV64I
+
+config ARCH_RV32I
+	bool "RV32I"
+	select CPU_SUPPORTS_32BIT_KERNEL
+	select 32BIT
+	select GENERIC_ASHLDI3
+	select GENERIC_ASHRDI3
+	select GENERIC_LSHRDI3
+
+config ARCH_RV64I
+	bool "RV64I"
+	select CPU_SUPPORTS_64BIT_KERNEL
+	select 64BIT
+
+endchoice
+
+choice
+	prompt "CPU Tuning"
+	default TUNE_GENERIC
+
+config TUNE_GENERIC
+	bool "generic"
+
+endchoice
+
+config ISA_C
+	bool "Emit compressed instructions when building Linux"
+	default n
+	help
+	   Adds "C" to the ISA subsets that the toolchain is allowed to emit
+	   when building Linux, which results in compressed instructions in the
+	   Linux binary.
+
+	   If you don't know what to do here, say Y.
+
+config ISA_A
+	bool "Emit atomic instructions when building Linux"
+	default y
+	help
+	   Adds "A" to the ISA subsets that the toolchain is allowed to emit
+	   when building Linux, which results in atomic instructions in the
+	   Linux binary.
+
+	   If you don't know what to do here, say Y.
+
+config SYSRISCV_ATOMIC
+	bool "Include support for atomic operation syscalls"
+	default !ISA_A
+	help
+	  If atomic memory instructions are present, i.e.,
+	  CONFIG_ISA_A, this includes support for the syscall that
+	  provides atomic accesses.  This is only useful to run
+	  binaries that require atomic access but were compiled with
+	  -mno-atomic.
+
+	  If CONFIG_ISA_A is unset, this option is mandatory.
+
+	  If you don't know what to do here, say N.
+
+config RV_PUM
+	def_bool y
+	prompt "Protect User Memory" if EXPERT
+	---help---
+	  Protect User Memory (PUM) prevents the kernel from inadvertently
+	  accessing user-space memory.  There is a small performance cost
+	  and kernel size increase if this is enabled.
+
+	  If unsure, say Y.
+
+endmenu
+
+menu "Kernel type"
+
+choice
+	prompt "Kernel code model"
+	default 64BIT
+
+config 32BIT
+	bool "32-bit kernel"
+	depends on CPU_SUPPORTS_32BIT_KERNEL
+	help
+	  Select this option to build a 32-bit kernel.
+
+config 64BIT
+	bool "64-bit kernel"
+	depends on CPU_SUPPORTS_64BIT_KERNEL
+	help
+	  Select this option to build a 64-bit kernel.
+
+endchoice
+
+source "mm/Kconfig"
+
+source "kernel/Kconfig.preempt"
+
+source "kernel/Kconfig.hz"
+
+endmenu
+
+menu "Bus support"
+
+config PCI
+	bool "PCI support"
+	select PCI_MSI
+	help
+	  This feature enables support for the PCI bus system. If you say Y
+	  here, the kernel will include drivers and infrastructure code
+	  to support PCI bus devices.
+
+	  If you don't know what to do here, say Y.
+
+config PCI_DOMAINS
+	def_bool PCI
+
+config PCI_DOMAINS_GENERIC
+	def_bool PCI
+
+source "drivers/pci/Kconfig"
+
+endmenu
+
+source "init/Kconfig"
+
+source "kernel/Kconfig.freezer"
+
+menu "Executable file formats"
+
+source "fs/Kconfig.binfmt"
+
+endmenu
+
+menu "Power management options"
+
+source kernel/power/Kconfig
+
+endmenu
+
+source "net/Kconfig"
+
+source "drivers/Kconfig"
+
+source "fs/Kconfig"
+
+menu "Kernel hacking"
+
+config CMDLINE_BOOL
+	bool "Built-in kernel command line"
+	default n
+	help
+	  For most platforms, it is the firmware or second-stage bootloader
+	  that specifies the kernel command line options by default.
+	  However, it might be necessary or advantageous to either override
+	  the default kernel command line or add a few extra options to it.
+	  For such cases, this option allows hardcoding command line options
+	  directly into the kernel.
+
+	  For that, choose 'Y' here and fill in the extra boot parameters
+	  in CONFIG_CMDLINE.
+
+	  The built-in options will be concatenated to the default command
+	  line if CMDLINE_OVERRIDE is set to 'N'. Otherwise, the default
+	  command line will be ignored and replaced by the built-in string.
+
+config CMDLINE
+	string "Built-in kernel command string"
+	depends on CMDLINE_BOOL
+	default ""
+	help
+	  Supply command-line options at build time by entering them here.
+
+config CMDLINE_OVERRIDE
+	bool "Built-in command line overrides bootloader arguments"
+	default n
+	depends on CMDLINE_BOOL
+	help
+	  Set this option to 'Y' to have the kernel ignore the bootloader
+	  or firmware command line.  Instead, the built-in command line
+	  will be used exclusively.
+
+	  If you don't know what to do here, say N.
+
+config EARLY_PRINTK
+	bool "Early printk"
+	default n
+	help
+	  This option enables special console drivers which allow the kernel
+	  to print messages very early in the bootup process.
+
+	  This is useful for kernel debugging when your machine crashes very
+	  early before the console code is initialized. For normal operation
+	  it is not recommended because it looks ugly and doesn't cooperate
+	  with klogd/syslogd or the X server. You should normally say N here,
+	  unless you want to debug such a crash.
+
+
+source "lib/Kconfig.debug"
+
+config CMDLINE_BOOL
+	bool
+endmenu
+
+source "security/Kconfig"
+
+source "crypto/Kconfig"
+
+source "lib/Kconfig"
+
diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
new file mode 100644
index 000000000000..66c4a5e383f9
--- /dev/null
+++ b/arch/riscv/Makefile
@@ -0,0 +1,64 @@
+# This file is included by the global makefile so that you can add your own
+# architecture-specific flags and dependencies. Remember to have actions
+# for "archclean" and "archdep" for cleaning up and making dependencies for
+# this architecture
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License.  See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+
+LDFLAGS         :=
+OBJCOPYFLAGS    := -O binary
+LDFLAGS_vmlinux :=
+KBUILD_AFLAGS_MODULE += -fPIC
+KBUILD_CFLAGS_MODULE += -fPIC
+
+KBUILD_DEFCONFIG = spike64_defconfig
+
+export BITS
+ifeq ($(CONFIG_ARCH_RV64I),y)
+	BITS := 64
+	UTS_MACHINE := riscv64
+
+	KBUILD_CFLAGS += -mabi=lp64
+	KBUILD_AFLAGS += -mabi=lp64
+	KBUILD_MARCH = rv64im
+	LDFLAGS += -melf64lriscv
+else
+	BITS := 32
+	UTS_MACHINE := riscv32
+
+	KBUILD_CFLAGS += -mabi=ilp32
+	KBUILD_AFLAGS += -mabi=ilp32
+	KBUILD_MARCH = rv32im
+	LDFLAGS += -melf32lriscv
+endif
+
+KBUILD_CFLAGS += -Wall
+
+ifeq ($(CONFIG_ISA_A),y)
+	KBUILD_ARCH_A = a
+endif
+ifeq ($(CONFIG_ISA_C),y)
+	KBUILD_ARCH_C = c
+endif
+
+KBUILD_AFLAGS += -march=$(KBUILD_MARCH)$(KBUILD_ARCH_A)fd$(KBUILD_ARCH_C)
+
+KBUILD_CFLAGS += -march=$(KBUILD_MARCH)$(KBUILD_ARCH_A)$(KBUILD_ARCH_C)
+KBUILD_CFLAGS += -mno-save-restore
+
+# GCC versions that support the "-mstrict-align" option default to allowing
+# unaligned accesses.  While unaligned accesses are explicitly allowed in the
+# RISC-V ISA, they're emulated by machine mode traps on all extant
+# architectures.  It's faster to have GCC emit only aligned accesses.
+KBUILD_CFLAGS += $(call cc-option,-mstrict-align)
+
+head-y := arch/riscv/kernel/head.o
+
+core-y += arch/riscv/kernel/ arch/riscv/mm/
+
+libs-y += arch/riscv/lib/
+
+all: vmlinux
diff --git a/arch/riscv/configs/freedom-u500_defconfig b/arch/riscv/configs/freedom-u500_defconfig
new file mode 100644
index 000000000000..b37908d45067
--- /dev/null
+++ b/arch/riscv/configs/freedom-u500_defconfig
@@ -0,0 +1,53 @@
+CONFIG_CROSS_COMPILE="riscv64-unknown-linux-gnu-"
+CONFIG_DEFAULT_HOSTNAME="ucbvax"
+# CONFIG_CROSS_MEMORY_ATTACH is not set
+# CONFIG_FHANDLE is not set
+CONFIG_NAMESPACES=y
+# CONFIG_SGETMASK_SYSCALL is not set
+CONFIG_EMBEDDED=y
+# CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_EFI_PARTITION is not set
+# CONFIG_IOSCHED_DEADLINE is not set
+# CONFIG_COMPACTION is not set
+CONFIG_HZ_100=y
+CONFIG_PCI_MSI=y
+CONFIG_NET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_FIRMWARE_IN_KERNEL is not set
+CONFIG_OF=y
+# CONFIG_BLK_DEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_VT is not set
+CONFIG_DEVKMEM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+CONFIG_FB=y
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_EXT2_FS=y
+# CONFIG_FILE_LOCKING is not set
+# CONFIG_DNOTIFY is not set
+# CONFIG_INOTIFY_USER is not set
+# CONFIG_PROC_PAGE_MONITOR is not set
+# CONFIG_SYSFS is not set
+CONFIG_TMPFS=y
+# CONFIG_MISC_FILESYSTEMS is not set
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_PRINTK_TIME=y
+# CONFIG_UNUSED_SYMBOLS is not set
+CONFIG_DEBUG_SECTION_MISMATCH=y
+# CONFIG_FRAME_POINTER is not set
+# CONFIG_EARLY_PRINTK is not set
+# CONFIG_CRYPTO_HW is not set
+CONFIG_TTY=y
diff --git a/arch/riscv/configs/spike32_defconfig b/arch/riscv/configs/spike32_defconfig
new file mode 100644
index 000000000000..cf3431e94311
--- /dev/null
+++ b/arch/riscv/configs/spike32_defconfig
@@ -0,0 +1,50 @@
+CONFIG_64BIT=n
+CONFIG_32BIT=y
+CONFIG_ARCH_RV64I=n
+CONFIG_ARCH_RV32I=y
+CONFIG_PCI=y
+CONFIG_DEFAULT_HOSTNAME="ucbvax"
+# CONFIG_CROSS_MEMORY_ATTACH is not set
+# CONFIG_FHANDLE is not set
+CONFIG_NAMESPACES=y
+CONFIG_EMBEDDED=y
+# CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_EFI_PARTITION is not set
+# CONFIG_IOSCHED_DEADLINE is not set
+CONFIG_NET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_FIRMWARE_IN_KERNEL is not set
+# CONFIG_BLK_DEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_VT is not set
+CONFIG_DEVKMEM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+CONFIG_FB=y
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_EXT2_FS=y
+# CONFIG_FILE_LOCKING is not set
+# CONFIG_DNOTIFY is not set
+# CONFIG_INOTIFY_USER is not set
+# CONFIG_PROC_PAGE_MONITOR is not set
+# CONFIG_SYSFS is not set
+CONFIG_TMPFS=y
+# CONFIG_MISC_FILESYSTEMS is not set
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_SECTION_MISMATCH=y
+# CONFIG_FRAME_POINTER is not set
+# CONFIG_CRYPTO_HW is not set
+CONFIG_TTY=y
diff --git a/arch/riscv/configs/spike64_defconfig b/arch/riscv/configs/spike64_defconfig
new file mode 100644
index 000000000000..5ad6644df541
--- /dev/null
+++ b/arch/riscv/configs/spike64_defconfig
@@ -0,0 +1,46 @@
+CONFIG_PCI=y
+CONFIG_DEFAULT_HOSTNAME="ucbvax"
+# CONFIG_CROSS_MEMORY_ATTACH is not set
+# CONFIG_FHANDLE is not set
+CONFIG_NAMESPACES=y
+CONFIG_EMBEDDED=y
+# CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_EFI_PARTITION is not set
+# CONFIG_IOSCHED_DEADLINE is not set
+CONFIG_NET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_FIRMWARE_IN_KERNEL is not set
+# CONFIG_BLK_DEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_VT is not set
+CONFIG_DEVKMEM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+CONFIG_FB=y
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_EXT2_FS=y
+# CONFIG_FILE_LOCKING is not set
+# CONFIG_DNOTIFY is not set
+# CONFIG_INOTIFY_USER is not set
+# CONFIG_PROC_PAGE_MONITOR is not set
+# CONFIG_SYSFS is not set
+CONFIG_TMPFS=y
+# CONFIG_MISC_FILESYSTEMS is not set
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_SECTION_MISMATCH=y
+# CONFIG_FRAME_POINTER is not set
+# CONFIG_CRYPTO_HW is not set
+CONFIG_TTY=y
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread
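For reference, a minimal cross-build sketch with this series applied. It assumes a riscv64-unknown-linux-gnu- toolchain (the prefix referenced in freedom-u500_defconfig) is on the PATH; spike64_defconfig is the KBUILD_DEFCONFIG chosen in arch/riscv/Makefile above:

  # assumes riscv64-unknown-linux-gnu-gcc and friends are on the PATH
  $ make ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- spike64_defconfig
  $ make ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- vmlinux

ARCH=riscv selects arch/riscv directly; the riscv.* mapping added to the top-level Makefile's SUBARCH sed should only come into play when ARCH is left unset, i.e. for native builds on a RISC-V host.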

+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_FIRMWARE_IN_KERNEL is not set
+# CONFIG_BLK_DEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_VT is not set
+CONFIG_DEVKMEM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+CONFIG_FB=y
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_EXT2_FS=y
+# CONFIG_FILE_LOCKING is not set
+# CONFIG_DNOTIFY is not set
+# CONFIG_INOTIFY_USER is not set
+# CONFIG_PROC_PAGE_MONITOR is not set
+# CONFIG_SYSFS is not set
+CONFIG_TMPFS=y
+# CONFIG_MISC_FILESYSTEMS is not set
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_SECTION_MISMATCH=y
+# CONFIG_FRAME_POINTER is not set
+# CONFIG_CRYPTO_HW is not set
+CONFIG_TTY=y
diff --git a/arch/riscv/configs/spike64_defconfig b/arch/riscv/configs/spike64_defconfig
new file mode 100644
index 000000000000..5ad6644df541
--- /dev/null
+++ b/arch/riscv/configs/spike64_defconfig
@@ -0,0 +1,46 @@
+CONFIG_PCI=y
+CONFIG_DEFAULT_HOSTNAME="ucbvax"
+# CONFIG_CROSS_MEMORY_ATTACH is not set
+# CONFIG_FHANDLE is not set
+CONFIG_NAMESPACES=y
+CONFIG_EMBEDDED=y
+# CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_EFI_PARTITION is not set
+# CONFIG_IOSCHED_DEADLINE is not set
+CONFIG_NET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_FIRMWARE_IN_KERNEL is not set
+# CONFIG_BLK_DEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_VT is not set
+CONFIG_DEVKMEM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+CONFIG_FB=y
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_EXT2_FS=y
+# CONFIG_FILE_LOCKING is not set
+# CONFIG_DNOTIFY is not set
+# CONFIG_INOTIFY_USER is not set
+# CONFIG_PROC_PAGE_MONITOR is not set
+# CONFIG_SYSFS is not set
+CONFIG_TMPFS=y
+# CONFIG_MISC_FILESYSTEMS is not set
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_SECTION_MISMATCH=y
+# CONFIG_FRAME_POINTER is not set
+# CONFIG_CRYPTO_HW is not set
+CONFIG_TTY=y
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* Re: [PATCH 03/17] base: fix order of OF initialization
  2017-06-06 22:59     ` Palmer Dabbelt
  (?)
@ 2017-06-07  7:07     ` Geert Uytterhoeven
  2017-06-07  9:35         ` Mark Rutland
  -1 siblings, 1 reply; 283+ messages in thread
From: Geert Uytterhoeven @ 2017-06-07  7:07 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Linux-Arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert,
	patches, Wesley W. Terpstra, Rob Herring, Frank Rowand,
	devicetree

CC devicetree folks

On Wed, Jun 7, 2017 at 12:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> From: "Wesley W. Terpstra" <wesley@sifive.com>
>
> This fixes: [    0.010000] cpu cpu0: Error -2 creating of_node link
> ... which you get for every CPU on all architectures with an OF cpu/ node.
>
> This affects riscv, nios, etc.
>
> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> ---
>  drivers/base/init.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/base/init.c b/drivers/base/init.c
> index 48c0e220acc0..0dcd17e561d0 100644
> --- a/drivers/base/init.c
> +++ b/drivers/base/init.c
> @@ -31,9 +31,9 @@ void __init driver_init(void)
>         /* These are also core pieces, but must come after the
>          * core core pieces.
>          */
> +       of_core_init();
>         platform_bus_init();
>         cpu_dev_init();
>         memory_dev_init();
>         container_dev_init();
> -       of_core_init();
>  }
> --
> 2.13.0

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 08/17] dts: include documentation for the RISC-V interrupt controllers
  2017-06-06 22:59     ` Palmer Dabbelt
  (?)
@ 2017-06-07  7:11     ` Geert Uytterhoeven
  2017-06-07 10:13       ` Mark Rutland
  -1 siblings, 1 reply; 283+ messages in thread
From: Geert Uytterhoeven @ 2017-06-07  7:11 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Linux-Arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert,
	patches, Wesley W. Terpstra, Thomas Gleixner, Jason Cooper,
	Marc Zyngier, Rob Herring, Mark Rutland, devicetree

CC irqchip and devicetree folks

On Wed, Jun 7, 2017 at 12:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> From: "Wesley W. Terpstra" <wesley@sifive.com>
>
> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> ---
>  .../interrupt-controller/riscv,cpu-intc.txt        | 46 ++++++++++++++++++++++
>  .../bindings/interrupt-controller/riscv,plic0.txt  | 44 +++++++++++++++++++++
>  2 files changed, 90 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/interrupt-controller/riscv,cpu-intc.txt
>  create mode 100644 Documentation/devicetree/bindings/interrupt-controller/riscv,plic0.txt
>
> diff --git a/Documentation/devicetree/bindings/interrupt-controller/riscv,cpu-intc.txt b/Documentation/devicetree/bindings/interrupt-controller/riscv,cpu-intc.txt
> new file mode 100644
> index 000000000000..62f02e834ff9
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/interrupt-controller/riscv,cpu-intc.txt
> @@ -0,0 +1,46 @@
> +RISC-V Hart-Level Interrupt Controller (HLIC)
> +---------------------------------------------
> +
> +RISC-V cores include Control Status Registers (CSRs) which are local to each
> +hart and can be read or written by software. Some of these CSRs are used to
> +control local interrupts connected to the core.
> +
> +Typical examples of local interrupts on a RISC-V core include: software IPI
> +interrupts, timer interrupts, and a link to the PLIC interrupt controller.
> +
> +Required properties:
> +- compatible : "riscv,cpu-intc"
> +- #interrupt-cells : should be <1>
> +- interrupt-controller : Identifies the node as an interrupt controller
> +
> +Furthermore, this interrupt-controller MUST be embedded inside the cpu
> +definition of the hart whose CSRs control these local interrupts.
> +
> +Example:
> +
> +       cpu1: cpu@1 {
> +               clock-frequency = <1600000000>;
> +               compatible = "riscv";
> +               d-cache-block-size = <64>;
> +               d-cache-sets = <64>;
> +               d-cache-size = <16384>;
> +               d-tlb-sets = <1>;
> +               d-tlb-size = <32>;
> +               device_type = "cpu";
> +               i-cache-block-size = <64>;
> +               i-cache-sets = <64>;
> +               i-cache-size = <16384>;
> +               i-tlb-sets = <1>;
> +               i-tlb-size = <32>;
> +               mmu-type = "riscv,sv39";
> +               next-level-cache = <&L2>;
> +               reg = <1>;
> +               riscv,isa = "rv64imac";
> +               status = "okay";
> +               tlb-split;
> +               cpu1-intc: interrupt-controller {
> +                       #interrupt-cells = <1>;
> +                       compatible = "riscv,cpu-intc";
> +                       interrupt-controller;
> +               };
> +       };
> diff --git a/Documentation/devicetree/bindings/interrupt-controller/riscv,plic0.txt b/Documentation/devicetree/bindings/interrupt-controller/riscv,plic0.txt
> new file mode 100644
> index 000000000000..c05b5806f7d2
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/interrupt-controller/riscv,plic0.txt
> @@ -0,0 +1,44 @@
> +RISC-V Platform-Level Interrupt Controller (PLIC)
> +-------------------------------------------------
> +
> +RISC-V cores typically include a PLIC, which routes interrupts from multiple
> +devices to multiple hart contexts.  The PLIC is connected to the interrupt
> +controller embedded in a RISC-V core via the interrupt-related CSRs.
> +
> +A hart context is a privilege mode in a hardware execution thread.  For
> +example, in a 4-core system with 2-way SMT, you have 8 harts and probably
> +at least two privilege modes per hart: machine mode and supervisor mode.
> +
> +Each interrupt can be enabled on a per-context basis. Any context can claim
> +a pending enabled interrupt and then release it once it has been handled.
> +
> +Each interrupt has a configurable priority. Higher priority interrupts are
> +serviced first. Each context can specify a priority threshold. Interrupts
> +with priority below this threshold will not cause the PLIC to raise its
> +interrupt line leading to the context.
> +
> +Required properties:
> +- compatible : "riscv,plic0"
> +- #address-cells : should be <0>
> +- #interrupt-cells : should be <1>
> +- interrupt-controller : Identifies the node as an interrupt controller
> +- reg : Should contain 1 register range (address and length)
> +- riscv,ndev : Specifies the number of interrupts attached to the PLIC
> +- interrupts-extended : Specifies which contexts are connected to the PLIC
> +
> +Example:
> +
> +       plic: interrupt-controller@c000000 {
> +               #address-cells = <0>;
> +               #interrupt-cells = <1>;
> +               compatible = "riscv,plic0";
> +               interrupt-controller;
> +               interrupts-extended = <
> +                       &cpu0-intc 11
> +                       &cpu1-intc 11 &cpu1-intc 9
> +                       &cpu2-intc 11 &cpu2-intc 9
> +                       &cpu3-intc 11 &cpu3-intc 9
> +                       &cpu4-intc 11 &cpu4-intc 9>;
> +               reg = <0xc000000 0x4000000>;
> +               riscv,ndev = <10>;
> +       };
> --
> 2.13.0

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 09/17] clocksource/timer-riscv: New RISC-V Clocksource
  2017-06-06 22:59     ` Palmer Dabbelt
  (?)
@ 2017-06-07  7:12     ` Geert Uytterhoeven
  2017-06-07  7:25       ` Arnd Bergmann
  -1 siblings, 1 reply; 283+ messages in thread
From: Geert Uytterhoeven @ 2017-06-07  7:12 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Linux-Arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert,
	patches, Thomas Gleixner, Daniel Lezcano

CC clocksource folks

On Wed, Jun 7, 2017 at 12:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> The RISC-V ISA defines a single RTC as well as an SBI oneshot timer.
> This timer is present on all RISC-V systems.
>
> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> ---
>  drivers/clocksource/Kconfig       |   8 +++
>  drivers/clocksource/Makefile      |   1 +
>  drivers/clocksource/timer-riscv.c | 118 ++++++++++++++++++++++++++++++++++++++
>  3 files changed, 127 insertions(+)
>  create mode 100644 drivers/clocksource/timer-riscv.c
>
> diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
> index 545d541ae20e..1c2c6e7c7fab 100644
> --- a/drivers/clocksource/Kconfig
> +++ b/drivers/clocksource/Kconfig
> @@ -612,4 +612,12 @@ config CLKSRC_ST_LPC
>           Enable this option to use the Low Power controller timer
>           as clocksource.
>
> +config CLKSRC_RISCV
> +       #bool "Clocksource for the RISC-V platform"
> +       def_bool y if RISCV
> +       depends on RISCV
> +       help
> +         This enables a clocksource based on the RISC-V SBI timer, which is
> +         built in to all RISC-V systems.
> +
>  endmenu
> diff --git a/drivers/clocksource/Makefile b/drivers/clocksource/Makefile
> index 2b5b56a6f00f..408ed9d314dc 100644
> --- a/drivers/clocksource/Makefile
> +++ b/drivers/clocksource/Makefile
> @@ -73,3 +73,4 @@ obj-$(CONFIG_H8300_TMR16)             += h8300_timer16.o
>  obj-$(CONFIG_H8300_TPU)                        += h8300_tpu.o
>  obj-$(CONFIG_CLKSRC_ST_LPC)            += clksrc_st_lpc.o
>  obj-$(CONFIG_X86_NUMACHIP)             += numachip.o
> +obj-$(CONFIG_CLKSRC_RISCV)             += timer-riscv.o
> diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c
> new file mode 100644
> index 000000000000..04ef7b9130b3
> --- /dev/null
> +++ b/drivers/clocksource/timer-riscv.c
> @@ -0,0 +1,118 @@
> +/*
> + * Copyright (C) 2012 Regents of the University of California
> + * Copyright (C) 2017 SiFive
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful,
> + *   but WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + *   GNU General Public License for more details.
> + */
> +
> +#include <linux/clocksource.h>
> +#include <linux/clockchips.h>
> +#include <linux/interrupt.h>
> +#include <linux/irq.h>
> +#include <linux/delay.h>
> +#include <linux/of.h>
> +
> +#include <asm/irq.h>
> +#include <asm/csr.h>
> +#include <asm/sbi.h>
> +#include <asm/delay.h>
> +
> +unsigned long riscv_timebase;
> +
> +static DEFINE_PER_CPU(struct clock_event_device, clock_event);
> +
> +static int riscv_timer_set_next_event(unsigned long delta,
> +       struct clock_event_device *evdev)
> +{
> +       sbi_set_timer(get_cycles() + delta);
> +       return 0;
> +}
> +
> +static int riscv_timer_set_oneshot(struct clock_event_device *evt)
> +{
> +       /* no-op; only one mode */
> +       return 0;
> +}
> +
> +static int riscv_timer_set_shutdown(struct clock_event_device *evt)
> +{
> +       /* can't stop the clock! */
> +       return 0;
> +}
> +
> +static u64 riscv_rdtime(struct clocksource *cs)
> +{
> +       return get_cycles();
> +}
> +
> +static struct clocksource riscv_clocksource = {
> +       .name = "riscv_clocksource",
> +       .rating = 300,
> +       .read = riscv_rdtime,
> +#ifdef CONFIG_64BITS
> +       .mask = CLOCKSOURCE_MASK(64),
> +#else
> +       .mask = CLOCKSOURCE_MASK(32),
> +#endif /* CONFIG_64BITS */
> +       .flags = CLOCK_SOURCE_IS_CONTINUOUS,
> +};
> +
> +void riscv_timer_interrupt(void)
> +{
> +       int cpu = smp_processor_id();
> +       struct clock_event_device *evdev = &per_cpu(clock_event, cpu);
> +
> +       evdev->event_handler(evdev);
> +}
> +
> +void __init init_clockevent(void)
> +{
> +       int cpu = smp_processor_id();
> +       struct clock_event_device *ce = &per_cpu(clock_event, cpu);
> +
> +       *ce = (struct clock_event_device){
> +               .name = "riscv_timer_clockevent",
> +               .features = CLOCK_EVT_FEAT_ONESHOT,
> +               .rating = 300,
> +               .cpumask = cpumask_of(cpu),
> +               .set_next_event = riscv_timer_set_next_event,
> +               .set_state_oneshot  = riscv_timer_set_oneshot,
> +               .set_state_shutdown = riscv_timer_set_shutdown,
> +       };
> +
> +       /* Enable timer interrupts */
> +       csr_set(sie, SIE_STIE);
> +
> +       clockevents_config_and_register(ce, riscv_timebase, 100, 0x7fffffff);
> +}
> +
> +static unsigned long __init of_timebase(void)
> +{
> +       struct device_node *cpu;
> +       const __be32 *prop;
> +
> +       cpu = of_find_node_by_path("/cpus");
> +       if (cpu) {
> +               prop = of_get_property(cpu, "timebase-frequency", NULL);
> +               if (prop)
> +                       return be32_to_cpu(*prop);
> +       }
> +
> +       return 10000000;
> +}
> +
> +void __init time_init(void)
> +{
> +       riscv_timebase = of_timebase();
> +       lpj_fine = riscv_timebase / HZ;
> +
> +       clocksource_register_hz(&riscv_clocksource, riscv_timebase);
> +       init_clockevent();
> +}
> --
> 2.13.0

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 10/17] irqchip: New RISC-V PLIC Driver
  2017-06-06 23:00     ` Palmer Dabbelt
  (?)
@ 2017-06-07  7:13     ` Geert Uytterhoeven
  2017-06-07  7:55       ` Arnd Bergmann
  -1 siblings, 1 reply; 283+ messages in thread
From: Geert Uytterhoeven @ 2017-06-07  7:13 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Linux-Arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert,
	patches, Thomas Gleixner, Jason Cooper, Marc Zyngier

CC irqchip folks

On Wed, Jun 7, 2017 at 1:00 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> This patch adds a driver for the Platform Level Interrupt Controller
> (PLIC) specified as part of the RISC-V supervisor level ISA manual.
> The PLIC connects global interrupt sources to the local interrupt
> controller on each hart.  A PLIC is present on all RISC-V systems.
>
> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> ---
>  drivers/irqchip/Kconfig          |  12 ++
>  drivers/irqchip/Makefile         |   1 +
>  drivers/irqchip/irq-riscv-plic.c | 253 +++++++++++++++++++++++++++++++++++++++
>  3 files changed, 266 insertions(+)
>  create mode 100644 drivers/irqchip/irq-riscv-plic.c
>
> diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
> index 478f8ace2664..2906d63934ef 100644
> --- a/drivers/irqchip/Kconfig
> +++ b/drivers/irqchip/Kconfig
> @@ -301,3 +301,15 @@ config QCOM_IRQ_COMBINER
>         help
>           Say yes here to add support for the IRQ combiner devices embedded
>           in Qualcomm Technologies chips.
> +
> +config RISCV_PLIC
> +        bool "Platform-Level Interrupt Controller"
> +       depends on RISCV
> +        default y
> +        help
> +          This enables support for the PLIC chip found in standard RISC-V
> +          systems. The PLIC is the top-most interrupt controller found in
> +          the system, connected directly to the core complex. All other
> +          interrupt sources (MSI, GPIO, etc) are subordinate to the PLIC.
> +
> +          If you don't know what to do here, say Y.
> diff --git a/drivers/irqchip/Makefile b/drivers/irqchip/Makefile
> index b64c59b838a0..bed94cc89146 100644
> --- a/drivers/irqchip/Makefile
> +++ b/drivers/irqchip/Makefile
> @@ -76,3 +76,4 @@ obj-$(CONFIG_EZNPS_GIC)                       += irq-eznps.o
>  obj-$(CONFIG_ARCH_ASPEED)              += irq-aspeed-vic.o
>  obj-$(CONFIG_STM32_EXTI)               += irq-stm32-exti.o
>  obj-$(CONFIG_QCOM_IRQ_COMBINER)                += qcom-irq-combiner.o
> +obj-$(CONFIG_RISCV_PLIC)               += irq-riscv-plic.o
> diff --git a/drivers/irqchip/irq-riscv-plic.c b/drivers/irqchip/irq-riscv-plic.c
> new file mode 100644
> index 000000000000..906c8a62a911
> --- /dev/null
> +++ b/drivers/irqchip/irq-riscv-plic.c
> @@ -0,0 +1,253 @@
> +/*
> + * Copyright (C) 2017 SiFive
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful,
> + *   but WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + *   GNU General Public License for more details.
> + */
> +
> +#include <linux/interrupt.h>
> +#include <linux/io.h>
> +#include <linux/irq.h>
> +#include <linux/irqchip.h>
> +#include <linux/irqchip/chained_irq.h>
> +#include <linux/irqdomain.h>
> +#include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/of_address.h>
> +#include <linux/of_irq.h>
> +#include <linux/platform_device.h>
> +
> +/* From the RISC-V Privileged Spec v1.10:
> + *
> + * Global interrupt sources are assigned small unsigned integer identifiers,
> + * beginning at the value 1.  An interrupt ID of 0 is reserved to mean “no
> + * interrupt”.  Interrupt identifiers are also used to break ties when two or
> + * more interrupt sources have the same assigned priority. Smaller values of
> + * interrupt ID take precedence over larger values of interrupt ID.
> + *
> + * It's not defined what the largest device ID is, so we're just fixing
> + * MAX_DEVICES right here (which is named oddly, as there will never be a
> + * device 0).
> + */
> +#define MAX_DEVICES    1024
> +#define MAX_CONTEXTS   15872
> +
> +#define PRIORITY_BASE  0
> +#define ENABLE_BASE    0x2000
> +#define ENABLE_SIZE    0x80
> +#define HART_BASE      0x200000
> +#define HART_SIZE      0x1000
> +
> +struct plic_hart_context {
> +       u32 threshold;
> +       u32 claim;
> +};
> +
> +struct plic_enable_context {
> +       atomic_t mask[32]; // 32-bit * 32-entry
> +};
> +
> +struct plic_priority {
> +       u32 prio[MAX_DEVICES];
> +};
> +
> +struct plic_data {
> +       struct irq_chip         chip;
> +       struct irq_domain       *domain;
> +       u32                     ndev;
> +       void __iomem            *reg;
> +       int                     handlers;
> +       struct plic_handler     *handler;
> +       char                    name[30];
> +};
> +
> +struct plic_handler {
> +       struct plic_hart_context        *context;
> +       struct plic_data                *data;
> +};
> +
> +static inline
> +struct plic_hart_context *plic_hart_context(struct plic_data *data, size_t i)
> +{
> +       return (struct plic_hart_context *)((char *)data->reg + HART_BASE + HART_SIZE*i);
> +}
> +
> +static inline
> +struct plic_enable_context *plic_enable_context(struct plic_data *data, size_t i)
> +{
> +       return (struct plic_enable_context *)((char *)data->reg + ENABLE_BASE + ENABLE_SIZE*i);
> +}
> +
> +static inline
> +struct plic_priority *plic_priority(struct plic_data *data)
> +{
> +       return (struct plic_priority *)((char *)data->reg + PRIORITY_BASE);
> +}
> +
> +static void plic_disable(struct plic_data *data, int i, int hwirq)
> +{
> +       struct plic_enable_context *enable = plic_enable_context(data, i);
> +
> +       atomic_and(~(1 << (hwirq % 32)), &enable->mask[hwirq / 32]);
> +}
> +
> +static void plic_enable(struct plic_data *data, int i, int hwirq)
> +{
> +       struct plic_enable_context *enable = plic_enable_context(data, i);
> +
> +       atomic_or((1 << (hwirq % 32)), &enable->mask[hwirq / 32]);
> +}
> +
> +// There is no need to mask/unmask PLIC interrupts
> +// They are "masked" by reading claim and "unmasked" when writing it back.
> +static void plic_irq_mask(struct irq_data *d) { }
> +static void plic_irq_unmask(struct irq_data *d) { }
> +
> +static void plic_irq_enable(struct irq_data *d)
> +{
> +       struct plic_data *data = irq_data_get_irq_chip_data(d);
> +       struct plic_priority *priority = plic_priority(data);
> +       int i;
> +
> +       iowrite32(1, &priority->prio[d->hwirq]);
> +       for (i = 0; i < data->handlers; ++i)
> +               if (data->handler[i].context)
> +                       plic_enable(data, i, d->hwirq);
> +}
> +
> +static void plic_irq_disable(struct irq_data *d)
> +{
> +       struct plic_data *data = irq_data_get_irq_chip_data(d);
> +       struct plic_priority *priority = plic_priority(data);
> +       int i;
> +
> +       iowrite32(0, &priority->prio[d->hwirq]);
> +       for (i = 0; i < data->handlers; ++i)
> +               if (data->handler[i].context)
> +                       plic_disable(data, i, d->hwirq);
> +}
> +
> +static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq,
> +                             irq_hw_number_t hwirq)
> +{
> +       struct plic_data *data = d->host_data;
> +
> +       irq_set_chip_and_handler(irq, &data->chip, handle_simple_irq);
> +       irq_set_chip_data(irq, data);
> +       irq_set_noprobe(irq);
> +
> +       return 0;
> +}
> +
> +static const struct irq_domain_ops plic_irqdomain_ops = {
> +       .map    = plic_irqdomain_map,
> +       .xlate  = irq_domain_xlate_onecell,
> +};
> +
> +static void plic_chained_handle_irq(struct irq_desc *desc)
> +{
> +       struct plic_handler *handler = irq_desc_get_handler_data(desc);
> +       struct irq_chip *chip = irq_desc_get_chip(desc);
> +       struct irq_domain *domain = handler->data->domain;
> +       u32 what;
> +
> +       chained_irq_enter(chip, desc);
> +
> +       while ((what = ioread32(&handler->context->claim))) {
> +               int irq = irq_find_mapping(domain, what);
> +
> +               if (irq > 0)
> +                       generic_handle_irq(irq);
> +               else
> +                       handle_bad_irq(desc);
> +               iowrite32(what, &handler->context->claim);
> +       }
> +
> +       chained_irq_exit(chip, desc);
> +}
> +
> +static int plic_init(struct device_node *node, struct device_node *parent)
> +{
> +       struct plic_data *data;
> +       struct resource resource;
> +       int i, ok = 0;
> +
> +       data = kzalloc(sizeof(*data), GFP_KERNEL);
> +       if (WARN_ON(!data))
> +               return -ENOMEM;
> +
> +       data->reg = of_iomap(node, 0);
> +       if (WARN_ON(!data->reg))
> +               return -EIO;
> +
> +       of_property_read_u32(node, "riscv,ndev", &data->ndev);
> +       if (WARN_ON(!data->ndev))
> +               return -EINVAL;
> +
> +       data->handlers = of_irq_count(node);
> +       if (WARN_ON(!data->handlers))
> +               return -EINVAL;
> +
> +       data->handler =
> +               kcalloc(data->handlers, sizeof(*data->handler), GFP_KERNEL);
> +       if (WARN_ON(!data->handler))
> +               return -ENOMEM;
> +
> +       data->domain = irq_domain_add_linear(node, data->ndev+1, &plic_irqdomain_ops, data);
> +       if (WARN_ON(!data->domain))
> +               return -ENOMEM;
> +
> +       of_address_to_resource(node, 0, &resource);
> +       snprintf(data->name, sizeof(data->name),
> +                "riscv,plic0,%llx", resource.start);
> +       data->chip.name = data->name;
> +       data->chip.irq_mask = plic_irq_mask;
> +       data->chip.irq_unmask = plic_irq_unmask;
> +       data->chip.irq_enable = plic_irq_enable;
> +       data->chip.irq_disable = plic_irq_disable;
> +
> +       for (i = 0; i < data->handlers; ++i) {
> +               struct plic_handler *handler = &data->handler[i];
> +               struct of_phandle_args parent;
> +               int parent_irq, hwirq;
> +
> +               if (of_irq_parse_one(node, i, &parent))
> +                       continue;
> +               // skip context holes
> +               if (parent.args[0] == -1)
> +                       continue;
> +
> +               // skip any contexts that lead to inactive harts
> +               if (of_device_is_compatible(parent.np, "riscv,cpu-intc") &&
> +                   parent.np->parent &&
> +                   riscv_of_processor_hart(parent.np->parent) < 0)
> +                       continue;
> +
> +               parent_irq = irq_create_of_mapping(&parent);
> +               if (!parent_irq)
> +                       continue;
> +
> +               handler->context = plic_hart_context(data, i);
> +               handler->data = data;
> +               // hwirq prio must be > this to trigger an interrupt
> +               iowrite32(0, &handler->context->threshold);
> +
> +               for (hwirq = 1; hwirq <= data->ndev; ++hwirq)
> +                       plic_disable(data, i, hwirq);
> +               irq_set_chained_handler_and_data(parent_irq, plic_chained_handle_irq, handler);
> +               ++ok;
> +       }
> +
> +       printk(KERN_INFO "%s: mapped %d interrupts to %d/%d handlers\n",
> +              data->name, data->ndev, ok, data->handlers);
> +       WARN_ON(!ok);
> +       return 0;
> +}
> +
> +IRQCHIP_DECLARE(plic0, "riscv,plic0", plic_init);
> --
> 2.13.0

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 11/17] irqchip: RISC-V Local Interrupt Controller Driver
  2017-06-06 23:00     ` Palmer Dabbelt
  (?)
@ 2017-06-07  7:14     ` Geert Uytterhoeven
  -1 siblings, 0 replies; 283+ messages in thread
From: Geert Uytterhoeven @ 2017-06-07  7:14 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Linux-Arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert,
	patches, Thomas Gleixner, Jason Cooper, Marc Zyngier

CC irqchip folks

On Wed, Jun 7, 2017 at 1:00 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> This patch adds a driver that manages the local interrupts on each
> RISC-V hart, as specified by the RISC-V supervisor-level ISA manual.
> The local interrupt controller manages software interrupts, timer
> interrupts, and hardware interrupts (which are routed via the
> platform level interrupt controller).  Per-hart local interrupt
> controllers are found on all RISC-V systems.
>
> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> ---
>  drivers/irqchip/Kconfig          |  14 +++
>  drivers/irqchip/Makefile         |   1 +
>  drivers/irqchip/irq-riscv-intc.c | 239 +++++++++++++++++++++++++++++++++++++++
>  3 files changed, 254 insertions(+)
>  create mode 100644 drivers/irqchip/irq-riscv-intc.c
>
> diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
> index 2906d63934ef..dfde170e5886 100644
> --- a/drivers/irqchip/Kconfig
> +++ b/drivers/irqchip/Kconfig
> @@ -313,3 +313,17 @@ config RISCV_PLIC
>            interrupt sources (MSI, GPIO, etc) are subordinate to the PLIC.
>
>            If you don't know what to do here, say Y.
> +
> +config RISCV_INTC
> +       def_bool y if RISCV
> +       #bool "RISC-V Interrupt Controller"
> +       depends on RISCV
> +       default y
> +       help
> +          This enables support for the local interrupt controller found in
> +          standard RISC-V systems.  The local interrupt controller handles
> +          timer interrupts, software interrupts, and hardware interrupts.
> +          Without a local interrupt controller the system will be unable to
> +          handle any interrupts, including those passed via the PLIC.
> +
> +          If you don't know what to do here, say Y.
> diff --git a/drivers/irqchip/Makefile b/drivers/irqchip/Makefile
> index bed94cc89146..bc9a6b45903b 100644
> --- a/drivers/irqchip/Makefile
> +++ b/drivers/irqchip/Makefile
> @@ -77,3 +77,4 @@ obj-$(CONFIG_ARCH_ASPEED)             += irq-aspeed-vic.o
>  obj-$(CONFIG_STM32_EXTI)               += irq-stm32-exti.o
>  obj-$(CONFIG_QCOM_IRQ_COMBINER)                += qcom-irq-combiner.o
>  obj-$(CONFIG_RISCV_PLIC)               += irq-riscv-plic.o
> +obj-$(CONFIG_RISCV_INTC)               += irq-riscv-intc.o
> diff --git a/drivers/irqchip/irq-riscv-intc.c b/drivers/irqchip/irq-riscv-intc.c
> new file mode 100644
> index 000000000000..8150a035aada
> --- /dev/null
> +++ b/drivers/irqchip/irq-riscv-intc.c
> @@ -0,0 +1,239 @@
> +/*
> + * Copyright (C) 2012 Regents of the University of California
> + * Copyright (C) 2017 SiFive
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful,
> + *   but WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + *   GNU General Public License for more details.
> + */
> +
> +#include <linux/irq.h>
> +#include <linux/irqchip.h>
> +#include <linux/irqdomain.h>
> +#include <linux/interrupt.h>
> +#include <linux/ftrace.h>
> +#include <linux/of.h>
> +#include <linux/seq_file.h>
> +
> +#include <asm/ptrace.h>
> +#include <asm/sbi.h>
> +#include <asm/smp.h>
> +
> +struct riscv_irq_data {
> +       struct irq_chip         chip;
> +       struct irq_domain       *domain;
> +       int                     hart;
> +       char                    name[20];
> +};
> +DEFINE_PER_CPU(struct riscv_irq_data, riscv_irq_data);
> +DEFINE_PER_CPU(atomic_long_t, riscv_early_sie);
> +
> +static void riscv_software_interrupt(void)
> +{
> +#ifdef CONFIG_SMP
> +       irqreturn_t ret;
> +
> +       ret = handle_ipi();
> +       if (ret != IRQ_NONE)
> +               return;
> +#endif
> +
> +       BUG();
> +}
> +
> +asmlinkage void __irq_entry do_IRQ(unsigned int cause, struct pt_regs *regs)
> +{
> +       struct pt_regs *old_regs = set_irq_regs(regs);
> +       struct irq_domain *domain;
> +
> +       irq_enter();
> +
> +       /* There are three classes of interrupt: timer, software, and
> +        * external devices.  We dispatch between them here.  External
> +        * device interrupts use the generic IRQ mechanisms.
> +        */
> +       switch (cause) {
> +       case INTERRUPT_CAUSE_TIMER:
> +               riscv_timer_interrupt();
> +               break;
> +       case INTERRUPT_CAUSE_SOFTWARE:
> +               riscv_software_interrupt();
> +               break;
> +       default:
> +               domain = per_cpu(riscv_irq_data, smp_processor_id()).domain;
> +               generic_handle_irq(irq_find_mapping(domain, cause));
> +               break;
> +       }
> +
> +       irq_exit();
> +       set_irq_regs(old_regs);
> +}
> +
> +static int riscv_irqdomain_map(struct irq_domain *d, unsigned int irq,
> +                              irq_hw_number_t hwirq)
> +{
> +       struct riscv_irq_data *data = d->host_data;
> +
> +       irq_set_chip_and_handler(irq, &data->chip, handle_simple_irq);
> +       irq_set_chip_data(irq, data);
> +       irq_set_noprobe(irq);
> +
> +       return 0;
> +}
> +
> +static const struct irq_domain_ops riscv_irqdomain_ops = {
> +       .map    = riscv_irqdomain_map,
> +       .xlate  = irq_domain_xlate_onecell,
> +};
> +
> +static void riscv_irq_mask(struct irq_data *d)
> +{
> +       struct riscv_irq_data *data = irq_data_get_irq_chip_data(d);
> +
> +       BUG_ON(smp_processor_id() != data->hart);
> +       csr_clear(sie, 1 << (long)d->hwirq);
> +}
> +
> +static void riscv_irq_unmask(struct irq_data *d)
> +{
> +       struct riscv_irq_data *data = irq_data_get_irq_chip_data(d);
> +
> +       BUG_ON(smp_processor_id() != data->hart);
> +       csr_set(sie, 1 << (long)d->hwirq);
> +}
> +
> +static void riscv_irq_enable_helper(void *d)
> +{
> +       riscv_irq_unmask(d);
> +}
> +
> +static void riscv_irq_enable(struct irq_data *d)
> +{
> +       struct riscv_irq_data *data = irq_data_get_irq_chip_data(d);
> +
> +       /* There are two phases to setting up an interrupt: first we set a bit
> +        * in this bookkeeping structure, which is used by trap_init to
> +        * initialize SIE for each hart as it comes up.
> +        */
> +       atomic_long_or((1 << (long)d->hwirq),
> +                      &per_cpu(riscv_early_sie, data->hart));
> +
> +       /* The CPU is usually online, so here we just attempt to enable the
> +        * interrupt by writing SIE directly.  We need to write SIE on the
> +        * correct hart, which might be another hart.
> +        */
> +       if (data->hart == smp_processor_id())
> +               riscv_irq_unmask(d);
> +       else if (cpu_online(data->hart))
> +               smp_call_function_single(data->hart,
> +                                        riscv_irq_enable_helper,
> +                                        d,
> +                                        true);
> +}
> +
> +static void riscv_irq_disable_helper(void *d)
> +{
> +       riscv_irq_mask(d);
> +}
> +
> +static void riscv_irq_disable(struct irq_data *d)
> +{
> +       struct riscv_irq_data *data = irq_data_get_irq_chip_data(d);
> +
> +       /* This is the mirror of riscv_irq_enable. */
> +       atomic_long_and(~(1 << (long)d->hwirq),
> +                       &per_cpu(riscv_early_sie, data->hart));
> +       if (data->hart == smp_processor_id())
> +               riscv_irq_mask(d);
> +       else if (cpu_online(data->hart))
> +               smp_call_function_single(data->hart,
> +                                        riscv_irq_disable_helper,
> +                                        d,
> +                                        true);
> +}
> +
> +static void riscv_irq_mask_noop(struct irq_data *d) { }
> +
> +static void riscv_irq_unmask_noop(struct irq_data *d) { }
> +
> +static void riscv_irq_enable_noop(struct irq_data *d)
> +{
> +       struct device_node *data = irq_data_get_irq_chip_data(d);
> +       u32 hart;
> +
> +       if (!of_property_read_u32(data, "reg", &hart))
> +               printk(
> +                 KERN_WARNING "enabled interrupt %d for missing hart %d (this interrupt has no handler)\n",
> +                 (int)d->hwirq, hart);
> +}
> +
> +static struct irq_chip riscv_noop_chip = {
> +       .name = "riscv,cpu-intc,noop",
> +       .irq_mask = riscv_irq_mask_noop,
> +       .irq_unmask = riscv_irq_unmask_noop,
> +       .irq_enable = riscv_irq_enable_noop,
> +};
> +
> +static int riscv_irqdomain_map_noop(struct irq_domain *d, unsigned int irq,
> +                                   irq_hw_number_t hwirq)
> +{
> +       struct device_node *data = d->host_data;
> +
> +       irq_set_chip_and_handler(irq, &riscv_noop_chip, handle_simple_irq);
> +       irq_set_chip_data(irq, data);
> +       return 0;
> +}
> +
> +static const struct irq_domain_ops riscv_irqdomain_ops_noop = {
> +       .map    = riscv_irqdomain_map_noop,
> +       .xlate  = irq_domain_xlate_onecell,
> +};
> +
> +static int riscv_intc_init(struct device_node *node, struct device_node *parent)
> +{
> +       int hart;
> +       struct riscv_irq_data *data;
> +
> +       if (parent)
> +               return 0;
> +
> +       hart = riscv_of_processor_hart(node->parent);
> +       if (hart < 0) {
> +               /* If a hart is disabled, create a no-op irq domain.  Devices
> +                * may still have interrupts connected to those harts.  This is
> +                * not wrong... unless they actually load a driver that needs
> +                * it!
> +                */
> +               irq_domain_add_linear(
> +                       node,
> +                       8*sizeof(uintptr_t),
> +                       &riscv_irqdomain_ops_noop,
> +                       node->parent);
> +               return 0;
> +       }
> +
> +       data = &per_cpu(riscv_irq_data, hart);
> +       snprintf(data->name, sizeof(data->name), "riscv,cpu_intc,%d", hart);
> +       data->hart = hart;
> +       data->chip.name = data->name;
> +       data->chip.irq_mask = riscv_irq_mask;
> +       data->chip.irq_unmask = riscv_irq_unmask;
> +       data->chip.irq_enable = riscv_irq_enable;
> +       data->chip.irq_disable = riscv_irq_disable;
> +       data->domain = irq_domain_add_linear(
> +               node,
> +               8*sizeof(uintptr_t),
> +               &riscv_irqdomain_ops,
> +               data);
> +       WARN_ON(!data->domain);
> +       printk(KERN_INFO "%s: %d local interrupts mapped\n",
> +              data->name, 8*(int)sizeof(uintptr_t));
> +       return 0;
> +}
> +
> +IRQCHIP_DECLARE(riscv, "riscv,cpu-intc", riscv_intc_init);
> --
> 2.13.0

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 12/17] tty: New RISC-V SBI Console Driver
  2017-06-06 23:00     ` Palmer Dabbelt
  (?)
@ 2017-06-07  7:15     ` Geert Uytterhoeven
  2017-06-07  7:58       ` Arnd Bergmann
  -1 siblings, 1 reply; 283+ messages in thread
From: Geert Uytterhoeven @ 2017-06-07  7:15 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Linux-Arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert,
	patches, Greg KH, Jiri Slaby, linuxppc-dev

CC (hypervisor) console folks

On Wed, Jun 7, 2017 at 1:00 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> This patch adds a new driver for the console available via the RISC-V
> SBI.  This console is specified to be used for early boot messages, and
> is designed to be a very simple (albeit somewhat slow) console that is
> always available.  All RISC-V systems have an SBI console.
>
> The SBI console is made available for early printk messages and is also
> available as a regular console.
>
> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> ---
>  drivers/tty/hvc/Kconfig   |  11 +++++
>  drivers/tty/hvc/Makefile  |   1 +
>  drivers/tty/hvc/hvc_sbi.c | 102 ++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 114 insertions(+)
>  create mode 100644 drivers/tty/hvc/hvc_sbi.c
>
> diff --git a/drivers/tty/hvc/Kconfig b/drivers/tty/hvc/Kconfig
> index 574da15fe618..f3774adab240 100644
> --- a/drivers/tty/hvc/Kconfig
> +++ b/drivers/tty/hvc/Kconfig
> @@ -114,4 +114,15 @@ config HVCS
>           which will also be compiled when this driver is built as a
>           module.
>
> +config HVC_SBI
> +       bool "SBI console support"
> +       depends on RISCV
> +       select HVC_DRIVER
> +       default y
> +       help
> +         This enables support for console output via RISC-V SBI calls, which
> +         is normally used only during boot to output printk.
> +
> +         If you don't know what to do here, say Y.
> +
>  endif # TTY
> diff --git a/drivers/tty/hvc/Makefile b/drivers/tty/hvc/Makefile
> index 6a2702be76d1..2d63bfe4a96b 100644
> --- a/drivers/tty/hvc/Makefile
> +++ b/drivers/tty/hvc/Makefile
> @@ -11,3 +11,4 @@ obj-$(CONFIG_HVC_IUCV)                += hvc_iucv.o
>  obj-$(CONFIG_HVC_UDBG)         += hvc_udbg.o
>  obj-$(CONFIG_HVC_BFIN_JTAG)    += hvc_bfin_jtag.o
>  obj-$(CONFIG_HVCS)             += hvcs.o
> +obj-$(CONFIG_HVC_SBI)          += hvc_sbi.o
> diff --git a/drivers/tty/hvc/hvc_sbi.c b/drivers/tty/hvc/hvc_sbi.c
> new file mode 100644
> index 000000000000..e70293fb7b35
> --- /dev/null
> +++ b/drivers/tty/hvc/hvc_sbi.c
> @@ -0,0 +1,102 @@
> +/*
> + * RISC-V SBI interface to hvc_console.c
> + *  based on drivers/tty/hvc/hvc_udbg.c
> + *
> + * Copyright (C) 2008 David Gibson, IBM Corporation
> + * Copyright (C) 2012 Regents of the University of California
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful,
> + *   but WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + *   GNU General Public License for more details.
> + */
> +
> +#include <linux/console.h>
> +#include <linux/delay.h>
> +#include <linux/err.h>
> +#include <linux/init.h>
> +#include <linux/moduleparam.h>
> +#include <linux/types.h>
> +#include <linux/irq.h>
> +
> +#include <asm/sbi.h>
> +
> +#include "hvc_console.h"
> +
> +static int hvc_sbi_tty_put(uint32_t vtermno, const char *buf, int count)
> +{
> +       int i;
> +
> +       for (i = 0; i < count; i++)
> +               sbi_console_putchar(buf[i]);
> +
> +       return i;
> +}
> +
> +static int hvc_sbi_tty_get(uint32_t vtermno, char *buf, int count)
> +{
> +       int i, c;
> +
> +       for (i = 0; i < count; i++) {
> +               if ((c = sbi_console_getchar()) < 0)
> +                       break;
> +               buf[i] = c;
> +       }
> +
> +       return i;
> +}
> +
> +static const struct hv_ops hvc_sbi_ops = {
> +       .get_chars = hvc_sbi_tty_get,
> +       .put_chars = hvc_sbi_tty_put,
> +};
> +
> +static int __init hvc_sbi_init(void)
> +{
> +       return PTR_ERR_OR_ZERO(hvc_alloc(0, 0, &hvc_sbi_ops, 16));
> +}
> +device_initcall(hvc_sbi_init);
> +
> +static int __init hvc_sbi_console_init(void)
> +{
> +       hvc_instantiate(0, 0, &hvc_sbi_ops);
> +       add_preferred_console("hvc", 0, NULL);
> +
> +       return 0;
> +}
> +console_initcall(hvc_sbi_console_init);
> +
> +#ifdef CONFIG_EARLY_PRINTK
> +static void sbi_console_write(struct console *co, const char *buf,
> +                             unsigned int n)
> +{
> +       int i;
> +
> +       for (i = 0; i < n; ++i) {
> +               if (buf[i] == '\n')
> +                       sbi_console_putchar('\r');
> +               sbi_console_putchar(buf[i]);
> +       }
> +}
> +
> +static struct console early_console_dev __initdata = {
> +       .name   = "early",
> +       .write  = sbi_console_write,
> +       .flags  = CON_PRINTBUFFER | CON_BOOT,
> +       .index  = -1
> +};
> +
> +static int __init setup_early_printk(char *str)
> +{
> +       if (early_console == NULL) {
> +               early_console = &early_console_dev;
> +               register_console(early_console);
> +       }
> +       return 0;
> +}
> +early_param("earlyprintk", setup_early_printk);
> +#endif
> --
> 2.13.0

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 01/17] drivers: support PCIe in RISCV
  2017-06-06 22:59     ` Palmer Dabbelt
  (?)
@ 2017-06-07  7:17     ` Geert Uytterhoeven
  -1 siblings, 0 replies; 283+ messages in thread
From: Geert Uytterhoeven @ 2017-06-07  7:17 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Linux-Arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert,
	patches, Wesley W. Terpstra, Bjorn Helgaas, linux-pci

CC pci folks

On Wed, Jun 7, 2017 at 12:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> From: "Wesley W. Terpstra" <wesley@sifive.com>
>
> There are RISC-V systems that have been mapped to Xilinx FPGAs that have
> their PCIe controllers on chip.  These build system changes allow RISC-V
> systems to enable the Xilinx PCIe controller, and to set up PCIe IRQs.
>
> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> ---
>  drivers/pci/Makefile     | 1 +
>  drivers/pci/host/Kconfig | 2 +-
>  2 files changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
> index 462c1f5f5546..a29d9ec05d13 100644
> --- a/drivers/pci/Makefile
> +++ b/drivers/pci/Makefile
> @@ -41,6 +41,7 @@ obj-$(CONFIG_MIPS) += setup-irq.o
>  obj-$(CONFIG_TILE) += setup-irq.o
>  obj-$(CONFIG_SPARC_LEON) += setup-irq.o
>  obj-$(CONFIG_M68K) += setup-irq.o
> +obj-$(CONFIG_RISCV) += setup-irq.o
>
>  #
>  # ACPI Related PCI FW Functions
> diff --git a/drivers/pci/host/Kconfig b/drivers/pci/host/Kconfig
> index 7f47cd5e10a5..5148f3d3cab7 100644
> --- a/drivers/pci/host/Kconfig
> +++ b/drivers/pci/host/Kconfig
> @@ -71,7 +71,7 @@ config PCI_HOST_GENERIC
>
>  config PCIE_XILINX
>         bool "Xilinx AXI PCIe host bridge support"
> -       depends on ARCH_ZYNQ || MICROBLAZE
> +       depends on ARCH_ZYNQ || MICROBLAZE || RISCV
>         help
>           Say 'Y' here if you want kernel to support the Xilinx AXI PCIe
>           Host Bridge driver.
> --
> 2.13.0

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 02/17] pcie-xilinx: add missing 5th legacy interrupt
  2017-06-06 22:59     ` Palmer Dabbelt
  (?)
@ 2017-06-07  7:18     ` Geert Uytterhoeven
  -1 siblings, 0 replies; 283+ messages in thread
From: Geert Uytterhoeven @ 2017-06-07  7:18 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Linux-Arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert,
	patches, Wesley W. Terpstra, Bjorn Helgaas, Michal Simek,
	Soren Brinkmann, linux-pci

CC pci folks

On Wed, Jun 7, 2017 at 12:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> From: "Wesley W. Terpstra" <wesley@sifive.com>
>
> These are numbered from 1.
>
> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> ---
>  drivers/pci/host/pcie-xilinx.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/pci/host/pcie-xilinx.c b/drivers/pci/host/pcie-xilinx.c
> index 2fe2df51f9f8..8804145d399a 100644
> --- a/drivers/pci/host/pcie-xilinx.c
> +++ b/drivers/pci/host/pcie-xilinx.c
> @@ -443,7 +443,7 @@ static irqreturn_t xilinx_pcie_intr_handler(int irq, void *data)
>                         val = ((val & XILINX_PCIE_RPIFR1_INTR_MASK) >>
>                                 XILINX_PCIE_RPIFR1_INTR_SHIFT) + 1;
>                         generic_handle_irq(irq_find_mapping(port->leg_domain,
> -                                                           val));
> +                                                           val + 1));
>                 }
>         }
>
> @@ -524,7 +524,7 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
>                 return -ENODEV;
>         }
>
> -       port->leg_domain = irq_domain_add_linear(pcie_intc_node, 4,
> +       port->leg_domain = irq_domain_add_linear(pcie_intc_node, 5,
>                                                  &intx_domain_ops,
>                                                  port);
>         if (!port->leg_domain) {
> --
> 2.13.0

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 04/17] Documentation: atomic_ops.txt is core-api/atomic_ops.rst
  2017-06-06 22:59     ` Palmer Dabbelt
  (?)
@ 2017-06-07  7:19     ` Geert Uytterhoeven
  -1 siblings, 0 replies; 283+ messages in thread
From: Geert Uytterhoeven @ 2017-06-07  7:19 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Linux-Arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert,
	patches, Jonathan Corbet, linux-doc

CC doc folks

On Wed, Jun 7, 2017 at 12:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> I was reading the memory barriers documentation in order to make sure the
> RISC-V barriers were correct, and I found a broken link to the atomic
> operations documentation.
>
> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> ---
>  Documentation/memory-barriers.txt | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index 732f10ea382e..f1c9eaa45a57 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -498,11 +498,11 @@ And a couple of implicit varieties:
>       This means that ACQUIRE acts as a minimal "acquire" operation and
>       RELEASE acts as a minimal "release" operation.
>
> -A subset of the atomic operations described in atomic_ops.txt have ACQUIRE
> -and RELEASE variants in addition to fully-ordered and relaxed (no barrier
> -semantics) definitions.  For compound atomics performing both a load and a
> -store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
> -only to the store portion of the operation.
> +A subset of the atomic operations described in core-api/atomic_ops.rst have
> +ACQUIRE and RELEASE variants in addition to fully-ordered and relaxed (no
> +barrier semantics) definitions.  For compound atomics performing both a load
> +and a store, ACQUIRE semantics apply only to the load and RELEASE semantics
> +apply only to the store portion of the operation.
>
>  Memory barriers are only required where there's a possibility of interaction
>  between two CPUs or between a CPU and a device.  If it can be guaranteed that
> --
> 2.13.0

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 06/17] pci: Add generic pcibios_{fixup_bus,align_resource}
  2017-06-06 22:59     ` Palmer Dabbelt
  (?)
@ 2017-06-07  7:19     ` Geert Uytterhoeven
  2017-06-07  8:01       ` Arnd Bergmann
  2017-06-08  8:12       ` Christoph Hellwig
  -1 siblings, 2 replies; 283+ messages in thread
From: Geert Uytterhoeven @ 2017-06-07  7:19 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Linux-Arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert,
	patches, Bjorn Helgaas, linux-pci

CC pci folks

On Wed, Jun 7, 2017 at 12:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> While upstreaming the RISC-V port, it was pointed out that multiple
> architectures (arc, arm64, cris, microblaze, sh, tile) have copied the
> mostly empty versions of at least one of these functions.  This defines
> weakly bound versions of the common functions so other architectures can
> use them.
>
> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> ---
>  drivers/pci/Makefile |  2 +-
>  drivers/pci/bios.c   | 42 ++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 43 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/pci/bios.c
>
> diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
> index a29d9ec05d13..fa7040915194 100644
> --- a/drivers/pci/Makefile
> +++ b/drivers/pci/Makefile
> @@ -4,7 +4,7 @@
>
>  obj-y          += access.o bus.o probe.o host-bridge.o remove.o pci.o \
>                         pci-driver.o search.o pci-sysfs.o rom.o setup-res.o \
> -                       irq.o vpd.o setup-bus.o vc.o mmap.o
> +                       irq.o vpd.o setup-bus.o vc.o mmap.o bios.o
>  obj-$(CONFIG_PROC_FS) += proc.o
>  obj-$(CONFIG_SYSFS) += slot.o
>
> diff --git a/drivers/pci/bios.c b/drivers/pci/bios.c
> new file mode 100644
> index 000000000000..ffe34c024aa8
> --- /dev/null
> +++ b/drivers/pci/bios.c
> @@ -0,0 +1,42 @@
> +/*
> + * Code borrowed from arch/arm64/kernel/pci.c
> + *   which borrowed from powerpc/kernel/pci-common.c
> + *   which borrowed from arch/alpha/kernel/pci.c
> + *
> + * Extruded from code written by
> + *      Dave Rusling (david.rusling@reo.mts.dec.com)
> + *      David Mosberger (davidm@cs.arizona.edu)
> + * Copyright (C) 1999 Andrea Arcangeli <andrea@suse.de>
> + * Copyright (C) 2000 Ivan Kokshaysky <ink@jurassic.park.msu.ru>
> + * Copyright (C) 2003 Anton Blanchard <anton@au.ibm.com>, IBM
> + * Copyright (C) 2014 ARM Ltd.
> + * Copyright (C) 2017 SiFive
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License
> + * version 2 as published by the Free Software Foundation.
> + */
> +
> +/* This file contains weakly bound functions that implement pcibios functions
> + * that some architectures have copied verbatim.
> + */
> +
> +#include <linux/pci.h>
> +
> +/*
> + * Called after each bus is probed, but before its children are examined
> + */
> +__attribute__ ((weak))
> +void pcibios_fixup_bus(struct pci_bus *bus)
> +{
> +       /* nothing to do, expected to be removed in the future */
> +}
> +/*
> + * We don't have to worry about legacy ISA devices, so nothing to do here
> + */
> +__attribute__ ((weak))
> +resource_size_t pcibios_align_resource(void *data, const struct resource *res,
> +                                      resource_size_t size, resource_size_t align)
> +{
> +       return res->start;
> +}
> --
> 2.13.0

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 09/17] clocksource/timer-riscv: New RISC-V Clocksource
  2017-06-07  7:12     ` Geert Uytterhoeven
@ 2017-06-07  7:25       ` Arnd Bergmann
  2017-06-23 23:24           ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-06-07  7:25 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Palmer Dabbelt, Linux-Arch, linux-kernel, Olof Johansson, albert,
	patches, Thomas Gleixner, Daniel Lezcano

On Wed, Jun 7, 2017 at 9:12 AM, Geert Uytterhoeven <geert@linux-m68k.org> wrote:
> CC clocksource folks
>
> On Wed, Jun 7, 2017 at 12:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> The RISC-V ISA defines a single RTC as well as an SBI oneshot timer.
>> This timer is present on all RISC-V systems.
>>
>> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
>> ---
>>  drivers/clocksource/Kconfig       |   8 +++
>>  drivers/clocksource/Makefile      |   1 +
>>  drivers/clocksource/timer-riscv.c | 118 ++++++++++++++++++++++++++++++++++++++
>>  3 files changed, 127 insertions(+)
>>  create mode 100644 drivers/clocksource/timer-riscv.c
>>
>> diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
>> index 545d541ae20e..1c2c6e7c7fab 100644
>> --- a/drivers/clocksource/Kconfig
>> +++ b/drivers/clocksource/Kconfig
>> @@ -612,4 +612,12 @@ config CLKSRC_ST_LPC
>>           Enable this option to use the Low Power controller timer
>>           as clocksource.
>>
>> +config CLKSRC_RISCV
>> +       #bool "Clocksource for the RISC-V platform"
>> +       def_bool y if RISCV
>> +       depends on RISCV

I don't like the commented-out part of the entry. If there are no
build-time dependencies, you can just make it 'default y' and still allow
users to disable the driver if they really want to (e.g. on a
machine-specific kernel that has a driver for another clocksource), or
you just leave it 'def_bool RISCV'.

>> +
>> +static int riscv_timer_set_oneshot(struct clock_event_device *evt)
>> +{
>> +       /* no-op; only one mode */
>> +       return 0;
>> +}
>> +
>> +static int riscv_timer_set_shutdown(struct clock_event_device *evt)
>> +{
>> +       /* can't stop the clock! */
>> +       return 0;
>> +}

I'd just leave out the empty callbacks, the callers all protect NULL
pointers.

>> +static u64 riscv_rdtime(struct clocksource *cs)
>> +{
>> +       return get_cycles();
>> +}
>> +
>> +static struct clocksource riscv_clocksource = {
>> +       .name = "riscv_clocksource",
>> +       .rating = 300,
>> +       .read = riscv_rdtime,
>> +#ifdef CONFIG_64BITS
>> +       .mask = CLOCKSOURCE_MASK(64),
>> +#else
>> +       .mask = CLOCKSOURCE_MASK(32),
>> +#endif /* CONFIG_64BITS */
>> +       .flags = CLOCK_SOURCE_IS_CONTINUOUS,
>> +};

     ".mask = BITS_PER_LONG" maybe?

>> +void riscv_timer_interrupt(void)
>> +{
>> +       int cpu = smp_processor_id();
>> +       struct clock_event_device *evdev = &per_cpu(clock_event, cpu);
>> +
>> +       evdev->event_handler(evdev);
>> +}
>> +
>> +void __init init_clockevent(void)
>> +{
>> +       int cpu = smp_processor_id();
>> +       struct clock_event_device *ce = &per_cpu(clock_event, cpu);
>> +
>> +       *ce = (struct clock_event_device){
>> +               .name = "riscv_timer_clockevent",
>> +               .features = CLOCK_EVT_FEAT_ONESHOT,
>> +               .rating = 300,
>> +               .cpumask = cpumask_of(cpu),
>> +               .set_next_event = riscv_timer_set_next_event,
>> +               .set_state_oneshot  = riscv_timer_set_oneshot,
>> +               .set_state_shutdown = riscv_timer_set_shutdown,
>> +       };
>> +
>> +       /* Enable timer interrupts */
>> +       csr_set(sie, SIE_STIE);
>> +
>> +       clockevents_config_and_register(ce, riscv_timebase, 100, 0x7fffffff);
>> +}
>> +
>> +static unsigned long __init of_timebase(void)
>> +{
>> +       struct device_node *cpu;
>> +       const __be32 *prop;
>> +
>> +       cpu = of_find_node_by_path("/cpus");
>> +       if (cpu) {
>> +               prop = of_get_property(cpu, "timebase-frequency", NULL);
>> +               if (prop)
>> +                       return be32_to_cpu(*prop);

of_property_read_u32()

>> +       }
>> +
>> +       return 10000000;

The default seems rather arbitrary. Any reason for this particular
number? Maybe it's better to fail if the property is missing.
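
A sketch of what that could look like (whether to panic or pick some
other behaviour when the property is missing is left open here; this is
not the submitted code):

static unsigned long __init of_timebase(void)
{
	struct device_node *cpus;
	u32 freq;

	cpus = of_find_node_by_path("/cpus");
	if (cpus && !of_property_read_u32(cpus, "timebase-frequency", &freq))
		return freq;

	/* no guessed default: a missing property is a DT bug */
	panic("RISC-V: no timebase-frequency in the device tree");
}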

       Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: RISC-V Linux Port v2
  2017-05-23  0:41 RISC-V Linux Port v1 Palmer Dabbelt
                   ` (10 preceding siblings ...)
  2017-06-06 22:59 ` RISC-V Linux Port v2 Palmer Dabbelt
@ 2017-06-07  7:29 ` David Howells
  2017-06-07 21:54     ` Palmer Dabbelt
  11 siblings, 1 reply; 283+ messages in thread
From: David Howells @ 2017-06-07  7:29 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: dhowells, linux-arch, linux-kernel, Arnd Bergmann, olof, albert, patches

What's the target type for building cross-binutils and cross-gcc for it?

David

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 5/7] RISC-V: arch/riscv/lib
  2017-06-06 20:53             ` Palmer Dabbelt
@ 2017-06-07  7:35               ` Arnd Bergmann
  2017-06-23 23:24                 ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-06-07  7:35 UTC (permalink / raw)
  To: Palmer Dabbelt; +Cc: Linux Kernel Mailing List, Olof Johansson, albert

On Tue, Jun 6, 2017 at 10:53 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> On Tue, 06 Jun 2017 02:31:02 PDT (-0700), Arnd Bergmann wrote:
>> On Tue, Jun 6, 2017 at 6:56 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>> On Fri, 26 May 2017 02:06:58 PDT (-0700), Arnd Bergmann wrote:
>>>> On Thu, May 25, 2017 at 3:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>>>> On Tue, 23 May 2017 04:19:42 PDT (-0700), Arnd Bergmann wrote:
>>>>>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>>>
>>>>>> Also, it would be good to replace the multiply+div64
>>>>>> with a single multiplication here, see how x86 and arm do it
>>>>>> (for the tsc/__timer_delay case).
>>>>>
>>>>> Makes sense.  I think this should do it
>>>>>
>>>>>   https://github.com/riscv/riscv-linux/commit/d397332f6ebff42f3ecb385e9cf3284fdeda6776
>>>>>
>>>>> but I'm finding this hard to test as this only works for 2ms sleeps.  It seems
>>>>> at least in the right ballpark
>>>>
>>>> + if (usecs > MAX_UDELAY_US) {
>>>> + __delay((u64)usecs * riscv_timebase / 1000000ULL);
>>>> + return;
>>>> + }
>>>>
>>>> You still do the 64-bit division here. What I meant is to completely
>>>> avoid the division and use a multiply+shift.
>>>
>>> The goal here was to avoid the error case that ARM has on overflow and instead
>>> just delay for the requested time.  This should only divide when the delay is
>>> >=2ms, so the division won't cost much in comparison.
>>>
>>> The normal case should have no division in it.
>>>
>>> I can copy ARM's error handling if you think that's better, but it seemed more
>>> complicated than just computing the correct answer.
>>
>> I think the intention originally was to avoid overflowing the 32-bit
>> argument in
>>
>>  void __delay(unsigned long cycles)
>>
>> If you need to delay for more than 4 billion clocksource cycles,
>> your code is still broken.
>
> Maybe I'm crazy, but I thought the goal was to avoid overflowing on the
> multiply.  Specifically, the code looks like
>
>   udelay(long input) {
>     long a = input * MUL_VAL;
>     long b = a >> SHIFT_VAL;
>     __delay(b);
>   }
>
> so the place there's extra overflow is at computing a, not b (the input to
> __delay).  When I modified the ARM code I went and recalculated the point at
> which the multiply would overflow and it matched the value from the ARM code,
> which is 2000us.

Ah, that's right. But then you might run into the next overflow at a larger
delay interval.

> While I can buy the argument that 2000us is still too long, the real reason I
> wrote the code this way is because I thought it was easier than having an error
> case.  If you think the error is better then I'll do it that way.

It's probably ok as long as the complexity is not in the inline wrapper, and you
avoid the division for small delays.
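
For concreteness, a rough sketch of that shape; the names below are
illustrative rather than taken from the patch, and ucycles_per_us is
assumed to be precomputed at boot as
(riscv_timebase << UDELAY_SHIFT) / 1000000:

#include <linux/types.h>

#define UDELAY_SHIFT	16
#define MAX_UDELAY_US	2000

extern unsigned long riscv_timebase;
extern unsigned long ucycles_per_us;
extern void __delay(unsigned long cycles);

void udelay(unsigned long usecs)
{
	if (usecs > MAX_UDELAY_US) {
		/* rare slow path: one 64-bit division, cheap next to a >=2ms spin */
		__delay((u64)usecs * riscv_timebase / 1000000ULL);
		return;
	}

	/* fast path: multiply + shift only, no division */
	__delay(((u64)usecs * ucycles_per_us) >> UDELAY_SHIFT);
}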

     Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 10/17] irqchip: New RISC-V PLIC Driver
  2017-06-07  7:13     ` Geert Uytterhoeven
@ 2017-06-07  7:55       ` Arnd Bergmann
  2017-06-24  0:45           ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-06-07  7:55 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Palmer Dabbelt, Linux-Arch, linux-kernel, Olof Johansson, albert,
	patches, Thomas Gleixner, Jason Cooper, Marc Zyngier

On Wed, Jun 7, 2017 at 9:13 AM, Geert Uytterhoeven <geert@linux-m68k.org> wrote:
>> +struct plic_enable_context {
>> +       atomic_t mask[32]; // 32-bit * 32-entry
>> +};

You use many '//' style comments in this file, please change them all to '/* */'
for consistency with kernel coding style.

>> +
>> +struct plic_priority {
>> +       u32 prio[MAX_DEVICES];
>> +};
>> +
>> +struct plic_data {
>> +       struct irq_chip         chip;
>> +       struct irq_domain       *domain;
>> +       u32                     ndev;
>> +       void __iomem            *reg;
>> +       int                     handlers;
>> +       struct plic_handler     *handler;
>> +       char                    name[30];
>> +};
>> +
>> +struct plic_handler {
>> +       struct plic_hart_context        *context;
>> +       struct plic_data                *data;
>> +};
>> +
>> +static inline
>> +struct plic_hart_context *plic_hart_context(struct plic_data *data, size_t i)
>> +{
>> +       return (struct plic_hart_context *)((char *)data->reg + HART_BASE + HART_SIZE*i);
>> +}

'data->reg' is an __iomem pointer, so when you build-test this with 'make C=1',
you should get a valid warning from sparse about an address space mismatch.
Please address all the warnings from sparse.

>> +static void plic_disable(struct plic_data *data, int i, int hwirq)
>> +{
>> +       struct plic_enable_context *enable = plic_enable_context(data, i);
>> +
>> +       atomic_and(~(1 << (hwirq % 32)), &enable->mask[hwirq / 32]);
>> +}

In particular, you must not do atomic operations on MMIO pointers.
On most architectures these are explicitly disallowed and trap for
a good reason, as the hardware implementation behind atomics tend
to rely on the cache controller, while mmio registers are required
to be uncached.
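
The usual alternative is a plain read-modify-write of the enable word
with MMIO accessors under a lock.  A rough sketch, where the lock
granularity and the function name are made up rather than taken from
the patch:

static DEFINE_RAW_SPINLOCK(plic_enable_lock);

static void plic_toggle(void __iomem *enable_base, int hwirq, int enable)
{
	u32 __iomem *reg = enable_base + (hwirq / 32) * sizeof(u32);
	u32 mask = 1U << (hwirq % 32);
	unsigned long flags;
	u32 val;

	raw_spin_lock_irqsave(&plic_enable_lock, flags);
	val = readl(reg);
	if (enable)
		val |= mask;
	else
		val &= ~mask;
	writel(val, reg);
	raw_spin_unlock_irqrestore(&plic_enable_lock, flags);
}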

>> +       iowrite32(1, &priority->prio[d->hwirq]);

I would normally use 'readl' instead of 'iowrite32'. They may be the same
on riscv, but they have slightly different meaning in portable drivers.

       Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 12/17] tty: New RISC-V SBI Console Driver
  2017-06-07  7:15     ` Geert Uytterhoeven
@ 2017-06-07  7:58       ` Arnd Bergmann
  2017-06-24  0:45           ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-06-07  7:58 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Palmer Dabbelt, Linux-Arch, linux-kernel, Olof Johansson, albert,
	patches, Greg KH, Jiri Slaby, linuxppc-dev

On Wed, Jun 7, 2017 at 9:15 AM, Geert Uytterhoeven <geert@linux-m68k.org> wrote:
> CC (hypervisor) console folks
>
> On Wed, Jun 7, 2017 at 1:00 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> This patch adds a new driver for the console available via the RISC-V
>> SBI.  This console is specified to be used for early boot messages, and
>> is designed to be a very simple (albeit somewhat slow) console that is
>> always available.  All RISC-V systems have an SBI console.
>>
>> The SBI console is made available for early printk messages and is also
>> available as a regular console.
>>
>> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
>> ---
>>  drivers/tty/hvc/Kconfig   |  11 +++++
>>  drivers/tty/hvc/Makefile  |   1 +
>>  drivers/tty/hvc/hvc_sbi.c | 102 ++++++++++++++++++++++++++++++++++++++++++++++
>>  3 files changed, 114 insertions(+)
>>  create mode 100644 drivers/tty/hvc/hvc_sbi.c
>>
>> diff --git a/drivers/tty/hvc/Kconfig b/drivers/tty/hvc/Kconfig
>> index 574da15fe618..f3774adab240 100644
>> --- a/drivers/tty/hvc/Kconfig
>> +++ b/drivers/tty/hvc/Kconfig
>> @@ -114,4 +114,15 @@ config HVCS
>>           which will also be compiled when this driver is built as a
>>           module.
>>
>> +config HVC_SBI
>> +       bool "SBI console support"
>> +       depends on RISCV
>> +       select HVC_DRIVER
>> +       default y
>> +       help
>> +         This enables support for console output via RISC-V SBI calls, which
>> +         is normally used only during boot to output printk.
>> +
>> +         If you don't know what to do here, say Y.
>> +
>>  endif # TTY

Please move this a little higher along with the other HVC_DRIVER
implementations.

>> + * RISC-V SBI interface to hvc_console.c
>> + *  based on drivers-tty/hvc/hvc_udbg.c
>> + *
>> + * Copyright (C) 2008 David Gibson, IBM Corporation
>> + * Copyright (C) 2012 Regents of the University of California

2017?

        Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 06/17] pci: Add generic pcibios_{fixup_bus,align_resource}
  2017-06-07  7:19     ` Geert Uytterhoeven
@ 2017-06-07  8:01       ` Arnd Bergmann
  2017-06-24  2:01           ` Palmer Dabbelt
  2017-06-08  8:12       ` Christoph Hellwig
  1 sibling, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-06-07  8:01 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Palmer Dabbelt, Linux-Arch, linux-kernel, Olof Johansson, albert,
	patches, Bjorn Helgaas, linux-pci

On Wed, Jun 7, 2017 at 9:19 AM, Geert Uytterhoeven <geert@linux-m68k.org> wrote:
> CC pci folks
>
> On Wed, Jun 7, 2017 at 12:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> While upstreaming the RISC-V port, it was pointed out that multiple
>> architectures (arc, arm64, cris, microblaze, sh, tile) have copied the
>> mostly empty versions of at least one of these functions.  This defines
>> weakly bound versions of the common functions so other architectures can
>> use them.
>>
>> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>

Thanks a lot for taking care of this!

>> diff --git a/drivers/pci/bios.c b/drivers/pci/bios.c
>> new file mode 100644
>> index 000000000000..ffe34c024aa8
>> --- /dev/null
>> +++ b/drivers/pci/bios.c
>> @@ -0,0 +1,42 @@
>> +
>> +/* This file contains weakly bound functions that implement pcibios functions
>> + * that some architectures have copied verbatim.
>> + */

Instead of adding a new file, I would suggest adding the two functions next
to their callers, in probe.c and setup-res.c, respectively.

        Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-06 23:00     ` Palmer Dabbelt
  (?)
@ 2017-06-07  8:12     ` Arnd Bergmann
  2017-06-24  2:01         ` Palmer Dabbelt
  -1 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-06-07  8:12 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: linux-arch, Linux Kernel Mailing List, Olof Johansson, albert,
	patches, Benjamin Herrenschmidt

On Wed, Jun 7, 2017 at 1:00 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> This patch adds the include files for the RISC-V port.  These are mostly
> based on the score port, but there are a lot of arm64-based files as
> well.
>
> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>

It might be better to split this up into several parts, as the patch
is longer than
most people are willing to review at once.

The uapi should definitely be a separate patch, as it includes the parts that
cannot be changed any more later. memory management (pgtable, mmu,
uaccess) would be another part to split out, and possibly all the atomics
in one separate patch (along with spinlocks and bitops).

> +
> +/* IO barriers.  These only fence on the IO bits because they're only required
> + * to order device access.  We're defining mmiowb because our AMO instructions
> + * (which are used to implement locks) don't specify ordering.  From Chapter 7
> + * of v2.2 of the user ISA:
> + * "The bits order accesses to one of the two address domains, memory or I/O,
> + * depending on which address domain the atomic instruction is accessing. No
> + * ordering constraint is implied to accesses to the other domain, and a FENCE
> + * instruction should be used to order across both domains."
> + */
> +
> +#define __iormb()      __asm__ __volatile__ ("fence i,io" : : : "memory");
> +#define __iowmb()      __asm__ __volatile__ ("fence io,o" : : : "memory");
> +
> +#define mmiowb()       __asm__ __volatile__ ("fence io,io" : : : "memory");
> +
> +/*
> + * Relaxed I/O memory access primitives. These follow the Device memory
> + * ordering rules but do not guarantee any ordering relative to Normal memory
> + * accesses.
> + */
> +#define readb_relaxed(c)       ({ u8  __r = __raw_readb(c); __r; })
> +#define readw_relaxed(c)       ({ u16 __r = le16_to_cpu((__force __le16)__raw_readw(c)); __r; })
> +#define readl_relaxed(c)       ({ u32 __r = le32_to_cpu((__force __le32)__raw_readl(c)); __r; })
> +#define readq_relaxed(c)       ({ u64 __r = le64_to_cpu((__force __le64)__raw_readq(c)); __r; })
> +
> +#define writeb_relaxed(v,c)    ((void)__raw_writeb((v),(c)))
> +#define writew_relaxed(v,c)    ((void)__raw_writew((__force u16)cpu_to_le16(v),(c)))
> +#define writel_relaxed(v,c)    ((void)__raw_writel((__force u32)cpu_to_le32(v),(c)))
> +#define writeq_relaxed(v,c)    ((void)__raw_writeq((__force u64)cpu_to_le64(v),(c)))
> +
> +/*
> + * I/O memory access primitives. Reads are ordered relative to any
> + * following Normal memory access. Writes are ordered relative to any prior
> + * Normal memory access.
> + */
> +#define readb(c)               ({ u8  __v = readb_relaxed(c); __iormb(); __v; })
> +#define readw(c)               ({ u16 __v = readw_relaxed(c); __iormb(); __v; })
> +#define readl(c)               ({ u32 __v = readl_relaxed(c); __iormb(); __v; })
> +#define readq(c)               ({ u64 __v = readq_relaxed(c); __iormb(); __v; })
> +
> +#define writeb(v,c)            ({ __iowmb(); writeb_relaxed((v),(c)); })
> +#define writew(v,c)            ({ __iowmb(); writew_relaxed((v),(c)); })
> +#define writel(v,c)            ({ __iowmb(); writel_relaxed((v),(c)); })
> +#define writeq(v,c)            ({ __iowmb(); writeq_relaxed((v),(c)); })
> +
> +#include <asm-generic/io.h>

These do not yet contain all the changes we discussed: the relaxed operations
don't seem to be ordered against one another and the regular accessors
are not ordered against DMA.
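
To make the second point concrete, one way to get there is to fence the
regular accessors against normal memory; the exact fences below are a
sketch of the idea, not a reviewed definition:

/* order prior normal-memory writes (e.g. DMA descriptors) before the
 * MMIO store, and the MMIO load before later normal-memory reads
 * (e.g. reading a DMA buffer after seeing a "done" register)
 */
#define __io_bw()	__asm__ __volatile__ ("fence w,o" : : : "memory")
#define __io_ar()	__asm__ __volatile__ ("fence i,r" : : : "memory")

#define writel(v, c)	({ __io_bw(); writel_relaxed((v), (c)); })
#define readl(c)	({ u32 __v = readl_relaxed(c); __io_ar(); __v; })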

     Arnd

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 04/17] Documentation: atomic_ops.txt is core-api/atomic_ops.rst
  2017-06-06 22:59     ` Palmer Dabbelt
  (?)
  (?)
@ 2017-06-07  9:20     ` Will Deacon
  -1 siblings, 0 replies; 283+ messages in thread
From: Will Deacon @ 2017-06-07  9:20 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: linux-arch, linux-kernel, Arnd Bergmann, olof, albert, patches

On Tue, Jun 06, 2017 at 03:59:54PM -0700, Palmer Dabbelt wrote:
> I was reading the memory barriers documentation in order to make sure the
> RISC-V barriers were correct, and I found a broken link to the atomic
> operations documentation.
> 
> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> ---
>  Documentation/memory-barriers.txt | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)

Acked-by: Will Deacon <will.deacon@arm.com>

Will

> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index 732f10ea382e..f1c9eaa45a57 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -498,11 +498,11 @@ And a couple of implicit varieties:
>       This means that ACQUIRE acts as a minimal "acquire" operation and
>       RELEASE acts as a minimal "release" operation.
>  
> -A subset of the atomic operations described in atomic_ops.txt have ACQUIRE
> -and RELEASE variants in addition to fully-ordered and relaxed (no barrier
> -semantics) definitions.  For compound atomics performing both a load and a
> -store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
> -only to the store portion of the operation.
> +A subset of the atomic operations described in core-api/atomic_ops.rst have
> +ACQUIRE and RELEASE variants in addition to fully-ordered and relaxed (no
> +barrier semantics) definitions.  For compound atomics performing both a load
> +and a store, ACQUIRE semantics apply only to the load and RELEASE semantics
> +apply only to the store portion of the operation.
>  
>  Memory barriers are only required where there's a possibility of interaction
>  between two CPUs or between a CPU and a device.  If it can be guaranteed that
> -- 
> 2.13.0
> 

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: RISC-V Linux Port v2
  2017-06-06 22:59   ` Palmer Dabbelt
                     ` (25 preceding siblings ...)
  (?)
@ 2017-06-07  9:23   ` Will Deacon
  2017-06-07 21:54       ` Palmer Dabbelt
  -1 siblings, 1 reply; 283+ messages in thread
From: Will Deacon @ 2017-06-07  9:23 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: linux-arch, linux-kernel, Arnd Bergmann, olof, albert, patches

Hi Palmer,

On Tue, Jun 06, 2017 at 03:59:50PM -0700, Palmer Dabbelt wrote:
> Thanks to everyone who has participated in the review process so far.  We've
> made a lot of changes since the v1 and while this isn't ready to go yet, I
> finally managed to get through everything in my inbox so I thought it would be
> a good time to submit a v2 so everyone is on the same page.

[...]

> [PATCH 13/17] RISC-V: Add include subdirectory

This guy is too big, and got silently dropped by the mailing list. Any
chance you could split it up please, so that it can be reviewed?

Thanks,

Will

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 02/17] pcie-xilinx: add missing 5th legacy interrupt
  2017-06-06 22:59     ` Palmer Dabbelt
  (?)
  (?)
@ 2017-06-07  9:24     ` Marc Zyngier
  2017-06-07 19:03       ` Wesley Terpstra
  -1 siblings, 1 reply; 283+ messages in thread
From: Marc Zyngier @ 2017-06-07  9:24 UTC (permalink / raw)
  To: Palmer Dabbelt, linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Wesley W. Terpstra, Bjorn Helgaas

+ Bjorn

On 06/06/17 23:59, Palmer Dabbelt wrote:
> From: "Wesley W. Terpstra" <wesley@sifive.com>
> 
> These are numbered from 1.
> 
> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> ---
>  drivers/pci/host/pcie-xilinx.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/pci/host/pcie-xilinx.c b/drivers/pci/host/pcie-xilinx.c
> index 2fe2df51f9f8..8804145d399a 100644
> --- a/drivers/pci/host/pcie-xilinx.c
> +++ b/drivers/pci/host/pcie-xilinx.c
> @@ -443,7 +443,7 @@ static irqreturn_t xilinx_pcie_intr_handler(int irq, void *data)
>  			val = ((val & XILINX_PCIE_RPIFR1_INTR_MASK) >>
>  				XILINX_PCIE_RPIFR1_INTR_SHIFT) + 1;
>  			generic_handle_irq(irq_find_mapping(port->leg_domain,
> -							    val));
> +							    val + 1));
>  		}
>  	}
>  
> @@ -524,7 +524,7 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
>  		return -ENODEV;
>  	}
>  
> -	port->leg_domain = irq_domain_add_linear(pcie_intc_node, 4,
> +	port->leg_domain = irq_domain_add_linear(pcie_intc_node, 5,
>  						 &intx_domain_ops,
>  						 port);
>  	if (!port->leg_domain) {
> 

This is a common problem with the current OF code that numbers INTx from
1 instead of zero (there is no 5th legacy interrupt in the PCI spec,
despite what $SUBJECT says). I'd be inclined to fix this at the core
level rather than papering over it in the various drivers...

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 03/17] base: fix order of OF initialization
@ 2017-06-07  9:35         ` Mark Rutland
  0 siblings, 0 replies; 283+ messages in thread
From: Mark Rutland @ 2017-06-07  9:35 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Palmer Dabbelt, Linux-Arch, linux-kernel, Arnd Bergmann,
	Olof Johansson, albert, patches, Wesley W. Terpstra, Rob Herring,
	Frank Rowand, devicetree

On Wed, Jun 07, 2017 at 09:07:20AM +0200, Geert Uytterhoeven wrote:
> CC devicetree folks
> 
> On Wed, Jun 7, 2017 at 12:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> > From: "Wesley W. Terpstra" <wesley@sifive.com>
> >
> > This fixes: [    0.010000] cpu cpu0: Error -2 creating of_node link
> > ... which you get for every CPU on all architectures with an OF cpu/ node.

I take it this means a /cpus node? Or the /cpus/cpu@* nodes?

I'm not seeing this on arm64 when booting v4.12-rc4 with DT, so clearly
this doesn't affect all such architectures.

What path are these errors happening in?

Thanks,
Mark.

> >
> > This affects riscv, nios, etc.
> >
> > Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> > ---
> >  drivers/base/init.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/base/init.c b/drivers/base/init.c
> > index 48c0e220acc0..0dcd17e561d0 100644
> > --- a/drivers/base/init.c
> > +++ b/drivers/base/init.c
> > @@ -31,9 +31,9 @@ void __init driver_init(void)
> >         /* These are also core pieces, but must come after the
> >          * core core pieces.
> >          */
> > +       of_core_init();
> >         platform_bus_init();
> >         cpu_dev_init();
> >         memory_dev_init();
> >         container_dev_init();
> > -       of_core_init();
> >  }
> > --
> > 2.13.0

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 09/17] clocksource/timer-riscv: New RISC-V Clocksource
  2017-06-06 22:59     ` Palmer Dabbelt
  (?)
  (?)
@ 2017-06-07  9:43     ` Marc Zyngier
  2017-06-24  2:02         ` Palmer Dabbelt
  -1 siblings, 1 reply; 283+ messages in thread
From: Marc Zyngier @ 2017-06-07  9:43 UTC (permalink / raw)
  To: Palmer Dabbelt, linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches

On 06/06/17 23:59, Palmer Dabbelt wrote:
> The RISC-V ISA defines a single RTC as well as an SBI oneshot timer.
> This timer is present on all RISC-V systems.
> 
> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> ---
>  drivers/clocksource/Kconfig       |   8 +++
>  drivers/clocksource/Makefile      |   1 +
>  drivers/clocksource/timer-riscv.c | 118 ++++++++++++++++++++++++++++++++++++++
>  3 files changed, 127 insertions(+)
>  create mode 100644 drivers/clocksource/timer-riscv.c
> 
> diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
> index 545d541ae20e..1c2c6e7c7fab 100644
> --- a/drivers/clocksource/Kconfig
> +++ b/drivers/clocksource/Kconfig
> @@ -612,4 +612,12 @@ config CLKSRC_ST_LPC
>  	  Enable this option to use the Low Power controller timer
>  	  as clocksource.
>  
> +config CLKSRC_RISCV
> +	#bool "Clocksource for the RISC-V platform"
> +	def_bool y if RISCV
> +	depends on RISCV
> +	help
> +	  This enables a clocksource based on the RISC-V SBI timer, which is
> +	  built in to all RISC-V systems.
> +
>  endmenu
> diff --git a/drivers/clocksource/Makefile b/drivers/clocksource/Makefile
> index 2b5b56a6f00f..408ed9d314dc 100644
> --- a/drivers/clocksource/Makefile
> +++ b/drivers/clocksource/Makefile
> @@ -73,3 +73,4 @@ obj-$(CONFIG_H8300_TMR16)		+= h8300_timer16.o
>  obj-$(CONFIG_H8300_TPU)			+= h8300_tpu.o
>  obj-$(CONFIG_CLKSRC_ST_LPC)		+= clksrc_st_lpc.o
>  obj-$(CONFIG_X86_NUMACHIP)		+= numachip.o
> +obj-$(CONFIG_CLKSRC_RISCV)		+= timer-riscv.o
> diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c
> new file mode 100644
> index 000000000000..04ef7b9130b3
> --- /dev/null
> +++ b/drivers/clocksource/timer-riscv.c
> @@ -0,0 +1,118 @@
> +/*
> + * Copyright (C) 2012 Regents of the University of California
> + * Copyright (C) 2017 SiFive
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful,
> + *   but WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + *   GNU General Public License for more details.
> + */
> +
> +#include <linux/clocksource.h>
> +#include <linux/clockchips.h>
> +#include <linux/interrupt.h>
> +#include <linux/irq.h>
> +#include <linux/delay.h>
> +#include <linux/of.h>
> +
> +#include <asm/irq.h>
> +#include <asm/csr.h>
> +#include <asm/sbi.h>
> +#include <asm/delay.h>
> +
> +unsigned long riscv_timebase;
> +
> +static DEFINE_PER_CPU(struct clock_event_device, clock_event);
> +
> +static int riscv_timer_set_next_event(unsigned long delta,
> +	struct clock_event_device *evdev)
> +{
> +	sbi_set_timer(get_cycles() + delta);
> +	return 0;
> +}
> +
> +static int riscv_timer_set_oneshot(struct clock_event_device *evt)
> +{
> +	/* no-op; only one mode */
> +	return 0;
> +}
> +
> +static int riscv_timer_set_shutdown(struct clock_event_device *evt)
> +{
> +	/* can't stop the clock! */
> +	return 0;
> +}
> +
> +static u64 riscv_rdtime(struct clocksource *cs)
> +{
> +	return get_cycles();
> +}
> +
> +static struct clocksource riscv_clocksource = {
> +	.name = "riscv_clocksource",
> +	.rating = 300,
> +	.read = riscv_rdtime,
> +#ifdef CONFIG_64BITS
> +	.mask = CLOCKSOURCE_MASK(64),
> +#else
> +	.mask = CLOCKSOURCE_MASK(32),
> +#endif /* CONFIG_64BITS */
> +	.flags = CLOCK_SOURCE_IS_CONTINUOUS,
> +};
> +
> +void riscv_timer_interrupt(void)
> +{
> +	int cpu = smp_processor_id();
> +	struct clock_event_device *evdev = &per_cpu(clock_event, cpu);
> +
> +	evdev->event_handler(evdev);
> +}
> +
> +void __init init_clockevent(void)
> +{
> +	int cpu = smp_processor_id();
> +	struct clock_event_device *ce = &per_cpu(clock_event, cpu);
> +
> +	*ce = (struct clock_event_device){
> +		.name = "riscv_timer_clockevent",
> +		.features = CLOCK_EVT_FEAT_ONESHOT,
> +		.rating = 300,
> +		.cpumask = cpumask_of(cpu),
> +		.set_next_event = riscv_timer_set_next_event,
> +		.set_state_oneshot  = riscv_timer_set_oneshot,
> +		.set_state_shutdown = riscv_timer_set_shutdown,
> +	};
> +
> +	/* Enable timer interrupts */
> +	csr_set(sie, SIE_STIE);
> +
> +	clockevents_config_and_register(ce, riscv_timebase, 100, 0x7fffffff);
> +}
> +
> +static unsigned long __init of_timebase(void)
> +{
> +	struct device_node *cpu;
> +	const __be32 *prop;
> +
> +	cpu = of_find_node_by_path("/cpus");
> +	if (cpu) {
> +		prop = of_get_property(cpu, "timebase-frequency", NULL);
> +		if (prop)
> +			return be32_to_cpu(*prop);

Consider using of_property_read_u32() instead.

> +	}
> +
> +	return 10000000;

Is this an architectural guarantee? Or something that is implementation
specific?

> +}
> +
> +void __init time_init(void)
> +{
> +	riscv_timebase = of_timebase();
> +	lpj_fine = riscv_timebase / HZ;
> +
> +	clocksource_register_hz(&riscv_clocksource, riscv_timebase);
> +	init_clockevent();
> +}
> 

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 08/17] dts: include documentation for the RISC-V interrupt controllers
  2017-06-07  7:11     ` Geert Uytterhoeven
@ 2017-06-07 10:13       ` Mark Rutland
  2017-06-07 18:57         ` Wesley Terpstra
  0 siblings, 1 reply; 283+ messages in thread
From: Mark Rutland @ 2017-06-07 10:13 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Palmer Dabbelt, Linux-Arch, linux-kernel, Arnd Bergmann,
	Olof Johansson, albert, patches, Wesley W. Terpstra,
	Thomas Gleixner, Jason Cooper, Marc Zyngier, Rob Herring,
	devicetree

On Wed, Jun 07, 2017 at 09:11:31AM +0200, Geert Uytterhoeven wrote:
> CC irqchip and devicetree folks

Thanks Geert.

Palmer, in future, you can ensure (most) relevant parties are Cc'd by
using scripts/get_maintainer.pl to find them, and adding Cc: lines to
the relevant patches.

You can either hand that a patch or a path, e.g.

[mark@leverpostej:~/src/linux]% ./scripts/get_maintainer.pl -f Documentation/devicetree/bindings/interrupt-controller/                  
Thomas Gleixner <tglx@linutronix.de> (maintainer:IRQCHIP DRIVERS)
Jason Cooper <jason@lakedaemon.net> (maintainer:IRQCHIP DRIVERS)
Marc Zyngier <marc.zyngier@arm.com> (maintainer:IRQCHIP DRIVERS)
Rob Herring <robh+dt@kernel.org> (maintainer:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS)
Mark Rutland <mark.rutland@arm.com> (maintainer:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS)
linux-kernel@vger.kernel.org (open list:IRQCHIP DRIVERS)
devicetree@vger.kernel.org (open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS)

Otherwise, I have a few comments inline below.

> On Wed, Jun 7, 2017 at 12:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> > From: "Wesley W. Terpstra" <wesley@sifive.com>
> >
> > Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> > ---
> >  .../interrupt-controller/riscv,cpu-intc.txt        | 46 ++++++++++++++++++++++
> >  .../bindings/interrupt-controller/riscv,plic0.txt  | 44 +++++++++++++++++++++
> >  2 files changed, 90 insertions(+)
> >  create mode 100644 Documentation/devicetree/bindings/interrupt-controller/riscv,cpu-intc.txt
> >  create mode 100644 Documentation/devicetree/bindings/interrupt-controller/riscv,plic0.txt
> >
> > diff --git a/Documentation/devicetree/bindings/interrupt-controller/riscv,cpu-intc.txt b/Documentation/devicetree/bindings/interrupt-controller/riscv,cpu-intc.txt
> > new file mode 100644
> > index 000000000000..62f02e834ff9
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/interrupt-controller/riscv,cpu-intc.txt
> > @@ -0,0 +1,46 @@
> > +RISC-V Hart-Level Interrupt Controller (HLIC)
> > +---------------------------------------------
> > +
> > +RISC-V cores include Control Status Registers (CSRs) which are local to each
> > +hart and can be read or written by software. Some of these CSRs are used to
> > +control local interrupts connected to the core.
> > +
> > +Typical examples of local interrupts on a RISC-V core include: software IPI
> > +interrupts, timer interrupts, and a link to the PLIC interrupt controller.

So IIUC those interrupts are routed directly to the HLIC, and are (only)
controlled through the HLIC?

Is the HLIC architecturally mandated? i.e. is this guaranteed to be
present on any RISC-V implementation?

Does the presence of the HLIC imply the presence of a PLIC (or
vice/versa)? Typically, the per-cpu and platform-wide parts of the
top-level interrupt controller are fairly intimately coupled.

> > +
> > +Required properties:
> > +- compatible : "riscv,cpu-intc"

You'll need to allocate the "riscv" vendor prefix in 
Documentation/devicetree/bindings/vendor-prefixes.txt

... if that was done in some other patch, I didn't receive it.

> > +- #interrupt-cells : should be <1>

What about the flags?

Are all HLIC interrupts edge triggered (or level triggered)?

> > +- interrupt-controller : Identifies the node as an interrupt controller
> > +
> > +Furthermore, this interrupt-controller MUST be embedded inside the cpu
> > +definition of the hart whose CSRs control these local interrupts.
> > +
> > +Example:
> > +
> > +       cpu1: cpu@1 {
> > +               clock-frequency = <1600000000>;
> > +               compatible = "riscv";
> > +               d-cache-block-size = <64>;
> > +               d-cache-sets = <64>;
> > +               d-cache-size = <16384>;
> > +               d-tlb-sets = <1>;
> > +               d-tlb-size = <32>;
> > +               device_type = "cpu";
> > +               i-cache-block-size = <64>;
> > +               i-cache-sets = <64>;
> > +               i-cache-size = <16384>;
> > +               i-tlb-sets = <1>;
> > +               i-tlb-size = <32>;
> > +               mmu-type = "riscv,sv39";
> > +               next-level-cache = <&L2>;
> > +               reg = <1>;
> > +               riscv,isa = "rv64imac";
> > +               status = "okay";
> > +               tlb-split;

We can probably replace most of these with a "...", as they're largely
irrelevant to this binding.

> > +               cpu1-intc: interrupt-controller {
> > +                       #interrupt-cells = <1>;
> > +                       compatible = "riscv,cpu-intc";
> > +                       interrupt-controller;
> > +               };
> > +       };
> > diff --git a/Documentation/devicetree/bindings/interrupt-controller/riscv,plic0.txt b/Documentation/devicetree/bindings/interrupt-controller/riscv,plic0.txt
> > new file mode 100644
> > index 000000000000..c05b5806f7d2
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/interrupt-controller/riscv,plic0.txt
> > @@ -0,0 +1,44 @@
> > +RISC-V Platform-Level Interrupt Controller (PLIC)
> > +-------------------------------------------------
> > +
> > +RISC-V cores typically include a PLIC, which routes interrupts from multiple
> > +devices to multiple hart contexts.  The PLIC is connected to the interrupt
> > +controller embedded in a RISC-V core via the interrupt-related CSRs.

Do you mean that the PLIC is connected to the HLIC, or that the HLIC is
also managed in part through CSRs?

> > +
> > +A hart context is a privilege mode in a hardware execution thread.  For
> > +example, in a 4-core system with 2-way SMT, you have 8 harts and probably
> > +at least two privilege modes per hart; machine mode and supervisor mode.
> > +
> > +Each interrupt can be enabled on per-context basis. Any context can claim
> > +a pending enabled interrupt and then release it once it has been handled.
> > +
> > +Each interrupt has a configurable priority. Higher priority interrupts are
> > +serviced first. Each context can specify a priority threshold. Interrupts
> > +with priority below this threshold will not cause the PLIC to raise its
> > +interrupt line leading to the context.
> > +
> > +Required properties:
> > +- compatible : "riscv,plic0"
> > +- #address-cells : should be <0>
> > +- #interrupt-cells : should be <1>

As with the HLIC, what about the flags?

> > +- interrupt-controller : Identifies the node as an interrupt controller
> > +- reg : Should contain 1 register range (address and length)
> > +- riscv,ndev : Specifies the number of interrupts attached to the PLIC

Why do we need to know this?

I suspect this is actually the number of interrupts implemented in the
PLIC, rather than the number of interrupts attached. i.e. the PLIC can
be implemented with a subset of the potential registers/bits. Is that
correct?

If so, something like "riscv,num-interrupts" would be better, along with
a clearer description.

> > +- interrupts-extended : Specifies which contexts are connected to the PLIC

That description doesn't sound right.

I take it that these are the HLIC interrupts that the PLIC can raise?

You will need to be explicit about the order of interrupts in this
property. i.e. which interrupt is routed to which context?

Is the interrupt at the HLIC well known? From the example I see that
here local interrupts 11 and 9 are used. Is that mandated, or just the
case for this particular implementation?

Also, please consider how you will handle the case when the Linux
logical CPU ID is not the same as the physical ID, and how you will
handle physical IDs being sparse.

We went though a lot of pain trying to do something similar for the ARM
PMU interrupts (see the interrupt-affinity property in
Documentation/devicetree/bindings/arm/pmu.txt), and it's still painful
to deal with.

> > +
> > +Example:
> > +
> > +       plic: interrupt-controller@c000000 {
> > +               #address-cells = <0>;

This can go, given you don't have sub-nodes, nor a #size-cells property.

Thanks,
Mark.

> > +               #interrupt-cells = <1>;
> > +               compatible = "riscv,plic0";
> > +               interrupt-controller;
> > +               interrupts-extended = <
> > +                       &cpu0-intc 11
> > +                       &cpu1-intc 11 &cpu1-intc 9
> > +                       &cpu2-intc 11 &cpu2-intc 9
> > +                       &cpu3-intc 11 &cpu3-intc 9
> > +                       &cpu4-intc 11 &cpu4-intc 9>;
> > +               reg = <0xc000000 0x4000000>;
> > +               riscv,ndev = <10>;
> > +       };
> > --
> > 2.13.0

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 10/17] irqchip: New RISC-V PLIC Driver
  2017-06-06 23:00     ` Palmer Dabbelt
  (?)
  (?)
@ 2017-06-07 10:52     ` Marc Zyngier
  2017-06-09 13:47       ` Will Deacon
  2017-06-25 20:49         ` Palmer Dabbelt
  -1 siblings, 2 replies; 283+ messages in thread
From: Marc Zyngier @ 2017-06-07 10:52 UTC (permalink / raw)
  To: Palmer Dabbelt, linux-arch, linux-kernel, Arnd Bergmann, olof
  Cc: albert, patches, Will Deacon, Peter Zijlstra

Hi Palmer,

On 07/06/17 00:00, Palmer Dabbelt wrote:
> This patch adds a driver for the Platform Level Interrupt Controller
> (PLIC) specified as part of the RISC-V supervisor level ISA manual.
> The PLIC connects global interrupt sources to the local interrupt
> controller on each hart.  A PLIC is present on all RISC-V systems.
> 
> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> ---
>  drivers/irqchip/Kconfig          |  12 ++
>  drivers/irqchip/Makefile         |   1 +
>  drivers/irqchip/irq-riscv-plic.c | 253 +++++++++++++++++++++++++++++++++++++++
>  3 files changed, 266 insertions(+)
>  create mode 100644 drivers/irqchip/irq-riscv-plic.c
> 
> diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
> index 478f8ace2664..2906d63934ef 100644
> --- a/drivers/irqchip/Kconfig
> +++ b/drivers/irqchip/Kconfig
> @@ -301,3 +301,15 @@ config QCOM_IRQ_COMBINER
>  	help
>  	  Say yes here to add support for the IRQ combiner devices embedded
>  	  in Qualcomm Technologies chips.
> +
> +config RISCV_PLIC
> +        bool "Platform-Level Interrupt Controller"
> +	depends on RISCV
> +        default y
> +        help
> +	   This enables support for the PLIC chip found in standard RISC-V
> +	   systems. The PLIC is the top-most interrupt controller found in

nit: this seems to slightly contradict what is being said in patch #8,
where the HLIC is the top-level interrupt controller.

> +	   the system, connected directly to the core complex. All other
> +	   interrupt sources (MSI, GPIO, etc) are subordinate to the PLIC.
> +
> +	   If you don't know what to do here, say Y.
> diff --git a/drivers/irqchip/Makefile b/drivers/irqchip/Makefile
> index b64c59b838a0..bed94cc89146 100644
> --- a/drivers/irqchip/Makefile
> +++ b/drivers/irqchip/Makefile
> @@ -76,3 +76,4 @@ obj-$(CONFIG_EZNPS_GIC)			+= irq-eznps.o
>  obj-$(CONFIG_ARCH_ASPEED)		+= irq-aspeed-vic.o
>  obj-$(CONFIG_STM32_EXTI) 		+= irq-stm32-exti.o
>  obj-$(CONFIG_QCOM_IRQ_COMBINER)		+= qcom-irq-combiner.o
> +obj-$(CONFIG_RISCV_PLIC)		+= irq-riscv-plic.o
> diff --git a/drivers/irqchip/irq-riscv-plic.c b/drivers/irqchip/irq-riscv-plic.c
> new file mode 100644
> index 000000000000..906c8a62a911
> --- /dev/null
> +++ b/drivers/irqchip/irq-riscv-plic.c
> @@ -0,0 +1,253 @@
> +/*
> + * Copyright (C) 2017 SiFive
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful,
> + *   but WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + *   GNU General Public License for more details.
> + */
> +
> +#include <linux/interrupt.h>
> +#include <linux/io.h>
> +#include <linux/irq.h>
> +#include <linux/irqchip.h>
> +#include <linux/irqchip/chained_irq.h>
> +#include <linux/irqdomain.h>
> +#include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/of_address.h>
> +#include <linux/of_irq.h>
> +#include <linux/platform_device.h>
> +
> +/* From the RISC-V Privileged Spec v1.10:
> + *
> + * Global interrupt sources are assigned small unsigned integer identifiers,
> + * beginning at the value 1.  An interrupt ID of 0 is reserved to mean “no
> + * interrupt”.  Interrupt identifiers are also used to break ties when two or
> + * more interrupt sources have the same assigned priority. Smaller values of
> + * interrupt ID take precedence over larger values of interrupt ID.
> + *
> + * It's not defined what the largest device ID is, so we're just fixing
> + * MAX_DEVICES right here (which is named oddly, as there will never be a
> + * device 0).
> + */
> +#define MAX_DEVICES	1024
> +#define MAX_CONTEXTS	15872

If those are HW properties, they should probably come from the device
tree instead of being hardcoded here.

> +
> +#define PRIORITY_BASE	0
> +#define ENABLE_BASE	0x2000
> +#define ENABLE_SIZE	0x80
> +#define HART_BASE	0x200000
> +#define HART_SIZE	0x1000
> +
> +struct plic_hart_context {
> +	u32 threshold;
> +	u32 claim;
> +};

Representing HW layout as a C structure is not something we usually do
in the kernel. It relies on the ABI (which may change over time), and
makes it harder to quickly distinguish what is a SW concept from what's
not. It also leads to some of the mistakes below...
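
For illustration, the same layout can be described with plain offsets
and __iomem arithmetic.  The offsets reuse the constants from the patch
(threshold at +0x00 and claim at +0x04, matching the struct above), but
the helper names are only a sketch:

#define HART_THRESHOLD_OFF	0x00
#define HART_CLAIM_OFF		0x04

static inline void __iomem *plic_hart_base(struct plic_data *data, int ctx)
{
	return data->reg + HART_BASE + ctx * HART_SIZE;
}

static void plic_set_threshold(struct plic_data *data, int ctx, u32 threshold)
{
	writel(threshold, plic_hart_base(data, ctx) + HART_THRESHOLD_OFF);
}

static u32 plic_claim(struct plic_data *data, int ctx)
{
	return readl(plic_hart_base(data, ctx) + HART_CLAIM_OFF);
}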

> +
> +struct plic_enable_context {
> +	atomic_t mask[32]; // 32-bit * 32-entry

/* */ comments please, placed above the field.

> +};
> +
> +struct plic_priority {
> +	u32 prio[MAX_DEVICES];
> +};
> +
> +struct plic_data {
> +	struct irq_chip		chip;
> +	struct irq_domain	*domain;
> +	u32			ndev;
> +	void __iomem		*reg;
> +	int			handlers;
> +	struct plic_handler	*handler;
> +	char			name[30];

Why 30? #define?

> +};
> +
> +struct plic_handler {
> +	struct plic_hart_context	*context;
> +	struct plic_data		*data;
> +};
> +
> +static inline
> +struct plic_hart_context *plic_hart_context(struct plic_data *data, size_t i)

What is 'i' here?

> +{
> +	return (struct plic_hart_context *)((char *)data->reg + HART_BASE + HART_SIZE*i);

This looks dodgy on a number of levels:
- the various levels of casts are useless
- what you return is still an __iomem

Sparse would certainly shout at you if you ran it on this file.

Same comment for the two functions below.

> +}
> +
> +static inline
> +struct plic_enable_context *plic_enable_context(struct plic_data *data, size_t i)
> +{
> +	return (struct plic_enable_context *)((char *)data->reg + ENABLE_BASE + ENABLE_SIZE*i);
> +}
> +
> +static inline
> +struct plic_priority *plic_priority(struct plic_data *data)
> +{
> +	return (struct plic_priority *)((char *)data->reg + PRIORITY_BASE);
> +}
> +
> +static void plic_disable(struct plic_data *data, int i, int hwirq)
> +{
> +	struct plic_enable_context *enable = plic_enable_context(data, i);
> +
> +	atomic_and(~(1 << (hwirq % 32)), &enable->mask[hwirq / 32]);

This is still a device access, right? What does it mean to use the
atomic primitives on that? What are you racing against? I thought the
various contexts were private to an execution context...

Adding Will and PeterZ to the CC list because they will probably have
their own views on this...

> +}
> +
> +static void plic_enable(struct plic_data *data, int i, int hwirq)
> +{
> +	struct plic_enable_context *enable = plic_enable_context(data, i);
> +
> +	atomic_or((1 << (hwirq % 32)), &enable->mask[hwirq / 32]);
> +}
> +
> +// There is no need to mask/unmask PLIC interrupts
> +// They are "masked" by reading claim and "unmasked" when writing it back.

Hmmm. What you describe here is the FastEOI flow.

> +static void plic_irq_mask(struct irq_data *d) { }
> +static void plic_irq_unmask(struct irq_data *d) { }
> +
> +static void plic_irq_enable(struct irq_data *d)
> +{
> +	struct plic_data *data = irq_data_get_irq_chip_data(d);
> +	struct plic_priority *priority = plic_priority(data);
> +	int i;
> +
> +	iowrite32(1, &priority->prio[d->hwirq]);

Using iowrite is only meaningful if your architecture has an actual I/O
address space that is distinct from the "normal" address space. Using
the write{b,w,l}{_relaxed} accessors would make more sense if you're not
in that case.

> +	for (i = 0; i < data->handlers; ++i)
> +		if (data->handler[i].context)
> +			plic_enable(data, i, d->hwirq);
> +}
> +
> +static void plic_irq_disable(struct irq_data *d)
> +{
> +	struct plic_data *data = irq_data_get_irq_chip_data(d);
> +	struct plic_priority *priority = plic_priority(data);
> +	int i;
> +
> +	iowrite32(0, &priority->prio[d->hwirq]);
> +	for (i = 0; i < data->handlers; ++i)
> +		if (data->handler[i].context)
> +			plic_disable(data, i, d->hwirq);
> +}
> +
> +static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq,
> +			      irq_hw_number_t hwirq)
> +{
> +	struct plic_data *data = d->host_data;
> +
> +	irq_set_chip_and_handler(irq, &data->chip, handle_simple_irq);
> +	irq_set_chip_data(irq, data);
> +	irq_set_noprobe(irq);
> +
> +	return 0;
> +}
> +
> +static const struct irq_domain_ops plic_irqdomain_ops = {
> +	.map	= plic_irqdomain_map,
> +	.xlate	= irq_domain_xlate_onecell,
> +};
> +
> +static void plic_chained_handle_irq(struct irq_desc *desc)
> +{
> +	struct plic_handler *handler = irq_desc_get_handler_data(desc);
> +	struct irq_chip *chip = irq_desc_get_chip(desc);
> +	struct irq_domain *domain = handler->data->domain;
> +	u32 what;
> +
> +	chained_irq_enter(chip, desc);
> +
> +	while ((what = ioread32(&handler->context->claim))) {
> +		int irq = irq_find_mapping(domain, what);
> +
> +		if (irq > 0)
> +			generic_handle_irq(irq);
> +		else
> +			handle_bad_irq(desc);
> +		iowrite32(what, &handler->context->claim);

So this is your EOI. It should be represented as such, instead of
abusing the handle_simple_irq flow.
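
For reference, a sketch of a fasteoi-style arrangement.  The per-CPU
handler pointer is an assumption (the patch as posted has no such
mapping) and none of this is the submitted code:

static DEFINE_PER_CPU(struct plic_handler *, plic_handler_ptr);

static void plic_irq_eoi(struct irq_data *d)
{
	struct plic_handler *handler = __this_cpu_read(plic_handler_ptr);

	/* completion: hand the claimed ID back once the handler is done */
	iowrite32(d->hwirq, &handler->context->claim);
}

static int plic_irqdomain_map(struct irq_domain *dom, unsigned int irq,
			      irq_hw_number_t hwirq)
{
	struct plic_data *data = dom->host_data;

	/* data->chip.irq_eoi = plic_irq_eoi is assumed to be set at init */
	irq_set_chip_and_handler(irq, &data->chip, handle_fasteoi_irq);
	irq_set_chip_data(irq, data);
	irq_set_noprobe(irq);
	return 0;
}

The chained handler would then only read the claim register and call
generic_handle_irq(); handle_fasteoi_irq() takes care of invoking
->irq_eoi() when the handler returns.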

> +	}
> +
> +	chained_irq_exit(chip, desc);
> +}
> +
> +static int plic_init(struct device_node *node, struct device_node *parent)
> +{
> +	struct plic_data *data;
> +	struct resource resource;
> +	int i, ok = 0;
> +
> +	data = kzalloc(sizeof(*data), GFP_KERNEL);
> +	if (WARN_ON(!data))
> +		return -ENOMEM;
> +
> +	data->reg = of_iomap(node, 0);
> +	if (WARN_ON(!data->reg))
> +		return -EIO;
> +
> +	of_property_read_u32(node, "riscv,ndev", &data->ndev);
> +	if (WARN_ON(!data->ndev))
> +		return -EINVAL;
> +
> +	data->handlers = of_irq_count(node);
> +	if (WARN_ON(!data->handlers))
> +		return -EINVAL;
> +
> +	data->handler =
> +		kcalloc(data->handlers, sizeof(*data->handler), GFP_KERNEL);
> +	if (WARN_ON(!data->handler))
> +		return -ENOMEM;
> +
> +	data->domain = irq_domain_add_linear(node, data->ndev+1, &plic_irqdomain_ops, data);
> +	if (WARN_ON(!data->domain))
> +		return -ENOMEM;

Memory leak of data->handler and data, data->reg is still mapped...
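
A sketch of the unwinding being asked for; it mirrors the allocation
order of the patch, with the per-context loop elided:

static int plic_init(struct device_node *node, struct device_node *parent)
{
	struct plic_data *data;
	int ret;

	data = kzalloc(sizeof(*data), GFP_KERNEL);
	if (WARN_ON(!data))
		return -ENOMEM;

	ret = -EIO;
	data->reg = of_iomap(node, 0);
	if (WARN_ON(!data->reg))
		goto out_free_data;

	ret = -EINVAL;
	of_property_read_u32(node, "riscv,ndev", &data->ndev);
	if (WARN_ON(!data->ndev))
		goto out_iounmap;

	data->handlers = of_irq_count(node);
	if (WARN_ON(!data->handlers))
		goto out_iounmap;

	ret = -ENOMEM;
	data->handler = kcalloc(data->handlers, sizeof(*data->handler),
				GFP_KERNEL);
	if (WARN_ON(!data->handler))
		goto out_iounmap;

	data->domain = irq_domain_add_linear(node, data->ndev + 1,
					     &plic_irqdomain_ops, data);
	if (WARN_ON(!data->domain))
		goto out_free_handler;

	/* per-context setup as in the patch goes here */

	return 0;

out_free_handler:
	kfree(data->handler);
out_iounmap:
	iounmap(data->reg);
out_free_data:
	kfree(data);
	return ret;
}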

> +
> +	of_address_to_resource(node, 0, &resource);
> +	snprintf(data->name, sizeof(data->name),
> +		 "riscv,plic0,%llx", resource.start);
> +	data->chip.name = data->name;
> +	data->chip.irq_mask = plic_irq_mask;
> +	data->chip.irq_unmask = plic_irq_unmask;
> +	data->chip.irq_enable = plic_irq_enable;
> +	data->chip.irq_disable = plic_irq_disable;
> +
> +	for (i = 0; i < data->handlers; ++i) {
> +		struct plic_handler *handler = &data->handler[i];
> +		struct of_phandle_args parent;
> +		int parent_irq, hwirq;
> +
> +		if (of_irq_parse_one(node, i, &parent))
> +			continue;
> +		// skip context holes
> +		if (parent.args[0] == -1)
> +			continue;
> +
> +		// skip any contexts that lead to inactive harts
> +		if (of_device_is_compatible(parent.np, "riscv,cpu-intc") &&
> +		    parent.np->parent &&
> +		    riscv_of_processor_hart(parent.np->parent) < 0)
> +			continue;
> +
> +		parent_irq = irq_create_of_mapping(&parent);
> +		if (!parent_irq)
> +			continue;
> +
> +		handler->context = plic_hart_context(data, i);
> +		handler->data = data;
> +		// hwirq prio must be > this to trigger an interrupt
> +		iowrite32(0, &handler->context->threshold);
> +
> +		for (hwirq = 1; hwirq <= data->ndev; ++hwirq)
> +			plic_disable(data, i, hwirq);
> +		irq_set_chained_handler_and_data(parent_irq, plic_chained_handle_irq, handler);
> +		++ok;
> +	}
> +
> +	printk(KERN_INFO "%s: mapped %d interrupts to %d/%d handlers\n",
> +	       data->name, data->ndev, ok, data->handlers);
> +	WARN_ON(!ok);
> +	return 0;
> +}
> +
> +IRQCHIP_DECLARE(plic0, "riscv,plic0", plic_init);
> 

In the future, please CC me (as well as Thomas Gleixner and Jason
Cooper) on all the interrupt-controller related patches.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-06 23:00     ` Palmer Dabbelt
  (?)
  (?)
@ 2017-06-07 11:54     ` Peter Zijlstra
  2017-06-07 12:25       ` Peter Zijlstra
  -1 siblings, 1 reply; 283+ messages in thread
From: Peter Zijlstra @ 2017-06-07 11:54 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: linux-arch, linux-kernel, Arnd Bergmann, olof, albert, patches

On Tue, Jun 06, 2017 at 04:00:03PM -0700, Palmer Dabbelt wrote:
> +/* Assume that atomic operations are already serializing */
> +#define smp_mb__before_atomic_dec()	barrier()
> +#define smp_mb__after_atomic_dec()	barrier()
> +#define smp_mb__before_atomic_inc()	barrier()
> +#define smp_mb__after_atomic_inc()	barrier()

> +#define smp_mb__before_clear_bit()  smp_mb()
> +#define smp_mb__after_clear_bit()   smp_mb()

These no longer exist.. Also how can they be different? bitops would use
the same atomic primitives as regular atomic ops would and would thus
have the very same implicit ordering.
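
Today that would be a single pair covering both the atomic_*() and
bitops families; a sketch (whether barrier() would be enough depends on
how strongly ordered the plain AMOs are, smp_mb() being the
conservative choice):

#define smp_mb__before_atomic()	smp_mb()
#define smp_mb__after_atomic()	smp_mb()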

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-06 23:00     ` Palmer Dabbelt
                       ` (2 preceding siblings ...)
  (?)
@ 2017-06-07 12:06     ` Peter Zijlstra
  2017-06-07 12:18       ` Peter Zijlstra
  2017-06-07 12:36       ` Peter Zijlstra
  -1 siblings, 2 replies; 283+ messages in thread
From: Peter Zijlstra @ 2017-06-07 12:06 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: linux-arch, linux-kernel, Arnd Bergmann, olof, albert, patches

On Tue, Jun 06, 2017 at 04:00:03PM -0700, Palmer Dabbelt wrote:

> + * atomic_add - add integer to atomic variable
> + * @i: integer value to add
> + * @v: pointer of type atomic_t
> + *
> + * Atomically adds @i to @v.
> + */
> +static inline void atomic_add(int i, atomic_t *v)
> +{
> +	__asm__ __volatile__ (
> +		"amoadd.w zero, %1, %0"
> +		: "+A" (v->counter)
> +		: "r" (i));
> +}
> +
> +#define atomic_fetch_add atomic_fetch_add
> +static inline int atomic_fetch_add(unsigned int mask, atomic_t *v)
> +{
> +	int out;
> +
> +	__asm__ __volatile__ (
> +		"amoadd.w %2, %1, %0"
> +		: "+A" (v->counter), "=r" (out)
> +		: "r" (mask));
> +	return out;
> +}
> +
> +/**
> + * atomic_sub - subtract integer from atomic variable
> + * @i: integer value to subtract
> + * @v: pointer of type atomic_t
> + *
> + * Atomically subtracts @i from @v.
> + */
> +static inline void atomic_sub(int i, atomic_t *v)
> +{
> +	atomic_add(-i, v);
> +}
> +
> +#define atomic_fetch_sub atomic_fetch_sub
> +static inline int atomic_fetch_sub(unsigned int mask, atomic_t *v)
> +{
> +	int out;
> +
> +	__asm__ __volatile__ (
> +		"amosub.w %2, %1, %0"
> +		: "+A" (v->counter), "=r" (out)
> +		: "r" (mask));
> +	return out;
> +}
> +
> +/**
> + * atomic_add_return - add integer to atomic variable
> + * @i: integer value to add
> + * @v: pointer of type atomic_t
> + *
> + * Atomically adds @i to @v and returns the result
> + */
> +static inline int atomic_add_return(int i, atomic_t *v)
> +{
> +	register int c;
> +
> +	__asm__ __volatile__ (
> +		"amoadd.w %0, %2, %1"
> +		: "=r" (c), "+A" (v->counter)
> +		: "r" (i));
> +	return (c + i);
> +}
> +
> +/**
> + * atomic_sub_return - subtract integer from atomic variable
> + * @i: integer value to subtract
> + * @v: pointer of type atomic_t
> + *
> + * Atomically subtracts @i from @v and returns the result
> + */
> +static inline int atomic_sub_return(int i, atomic_t *v)
> +{
> +	return atomic_add_return(-i, v);
> +}
> +

> +/**
> + * atomic_and - Atomically clear bits in atomic variable
> + * @mask: Mask of the bits to be retained
> + * @v: pointer of type atomic_t
> + *
> + * Atomically retains the bits set in @mask from @v
> + */
> +static inline void atomic_and(unsigned int mask, atomic_t *v)
> +{
> +	__asm__ __volatile__ (
> +		"amoand.w zero, %1, %0"
> +		: "+A" (v->counter)
> +		: "r" (mask));
> +}
> +
> +#define atomic_fetch_and atomic_fetch_and
> +static inline int atomic_fetch_and(unsigned int mask, atomic_t *v)
> +{
> +	int out;
> +
> +	__asm__ __volatile__ (
> +		"amoand.w %2, %1, %0"
> +		: "+A" (v->counter), "=r" (out)
> +		: "r" (mask));
> +	return out;
> +}
> +
> +/**
> + * atomic_or - Atomically set bits in atomic variable
> + * @mask: Mask of the bits to be set
> + * @v: pointer of type atomic_t
> + *
> + * Atomically sets the bits set in @mask in @v
> + */
> +static inline void atomic_or(unsigned int mask, atomic_t *v)
> +{
> +	__asm__ __volatile__ (
> +		"amoor.w zero, %1, %0"
> +		: "+A" (v->counter)
> +		: "r" (mask));
> +}
> +
> +#define atomic_fetch_or atomic_fetch_or
> +static inline int atomic_fetch_or(unsigned int mask, atomic_t *v)
> +{
> +	int out;
> +
> +	__asm__ __volatile__ (
> +		"amoor.w %2, %1, %0"
> +		: "+A" (v->counter), "=r" (out)
> +		: "r" (mask));
> +	return out;
> +}
> +
> +/**
> + * atomic_xor - Atomically flips bits in atomic variable
> + * @mask: Mask of the bits to be flipped
> + * @v: pointer of type atomic_t
> + *
> + * Atomically flips the bits set in @mask in @v
> + */
> +static inline void atomic_xor(unsigned int mask, atomic_t *v)
> +{
> +	__asm__ __volatile__ (
> +		"amoxor.w zero, %1, %0"
> +		: "+A" (v->counter)
> +		: "r" (mask));
> +}
> +
> +#define atomic_fetch_xor atomic_fetch_xor
> +static inline int atomic_fetch_xor(unsigned int mask, atomic_t *v)
> +{
> +	int out;
> +
> +	__asm__ __volatile__ (
> +		"amoxor.w %2, %1, %0"
> +		: "+A" (v->counter), "=r" (out)
> +		: "r" (mask));
> +	return out;
> +}

What pretty much all the other architectures do is something like:

#define ATOMIC_OP(op, asm_op, c_op)				\
static __always_inline void atomic_##op(int i, atomic_t *v)	\
{								\
	__asm__ __volatile__ (					\
		"amo" #asm_op ".w zero, %1, %0"			\
		: "+A" (v->counter)				\
		: "r" (i));					\
}

#define ATOMIC_FETCH_OP(op, asm_op, c_op)			\
static __always_inline int atomic_fetch_##op(int i, atomic_t *v)\
{								\
	register int ret;					\
	__asm__ __volatile__ (					\
		"amo" #asm_op ".w %2, %1, %0"			\
		: "+A" (v->counter), "=r" (ret)			\
		: "r" (mask));					\
	return ret;						\
}

#define ATOMIC_OP_RETURN(op, asm_op, c_op)			\
static __always_inline int atomic_##op##_return(int i, atomic_t *v) \
{								\
	return atomic_fetch_##op(i, v) c_op i;			\
}

#define ATOMIC_OPS(op, asm_op, c_op)				\
	ATOMIC_OP(op, asm_op, c_op)				\
	ATOMIC_OP_RETURN(op, asm_op, c_op)			\
	ATOMIC_FETCH_OP(op, asm_op, c_op)

ATOMIC_OPS(add, add, +)
ATOMIC_OPS(sub, sub, -)

#undef ATOMIC_OPS

#define ATOMIC_OPS(op, asm_op, c_op)				\
	ATOMIC_OP(op, asm_op, c_op)				\
	ATOMIC_FETCH_OP(op, asm_op, c_op)

ATOMIC_OPS(and, and, &)
ATOMIC_OPS(or, or, |)
ATOMIC_OPS(xor, xor, ^)

#undef ATOMIC_OPS

Which is much simpler, no?

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-07 12:06     ` Peter Zijlstra
@ 2017-06-07 12:18       ` Peter Zijlstra
  2017-06-07 12:36       ` Peter Zijlstra
  1 sibling, 0 replies; 283+ messages in thread
From: Peter Zijlstra @ 2017-06-07 12:18 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: linux-arch, linux-kernel, Arnd Bergmann, olof, albert, patches

On Wed, Jun 07, 2017 at 02:06:13PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 06, 2017 at 04:00:03PM -0700, Palmer Dabbelt wrote:

> > +static inline int atomic_fetch_sub(unsigned int mask, atomic_t *v)
> > +{
> > +	int out;
> > +
> > +	__asm__ __volatile__ (
> > +		"amosub.w %2, %1, %0"
> > +		: "+A" (v->counter), "=r" (out)
> > +		: "r" (mask));
> > +	return out;
> > +}

Your instruction manual does not list AMOSUB as a valid instruction, so
either the manual is wrong or this code never compiled.  Please clarify.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-07 11:54     ` Peter Zijlstra
@ 2017-06-07 12:25       ` Peter Zijlstra
  0 siblings, 0 replies; 283+ messages in thread
From: Peter Zijlstra @ 2017-06-07 12:25 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: linux-arch, linux-kernel, Arnd Bergmann, olof, albert, patches

On Wed, Jun 07, 2017 at 01:54:23PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 06, 2017 at 04:00:03PM -0700, Palmer Dabbelt wrote:
> > +/* Assume that atomic operations are already serializing */
> > +#define smp_mb__before_atomic_dec()	barrier()
> > +#define smp_mb__after_atomic_dec()	barrier()
> > +#define smp_mb__before_atomic_inc()	barrier()
> > +#define smp_mb__after_atomic_inc()	barrier()
> 
> > +#define smp_mb__before_clear_bit()  smp_mb()
> > +#define smp_mb__after_clear_bit()   smp_mb()
> 
> These no longer exist.. Also how can they be different? bitops would use
> the same atomic primitives as regular atomic ops would and would thus
> have the very same implicit ordering.

Your manual states that each atomic instruction (be it AMO or LR/SC)
has two ordering bits: AQ and RL.  If neither is set the instruction is
unordered; if exactly one is set, it's either an ACQUIRE or a RELEASE;
and if both are set it's sequentially consistent.

So you can in fact make the above happen; however, your atomic
implementation does not appear to use the ".aq.rl" mnemonics and so
would be entirely unordered, just like your bitops.

Therefore the above is just plain wrong and broken.
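
For reference, a fully ordered AMO would presumably be spelled with
both suffixes, something like the below (I'm guessing at the assembler
syntax for the suffix, so treat this as a sketch):

	__asm__ __volatile__ (
		"amoadd.w.aq.rl zero, %1, %0"
		: "+A" (v->counter)
		: "r" (i));

whereas the plain "amoadd.w" forms in your atomic.h are entirely
relaxed.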

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-07 12:06     ` Peter Zijlstra
  2017-06-07 12:18       ` Peter Zijlstra
@ 2017-06-07 12:36       ` Peter Zijlstra
  2017-06-07 12:58         ` Peter Zijlstra
  1 sibling, 1 reply; 283+ messages in thread
From: Peter Zijlstra @ 2017-06-07 12:36 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: linux-arch, linux-kernel, Arnd Bergmann, olof, albert, patches

On Wed, Jun 07, 2017 at 02:06:13PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 06, 2017 at 04:00:03PM -0700, Palmer Dabbelt wrote:
> What pretty much all the other architectures do is something like:
> 
> #define ATOMIC_OP(op, asm_op, c_op)				\
> static __always_inline void atomic_##op(int i, atomic_t *v)	\
> {								\
> 	__asm__ __volatile__ (					\
> 		"amo" #asm_op ".w zero, %1, %0"			\
> 		: "+A" (v->counter)				\
> 		: "r" (i));					\
> }
> 
> #define ATOMIC_FETCH_OP(op, asm_op, c_op)			\
> static __always_inline int atomic_fetch_##op(int i, atomic_t *v)\
> {								\
> 	register int ret;					\
> 	__asm__ __volatile__ (					\
> 		"amo" #asm_op ".w %2, %1, %0"			\
> 		: "+A" (v->counter), "=r" (ret)			\
> 		: "r" (mask));					\
> 	return ret;						\
> }
> 
> #define ATOMIC_OP_RETURN(op, asm_op, c_op)			\
> static __always_inline int atomic_##op##_return(int i, atomic_t *v) \
> {								\
> 	return atomic_fetch_##op(i, v) c_op i;			\
> }
> 
> #define ATOMIC_OPS(op, asm_op, c_op)				\
> 	ATOMIC_OP(op, asm_op, c_op)				\
> 	ATOMIC_OP_RETURN(op, asm_op, c_op)			\
> 	ATOMIC_FETCH_OP(op, asm_op, c_op)
> 
> ATOMIC_OPS(add, add, +)
> ATOMIC_OPS(sub, sub, -)
> 
> #undef ATOMIC_OPS
> 
> #define ATOMIC_OPS(op, asm_op, c_op)				\
> 	ATOMIC_OP(op, asm_op, c_op)				\
> 	ATOMIC_FETCH_OP(op, asm_op, c_op)
> 
> ATOMIC_OPS(and, and, &)
> ATOMIC_OPS(or, or, |)
> ATOMIC_OPS(xor, xor, ^)
> 
> #undef ATOMIC_OPS
> 
> Which is much simpler no?

In fact, after having read your manual you'd want something like:


#define ATOMIC_OP(op, asm_op, c_op)				\
static __always_inline void atomic_##op(int i, atomic_t *v)	\
{								\
	__asm__ __volatile__ (					\
		"amo" #asm_op ".w zero, %1, %0"			\
		: "+A" (v->counter)				\
		: "r" (i));					\
}

#define ATOMIC_FETCH_OP(op, asm_op, c_op, asm_or, order)	\
static __always_inline int atomic_fetch_##op##order(int i, atomic_t *v)\
{								\
	register int ret;					\
	__asm__ __volatile__ (					\
		"amo" #asm_op ".w" #asm_or " %2, %1, %0"	\
		: "+A" (v->counter), "=r" (ret)			\
		: "r" (mask));					\
	return ret;						\
}

#define ATOMIC_OP_RETURN(op, asm_op, c_op, asm_or, order)	\
static __always_inline int atomic_##op##_return##order(int i, atomic_t *v) \
{								\
	return atomic_fetch_##op##order(i, v) c_op i;		\
}

#define ATOMIC_OPS(op, asm_op, c_op)				\
	ATOMIC_OP(op, asm_op, c_op, , _relaxed)			\
	ATOMIC_OP_RETURN(op, asm_op, c_op, , _relaxed)		\
	ATOMIC_FETCH_OP(op, asm_op, c_op, , _relaxed)

ATOMIC_OPS(add, add, +)
ATOMIC_OPS(sub, sub, -)

#undef ATOMIC_OPS

#define ATOMIC_OPS(op, asm_op, c_op, asm_or, order)		\
	ATOMIC_OP_RETURN(op, asm_op, c_op, , _relaxed)		\
	ATOMIC_FETCH_OP(op, asm_op, c_op, , _relaxed)

ATOMIC_OPS(add, add, +, ".aq", _acquire)
ATOMIC_OPS(add, add, +, ".rl", _release)
ATOMIC_OPS(add, add, +, ".aq.rl", )

ATOMIC_OPS(sub, sub, -, ".aq", _acquire)
ATOMIC_OPS(sub, sub, -, ".rl", _release)
ATOMIC_OPS(sub, sub, -, ".aq.rl", )

#undef ATOMIC_OPS

#define ATOMIC_OPS(op, asm_op, c_op)				\
	ATOMIC_OP(op, asm_op, c_op)				\
	ATOMIC_FETCH_OP(op, asm_op, c_op, , _relaxed)

ATOMIC_OPS(and, and, &)
ATOMIC_OPS(or, or, |)
ATOMIC_OPS(xor, xor, ^)

#undef ATOMIC_OPS

ATOMIC_FETCH_OP(and, and, &, ".aq", _acquire)
ATOMIC_FETCH_OP(and, and, &, ".rl", _release)
ATOMIC_FETCH_OP(and, and, &, ".aq.rl", )

ATOMIC_FETCH_OP(or, or, |, ".aq", _acquire)
ATOMIC_FETCH_OP(or, or, |, ".rl", _release)
ATOMIC_FETCH_OP(or, or, |, ".aq.rl", )

ATOMIC_FETCH_OP(xor, xor, ^, ".aq", _acquire)
ATOMIC_FETCH_OP(xor, xor, ^, ".rl", _release)
ATOMIC_FETCH_OP(xor, xor, ^, ".aq.rl", )


#define smp_mb__before_atomic()	smp_mb()
#define smp_mb__after_atomic()	smp_mb()


Which (pending the sub confusion) will generate the entire set of:

 atomic_add, atomic_add_return{_relaxed,_acquire,_release,} atomic_fetch_add{_relaxed,_acquire,_release,}
 atomic_sub, atomic_sub_return{_relaxed,_acquire,_release,} atomic_fetch_sub{_relaxed,_acquire,_release,}

 atomic_and, atomic_fetch_and{_relaxed,_acquire,_release,}
 atomic_or,  atomic_fetch_or{_relaxed,_acquire,_release,}
 atomic_xor, atomic_fetch_xor{_relaxed,_acquire,_release,}

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-06 23:00     ` Palmer Dabbelt
                       ` (3 preceding siblings ...)
  (?)
@ 2017-06-07 12:42     ` Peter Zijlstra
  -1 siblings, 0 replies; 283+ messages in thread
From: Peter Zijlstra @ 2017-06-07 12:42 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: linux-arch, linux-kernel, Arnd Bergmann, olof, albert, patches

On Tue, Jun 06, 2017 at 04:00:03PM -0700, Palmer Dabbelt wrote:
> diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
> new file mode 100644
> index 000000000000..c7ee1321ac18
> --- /dev/null
> +++ b/arch/riscv/include/asm/cmpxchg.h
> @@ -0,0 +1,124 @@
> +/*
> + * Copyright (C) 2014 Regents of the University of California
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful,
> + *   but WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + *   GNU General Public License for more details.
> + */
> +
> +#ifndef _ASM_RISCV_CMPXCHG_H
> +#define _ASM_RISCV_CMPXCHG_H
> +
> +#include <linux/bug.h>
> +
> +#ifdef CONFIG_ISA_A
> +
> +#include <asm/barrier.h>
> +
> +#define __xchg(new, ptr, size)					\
> +({								\
> +	__typeof__(ptr) __ptr = (ptr);				\
> +	__typeof__(new) __new = (new);				\
> +	__typeof__(*(ptr)) __ret;				\
> +	switch (size) {						\
> +	case 4:							\
> +		__asm__ __volatile__ (				\
> +			"amoswap.w %0, %2, %1"			\
> +			: "=r" (__ret), "+A" (*__ptr)		\
> +			: "r" (__new));				\
> +		break;						\
> +	case 8:							\
> +		__asm__ __volatile__ (				\
> +			"amoswap.d %0, %2, %1"			\
> +			: "=r" (__ret), "+A" (*__ptr)		\
> +			: "r" (__new));				\
> +		break;						\
> +	default:						\
> +		BUILD_BUG();					\
> +	}							\
> +	__ret;							\
> +})
> +
> +#define xchg(ptr, x)    (__xchg((x), (ptr), sizeof(*(ptr))))

Our xchg() is fully ordered, and thus you need to use "amoswap.aq.rl" here.
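
i.e., keeping your operand layout, something like this (suffix spelling
assumed, not checked against an assembler), and likewise for the .d
case:

	case 4:
		__asm__ __volatile__ (
			"amoswap.w.aq.rl %0, %2, %1"
			: "=r" (__ret), "+A" (*__ptr)
			: "r" (__new));
		break;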

> +
> +
> +/*
> + * Atomic compare and exchange.  Compare OLD with MEM, if identical,
> + * store NEW in MEM.  Return the initial value in MEM.  Success is
> + * indicated by comparing RETURN with OLD.
> + */
> +#define __cmpxchg(ptr, old, new, size)					\
> +({									\
> +	__typeof__(ptr) __ptr = (ptr);					\
> +	__typeof__(old) __old = (old);					\
> +	__typeof__(new) __new = (new);					\
> +	__typeof__(*(ptr)) __ret;					\
> +	register unsigned int __rc;					\
> +	switch (size) {							\
> +	case 4:								\
> +		__asm__ __volatile__ (					\
> +		"0:"							\
> +			"lr.w %0, %2\n"					\
> +			"bne  %0, %z3, 1f\n"				\
> +			"sc.w %1, %z4, %2\n"				\
> +			"bnez %1, 0b\n"					\
> +		"1:"							\
> +			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
> +			: "rJ" (__old), "rJ" (__new));			\
> +		break;							\
> +	case 8:								\
> +		__asm__ __volatile__ (					\
> +		"0:"							\
> +			"lr.d %0, %2\n"					\
> +			"bne  %0, %z3, 1f\n"				\
> +			"sc.d %1, %z4, %2\n"				\
> +			"bnez %1, 0b\n"					\
> +		"1:"							\
> +			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
> +			: "rJ" (__old), "rJ" (__new));			\
> +		break;							\
> +	default:							\
> +		BUILD_BUG();						\
> +	}								\
> +	__ret;								\
> +})
> +
> +#define __cmpxchg_mb(ptr, old, new, size)			\
> +({								\
> +	__typeof__(*(ptr)) __ret;				\
> +	smp_mb();						\
> +	__ret = __cmpxchg((ptr), (old), (new), (size));		\
> +	smp_mb();						\
> +	__ret;							\
> +})

It's your ISA of course, but wouldn't setting the AQ and RL bits on the
LR/SC be cheaper than doing two full barriers around the thing?

Note that cmpxchg() doesn't need to provide ordering on failure.
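
For the 4-byte case the body would look roughly like the below; whether
these particular aq/rl placements actually give cmpxchg() the ordering
it requires is exactly the question for your memory model, so this is
only a sketch:

		__asm__ __volatile__ (
		"0:"
			"lr.w.aq %0, %2\n"
			"bne  %0, %z3, 1f\n"
			"sc.w.rl %1, %z4, %2\n"
			"bnez %1, 0b\n"
		"1:"
			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)
			: "rJ" (__old), "rJ" (__new));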

Further note that we have:

	{atomic_,}cmpxchg{_relaxed,_acquire,_release,}()

and recently:

	{atomic_,}try_cmpxchg{_relaxed,_acquire,_release,}()

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-07 12:36       ` Peter Zijlstra
@ 2017-06-07 12:58         ` Peter Zijlstra
  2017-06-07 13:16           ` Will Deacon
                             ` (2 more replies)
  0 siblings, 3 replies; 283+ messages in thread
From: Peter Zijlstra @ 2017-06-07 12:58 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: linux-arch, linux-kernel, Arnd Bergmann, olof, albert, patches,
	Will Deacon

On Wed, Jun 07, 2017 at 02:36:27PM +0200, Peter Zijlstra wrote:
> Which (pending the sub confusion) will generate the entire set of:
> 
>  atomic_add, atomic_add_return{_relaxed,_acquire,_release,} atomic_fetch_add{_relaxed,_acquire,_release,}
>  atomic_sub, atomic_sub_return{_relaxed,_acquire,_release,} atomic_fetch_sub{_relaxed,_acquire,_release,}
> 
>  atomic_and, atomic_fetch_and{_relaxed,_acquire,_release,}
>  atomic_or,  atomic_fetch_or{_relaxed,_acquire,_release,}
>  atomic_xor, atomic_fetch_xor{_relaxed,_acquire,_release,}
> 

Another approach would be to override __atomic_op_{acquire,release} and
use things like:

	"FENCE r,rw" -- (load) ACQUIRE
	"FENCE rw,w" -- (store) RELEASE

And then you only need to provide _relaxed atomics.
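
A sketch of such an override, mirroring the generic wrappers in
include/linux/atomic.h (the fence arguments are just the ones suggested
above; untested):

#define __atomic_op_acquire(op, args...)				\
({									\
	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
	__asm__ __volatile__ ("fence r, rw" ::: "memory");		\
	__ret;								\
})

#define __atomic_op_release(op, args...)				\
({									\
	__asm__ __volatile__ ("fence rw, w" ::: "memory");		\
	op##_relaxed(args);						\
})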

Also, and I didn't check for that, you need to provide:

smp_load_acquire(), smp_store_release(), atomic_read_acquire(),
atomic_store_release().

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-07 12:58         ` Peter Zijlstra
@ 2017-06-07 13:16           ` Will Deacon
  2017-06-26 20:07               ` Palmer Dabbelt
  2017-06-07 16:35           ` Peter Zijlstra
  2017-06-26 20:07             ` Palmer Dabbelt
  2 siblings, 1 reply; 283+ messages in thread
From: Will Deacon @ 2017-06-07 13:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Palmer Dabbelt, linux-arch, linux-kernel, Arnd Bergmann, olof,
	albert, patches

[sorry, jumping in here because it's the only mail I have relating to
 patch 13]

On Wed, Jun 07, 2017 at 02:58:50PM +0200, Peter Zijlstra wrote:
> On Wed, Jun 07, 2017 at 02:36:27PM +0200, Peter Zijlstra wrote:
> > Which (pending the sub confusion) will generate the entire set of:
> > 
> >  atomic_add, atomic_add_return{_relaxed,_acquire,_release,} atomic_fetch_add{_relaxed,_acquire,_release,}
> >  atomic_sub, atomic_sub_return{_relaxed,_acquire,_release,} atomic_fetch_sub{_relaxed,_acquire,_release,}
> > 
> >  atomic_and, atomic_fetch_and{_relaxed,_acquire,_release,}
> >  atomic_or,  atomic_fetch_or{_relaxed,_acquire,_release,}
> >  atomic_xor, atomic_fetch_xor{_relaxed,_acquire,_release,}
> > 
> 
> Another approach would be to override __atomic_op_{acquire,release} and
> use things like:
> 
> 	"FENCE r,rw" -- (load) ACQUIRE
> 	"FENCE rw,w" -- (store) RELEASE
> 
> And then you only need to provide _relaxed atomics.
> 
> Also, and I didn't check for that, you need to provide:
> 
> smp_load_acquire(), smp_store_release(), atomic_read_acquire(),
> atomic_store_release().

Is there an up-to-date specification for the RISC-V memory model? I looked
at:

https://github.com/riscv/riscv-isa-manual/releases/download/riscv-user-2.2/riscv-spec-v2.2.pdf

but it says:

| 2.7 Memory Model
| This section is out of date as the RISC-V memory model is
| currently under revision to ensure it can efficiently support current
| programming language memory models. The revised base memory model will
| contain further ordering constraints, including at least that loads to the
| same address from the same hart cannot be reordered, and that syntactic data
| dependencies between instructions are respected

which, on the one hand, is reassuring (because ignoring dependency ordering is
plain broken), but on the other it doesn't go quite far enough in defining
exactly what constitutes a "syntactic data dependency". The cumulativity of
your fences also needs defining, because I think this was up in the air at some
point and the document above doesn't seem to tackle it (it doesn't seem to
describe what constitutes being a member of the predecessor or successor sets).

Could you shed some light on this please? We've started relying on RW control
dependencies in semi-recent history, so it's important to get this nailed down.

Thanks,

Will

P.S. You should also totally get your architects to write a formal model ;)

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-06 23:00     ` Palmer Dabbelt
                       ` (4 preceding siblings ...)
  (?)
@ 2017-06-07 13:17     ` Peter Zijlstra
  2017-06-09  8:16       ` Peter Zijlstra
  2017-06-26 20:07         ` Palmer Dabbelt
  -1 siblings, 2 replies; 283+ messages in thread
From: Peter Zijlstra @ 2017-06-07 13:17 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: linux-arch, linux-kernel, Arnd Bergmann, olof, albert, patches,
	Will Deacon

On Tue, Jun 06, 2017 at 04:00:03PM -0700, Palmer Dabbelt wrote:
> diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
> new file mode 100644
> index 000000000000..9736f5714e54
> --- /dev/null
> +++ b/arch/riscv/include/asm/spinlock.h
> @@ -0,0 +1,155 @@
> +/*
> + * Copyright (C) 2015 Regents of the University of California
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful,
> + *   but WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + *   GNU General Public License for more details.
> + */
> +
> +#ifndef _ASM_RISCV_SPINLOCK_H
> +#define _ASM_RISCV_SPINLOCK_H
> +
> +#include <linux/kernel.h>
> +#include <asm/current.h>
> +
> +/*
> + * Simple spin lock operations.  These provide no fairness guarantees.
> + */

Any reason to use a test-and-set spinlock at all?

> +
> +#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
> +#define arch_spin_is_locked(x)	((x)->lock != 0)
> +#define arch_spin_unlock_wait(x) \
> +		do { cpu_relax(); } while ((x)->lock)

Hehe, yeah, no ;-) There are ordering constraints on that.
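
Concretely, something along the lines of what other architectures do
with smp_cond_load_acquire() would be needed; a sketch only, with the
field name taken from the quoted arch_spinlock_t:

	static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
	{
		smp_cond_load_acquire(&lock->lock, !VAL);
	}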

> +
> +static inline void arch_spin_unlock(arch_spinlock_t *lock)
> +{
> +	__asm__ __volatile__ (
> +		"amoswap.w.rl x0, x0, %0"
> +		: "=A" (lock->lock)
> +		:: "memory");
> +}
> +
> +static inline int arch_spin_trylock(arch_spinlock_t *lock)
> +{
> +	int tmp = 1, busy;
> +
> +	__asm__ __volatile__ (
> +		"amoswap.w.aq %0, %2, %1"
> +		: "=r" (busy), "+A" (lock->lock)
> +		: "r" (tmp)
> +		: "memory");
> +
> +	return !busy;
> +}
> +
> +static inline void arch_spin_lock(arch_spinlock_t *lock)
> +{
> +	while (1) {
> +		if (arch_spin_is_locked(lock))
> +			continue;
> +
> +		if (arch_spin_trylock(lock))
> +			break;
> +	}
> +}

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 01/17] drivers: support PCIe in RISCV
  2017-06-06 22:59     ` Palmer Dabbelt
  (?)
  (?)
@ 2017-06-07 14:25     ` Christoph Hellwig
       [not found]       ` <CAMgXwTjXZ5dsxmJ2FyWhCRWo-3nyvKUDfhfV0nNC+oakF=AEsA@mail.gmail.com>
  -1 siblings, 1 reply; 283+ messages in thread
From: Christoph Hellwig @ 2017-06-07 14:25 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: linux-arch, linux-kernel, Arnd Bergmann, olof, albert, patches,
	Wesley W. Terpstra

On Tue, Jun 06, 2017 at 03:59:51PM -0700, Palmer Dabbelt wrote:
> From: "Wesley W. Terpstra" <wesley@sifive.com>
> 
> There are RISC-V systems that have been mapped to Xilinx FPGAs that have
> their PCIe controllers on chip.  These build system changes allow RISC-V
> systems to enable the Xilinx PCIe controller, and to setup PCIe IRQs.
> 
> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> ---
>  drivers/pci/Makefile     | 1 +
>  drivers/pci/host/Kconfig | 2 +-
>  2 files changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
> index 462c1f5f5546..a29d9ec05d13 100644
> --- a/drivers/pci/Makefile
> +++ b/drivers/pci/Makefile
> @@ -41,6 +41,7 @@ obj-$(CONFIG_MIPS) += setup-irq.o
>  obj-$(CONFIG_TILE) += setup-irq.o
>  obj-$(CONFIG_SPARC_LEON) += setup-irq.o
>  obj-$(CONFIG_M68K) += setup-irq.o
> +obj-$(CONFIG_RISCV) += setup-irq.o

Can we do a cleanup here and add an ARCH_USE_GENERIC_PCI_SETUP Kconfig
symbol that all these architectures can select?

>  
>  #
>  # ACPI Related PCI FW Functions
> diff --git a/drivers/pci/host/Kconfig b/drivers/pci/host/Kconfig
> index 7f47cd5e10a5..5148f3d3cab7 100644
> --- a/drivers/pci/host/Kconfig
> +++ b/drivers/pci/host/Kconfig
> @@ -71,7 +71,7 @@ config PCI_HOST_GENERIC
>  
>  config PCIE_XILINX
>  	bool "Xilinx AXI PCIe host bridge support"
> -	depends on ARCH_ZYNQ || MICROBLAZE
> +	depends on ARCH_ZYNQ || MICROBLAZE || RISCV

What sort of arch support does this driver need?  It seems to compile
just fine on x86 for me, so maybe we should just drop the arch
dependency entirely.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-07 12:58         ` Peter Zijlstra
  2017-06-07 13:16           ` Will Deacon
@ 2017-06-07 16:35           ` Peter Zijlstra
  2017-06-26 20:07             ` Palmer Dabbelt
  2 siblings, 0 replies; 283+ messages in thread
From: Peter Zijlstra @ 2017-06-07 16:35 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: linux-arch, linux-kernel, Arnd Bergmann, olof, albert, patches,
	Will Deacon

On Wed, Jun 07, 2017 at 02:58:50PM +0200, Peter Zijlstra wrote:
> On Wed, Jun 07, 2017 at 02:36:27PM +0200, Peter Zijlstra wrote:
> > Which (pending the sub confusion) will generate the entire set of:
> > 
> >  atomic_add, atomic_add_return{_relaxed,_acquire,_release,} atomic_fetch_add{_relaxed,_acquire,_release,}
> >  atomic_sub, atomic_sub_return{_relaxed,_acquire,_release,} atomic_fetch_sub{_relaxed,_acquire,_release,}
> > 
> >  atomic_and, atomic_fetch_and{_relaxed,_acquire,_release,}
> >  atomic_or,  atomic_fetch_or{_relaxed,_acquire,_release,}
> >  atomic_xor, atomic_fetch_xor{_relaxed,_acquire,_release,}
> > 
> 
> Another approach would be to override __atomic_op_{acquire,release} and
> use things like:
> 
> 	"FENCE r,rw" -- (load) ACQUIRE
> 	"FENCE rw,w" -- (store) RELEASE
> 
> And then you only need to provide _relaxed atomics.
> 
> Also, and I didn't check for that, you need to provide:
> 
> smp_load_acquire(), smp_store_release(), atomic_read_acquire(),
> atomic_store_release().

Also, you probably need to provide smp_mb__before_spinlock(), but also
see:

  https://lkml.kernel.org/r/20170607161501.819948352@infradead.org

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 01/17] drivers: support PCIe in RISCV
       [not found]       ` <CAMgXwTjXZ5dsxmJ2FyWhCRWo-3nyvKUDfhfV0nNC+oakF=AEsA@mail.gmail.com>
@ 2017-06-07 17:40         ` Olof Johansson
  2017-06-23 21:47             ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Olof Johansson @ 2017-06-07 17:40 UTC (permalink / raw)
  To: Wesley Terpstra
  Cc: Christoph Hellwig, Palmer Dabbelt, patches, linux-arch, albert,
	Arnd Bergmann, linux-kernel

On Wed, Jun 7, 2017 at 10:09 AM, Wesley Terpstra <wesley@sifive.com> wrote:
>
>
> On Jun 7, 2017 7:26 AM, "Christoph Hellwig" <hch@infradead.org> wrote:
>
> On Tue, Jun 06, 2017 at 03:59:51PM -0700, Palmer Dabbelt wrote:
>> From: "Wesley W. Terpstra" <wesley@sifive.com>
>>
>> There are RISC-V systems that have been mapped to Xilinx FPGAs that have
>> their PCIe controllers on chip.  These build system changes allow RISC-V
>> systems to enable the Xilinx PCIe controller, and to setup PCIe IRQs.
>>
>> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
>> ---
>>  drivers/pci/Makefile     | 1 +
>>  drivers/pci/host/Kconfig | 2 +-
>>  2 files changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
>> index 462c1f5f5546..a29d9ec05d13 100644
>> --- a/drivers/pci/Makefile
>> +++ b/drivers/pci/Makefile
>> @@ -41,6 +41,7 @@ obj-$(CONFIG_MIPS) += setup-irq.o
>>  obj-$(CONFIG_TILE) += setup-irq.o
>>  obj-$(CONFIG_SPARC_LEON) += setup-irq.o
>>  obj-$(CONFIG_M68K) += setup-irq.o
>> +obj-$(CONFIG_RISCV) += setup-irq.o
>
> Can we do a cleanup here and add a ARCH_USE_GENERIC_PCI_SETUP Kconfig
> symbol that all these architectures can select?
>
>
> That would probably be better. I did not want to touch other arch/ folders
> in our changes.

I understand that approach when you're doing things in your tree
early, but in this case (and at this phase in submission/merging),
don't be afraid to touch other architectures and refactor/clean up.

Please do the refactoring in a separate preceding patch and submit it
separately (and soon) instead of keeping it bundled with your addition
in this series. That way it can go in when ready even if the rest of
the series is still spinning.


-Olof

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 03/17] base: fix order of OF initialization
  2017-06-07  9:35         ` Mark Rutland
  (?)
@ 2017-06-07 18:39         ` Wesley Terpstra
  2017-06-07 21:10           ` Benjamin Herrenschmidt
  2017-06-08  3:49           ` Frank Rowand
  -1 siblings, 2 replies; 283+ messages in thread
From: Wesley Terpstra @ 2017-06-07 18:39 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Geert Uytterhoeven, Palmer Dabbelt, Linux-Arch, linux-kernel,
	Arnd Bergmann, Olof Johansson, Albert Ou, patches, Rob Herring,
	Frank Rowand, devicetree, Benjamin Herrenschmidt

It was a while ago that I debugged this. I already reported this bug
to Benjamin Herrenschmidt (now in CC), and I believe he has a patch of
his own to fix the same issue.

As I understand it, of_core_init sets up the OF entries in
/sys/firmware/devicetree. During platform bringup, when the system
describes the cpu + cache hierarchy, it also makes an of_node symlink
into that directory. However, if that target doesn't exist yet, you get
the warning.

# ls -l /sys/devices/system/cpu/cpu3/of_node
lrwxrwxrwx    1 root     root             0 Jan  1 00:00
/sys/devices/system/cpu/cpu3/of_node ->
../../../../firmware/devicetree/base/cpus/cpu@3

On Wed, Jun 7, 2017 at 2:35 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Wed, Jun 07, 2017 at 09:07:20AM +0200, Geert Uytterhoeven wrote:
>> CC devicetree folks
>>
>> On Wed, Jun 7, 2017 at 12:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> > From: "Wesley W. Terpstra" <wesley@sifive.com>
>> >
>> > This fixes: [    0.010000] cpu cpu0: Error -2 creating of_node link
>> > ... which you get for every CPU on all architectures with a OF cpu/ node.
>
> I take it this means a /cpus node? Or the /cpus/cpu@* nodes?
>
> I'm not seeing this on arm64 when booting v4.12-rc4 with DT, so clearly
> this doesn't affect all such architectures.
>
> What path are these errors happening in?
>
> Thanks,
> Mark.
>
>> >
>> > This affects riscv, nios, etc.
>> >
>> > Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
>> > ---
>> >  drivers/base/init.c | 2 +-
>> >  1 file changed, 1 insertion(+), 1 deletion(-)
>> >
>> > diff --git a/drivers/base/init.c b/drivers/base/init.c
>> > index 48c0e220acc0..0dcd17e561d0 100644
>> > --- a/drivers/base/init.c
>> > +++ b/drivers/base/init.c
>> > @@ -31,9 +31,9 @@ void __init driver_init(void)
>> >         /* These are also core pieces, but must come after the
>> >          * core core pieces.
>> >          */
>> > +       of_core_init();
>> >         platform_bus_init();
>> >         cpu_dev_init();
>> >         memory_dev_init();
>> >         container_dev_init();
>> > -       of_core_init();
>> >  }
>> > --
>> > 2.13.0
>> --
>> To unsubscribe from this list: send the line "unsubscribe devicetree" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 08/17] dts: include documentation for the RISC-V interrupt controllers
  2017-06-07 10:13       ` Mark Rutland
@ 2017-06-07 18:57         ` Wesley Terpstra
  2017-06-07 19:57           ` Rob Herring
  2017-06-08 10:52           ` Mark Rutland
  0 siblings, 2 replies; 283+ messages in thread
From: Wesley Terpstra @ 2017-06-07 18:57 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Geert Uytterhoeven, Palmer Dabbelt, Linux-Arch, linux-kernel,
	Arnd Bergmann, Olof Johansson, Albert Ou, patches,
	Thomas Gleixner, Jason Cooper, Marc Zyngier, Rob Herring,
	devicetree

On Wed, Jun 7, 2017 at 3:13 AM, Mark Rutland <mark.rutland@arm.com> wrote:
>> > +RISC-V Hart-Level Interrupt Controller (HLIC)
>> > +---------------------------------------------
>> > +
>> > +RISC-V cores include Control Status Registers (CSRs) which are local to each
>> > +hart and can be read or written by software. Some of these CSRs are used to
>> > +control local interrupts connected to the core.
>> > +
>> > +Typical examples of local interrupts on a RISC-V core include: software IPI
>> > +interrupts, timer interrupts, and a link to the PLIC interrupt controller.
>
> So IIUC those interrupts are routed directly to the HLIC, and are (only)
> controlled thought the HLIC?

Yes. You can have a local interrupt that goes directly to a specific
core, not via the PLIC.

> Is the HLIC architecturally mandated? i.e. is this guaranteed to be
> present on any RISC-V implementation?

It's in the RISC-V privileged specification. Therefore, if a RISC-V
core can run Linux, it will have these CSRs.

> Does the presence of the HLIC imply the presence of a PLIC (or
> vice/versa)?

No. SiFive implementations always have a PLIC, though.

> Typically, the per-cpu and platform-wide parts of the
> top-level interrupt controller are fairly intimately coupled.

They are coupled if they both exist. The privileged specification does
explicitly call out interrupts 9 and 11 in the HLIC for attaching the
PLIC.

> You'll need to allocate the "riscv" vendor prefix in
> Documentation/devicetree/bindings/vendor-prefixes.txt

@palmer: Can you add this?

> What about the flags?

What flags?

> Are all HLIC interrupts edge triggered (or level triggered)?

HLIC = level. PLIC = both.

> We can probably replace most of these with a "...", as they're largely
> irrelevany to this binding.

Sure. I thought it would be nice to include a complete cpu example
somewhere, though.

>> > +RISC-V cores typically include a PLIC, which route interrupts from multiple
>> > +devices to multiple hart contexts.  The PLIC is connected to the interrupt
>> > +controller embedded in a RISC-V core via the interrupt-related CSRs.
>
> Do you mean that the PLIC is connected to the HLIC, or that the HLIC is
> also managed in part through CSRs?

Both. The HLIC is entirely managed through CSRs. The PLIC is managed
through a memory-mapped interface. The PLIC is attached to the HLIC.

>> > +Required properties:
>> > +- compatible : "riscv,plic0"
>> > +- #address-cells : should be <0>
>> > +- #interrupt-cells : should be <1>
>
> As with the HLIC, what about the flags?

Still unsure what flags we're talking about.

>> > +- riscv,ndev : Specifies the number of interrupts attached to the PLIC
>
> Why do we need to know this?
>
> I suspect this ia actually the number of interrupts implemented in the
> PLIC, rather than the number of interrupts attached. i.e. the PLIC can
> be implemented with a subset of the potential registers/bits. Is that
> correct?

You're in principle correct, although these are probably always the same.

> If so, something like "riscv,num-interrupts" would be better, along with
> a clearer description.

Uhm. I suppose we can change this. However, it would require changes
to quite a number of riscv repositories. I believe this is also
included in the riscv platform specification. A better description is
easy; do we really need to change the key?


>> > +- interrupts-extended : Specifies which contexts are connected to the PLIC
>
> That description doesn't sound right.
>
> I take it that these are the HLIC interrupts that the PLIC can raise?

Yes.

> You will need to be explicit about the order of interrupts in this
> property. i.e. which interrupt is routed to which context?

Yes. Order and position matter.

> Is the interrupt at the HLIC well known? From the example I see that
> here local interrupts 11 adn 9 are used. Is that mandated, or just the
> case for this particular implementation?

9 and 11 are in the privileged specification.

> Also, please consider how you will handle the case when the Linux
> logical CPU ID is not the same as the physical ID, and how you will
> handle physical IDs being sparse.

We already deal with this. If the interrupt is '-1', we skip it.
That's done in plic.c:
                if (parent.args[0] == -1) continue; // skip context holes

>> > +       plic: interrupt-controller@c000000 {
>> > +               #address-cells = <0>;
>
> This can go, given you don't have sub-nodes, nor a #size-cells property.

The devicetree specification seems to indicate that this is mandatory
for an interrupt-controller. Or have I understood this wrongly? When
you use interrupts-extended, doesn't it use the #address-cells of the
interrupt controller? We should add #size-cells = <0>, though.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 02/17] pcie-xilinx: add missing 5th legacy interrupt
  2017-06-07  9:24     ` Marc Zyngier
@ 2017-06-07 19:03       ` Wesley Terpstra
  0 siblings, 0 replies; 283+ messages in thread
From: Wesley Terpstra @ 2017-06-07 19:03 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Palmer Dabbelt, Linux-Arch, linux-kernel, Arnd Bergmann,
	Olof Johansson, Albert Ou, patches, Bjorn Helgaas

On Wed, Jun 7, 2017 at 2:24 AM, Marc Zyngier <marc.zyngier@arm.com> wrote:
> This is a common problem with the current OF code that numbers INTx from
> 1 instead of zero (there is no 5th legacy interrupts in the PCI spec,
> despite what $SUBJECT says). I'd be inclined to fix this at the core
> level rather than papering over it in the various drivers...

While I agree that it's a problem with OF, every other driver has
already been changed to paper over the issue. This patch just brings
this one remaining OF-PCIe driver to the same level as the others.
Without the patch, the driver doesn't work at all if there is a bridge
chip on the other end of the controller, so this is not just a
hypothetical concern for us.

Couldn't the eventual OF fix just refactor this driver along with all
of the others? Doing such a sweeping OF change is outside my current
comfort zone. I am not familiar enough with the code to understand all
the parts that would need to be touched.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 08/17] dts: include documentation for the RISC-V interrupt controllers
  2017-06-07 18:57         ` Wesley Terpstra
@ 2017-06-07 19:57           ` Rob Herring
  2017-06-07 20:31             ` Wesley Terpstra
  2017-06-08 10:52           ` Mark Rutland
  1 sibling, 1 reply; 283+ messages in thread
From: Rob Herring @ 2017-06-07 19:57 UTC (permalink / raw)
  To: Wesley Terpstra
  Cc: Mark Rutland, Geert Uytterhoeven, Palmer Dabbelt, Linux-Arch,
	linux-kernel, Arnd Bergmann, Olof Johansson, Albert Ou, patches,
	Thomas Gleixner, Jason Cooper, Marc Zyngier, devicetree

On Wed, Jun 7, 2017 at 1:57 PM, Wesley Terpstra <wesley@sifive.com> wrote:
> On Wed, Jun 7, 2017 at 3:13 AM, Mark Rutland <mark.rutland@arm.com> wrote:
>>> > +RISC-V Hart-Level Interrupt Controller (HLIC)
>>> > +---------------------------------------------

[...]

>>> > +       plic: interrupt-controller@c000000 {
>>> > +               #address-cells = <0>;
>>
>> This can go, given you don't have sub-nodes, nor a #size-cells property.
>
> The device-tree-specification seems to indicate that this is mandatory
> for an interrupt-controller. Or have I understood this wrongly? When
> you use interrupts-extended, doesn't it use the address-cells of the
> interrupt controller? We should add that size-cells = 0, though.

It's only needed if you have an interrupt-map property AIUI.
#size-cells should never be needed (unless you have child nodes of
this one).

Rob

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 08/17] dts: include documentation for the RISC-V interrupt controllers
  2017-06-07 19:57           ` Rob Herring
@ 2017-06-07 20:31             ` Wesley Terpstra
  0 siblings, 0 replies; 283+ messages in thread
From: Wesley Terpstra @ 2017-06-07 20:31 UTC (permalink / raw)
  To: Rob Herring
  Cc: Mark Rutland, Geert Uytterhoeven, Palmer Dabbelt, Linux-Arch,
	linux-kernel, Arnd Bergmann, Olof Johansson, Albert Ou, patches,
	Thomas Gleixner, Jason Cooper, Marc Zyngier, devicetree

I've reread the relevant sections now, and you are correct. We should
remove #address-cells from the PLIC's dts.

On Wed, Jun 7, 2017 at 12:57 PM, Rob Herring <robh+dt@kernel.org> wrote:
> On Wed, Jun 7, 2017 at 1:57 PM, Wesley Terpstra <wesley@sifive.com> wrote:
>> On Wed, Jun 7, 2017 at 3:13 AM, Mark Rutland <mark.rutland@arm.com> wrote:
>>>> > +RISC-V Hart-Level Interrupt Controller (HLIC)
>>>> > +---------------------------------------------
>
> [...]
>
>>>> > +       plic: interrupt-controller@c000000 {
>>>> > +               #address-cells = <0>;
>>>
>>> This can go, given you don't have sub-nodes, nor a #size-cells property.
>>
>> The device-tree-specification seems to indicate that this is mandatory
>> for an interrupt-controller. Or have I understood this wrongly? When
>> you use interrupts-extended, doesn't it use the address-cells of the
>> interrupt controller? We should add that size-cells = 0, though.
>
> It's only needed if you have an interrupt-map property AIUI.
> #size-cells should never be needed (unless you have child nodes of
> this one).
>
> Rob

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 03/17] base: fix order of OF initialization
  2017-06-07 18:39         ` Wesley Terpstra
@ 2017-06-07 21:10           ` Benjamin Herrenschmidt
  2017-06-08  3:49           ` Frank Rowand
  1 sibling, 0 replies; 283+ messages in thread
From: Benjamin Herrenschmidt @ 2017-06-07 21:10 UTC (permalink / raw)
  To: Wesley Terpstra, Mark Rutland
  Cc: Geert Uytterhoeven, Palmer Dabbelt, Linux-Arch, linux-kernel,
	Arnd Bergmann, Olof Johansson, Albert Ou, patches, Rob Herring,
	Frank Rowand, devicetree

On Wed, 2017-06-07 at 11:39 -0700, Wesley Terpstra wrote:
> It was a while ago that I debugged this. I already reported this bug
> to Benjamin Herrenschmidt (now in CC), and I believe he has a patch of
> his own to fix the same issue.
> 
> As I understand it, of_core_init sets up the OF entries in
> /sys/firmware/devicetree. During platform bringup, when the system
> describes the cpu + cache hierarchy, it also makes an of_node symlink
> into that directory. However, if it doesn't exist yet, you get the
> warning.

Ugh... yes, I did a patch for that and I think it fell through the
cracks; I can't even find it anymore...

The patch quoted here is fine I think. Everything in the device model
can potentially use OF bits these days, it makes sense to have them
initialized earlier.

Cheers,
Ben.

> # ls -l /sys/devices/system/cpu/cpu3/of_node
> lrwxrwxrwx    1 root     root             0 Jan  1 00:00
> /sys/devices/system/cpu/cpu3/of_node ->
> ../../../../firmware/devicetree/base/cpus/cpu@3
> 
> On Wed, Jun 7, 2017 at 2:35 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> > On Wed, Jun 07, 2017 at 09:07:20AM +0200, Geert Uytterhoeven wrote:
> > > CC devicetree folks
> > > 
> > > On Wed, Jun 7, 2017 at 12:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> > > > From: "Wesley W. Terpstra" <wesley@sifive.com>
> > > > 
> > > > This fixes: [    0.010000] cpu cpu0: Error -2 creating of_node link
> > > > ... which you get for every CPU on all architectures with a OF cpu/ node.
> > 
> > I take it this means a /cpus node? Or the /cpus/cpu@* nodes?
> > 
> > I'm not seeing this on arm64 when booting v4.12-rc4 with DT, so clearly
> > this doesn't affect all such architectures.
> > 
> > What path are these errors happening in?
> > 
> > Thanks,
> > Mark.
> > 
> > > > 
> > > > This affects riscv, nios, etc.
> > > > 
> > > > Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> > > > ---
> > > >  drivers/base/init.c | 2 +-
> > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > > 
> > > > diff --git a/drivers/base/init.c b/drivers/base/init.c
> > > > index 48c0e220acc0..0dcd17e561d0 100644
> > > > --- a/drivers/base/init.c
> > > > +++ b/drivers/base/init.c
> > > > @@ -31,9 +31,9 @@ void __init driver_init(void)
> > > >         /* These are also core pieces, but must come after the
> > > >          * core core pieces.
> > > >          */
> > > > +       of_core_init();
> > > >         platform_bus_init();
> > > >         cpu_dev_init();
> > > >         memory_dev_init();
> > > >         container_dev_init();
> > > > -       of_core_init();
> > > >  }
> > > > --
> > > > 2.13.0
> > > 
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe devicetree" in
> > > the body of a message to majordomo@vger.kernel.org
> > > More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: RISC-V Linux Port v2
  2017-06-07  7:29 ` David Howells
  2017-06-07 21:54     ` Palmer Dabbelt
@ 2017-06-07 21:54     ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-07 21:54 UTC (permalink / raw)
  To: dhowells
  Cc: dhowells, linux-arch, linux-kernel, Arnd Bergmann,
	Olof Johansson, albert, patches

On Wed, 07 Jun 2017 00:29:44 PDT (-0700), dhowells@redhat.com wrote:
> What's the target type for building cross-binutils and cross-gcc for it?

riscv64-unknown-linux-gnu

We have a super-repo that scripts the build, and includes a handful of patches
that fix GCC and binutils bugs that are in our first releases.

  https://github.com/riscv/riscv-gnu-toolchain

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: RISC-V Linux Port v2
@ 2017-06-07 21:54     ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-07 21:54 UTC (permalink / raw)
  Cc: dhowells, linux-arch, linux-kernel, Arnd Bergmann,
	Olof Johansson, albert, patches

On Wed, 07 Jun 2017 00:29:44 PDT (-0700), dhowells@redhat.com wrote:
> What's the target type for building cross-binutils and cross-gcc for it?

riscv64-unknown-linux-gnu

We have a super-repo that scripts the build, and includes a handful of patches
that fix GCC and binutils bugs that are in our first releases.

  https://github.com/riscv/riscv-gnu-toolchain

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: RISC-V Linux Port v2
@ 2017-06-07 21:54     ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-07 21:54 UTC (permalink / raw)
  To: dhowells
  Cc: linux-arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert, patches

On Wed, 07 Jun 2017 00:29:44 PDT (-0700), dhowells@redhat.com wrote:
> What's the target type for building cross-binutils and cross-gcc for it?

riscv64-unknown-linux-gnu

We have a super-repo that scripts the build, and includes a handful of patches
that fix GCC and binutils bugs that are in our first releases.

  https://github.com/riscv/riscv-gnu-toolchain

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: RISC-V Linux Port v2
  2017-06-07  9:23   ` RISC-V Linux Port v2 Will Deacon
@ 2017-06-07 21:54       ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-07 21:54 UTC (permalink / raw)
  To: will.deacon
  Cc: linux-arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert, patches

On Wed, 07 Jun 2017 02:23:42 PDT (-0700), will.deacon@arm.com wrote:
> Hi Palmer,
>
> On Tue, Jun 06, 2017 at 03:59:50PM -0700, Palmer Dabbelt wrote:
>> Thanks to everyone who has participated in the review process so far.  We've
>> made a lot of changes since the v1 and while this isn't ready to go yet, I
>> finally managed to get through everything in my inbox so I thought it would be
>> a good time to submit a v2 so everyone is on the same page.
>
> [...]
>
>> [PATCH 13/17] RISC-V: Add include subdirectory
>
> This guy is too big, and got silently dropped by the mailing list. Any
> chance you could split it up please, so that it can be reviewed?

Sorry about that.  It's online

  https://github.com/riscv/riscv-linux/commit/daaea1609cb5e8c93745c1ce112a06e90f5b663c

I'll be sure to split it up next time.  I can submit a v3 if you want?

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: RISC-V Linux Port v2
@ 2017-06-07 21:54       ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-07 21:54 UTC (permalink / raw)
  To: will.deacon
  Cc: linux-arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert, patches

On Wed, 07 Jun 2017 02:23:42 PDT (-0700), will.deacon@arm.com wrote:
> Hi Palmer,
>
> On Tue, Jun 06, 2017 at 03:59:50PM -0700, Palmer Dabbelt wrote:
>> Thanks to everyone who has participated in the review process so far.  We've
>> made a lot of changes since the v1 and while this isn't ready to go yet, I
>> finally managed to get through everything in my inbox so I thought it would be
>> a good time to submit a v2 so everyone is on the same page.
>
> [...]
>
>> [PATCH 13/17] RISC-V: Add include subdirectory
>
> This guy is too big, and got silently dropped by the mailing list. Any
> chance you could split it up please, so that it can be reviewed?

Sorry about that.  It's online

  https://github.com/riscv/riscv-linux/commit/daaea1609cb5e8c93745c1ce112a06e90f5b663c

I'll be sure to split it up next time.  I can submit a v3 if you want?

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 08/17] dts: include documentation for the RISC-V interrupt controllers
  2017-06-06 22:59     ` Palmer Dabbelt
  (?)
  (?)
@ 2017-06-07 22:27     ` Luis R. Rodriguez
  -1 siblings, 0 replies; 283+ messages in thread
From: Luis R. Rodriguez @ 2017-06-07 22:27 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: linux-arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert,
	patches, Wesley W. Terpstra

On Tue, Jun 6, 2017 at 3:59 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> From: "Wesley W. Terpstra" <wesley@sifive.com>
>
> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>

Small nitpick: it's rather odd that you send patches From one address
but don't provide the SOB tag for it as well; best if you can include
the SOB tags of both. This applies to other patches as well.

 Luis

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 03/17] base: fix order of OF initialization
  2017-06-07 18:39         ` Wesley Terpstra
  2017-06-07 21:10           ` Benjamin Herrenschmidt
@ 2017-06-08  3:49           ` Frank Rowand
  2017-06-08  9:05               ` Mark Rutland
  1 sibling, 1 reply; 283+ messages in thread
From: Frank Rowand @ 2017-06-08  3:49 UTC (permalink / raw)
  To: Wesley Terpstra, Mark Rutland
  Cc: Geert Uytterhoeven, Palmer Dabbelt, Linux-Arch, linux-kernel,
	Arnd Bergmann, Olof Johansson, Albert Ou, patches, Rob Herring,
	devicetree, Benjamin Herrenschmidt

On 06/07/17 11:39, Wesley Terpstra wrote:
> It was a while ago that I debugged this. I already reported this bug
> to Benjamin Herrenschmidt (now in CC), and I believe he has a patch of
> his own to fix the same issue.
> 
> As I understand it, of_core_init sets up the OF entries in
> /sys/firmware/devicetree. During platform bringup, when the system
> describes the cpu + cache hierarchy, it also makes an of_node symlink
> into that directory. However, if it doesn't exist yet, you get the
> warning.
> 
> # ls -l /sys/devices/system/cpu/cpu3/of_node
> lrwxrwxrwx    1 root     root             0 Jan  1 00:00
> /sys/devices/system/cpu/cpu3/of_node ->
> ../../../../firmware/devicetree/base/cpus/cpu@3
> 
> On Wed, Jun 7, 2017 at 2:35 AM, Mark Rutland <mark.rutland@arm.com> wrote:
>> On Wed, Jun 07, 2017 at 09:07:20AM +0200, Geert Uytterhoeven wrote:
>>> CC devicetree folks
>>>
>>> On Wed, Jun 7, 2017 at 12:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>>> From: "Wesley W. Terpstra" <wesley@sifive.com>
>>>>
>>>> This fixes: [    0.010000] cpu cpu0: Error -2 creating of_node link
>>>> ... which you get for every CPU on all architectures with a OF cpu/ node.
>>
>> I take it this means a /cpus node? Or the /cpus/cpu@* nodes?
>>
>> I'm not seeing this on arm64 when booting v4.12-rc4 with DT, so clearly
>> this doesn't affect all such architectures.
>>
>> What path are these errors happening in?

On the surface, the patch looks reasonable.  But it is not obvious to me why
the error message is occurring.  I would like to understand the cause before
saying the patch is good.

What kernel version is showing the error?  For a specific architecture
(the patch lists 'riscv, nios, etc'), which config and device tree source?

And again, what is the calling path?

- Frank

>>
>> Thanks,
>> Mark.
>>
>>>>
>>>> This affects riscv, nios, etc.
>>>>
>>>> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
>>>> ---
>>>>  drivers/base/init.c | 2 +-
>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/base/init.c b/drivers/base/init.c
>>>> index 48c0e220acc0..0dcd17e561d0 100644
>>>> --- a/drivers/base/init.c
>>>> +++ b/drivers/base/init.c
>>>> @@ -31,9 +31,9 @@ void __init driver_init(void)
>>>>         /* These are also core pieces, but must come after the
>>>>          * core core pieces.
>>>>          */
>>>> +       of_core_init();
>>>>         platform_bus_init();
>>>>         cpu_dev_init();
>>>>         memory_dev_init();
>>>>         container_dev_init();
>>>> -       of_core_init();
>>>>  }
>>>> --
>>>> 2.13.0
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe devicetree" in
>>> the body of a message to majordomo@vger.kernel.org
>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 06/17] pci: Add generic pcibios_{fixup_bus,align_resource}
  2017-06-07  7:19     ` Geert Uytterhoeven
  2017-06-07  8:01       ` Arnd Bergmann
@ 2017-06-08  8:12       ` Christoph Hellwig
  2017-06-08  8:35         ` Arnd Bergmann
  1 sibling, 1 reply; 283+ messages in thread
From: Christoph Hellwig @ 2017-06-08  8:12 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Palmer Dabbelt, Linux-Arch, linux-kernel, Arnd Bergmann,
	Olof Johansson, albert, patches, Bjorn Helgaas, linux-pci

On Wed, Jun 07, 2017 at 09:19:49AM +0200, Geert Uytterhoeven wrote:
> CC pci folks

Ok, replying with pci folks in Cc then :)

Weak symbols have (rightly) gotten a bad reputation, so maybe
we should approach this without them.

It seems we have a large number of empty pcibios_fixup_bus calls
already, so I think we should simply have the architectures that do
define it also define a Kconfig or header symbol, and not call it at
all otherwise.
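
e.g. a header stub along these lines, where the Kconfig symbol name is
made up purely for illustration:

#ifndef CONFIG_ARCH_HAS_PCIBIOS_FIXUP_BUS
static inline void pcibios_fixup_bus(struct pci_bus *bus) { }
#endif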

For the ones that do exist, a lot just seem to call pci_read_bridge_bases
and/or pcibios_fixup_device_resources in one form or another,
and I wonder why we even need the arch indirection for that.

Similarly for pcibios_align_resource: a lot of architectures seem to
have a noop, and the ones that don't mostly seem to be copy-and-paste
code, so we should again have a symbol for architectures to opt into
it, and we probably should have a generic helper for the VGA window
mirroring code instead of duplicating it multiple times.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 06/17] pci: Add generic pcibios_{fixup_bus,align_resource}
  2017-06-08  8:12       ` Christoph Hellwig
@ 2017-06-08  8:35         ` Arnd Bergmann
  2017-06-24  2:01             ` Palmer Dabbelt
  0 siblings, 1 reply; 283+ messages in thread
From: Arnd Bergmann @ 2017-06-08  8:35 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Geert Uytterhoeven, Palmer Dabbelt, Linux-Arch, linux-kernel,
	Olof Johansson, albert, patches, Bjorn Helgaas, linux-pci

On Thu, Jun 8, 2017 at 10:12 AM, Christoph Hellwig <hch@infradead.org> wrote:
> On Wed, Jun 07, 2017 at 09:19:49AM +0200, Geert Uytterhoeven wrote:
>> CC pci folks
>
> Ok, replying with pci folks in Cc then :)
>
> Weak symbols have (rightly) gotten a bad reputation, so maybe
> we should approach this without them.

Agreed, I would almost never recommend using a __weak symbol,
but they are already widely used in the PCI subsystem, so I suggested
using them here for consistency.

We have a struct pci_host_bridge now, and we should eventually
move most of the 38 PCI-specific weak per-architecture symbols
into per-host driver callbacks, but I think that would be too much
to ask for when adding an architecture port.

> It seems we have a large number of empty pcibios_fixup_bus calls
> already, so I think we should simply have the architectures
> that do define it define a Kconfig or header symbol and not call
> it at all otherwise.

I would argue that most of them should not be per-architecture
in the first place; the current state is mostly an artifact of the
times when each architecture had just one PCI implementation.

The ones that have multiple implementations (arm, powerpc, ...)
tend to actually override the weak functions with their own
per-host multiplexers again.

> For the ones that exist, a lot just seem to call pci_read_bridge_bases
> and/or pcibios_fixup_device_resources in one form or another,
> and I wonder why we even need the arch indirection for that.
>
> Similarly for pcibios_align_resource: a lot of architectures seem
> to have a no-op, and the ones that don't mostly seem to be copy-and-
> paste code, so we should again have a symbol for architectures
> to opt into it, and we probably should have a generic helper
> for the VGA window mirroring code instead of duplicating it multiple
> times.

I now remember that we already have a host_bridge->align_resource
callback pointer, so the generic function should definitely try
to use that.

We could just use the version from arch/mips/pci/pci-generic.c
and remove that in the process. I'm not sure why mips calls
pci_read_bridge_bases() in pcibios_fixup_bus() though, or whether
this makes sense to put in the generic version.

       Arnd
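
Something along these lines, loosely modelled on the mips version; sketch
only, and whether pci_find_host_bridge() is usable from the place this would
live, plus the exact ISA/VGA alignment handling, are assumptions to check:

#include <linux/pci.h>

resource_size_t pcibios_align_resource(void *data, const struct resource *res,
				       resource_size_t size, resource_size_t align)
{
	struct pci_dev *dev = data;
	struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
	resource_size_t start = res->start;

	/* Keep I/O allocations away from the legacy ISA/VGA alias range. */
	if (res->flags & IORESOURCE_IO && start & 0x300)
		start = (start + 0x3ff) & ~0x3ff;

	start = (start + align - 1) & ~(align - 1);

	/* Let the host driver's callback refine the placement, if it has one. */
	if (bridge->align_resource)
		return bridge->align_resource(dev, res, start, size, align);

	return start;
}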

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 03/17] base: fix order of OF initialization
  2017-06-08  3:49           ` Frank Rowand
  2017-06-08  9:05               ` Mark Rutland
@ 2017-06-08  9:05               ` Mark Rutland
  0 siblings, 0 replies; 283+ messages in thread
From: Mark Rutland @ 2017-06-08  9:05 UTC (permalink / raw)
  To: Frank Rowand
  Cc: Wesley Terpstra, Geert Uytterhoeven, Palmer Dabbelt, Linux-Arch,
	linux-kernel, Arnd Bergmann, Olof Johansson, Albert Ou, patches,
	Rob Herring, devicetree, Benjamin Herrenschmidt

On Wed, Jun 07, 2017 at 08:49:43PM -0700, Frank Rowand wrote:
> On 06/07/17 11:39, Wesley Terpstra wrote:
> > It was a while ago that I debugged this. I already reported this bug
> > to Benjamin Herrenschmidt (now in CC), and I believe he has a patch of
> > his own to fix the same issue.
> > 
> > As I understand it, of_core_init sets up the OF entries in
> > /sys/firmware/devicetree. During platform bringup, when the system
> > describes the cpu + cache hierarchy, it also makes an of_node symlink
> > into that directory. However, if it doesn't exist yet, you get the
> > warning.
> > 
> > # ls -l /sys/devices/system/cpu/cpu3/of_node
> > lrwxrwxrwx    1 root     root             0 Jan  1 00:00
> > /sys/devices/system/cpu/cpu3/of_node ->
> > ../../../../firmware/devicetree/base/cpus/cpu@3
> > 
> > On Wed, Jun 7, 2017 at 2:35 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> >> On Wed, Jun 07, 2017 at 09:07:20AM +0200, Geert Uytterhoeven wrote:
> >>> CC devicetree folks
> >>>
> >>> On Wed, Jun 7, 2017 at 12:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> >>>> From: "Wesley W. Terpstra" <wesley@sifive.com>
> >>>>
> >>>> This fixes: [    0.010000] cpu cpu0: Error -2 creating of_node link
> >>>> ... which you get for every CPU on all architectures with a OF cpu/ node.
> >>
> >> I take it this means a /cpus node? Or the /cpus/cpu@* nodes?
> >>
> >> I'm not seeing this on arm64 when booting v4.12-rc4 with DT, so clearly
> >> this doesn't affect all such architectures.
> >>
> >> What path are these errors happening in?
> 
> On the surface, the patch looks reasonable.  But it is not obvious to me why
> the error message is occurring.  I would like to understand the cause before
> saying the patch is good.
> 
> What kernel version is showing the error?  For a specific architecture
> (the patch lists 'riscv, nios, etc'), which config and device tree source?
> 
> And again, what is the calling path?

From having grepped around, I think this affects architectures which
select CONFIG_GENERIC_CPU_DEVICES, which includes nios2.

In that case, driver_init() calls cpu_dev_init() before calling
of_core_init(). Then we get the callchain:

   cpu_dev_init()
-> cpu_dev_register_generic()
-> register_cpu(cpu, i)
-> device_register(&cpu->dev)
-> device_add(dev)
-> device_add_class_symlinks(dev)

... in device_add_class_symlinks(), we see that dev->of_node is set and
call sysfs_create_link(), which fails because we haven't called
of_core_init() to register the sysfs devicetree directory yet.
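
A simplified sketch of that device-core step (illustrative only; the helper
name is made up, the real code lives in device_add_class_symlinks()):

#include <linux/device.h>
#include <linux/of.h>

/*
 * Before of_core_init() has populated /sys/firmware/devicetree, the link
 * target does not exist, so sysfs_create_link() returns -ENOENT, which is
 * what gets printed as "Error -2 creating of_node link".
 */
static void example_add_of_node_link(struct device *dev)
{
	int error;

	if (!dev->of_node)
		return;

	error = sysfs_create_link(&dev->kobj, &dev->of_node->kobj, "of_node");
	if (error)
		dev_warn(dev, "Error %d creating of_node link\n", error);
}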

Given that, this patch makes sense to me.

FWIW, with the commit message updated to describe the particular
ordering problem:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: RISC-V Linux Port v2
  2017-06-07 21:54       ` Palmer Dabbelt
  (?)
@ 2017-06-08 10:26       ` Will Deacon
  2017-06-08 18:16           ` Palmer Dabbelt
  -1 siblings, 1 reply; 283+ messages in thread
From: Will Deacon @ 2017-06-08 10:26 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: linux-arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert, patches

On Wed, Jun 07, 2017 at 02:54:16PM -0700, Palmer Dabbelt wrote:
> On Wed, 07 Jun 2017 02:23:42 PDT (-0700), will.deacon@arm.com wrote:
> > Hi Palmer,
> >
> > On Tue, Jun 06, 2017 at 03:59:50PM -0700, Palmer Dabbelt wrote:
> >> Thanks to everyone who has participated in the review process so far.  We've
> >> made a lot of changes since the v1 and while this isn't ready to go yet, I
> >> finally managed to get through everything in my inbox so I thought it would be
> >> a good time to submit a v2 so everyone is on the same page.
> >
> > [...]
> >
> >> [PATCH 13/17] RISC-V: Add include subdirectory
> >
> > This guy is too big, and got silently dropped by the mailing list. Any
> > chance you could split it up please, so that it can be reviewed?
> 
> Sorry about that.  It's online
> 
>   https://github.com/riscv/riscv-linux/commit/daaea1609cb5e8c93745c1ce112a06e90f5b663c
> 
> I'll be sure to split it up next time.  I can submit a v3 if you want?

It looks like you'll be submitting a v3 anyway, so if you could split it up
for that then it would be much appreciated.

Thanks,

Will

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 08/17] dts: include documentation for the RISC-V interrupt controllers
  2017-06-07 18:57         ` Wesley Terpstra
  2017-06-07 19:57           ` Rob Herring
@ 2017-06-08 10:52           ` Mark Rutland
  2017-06-09 21:46               ` Wesley Terpstra
  2017-06-09 21:58             ` Wesley Terpstra
  1 sibling, 2 replies; 283+ messages in thread
From: Mark Rutland @ 2017-06-08 10:52 UTC (permalink / raw)
  To: Wesley Terpstra
  Cc: Geert Uytterhoeven, Palmer Dabbelt, Linux-Arch, linux-kernel,
	Arnd Bergmann, Olof Johansson, Albert Ou, patches,
	Thomas Gleixner, Jason Cooper, Marc Zyngier, Rob Herring,
	devicetree

On Wed, Jun 07, 2017 at 11:57:17AM -0700, Wesley Terpstra wrote:
> On Wed, Jun 7, 2017 at 3:13 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> >> > +RISC-V Hart-Level Interrupt Controller (HLIC)
> >> > +---------------------------------------------
> >> > +
> >> > +RISC-V cores include Control Status Registers (CSRs) which are local to each
> >> > +hart and can be read or written by software. Some of these CSRs are used to
> >> > +control local interrupts connected to the core.
> >> > +
> >> > +Typical examples of local interrupts on a RISC-V core include: software IPI
> >> > +interrupts, timer interrupts, and a link to the PLIC interrupt controller.
> >
> > So IIUC those interrupts are routed directly to the HLIC, and are (only)
> > controlled thought the HLIC?
> 
> Yes. You can have a local interrupt that goes directly to a specific
> core, not via the PLIC.
> 
> > Is the HLIC architecturally mandated? i.e. is this guaranteed to be
> > present on any RISC-V implementation?
> 
> It's in the RISC-V privileged specification. Therefore, if a riscv
> core can run linux it will have these CSRs.
> 
> > Does the presence of the HLIC imply the presence of a PLIC (or
> > vice/versa)?
> 
> No. SiFive implementations always have a PLIC, though.
> 
> > Typically, the per-cpu and platform-wide parts of the
> > top-level interrupt controller are fairly intimately coupled.
> 
> They are coupled if they both exist. The privileged specification does
> explicitly call out interrupts 9 and 11 in the HLIC for attaching the
> PLIC.
> 
> > You'll need to allocate the "riscv" vendor prefix in
> > Documentation/devicetree/bindings/vendor-prefixes.txt
> 
> @palmer: Can you add this?
> 
> > What about the flags?
> 
> What flags?

Edge vs level, active high vs active low. Typically some of these are
programmable, and are described as flags in the interrupt-specifier.

See the examples in:

Documentation/devicetree/bindings/interrupt-controller/interrupts.txt

> > Are all HLIC interrupts edge triggered (or level triggered)?
> 
> HLIC = level. PLIC = both.

Ok. Given that, flags aren't necessary for the HLIC, and the
interrupt-specifier is fine as-is.

> >> > +RISC-V cores typically include a PLIC, which route interrupts from multiple
> >> > +devices to multiple hart contexts.  The PLIC is connected to the interrupt
> >> > +controller embedded in a RISC-V core via the interrupt-related CSRs.
> >
> > Do you mean that the PLIC is connected to the HLIC, or that the HLIC is
> > also managed in part through CSRs?
> 
> Both. The HLIC is entirely managed through CSRs. The PLIC is managed
> through a memory mapped interface. The PLIC is attached to the HLIC.

So do any CSRs affect the state of the PLIC? If it's just MMIO, the
mention of CSRs above is just a little confusing.

It might be best to just say "The PLIC is connected to the HLIC embedded
in each RISC-V core".

> >> > +Required properties:
> >> > +- compatible : "riscv,plic0"
> >> > +- #address-cells : should be <0>
> >> > +- #interrupt-cells : should be <1>
> >
> > As with the HLIC, what about the flags?
> 
> Still unsure what flags we're talking about.

As covered above for the HLIC.

It sounds like we'd need these to distinguish edge/level interrupts,
unless that's fixed at integration time and you can determine it from
the PLIC itself?

> >> > +- riscv,ndev : Specifies the number of interrupts attached to the PLIC
> >
> > Why do we need to know this?
> >
> I suspect this is actually the number of interrupts implemented in the
> > PLIC, rather than the number of interrupts attached. i.e. the PLIC can
> > be implemented with a subset of the potential registers/bits. Is that
> > correct?
> 
> You're in principle correct, although these are probably always the same.

For now, perhaps. Let's not embed an assumption we cannot guarantee.

> > If so, something like "riscv,num-interrupts" would be better, along with
> > a clearer description.
> 
> Uhm. I suppose we can change this. However, it would require changes
> to quite a number of riscv repositories. I believe this is also
> included in the riscv platform specification. A better description is
> easy, do we really need to change the key?

It's not too much of a problem, but if we end up having to change
anything else from the proposed bindings, those trees are going to
require updates anyway.

If we can, I would like to change this to keep things as clear as
possible from the outset.

[...]

> > You will need to be explicit about the order of interrupts in this
> > property. i.e. which interrupt is routed to which context?
> 
> Yes. Order and position matter.

Ok.

Please update the binding to explicitly define the ordering requirement.

[...]

> > Also, please consider how you will handle the case when the Linux
> > logical CPU ID is not the same as the physical ID, and how you will
> > handle physical IDs being sparse.
> 
> We already deal with this. If the interrupt is '-1', we skip it.
> That's done in plic.c:
>                 if (parent.args[0] == -1) continue; // skip context holes

If this is what we expect people to do, it needs to be documented in the
binding.

Does this mean that you expect Linux logical CPU IDs to be equal to
physical CPU IDs in all cases?

That's going to be painful for very sparse ID ranges, and won't work for
cases like kexec/kdump where you cannot guarantee which physical CPU
will end up being CPU0 in the new kernel.

I would strongly advise that you explicitly describe the relationship
using phandles to CPU nodes.

I would also advise that you try to decouple the physical CPU IDs and
Linux logical IDs. While assuming they're the same might simplify things
today, it will create longer term maintenance problems and get in the
way of a number of features.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: RISC-V Linux Port v2
  2017-06-08 10:26       ` Will Deacon
@ 2017-06-08 18:16           ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-08 18:16 UTC (permalink / raw)
  To: will.deacon
  Cc: linux-arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert, patches

On Thu, 08 Jun 2017 03:26:32 PDT (-0700), will.deacon@arm.com wrote:
> On Wed, Jun 07, 2017 at 02:54:16PM -0700, Palmer Dabbelt wrote:
>> On Wed, 07 Jun 2017 02:23:42 PDT (-0700), will.deacon@arm.com wrote:
>> > Hi Palmer,
>> >
>> > On Tue, Jun 06, 2017 at 03:59:50PM -0700, Palmer Dabbelt wrote:
>> >> Thanks to everyone who has participated in the review process so far.  We've
>> >> made a lot of changes since the v1 and while this isn't ready to go yet, I
>> >> finally managed to get through everything in my inbox so I thought it would be
>> >> a good time to submit a v2 so everyone is on the same page.
>> >
>> > [...]
>> >
>> >> [PATCH 13/17] RISC-V: Add include subdirectory
>> >
>> > This guy is too big, and got silently dropped by the mailing list. Any
>> > chance you could split it up please, so that it can be reviewed?
>>
>> Sorry about that.  It's online
>>
>>   https://github.com/riscv/riscv-linux/commit/daaea1609cb5e8c93745c1ce112a06e90f5b663c
>>
>> I'll be sure to split it up next time.  I can submit a v3 if you want?
>
> It looks like you'll be submitting a v3 anyway, so if you could split it up
> for that then it would be much appreciated.

Sounds good.  I won't spin a v3 just for this problem, as there are a lot of
other comments I need to go through.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 03/17] base: fix order of OF initialization
  2017-06-08  9:05               ` Mark Rutland
  (?)
  (?)
@ 2017-06-09  0:37               ` Frank Rowand
  -1 siblings, 0 replies; 283+ messages in thread
From: Frank Rowand @ 2017-06-09  0:37 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Wesley Terpstra, Geert Uytterhoeven, Palmer Dabbelt, Linux-Arch,
	linux-kernel, Arnd Bergmann, Olof Johansson, Albert Ou, patches,
	Rob Herring, devicetree, Benjamin Herrenschmidt

On 06/08/17 02:05, Mark Rutland wrote:
> On Wed, Jun 07, 2017 at 08:49:43PM -0700, Frank Rowand wrote:
>> On 06/07/17 11:39, Wesley Terpstra wrote:
>>> It was a while ago that I debugged this. I already reported this bug
>>> to Benjamin Herrenschmidt (now in CC), and I believe he has a patch of
>>> his own to fix the same issue.
>>>
>>> As I understand it, of_core_init sets up the OF entries in
>>> /sys/firmware/devicetree. During platform bringup, when the system
>>> describes the cpu + cache hierarchy, it also makes an of_node symlink
>>> into that directory. However, if it doesn't exist yet, you get the
>>> warning.
>>>
>>> # ls -l /sys/devices/system/cpu/cpu3/of_node
>>> lrwxrwxrwx    1 root     root             0 Jan  1 00:00
>>> /sys/devices/system/cpu/cpu3/of_node ->
>>> ../../../../firmware/devicetree/base/cpus/cpu@3
>>>
>>> On Wed, Jun 7, 2017 at 2:35 AM, Mark Rutland <mark.rutland@arm.com> wrote:
>>>> On Wed, Jun 07, 2017 at 09:07:20AM +0200, Geert Uytterhoeven wrote:
>>>>> CC devicetree folks
>>>>>
>>>>> On Wed, Jun 7, 2017 at 12:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>>>>> From: "Wesley W. Terpstra" <wesley@sifive.com>
>>>>>>
>>>>>> This fixes: [    0.010000] cpu cpu0: Error -2 creating of_node link
>>>>>> ... which you get for every CPU on all architectures with a OF cpu/ node.
>>>>
>>>> I take it this means a /cpus node? Or the /cpus/cpu@* nodes?
>>>>
>>>> I'm not seeing this on arm64 when booting v4.12-rc4 with DT, so clearly
>>>> this doesn't affect all such architectures.
>>>>
>>>> What path are these errors happening in?
>>
>> On the surface, the patch looks reasonable.  But it is not obvious to me why
>> the error message is occurring.  I would like to understand the cause before
>> saying the patch is good.
>>
>> What kernel version is showing the error?  For a specific architecture
>> (the patch lists 'riscv, nios, etc'), which config and device tree source?
>>
>> And again, what is the calling path?
> 
> From having grepped around, I think this affects architectures which
> select CONFIG_GENERIC_CPU_DEVICES, which includes nios2.

Thanks Mark!  The "#ifdef CONFIG_GENERIC_CPU_DEVICES" in cpu_dev_register_generic()
explains why we don't see the error on ARM, ARM64, etc.  Without the CONFIG
option, register_cpu(cpu, i) is not called.


> In that case, driver_init() calls cpu_dev_init() before calling
> of_core_init(). Then we get the callchain:
> 
>    cpu_dev_init()
> -> cpu_dev_register_generic()
> -> register_cpu(cpu, i)
> -> device_register(&cpu->dev)
> -> device_add(dev)
> -> device_add_class_symlinks(dev)
> 
> ... in device_add_class_symlinks(), we see that dev->of_node is set and
> call sysfs_create_link(), which fails because we haven't called
> of_core_init() to register the sysfs devicetree directory yet.
> 
> Given that, this patch makes sense to me.
> 
> FWIW, with the commit message updated to describe the particular
> ordering problem:
> 
> Acked-by: Mark Rutland <mark.rutland@arm.com>

Agree with Mark's request to update the commit message, and also
I would like to see this shaken out in the -next tree.

Acked-by: Frank Rowand <frowand.list@gmail.com>

> 
> Thanks,
> Mark.
> 

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-07 13:17     ` Peter Zijlstra
@ 2017-06-09  8:16       ` Peter Zijlstra
  2017-06-26 20:07         ` Palmer Dabbelt
  1 sibling, 0 replies; 283+ messages in thread
From: Peter Zijlstra @ 2017-06-09  8:16 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: linux-arch, linux-kernel, Arnd Bergmann, olof, albert, patches,
	Will Deacon

On Wed, Jun 07, 2017 at 03:17:27PM +0200, Peter Zijlstra wrote:

> > +static inline void arch_spin_unlock(arch_spinlock_t *lock)
> > +{
> > +	__asm__ __volatile__ (
> > +		"amoswap.w.rl x0, x0, %0"
> > +		: "=A" (lock->lock)
> > +		:: "memory");
> > +}
> > +
> > +static inline int arch_spin_trylock(arch_spinlock_t *lock)
> > +{
> > +	int tmp = 1, busy;
> > +
> > +	__asm__ __volatile__ (
> > +		"amoswap.w.aq %0, %2, %1"
> > +		: "=r" (busy), "+A" (lock->lock)
> > +		: "r" (tmp)
> > +		: "memory");
> > +
> > +	return !busy;
> > +}

One other thing, you need to describe the acquire/release semantics for
your platform. Is the above lock RCpc or RCsc ? If RCpc, you need to
look into adding smp_mb__after_unlock_lock().

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 10/17] irqchip: New RISC-V PLIC Driver
  2017-06-07 10:52     ` Marc Zyngier
@ 2017-06-09 13:47       ` Will Deacon
  2017-06-27  1:09           ` Palmer Dabbelt
  2017-06-25 20:49         ` Palmer Dabbelt
  1 sibling, 1 reply; 283+ messages in thread
From: Will Deacon @ 2017-06-09 13:47 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Palmer Dabbelt, linux-arch, linux-kernel, Arnd Bergmann, olof,
	albert, patches, Peter Zijlstra

On Wed, Jun 07, 2017 at 11:52:10AM +0100, Marc Zyngier wrote:
> On 07/06/17 00:00, Palmer Dabbelt wrote:
> > +static void plic_disable(struct plic_data *data, int i, int hwirq)
> > +{
> > +	struct plic_enable_context *enable = plic_enable_context(data, i);
> > +
> > +	atomic_and(~(1 << (hwirq % 32)), &enable->mask[hwirq / 32]);
> 
> This is still a device access, right? What does it mean to use the
> atomic primitives on that? What are you racing against? I thought the
> various context were private to an execution context...
> 
> Adding Will and PeterZ to the CC list because they will probably have
> their own views on this...

atomic_* accesses to MMIO are almost certainly a bad idea. Is this atomic
because you want to allow the function to run concurrently, or is it atomic
because you want some guarantees from the endpoint's view?

Will
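
For comparison, the usual pattern for this kind of enable-register update is
a plain readl()/writel() read-modify-write, with a lock if several CPUs can
touch the same word; this is a sketch only and the names are illustrative:

#include <linux/io.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_plic_lock);

/* Mask one source in a 32-bits-per-word MMIO enable array. */
static void example_plic_mask(void __iomem *enable_base, int hwirq)
{
	void __iomem *reg = enable_base + (hwirq / 32) * sizeof(u32);
	unsigned long flags;
	u32 val;

	spin_lock_irqsave(&example_plic_lock, flags);
	val = readl(reg);
	val &= ~(1U << (hwirq % 32));
	writel(val, reg);
	spin_unlock_irqrestore(&example_plic_lock, flags);
}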

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 08/17] dts: include documentation for the RISC-V interrupt controllers
  2017-06-08 10:52           ` Mark Rutland
@ 2017-06-09 21:46               ` Wesley Terpstra
  2017-06-09 21:58             ` Wesley Terpstra
  1 sibling, 0 replies; 283+ messages in thread
From: Wesley Terpstra @ 2017-06-09 21:46 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Geert Uytterhoeven, Palmer Dabbelt, Linux-Arch, linux-kernel,
	Arnd Bergmann, Olof Johansson, Albert Ou, patches,
	Thomas Gleixner, Jason Cooper, Marc Zyngier, Rob Herring,
	devicetree

On Thu, Jun 8, 2017 at 3:52 AM, Mark Rutland <mark.rutland@arm.com> wrote:
>> What flags?
>
> Edge vs level, active high vs active low. Typically some of these are
> programmable, and are described as flags in the interrupt-specifier.
>
> See the examples in:
>
> Documentation/devicetree/bindings/interrupt-controller/interrupts.txt

Ok. Those are not applicable to the PLIC.

>
>> > Are all HLIC interrupts edge triggered (or level triggered)?
>>
>> HLIC = level. PLIC = both.
>
> Ok. Given that, flags aren't necessary for the HLIC, and the
> interrupt-specifier is fine as-is.
>
>> >> > +RISC-V cores typically include a PLIC, which route interrupts from multiple
>> >> > +devices to multiple hart contexts.  The PLIC is connected to the interrupt
>> >> > +controller embedded in a RISC-V core via the interrupt-related CSRs.
>> >
>> > Do you mean that the PLIC is connected to the HLIC, or that the HLIC is
>> > also managed in part through CSRs?
>>
>> Both. The HLIC is entirely managed through CSRs. The PLIC is managed
>> through a memory mapped interface. The PLIC is attached to the HLIC.
>
> So do any CSRs affect the state of the PLIC? If it's just MMIO, the
> mention of CSRs above is just a little confusing.
>
> It might be best to just say "The PLIC is connected to the HLIC embedded
> in each RISC-V core".
>
>> >> > +Required properties:
>> >> > +- compatible : "riscv,plic0"
>> >> > +- #address-cells : should be <0>
>> >> > +- #interrupt-cells : should be <1>
>> >
>> > As with the HLIC, what about the flags?
>>
>> Still unsure what flags we're talking about.
>
> As covered above for the HLIC.
>
> It sounds like we'd need these to distinguish edge/level interrupts,
> unless that's fixed at integration time and you can determine it from
> the PLIC itself?
>
>> >> > +- riscv,ndev : Specifies the number of interrupts attached to the PLIC
>> >
>> > Why do we need to know this?
>> >
>> > I suspect this is actually the number of interrupts implemented in the
>> > PLIC, rather than the number of interrupts attached. i.e. the PLIC can
>> > be implemented with a subset of the potential registers/bits. Is that
>> > correct?
>>
>> You're in principle correct, although these are probably always the same.
>
> For now, perhaps. Let's not embed an assumption we cannot guarantee.
>
>> > If so, something like "riscv,num-interrupts" would be better, along with
>> > a clearer description.
>>
>> Uhm. I suppose we can change this. However, it would require changes
>> to quite a number of riscv repositories. I believe this is also
>> included in the riscv platform specification. A better description is
>> easy, do we really need to change the key?
>
> It's not too much of a problem, but if we end up having to change
> anything else from the proposed bindings, those trees are going to
> require updates anyway.
>
> If we can, I would like to change this to keep things as clear as
> possible from the outset.
>
> [...]
>
>> > You will need to be explicit about the order of interrupts in this
>> > property. i.e. which interrupt is routed to which context?
>>
>> Yes. Order and position matter.
>
> Ok.
>
> Please update the binding to explicitly define the ordering requirement.
>
> [...]
>
>> > Also, please consider how you will handle the case when the Linux
>> > logical CPU ID is not the same as the physical ID, and how you will
>> > handle physical IDs being sparse.
>>
>> We already deal with this. If the interrupt is '-1', we skip it.
>> That's done in plic.c:
>>                 if (parent.args[0] == -1) continue; // skip context holes
>
> If this is what we expect people to do, it needs to be documented in the
> binding.
>
> Does this mean that you expect Linux logical CPU IDs to be equal to
> physical CPU IDs in all cases?
>
> That's going to be painful for very sparse ID ranges, and won't work for
> cases like kexec/kdump where you cannot guarantee which physical CPU
> will end up being CPU0 in the new kernel.
>
> I would strongly advise that you explicitly describe the relationship
> using phandles to CPU nodes.
>
> I would also advise that you try to decouple the physical CPU IDs and
> Linux logical IDs. While assuming they're the same might simplify things
> today, it will create longer term maintenance problems and get in the
> way of a number of features.
>
> Thanks,
> Mark.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 08/17] dts: include documentation for the RISC-V interrupt controllers
  2017-06-08 10:52           ` Mark Rutland
  2017-06-09 21:46               ` Wesley Terpstra
@ 2017-06-09 21:58             ` Wesley Terpstra
  2017-06-19 14:30               ` Mark Rutland
  1 sibling, 1 reply; 283+ messages in thread
From: Wesley Terpstra @ 2017-06-09 21:58 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Geert Uytterhoeven, Palmer Dabbelt, Linux-Arch, linux-kernel,
	Arnd Bergmann, Olof Johansson, Albert Ou, patches,
	Thomas Gleixner, Jason Cooper, Marc Zyngier, Rob Herring,
	devicetree

Ugh. Clicked reply without being done writing the reply!

On Thu, Jun 8, 2017 at 3:52 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> Edge vs level, active high vs active low. Typically some of these are
> programmable, and are described as flags in the interrupt-specifier.
>
> See the examples in:
>
> Documentation/devicetree/bindings/interrupt-controller/interrupts.txt

Ok. Those are not really relevant to the PLIC. It has a flat interrupt
namespace and the way you handle all four kinds of interrupts in the
driver is uniform. I don't think we want to architecturally expose
this to the operating system.

> So do any CSRs affect the state of the PLIC? If it's just MMIO, the
> mention of CSRs above is just a little confusing.

HLIC = CSR only. PLIC = MMIO only.

> It might be best to just say "The PLIC is connected to the HLIC embedded
> in each RISC-V core".

Fair enough.

> It sounds like we'd need these to distinguish edge/level interrupts,
> unless that's fixed at integration time and you can determine it from
> the PLIC itself?

They are fixed at integration time and the PLIC driver does not need to care.

> It's not too much of a problem, but if we end up having to change
> anything else from the proposed bindings, those trees are going to
> require updates anyway.

I'll talk to the other stakeholders.

> Please update the binding to explicitly define the ordering requirement.

The ordering requirement is that the first interrupts-extended entry
corresponds to the first context, second to second, etc. If a context
is unused for some reason, that's when you need a -1. The contexts are
linear and contiguous in the MMIO address map.
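
To illustrate (sketch only; the helper name is made up and the real plic.c
differs in detail), the driver walks the interrupts-extended entries in
order, treating entry i as PLIC context i and skipping the -1 holes:

#include <linux/of.h>
#include <linux/of_irq.h>

/* nr_contexts would come from the number of interrupts-extended entries. */
static void example_scan_plic_contexts(struct device_node *node, int nr_contexts)
{
	int i;

	for (i = 0; i < nr_contexts; i++) {
		struct of_phandle_args parent;

		if (of_irq_parse_one(node, i, &parent))
			continue;
		if (parent.args[0] == -1)
			continue;	/* unused context hole, as in the quoted plic.c */
		/* context i is served by the hart whose HLIC node is parent.np */
	}
}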

> Does this mean that you expect Linux logical CPU IDs to be equal to
> physical CPU IDs in all cases?

No. There is no 'physical CPU ID' anyway, except the mhartid CSR which
is unavailable to linux. The SBI conveys your CPU ID by passing it as
the first argument to the linux kernel.

The interrupts-extended property in the PLIC uses a phandle to reference the
matching CPU in the DTS. The num in cpu@<num> only needs to correspond to the
first register argument to the kernel.

If for some reason there end up too many -1 holes in the PLIC (because you
virtualized a 128-core machine down to, say, 2), you can always
virtualize the PLIC device and provide a matching simplified DTB.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 08/17] dts: include documentation for the RISC-V interrupt controllers
  2017-06-09 21:58             ` Wesley Terpstra
@ 2017-06-19 14:30               ` Mark Rutland
  0 siblings, 0 replies; 283+ messages in thread
From: Mark Rutland @ 2017-06-19 14:30 UTC (permalink / raw)
  To: Wesley Terpstra
  Cc: Geert Uytterhoeven, Palmer Dabbelt, Linux-Arch, linux-kernel,
	Arnd Bergmann, Olof Johansson, Albert Ou, patches,
	Thomas Gleixner, Jason Cooper, Marc Zyngier, Rob Herring,
	devicetree

On Fri, Jun 09, 2017 at 02:58:14PM -0700, Wesley Terpstra wrote:
> Ugh. Clicked reply without being done writing the reply!
> 
> On Thu, Jun 8, 2017 at 3:52 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> > Edge vs level, active high vs active low. Typically some of these are
> > programmable, and are described as flags in the interrupt-specifier.
> >
> > See the examples in:
> >
> > Documentation/devicetree/bindings/interrupt-controller/interrupts.txt
> 
> Ok. Those are not really relevant to the PLIC. It has a flat interrupt
> namespace and the way you handle all four kinds of interrupts in the
> driver is uniform. I don't think we want to architecturally expose
> this to the operating system.

Sure thing. I was just making sure I clarified my initial statement.

> > So do any CSRs affect the state of the PLIC? If it's just MMIO, the
> > mention of CSRs above is just a little confusing.
> 
> HLIC = CSR only. PLIC = MMIO only.
> 
> > It might be best to just say "The PLIC is connected to the HLIC embedded
> > in each RISC-V core".
> 
> Fair enough.
> 
> > It sounds like we'd need these to distinguish edge/level interrupts,
> > unless that's fixed at integration time and you can determine it from
> > the PLIC itself?
> 
> They are fixed at integration time and the PLIC driver does not need to care.

Ok.

> > Please update the binding to explicitly define the ordering requirement.
> 
> The ordering requirement is that the first interrupts-extended entry
> corresponds to the first context, second to second, etc. If a context
> is unused for some reason, that's when you need a -1. The contexts are
> linear and contiguous in the MMIO address map.

Ah. It wasn't at all clear to me that the order was in relation to the
MMIO space.

How is it possible to tell if a context is unused?

Are those mandated to be linear and contiguous in the address space?

How is the relationship between an MMIO context and a hart determined?

I'm not sure that it makes sense to use -1 in this manner, since the
first cell of each interrupts-extended tuple entry is a phandle. IIRC
zero is currently de-facto invalid, and if we really need a NULL
phandle, we should get that clarified in the DT spec.

It might be better to describe each MMIO context separately, and then
associate IRQs with those in-order.

> 
> > Does this mean that you expect Linux logical CPU IDs to be equal to
> > physical CPU IDs in all cases?
> 
> No. There is no 'physical CPU ID' anyway, except the mhartid CSR which
> is unavailable to linux. The SBI conveys your CPU ID by passing it as
> the first argument to the linux kernel.
> 
> The interrupts-extended in the PLIC uses a phandle to reference the
> matching CPU in DTS. The num in cpu@<num> only need correspond to the
> first register argument to the kernel.

So you use the reference to the HLIC to implicitly provide the CPU
affinity? Or do you assume that these align with the SBI-provided CPU
IDs?

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [patches] Re: [PATCH 01/17] drivers: support PCIe in RISCV
  2017-06-07 17:40         ` Olof Johansson
@ 2017-06-23 21:47             ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-23 21:47 UTC (permalink / raw)
  To: Olof Johansson
  Cc: Wesley Terpstra, hch, patches, linux-arch, albert, Arnd Bergmann,
	linux-kernel

On Wed, 07 Jun 2017 10:40:50 PDT (-0700), Olof Johansson wrote:
> On Wed, Jun 7, 2017 at 10:09 AM, Wesley Terpstra <wesley@sifive.com> wrote:
>>
>>
>> On Jun 7, 2017 7:26 AM, "Christoph Hellwig" <hch@infradead.org> wrote:
>>
>> On Tue, Jun 06, 2017 at 03:59:51PM -0700, Palmer Dabbelt wrote:
>>> From: "Wesley W. Terpstra" <wesley@sifive.com>
>>>
>>> There are RISC-V systems that have been mapped to Xilinx FPGAs that have
>>> their PCIe controllers on chip.  These build system changes allow RISC-V
>>> systems to enable the Xilinx PCIe controller, and to setup PCIe IRQs.
>>>
>>> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
>>> ---
>>>  drivers/pci/Makefile     | 1 +
>>>  drivers/pci/host/Kconfig | 2 +-
>>>  2 files changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
>>> index 462c1f5f5546..a29d9ec05d13 100644
>>> --- a/drivers/pci/Makefile
>>> +++ b/drivers/pci/Makefile
>>> @@ -41,6 +41,7 @@ obj-$(CONFIG_MIPS) += setup-irq.o
>>>  obj-$(CONFIG_TILE) += setup-irq.o
>>>  obj-$(CONFIG_SPARC_LEON) += setup-irq.o
>>>  obj-$(CONFIG_M68K) += setup-irq.o
>>> +obj-$(CONFIG_RISCV) += setup-irq.o
>>
>> Can we do a cleanup here and add a ARCH_USE_GENERIC_PCI_SETUP Kconfig
>> symbol that all these architectures can select?
>>
>>
>> That would probably be better. I did not want to touch other arch/ folders
>> in our changes.
>
> I understand that approach when you're doing things in your tree
> early, but in this case (and at this phase in submission/merging),
> don't be afraid to touch other architectures and refactor/clean up.
>
> Please do the refactor in a separate preceding patch and submit it
> separate/soon instead of keeping it just in the series and bundled
> with your addition. That way it can go in when ready even if the rest
> of the series is spinning.

Makes sense.  I've started submitting the various smaller patches so we can get
them all in to make our patch set smaller.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 09/17] clocksource/timer-riscv: New RISC-V Clocksource
  2017-06-07  7:25       ` Arnd Bergmann
@ 2017-06-23 23:24           ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-23 23:24 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: geert, linux-arch, linux-kernel, Olof Johansson, albert, patches,
	tglx, daniel.lezcano

On Wed, 07 Jun 2017 00:25:37 PDT (-0700), Arnd Bergmann wrote:
> On Wed, Jun 7, 2017 at 9:12 AM, Geert Uytterhoeven <geert@linux-m68k.org> wrote:
>> CC clocksource folks
>>
>> On Wed, Jun 7, 2017 at 12:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>> The RISC-V ISA defines a single RTC as well as an SBI oneshot timer.
>>> This timer is present on all RISC-V systems.
>>>
>>> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
>>> ---
>>>  drivers/clocksource/Kconfig       |   8 +++
>>>  drivers/clocksource/Makefile      |   1 +
>>>  drivers/clocksource/timer-riscv.c | 118 ++++++++++++++++++++++++++++++++++++++
>>>  3 files changed, 127 insertions(+)
>>>  create mode 100644 drivers/clocksource/timer-riscv.c
>>>
>>> diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
>>> index 545d541ae20e..1c2c6e7c7fab 100644
>>> --- a/drivers/clocksource/Kconfig
>>> +++ b/drivers/clocksource/Kconfig
>>> @@ -612,4 +612,12 @@ config CLKSRC_ST_LPC
>>>           Enable this option to use the Low Power controller timer
>>>           as clocksource.
>>>
>>> +config CLKSRC_RISCV
>>> +       #bool "Clocksource for the RISC-V platform"
>>> +       def_bool y if RISCV
>>> +       depends on RISCV
>
> I don't like commenting out parts of the entry. If there are no
> build-time dependencies, you can just make it 'default y' and still allow
> users to disable the driver if they really want to (e.g. on a machine
> specific kernel that has a driver for another clocksource), or you
> just leave it 'def_bool RISCV'.
>
>>> +
>>> +static int riscv_timer_set_oneshot(struct clock_event_device *evt)
>>> +{
>>> +       /* no-op; only one mode */
>>> +       return 0;
>>> +}
>>> +
>>> +static int riscv_timer_set_shutdown(struct clock_event_device *evt)
>>> +{
>>> +       /* can't stop the clock! */
>>> +       return 0;
>>> +}
>
> I'd just leave out the empty callbacks, the callers all protect NULL
> pointers.
>
>>> +static u64 riscv_rdtime(struct clocksource *cs)
>>> +{
>>> +       return get_cycles();
>>> +}
>>> +
>>> +static struct clocksource riscv_clocksource = {
>>> +       .name = "riscv_clocksource",
>>> +       .rating = 300,
>>> +       .read = riscv_rdtime,
>>> +#ifdef CONFIG_64BITS
>>> +       .mask = CLOCKSOURCE_MASK(64),
>>> +#else
>>> +       .mask = CLOCKSOURCE_MASK(32),
>>> +#endif /* CONFIG_64BITS */
>>> +       .flags = CLOCK_SOURCE_IS_CONTINUOUS,
>>> +};
>
>      ".mask = BITS_PER_LONG" maybe?
>
>>> +void riscv_timer_interrupt(void)
>>> +{
>>> +       int cpu = smp_processor_id();
>>> +       struct clock_event_device *evdev = &per_cpu(clock_event, cpu);
>>> +
>>> +       evdev->event_handler(evdev);
>>> +}
>>> +
>>> +void __init init_clockevent(void)
>>> +{
>>> +       int cpu = smp_processor_id();
>>> +       struct clock_event_device *ce = &per_cpu(clock_event, cpu);
>>> +
>>> +       *ce = (struct clock_event_device){
>>> +               .name = "riscv_timer_clockevent",
>>> +               .features = CLOCK_EVT_FEAT_ONESHOT,
>>> +               .rating = 300,
>>> +               .cpumask = cpumask_of(cpu),
>>> +               .set_next_event = riscv_timer_set_next_event,
>>> +               .set_state_oneshot  = riscv_timer_set_oneshot,
>>> +               .set_state_shutdown = riscv_timer_set_shutdown,
>>> +       };
>>> +
>>> +       /* Enable timer interrupts */
>>> +       csr_set(sie, SIE_STIE);
>>> +
>>> +       clockevents_config_and_register(ce, riscv_timebase, 100, 0x7fffffff);
>>> +}
>>> +
>>> +static unsigned long __init of_timebase(void)
>>> +{
>>> +       struct device_node *cpu;
>>> +       const __be32 *prop;
>>> +
>>> +       cpu = of_find_node_by_path("/cpus");
>>> +       if (cpu) {
>>> +               prop = of_get_property(cpu, "timebase-frequency", NULL);
>>> +               if (prop)
>>> +                       return be32_to_cpu(*prop);
>
> of_property_read_u32()
>
>>> +       }
>>> +
>>> +       return 10000000;
>
> The default seems rather arbitrary. Any reason for this particular
> number? Maybe it's better to fail if the property is missing.

This was just there for compatibility with the systems before we had device
tree up and running; there's no reason for it to be there any more.  I've
changed this to panic instead, as our delay code relies on this clock source
initializing correctly.  I think that's safer than just having a bogus timer.
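
Roughly along these lines (a sketch of the intended behaviour, not the exact
v3 code):

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/of.h>

static unsigned long __init of_timebase(void)
{
	struct device_node *cpus;
	u32 timebase;

	cpus = of_find_node_by_path("/cpus");
	if (!cpus || of_property_read_u32(cpus, "timebase-frequency", &timebase))
		panic("no timebase-frequency in /cpus; the delay loop needs it");

	return timebase;
}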

I've included this and your other CR comments here

  https://github.com/riscv/riscv-linux/commit/4b8dffa13a5d965d1aefe9f5f4fed41b0a4835d7

which I'll include in a v3 patch set.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 5/7] RISC-V: arch/riscv/lib
  2017-06-07  7:35               ` Arnd Bergmann
@ 2017-06-23 23:24                 ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-23 23:24 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: linux-kernel, Olof Johansson, albert

On Wed, 07 Jun 2017 00:35:04 PDT (-0700), Arnd Bergmann wrote:
> On Tue, Jun 6, 2017 at 10:53 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> On Tue, 06 Jun 2017 02:31:02 PDT (-0700), Arnd Bergmann wrote:
>>> On Tue, Jun 6, 2017 at 6:56 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>>> On Fri, 26 May 2017 02:06:58 PDT (-0700), Arnd Bergmann wrote:
>>>>> On Thu, May 25, 2017 at 3:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>>>>> On Tue, 23 May 2017 04:19:42 PDT (-0700), Arnd Bergmann wrote:
>>>>>>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>>>>
>>>>>>> Also, it would be good to replace the multiply+div64
>>>>>>> with a single multiplication here, see how x86 and arm do it
>>>>>>> (for the tsc/__timer_delay case).
>>>>>>
>>>>>> Makes sense.  I think this should do it
>>>>>>
>>>>>>   https://github.com/riscv/riscv-linux/commit/d397332f6ebff42f3ecb385e9cf3284fdeda6776
>>>>>>
>>>>>> but I'm finding this hard to test as this only works for 2ms sleeps.  It seems
>>>>>> at least in the right ballpark
>>>>>
>>>>> + if (usecs > MAX_UDELAY_US) {
>>>>> + __delay((u64)usecs * riscv_timebase / 1000000ULL);
>>>>> + return;
>>>>> + }
>>>>>
>>>>> You still do the 64-bit division here. What I meant is to completely
>>>>> avoid the division and use a multiply+shift.
>>>>
>>>> The goal here was to avoid the error case that ARM has on overflow and instead
>>>> just delay for the requested time.  This should only divide when the delay is
>>>>>=2ms, so the division won't cost much in comparison.
>>>>
>>>> The normal case should have no division in it.
>>>>
>>>> I can copy ARM's error handling if you think that's better, but it seemed more
>>>> complicated than just computing the correct answer.
>>>
>>> I think the intention originally was to avoid overflowing the 32-bit
>>> argument in
>>>
>>>  void __delay(unsigned long cycles)
>>>
>>> If you need to delay for more than 4 billion clocksource cycles,
>>> your code is still broken.
>>
>> Maybe I'm crazy, but I thought the goal was to avoid overflowing on the
>> multiply.  Specifically, the code looks like
>>
>>   udelay(long input) {
>>     long a = input * MUL_VAL;
>>     long b = a >> SHIFT_VAL;
>>     __delay(b);
>>   }
>>
>> so the place there's extra overflow is at computing a, not b (the input to
>> __delay).  When I modified the ARM code I went and recalculated the point at
>> which the multiply would overflow and it matched the value from the ARM code,
>> which is 2000us.
>
> Ah, that's right. But then you might run into the next overflow at a larger
> delay interval.

Well, that requires a huge delay as there's a (u64) cast in there.  The
timebase is a u32 which puts the delay at an hour, which seems unlikely :).

>
>> While I can buy the argument that 2000us is still too long, the real reason I
>> wrote the code this way is because I thought it was easier than having an error
>> case.  If you think the error is better then I'll do it that way.
>
> It's probably ok as long as the complexity is not in the inline wrapper, and you
> avoid the division for small delays.

I put an unlikely() around the comparison and it looks like the compiler does
well enough: there are no references to riscv_timebase outside of udelay (so
it's not inline anywhere), and there's just one branch on the hot path to test
for overflow (which is probably free anyway, as it's in the shadow of a mul and
never taken).
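
For reference, the overall shape is roughly this (a sketch with illustrative
names and constants -- the tree may differ in how the multiplier and the
threshold are chosen):

  #define UDELAY_SHIFT        32
  #define MAX_FAST_UDELAY_US  2000

  /* set once at boot, e.g. ((u64)riscv_timebase << UDELAY_SHIFT) / 1000000 */
  static u64 udelay_mult;

  void udelay(unsigned long usecs)
  {
          if (unlikely(usecs > MAX_FAST_UDELAY_US)) {
                  /* rare long delays: just do the exact 64-bit division */
                  __delay((u64)usecs * riscv_timebase / 1000000ULL);
                  return;
          }

          /* hot path: one multiply and one shift, no division */
          __delay(((u64)usecs * udelay_mult) >> UDELAY_SHIFT);
  }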

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 10/17] irqchip: New RISC-V PLIC Driver
  2017-06-07  7:55       ` Arnd Bergmann
@ 2017-06-24  0:45           ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-24  0:45 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: geert, linux-arch, linux-kernel, Olof Johansson, albert, patches,
	tglx, jason, Marc.Zyngier

On Wed, 07 Jun 2017 00:55:28 PDT (-0700), Arnd Bergmann wrote:
> On Wed, Jun 7, 2017 at 9:13 AM, Geert Uytterhoeven <geert@linux-m68k.org> wrote:
>>> +struct plic_enable_context {
>>> +       atomic_t mask[32]; // 32-bit * 32-entry
>>> +};
>
> You use many '//' style comments in this file, please change them all to '/* */'
> for consistency with kernel coding style.

OK, I fixed them here and in all our other files that had them.

>>> +
>>> +struct plic_priority {
>>> +       u32 prio[MAX_DEVICES];
>>> +};
>>> +
>>> +struct plic_data {
>>> +       struct irq_chip         chip;
>>> +       struct irq_domain       *domain;
>>> +       u32                     ndev;
>>> +       void __iomem            *reg;
>>> +       int                     handlers;
>>> +       struct plic_handler     *handler;
>>> +       char                    name[30];
>>> +};
>>> +
>>> +struct plic_handler {
>>> +       struct plic_hart_context        *context;
>>> +       struct plic_data                *data;
>>> +};
>>> +
>>> +static inline
>>> +struct plic_hart_context *plic_hart_context(struct plic_data *data, size_t i)
>>> +{
>>> +       return (struct plic_hart_context *)((char *)data->reg + HART_BASE + HART_SIZE*i);
>>> +}
>
> 'data->reg' is an __iomem pointer, so when you build-test this with 'make C=1',
> you should get a valid warning from sparse about an address space mismatch.
> Please address all the warnings from sparse.

I didn't know about sparse.  I'll run it on our port and fix everything.
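
(For the record, I think something like

  make ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- C=2 drivers/irqchip/irq-riscv-plic.o

forces a sparse pass over just this driver -- assuming I have the kbuild
syntax right, and with whatever cross prefix the toolchain actually uses.)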

>>> +static void plic_disable(struct plic_data *data, int i, int hwirq)
>>> +{
>>> +       struct plic_enable_context *enable = plic_enable_context(data, i);
>>> +
>>> +       atomic_and(~(1 << (hwirq % 32)), &enable->mask[hwirq / 32]);
>>> +}
>
> In particular, you must not do atomic operations on MMIO pointers.
> On most architectures these are explicitly disallowed and trap for
> a good reason, as the hardware implementation behind atomics tend
> to rely on the cache controller, while mmio registers are required
> to be uncached.

Sorry about that: the SiFive bus actually supports AMOs natively out to every
device, even without caches, but the RISC-V spec allows regions to be marked
as not supporting AMOs.  A few SiFive-isms from before the supervisor spec was
written this way still sneak in sometimes.  I've converted this to a spinlock
instead.

  https://github.com/riscv/riscv-linux/commit/79b26ca800663399ea7d9dead73f3715deee1a99
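
The shape of the change is roughly this -- a sketch only, with an assumed
plic_enable_base() helper returning the per-context enable words; the lock
granularity in the commit above may differ:

  static DEFINE_RAW_SPINLOCK(plic_enable_lock);

  static void plic_toggle(struct plic_data *data, int ctx, int hwirq, int enable)
  {
          /* plic_enable_base(): __iomem pointer to this context's enable bits */
          u32 __iomem *reg = plic_enable_base(data, ctx) + (hwirq / 32);
          u32 mask = 1U << (hwirq % 32);
          unsigned long flags;

          raw_spin_lock_irqsave(&plic_enable_lock, flags);
          if (enable)
                  writel(readl(reg) | mask, reg);
          else
                  writel(readl(reg) & ~mask, reg);
          raw_spin_unlock_irqrestore(&plic_enable_lock, flags);
  }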


>>> +       iowrite32(1, &priority->prio[d->hwirq]);
>
> I would normally use 'readl' instead of 'iowrite32'. They may be the same
> on riscv, but they have slightly different meaning in portable drivers.

I assume you meant writel?  If so, that makes sense:

  https://github.com/riscv/riscv-linux/commit/45c968f1f068c35e0a5c8c90ba0776e7bdb6db78

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 12/17] tty: New RISC-V SBI Console Driver
  2017-06-07  7:58       ` Arnd Bergmann
@ 2017-06-24  0:45           ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-24  0:45 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: geert, linux-arch, linux-kernel, Olof Johansson, albert, patches,
	gregkh, jslaby, linuxppc-dev

On Wed, 07 Jun 2017 00:58:04 PDT (-0700), Arnd Bergmann wrote:
> On Wed, Jun 7, 2017 at 9:15 AM, Geert Uytterhoeven <geert@linux-m68k.org> wrote:
>> CC (hypervisor) console folks
>>
>> On Wed, Jun 7, 2017 at 1:00 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>> This patch adds a new driver for the console available via the RISC-V
>>> SBI.  This console is specified to be used for early boot messages, and
>>> is designed to be a very simple (albeit somewhat slow) console that is
>>> always available.  All RISC-V systems have an SBI console.
>>>
>>> The SBI console is made available for early printk messages and is also
>>> available as a regular console.
>>>
>>> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
>>> ---
>>>  drivers/tty/hvc/Kconfig   |  11 +++++
>>>  drivers/tty/hvc/Makefile  |   1 +
>>>  drivers/tty/hvc/hvc_sbi.c | 102 ++++++++++++++++++++++++++++++++++++++++++++++
>>>  3 files changed, 114 insertions(+)
>>>  create mode 100644 drivers/tty/hvc/hvc_sbi.c
>>>
>>> diff --git a/drivers/tty/hvc/Kconfig b/drivers/tty/hvc/Kconfig
>>> index 574da15fe618..f3774adab240 100644
>>> --- a/drivers/tty/hvc/Kconfig
>>> +++ b/drivers/tty/hvc/Kconfig
>>> @@ -114,4 +114,15 @@ config HVCS
>>>           which will also be compiled when this driver is built as a
>>>           module.
>>>
>>> +config HVC_SBI
>>> +       bool "SBI console support"
>>> +       depends on RISCV
>>> +       select HVC_DRIVER
>>> +       default y
>>> +       help
>>> +         This enables support for console output via RISC-V SBI calls, which
>>> +         is normally used only during boot to output printk.
>>> +
>>> +         If you don't know what to do here, say Y.
>>> +
>>>  endif # TTY
>
> Please move this a little higher along with the other HVC_DRIVER
> implementations.

OK: https://github.com/riscv/riscv-linux/commit/1c769cad7931b7b08644d2d4a7b6985777a8e0be

>>> + * RISC-V SBI interface to hvc_console.c
>>> + *  based on drivers-tty/hvc/hvc_udbg.c
>>> + *
>>> + * Copyright (C) 2008 David Gibson, IBM Corporation
>>> + * Copyright (C) 2012 Regents of the University of California
>
> 2017?

https://github.com/riscv/riscv-linux/commit/dafa678d26886076a8a9cccc2486f1bbbfa44aa8

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 06/17] pci: Add generic pcibios_{fixup_bus,align_resource}
  2017-06-08  8:35         ` Arnd Bergmann
@ 2017-06-24  2:01             ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-24  2:01 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: hch, geert, linux-arch, linux-kernel, Olof Johansson, albert,
	patches, bhelgaas, linux-pci

On Thu, 08 Jun 2017 01:35:29 PDT (-0700), Arnd Bergmann wrote:
> On Thu, Jun 8, 2017 at 10:12 AM, Christoph Hellwig <hch@infradead.org> wrote:
>> On Wed, Jun 07, 2017 at 09:19:49AM +0200, Geert Uytterhoeven wrote:
>>> CC pci folks
>>
>> Ok, replying with pci folks in Cc then :)
>>
>> Weak symbols have (rightly) gotten a bad reputation, so maybe
>> we should approach this without them.
>
> Agreed, I would almost never recommend using a __weak symbol,
> but they are already widely used in the PCI subsystem, so I suggested
> using them here for consistency.
>
> We have a struct pci_host_bridge now, and we should eventually
> move most of the 38 pci specific weak  per-architecture symbols
> into per-host driver callbacks, but I think that would be too much
> to ask for when adding an architecture port.

I agree :)

>> It seems we have a large number of empty pcibios_fixup_bus calls
>> already, so I think we should simply have the architectures
>> that do define it define a Kconfig or header symbol and not call
>> it at all otherwise.
>
> I would argue that most of them should not be per-architecture
> in the first place, the current state is mostly an artifact of the
> times when each architecture had just one PCI implementation.
>
> The ones that have multiple implementations (arm, powerpc, ...)
> tend to actually override the weak functions with their own
> per-host multiplexers again.
>
>> For the ones that exist, a lot just seem to call pci_read_bridge_bases
>> and/or pcibios_fixup_device_resources in one form or another,
>> and I wonder why we even need the arch indirection for that.
>>
>> Similarly for pcibios_align_resource: a lot of architectures seem
>> to have a noop, and the ones that don't mostly seem to be copy-and-paste
>> code, so we should again have a symbol for architectures
>> to opt into it, and we probably should have a generic helper
>> for the VGA window mirroring code instead of duplicating it multiple
>> times.
>
> I now remember that we already have a host_bridge->align_resource
> callback pointer, so the generic function should definitely try
> to use that.
>
> We could just use the version from arch/mips/pci/pci-generic.c
> and remove that in the process. I'm not sure about why mips calls
> pci_read_bridge_bases() in pcibios_fixup_bus() though, or whether
> this makes sense to put in the generic version.

I'm splitting this off into another patch set and sending it to a bunch of PCI
people.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 06/17] pci: Add generic pcibios_{fixup_bus,align_resource}
  2017-06-07  8:01       ` Arnd Bergmann
@ 2017-06-24  2:01           ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-24  2:01 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: geert, linux-arch, linux-kernel, Olof Johansson, albert, patches,
	bhelgaas, linux-pci

On Wed, 07 Jun 2017 01:01:57 PDT (-0700), Arnd Bergmann wrote:
> On Wed, Jun 7, 2017 at 9:19 AM, Geert Uytterhoeven <geert@linux-m68k.org> wrote:
>> CC pci folks
>>
>> On Wed, Jun 7, 2017 at 12:59 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>> While upstreaming the RISC-V port, it was pointed out that multiple
>>> architectures (arc, arm64, cris, microblaze, sh, tile) have copied the
>>> mostly empty versions of at least one of these functions.  This defines
>>> weakly bound versions of the common functions so other architetures can
>>> use them.
>>>
>>> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
>
> Thanks a lot for taking care of this!

No problem.

>
>>> diff --git a/drivers/pci/bios.c b/drivers/pci/bios.c
>>> new file mode 100644
>>> index 000000000000..ffe34c024aa8
>>> --- /dev/null
>>> +++ b/drivers/pci/bios.c
>>> @@ -0,0 +1,42 @@
>>> +
>>> +/* This file contains weakly bound functions that implement pcibios functions
>>> + * that some architectures have copied verbatim.
>>> + */
>
> Instead of adding a new file, I would suggest adding the two functions next
> to their callers, in probe.c and setup-res.c, respectively.

I've split this out into another patch set.
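
For the record, the generic fallbacks in question are basically just the empty
defaults, placed next to their callers as you suggest -- something like this
sketch (the final patches may differ in detail):

  /* probe.c */
  void __weak pcibios_fixup_bus(struct pci_bus *bus)
  {
          /* nothing to do: most architectures need no per-bus fixups */
  }

  /* setup-res.c */
  resource_size_t __weak pcibios_align_resource(void *data,
                                                const struct resource *res,
                                                resource_size_t size,
                                                resource_size_t align)
  {
          /* no architecture-specific alignment constraints by default */
          return res->start;
  }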

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-07  8:12     ` Arnd Bergmann
@ 2017-06-24  2:01         ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-24  2:01 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: linux-arch, linux-kernel, Olof Johansson, albert, patches, benh

On Wed, 07 Jun 2017 01:12:00 PDT (-0700), Arnd Bergmann wrote:
> On Wed, Jun 7, 2017 at 1:00 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> This patch adds the include files for the RISC-V port.  These are mostly
>> based on the score port, but there are a lot of arm64-based files as
>> well.
>>
>> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
>
> It might be better to split this up into several parts, as the patch
> is longer than
> most people are willing to review at once.
>
> The uapi should definitely be a separate patch, as it includes the parts that
> cannot be changed any more later. memory management (pgtable, mmu,
> uaccess) would be another part to split out, and possibly all the atomics
> in one separate patch (along with spinlocks and bitops).

OK, we'll do this for the v3.

>
>> +
>> +/* IO barriers.  These only fence on the IO bits because they're only required
>> + * to order device access.  We're defining mmiowb because our AMO instructions
>> + * (which are used to implement locks) don't specify ordering.  From Chapter 7
>> + * of v2.2 of the user ISA:
>> + * "The bits order accesses to one of the two address domains, memory or I/O,
>> + * depending on which address domain the atomic instruction is accessing. No
>> + * ordering constraint is implied to accesses to the other domain, and a FENCE
>> + * instruction should be used to order across both domains."
>> + */
>> +
>> +#define __iormb()      __asm__ __volatile__ ("fence i,io" : : : "memory");
>> +#define __iowmb()      __asm__ __volatile__ ("fence io,o" : : : "memory");
>> +
>> +#define mmiowb()       __asm__ __volatile__ ("fence io,io" : : : "memory");
>> +
>> +/*
>> + * Relaxed I/O memory access primitives. These follow the Device memory
>> + * ordering rules but do not guarantee any ordering relative to Normal memory
>> + * accesses.
>> + */
>> +#define readb_relaxed(c)       ({ u8  __r = __raw_readb(c); __r; })
>> +#define readw_relaxed(c)       ({ u16 __r = le16_to_cpu((__force __le16)__raw_readw(c)); __r; })
>> +#define readl_relaxed(c)       ({ u32 __r = le32_to_cpu((__force __le32)__raw_readl(c)); __r; })
>> +#define readq_relaxed(c)       ({ u64 __r = le64_to_cpu((__force __le64)__raw_readq(c)); __r; })
>> +
>> +#define writeb_relaxed(v,c)    ((void)__raw_writeb((v),(c)))
>> +#define writew_relaxed(v,c)    ((void)__raw_writew((__force u16)cpu_to_le16(v),(c)))
>> +#define writel_relaxed(v,c)    ((void)__raw_writel((__force u32)cpu_to_le32(v),(c)))
>> +#define writeq_relaxed(v,c)    ((void)__raw_writeq((__force u64)cpu_to_le64(v),(c)))
>> +
>> +/*
>> + * I/O memory access primitives. Reads are ordered relative to any
>> + * following Normal memory access. Writes are ordered relative to any prior
>> + * Normal memory access.
>> + */
>> +#define readb(c)               ({ u8  __v = readb_relaxed(c); __iormb(); __v; })
>> +#define readw(c)               ({ u16 __v = readw_relaxed(c); __iormb(); __v; })
>> +#define readl(c)               ({ u32 __v = readl_relaxed(c); __iormb(); __v; })
>> +#define readq(c)               ({ u64 __v = readq_relaxed(c); __iormb(); __v; })
>> +
>> +#define writeb(v,c)            ({ __iowmb(); writeb_relaxed((v),(c)); })
>> +#define writew(v,c)            ({ __iowmb(); writew_relaxed((v),(c)); })
>> +#define writel(v,c)            ({ __iowmb(); writel_relaxed((v),(c)); })
>> +#define writeq(v,c)            ({ __iowmb(); writeq_relaxed((v),(c)); })
>> +
>> +#include <asm-generic/io.h>
>
> These do not yet contain all the changes we discussed: the relaxed operations
> don't seem to be ordered against one another and the regular accessors
> are not ordered against DMA.

Sorry, I must have forgotten to write this -- I just wanted to push out a v3
patch set without the changes to the atomics so everything else could be looked
at.  I wanted to just go through the atomics completely and fix them, as I
found a handful of problems (everything was missing the AQ and RL bits, for
example) and figured it would be best to just get them done right.

I think that's not something for after dinner on a Friday, but hopefully I'll
get to it tomorrow morning.
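
To sketch the direction for the non-relaxed accessors (not the final v3 code):
they need to order MMIO against normal memory, not just against other I/O, so
that e.g. reading a DMA descriptor after a readl() is safe:

  /* order a prior MMIO read before subsequent normal-memory reads */
  #define __io_ar()       __asm__ __volatile__ ("fence i,r" : : : "memory")
  /* order prior normal-memory writes before a subsequent MMIO write */
  #define __io_bw()       __asm__ __volatile__ ("fence w,o" : : : "memory")

  #define readl(c)        ({ u32 __v = readl_relaxed(c); __io_ar(); __v; })
  #define writel(v, c)    ({ __io_bw(); writel_relaxed((v), (c)); })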

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 09/17] clocksource/timer-riscv: New RISC-V Clocksource
  2017-06-07  9:43     ` Marc Zyngier
@ 2017-06-24  2:02         ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-24  2:02 UTC (permalink / raw)
  To: marc.zyngier
  Cc: linux-arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert, patches

On Wed, 07 Jun 2017 02:43:09 PDT (-0700), marc.zyngier@arm.com wrote:
> On 06/06/17 23:59, Palmer Dabbelt wrote:
>> The RISC-V ISA defines a single RTC as well as an SBI oneshot timer.
>> This timer is present on all RISC-V systems.
>>
>> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
>> ---
>>  drivers/clocksource/Kconfig       |   8 +++
>>  drivers/clocksource/Makefile      |   1 +
>>  drivers/clocksource/timer-riscv.c | 118 ++++++++++++++++++++++++++++++++++++++
>>  3 files changed, 127 insertions(+)
>>  create mode 100644 drivers/clocksource/timer-riscv.c
>>
>> diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
>> index 545d541ae20e..1c2c6e7c7fab 100644
>> --- a/drivers/clocksource/Kconfig
>> +++ b/drivers/clocksource/Kconfig
>> @@ -612,4 +612,12 @@ config CLKSRC_ST_LPC
>>  	  Enable this option to use the Low Power controller timer
>>  	  as clocksource.
>>
>> +config CLKSRC_RISCV
>> +	#bool "Clocksource for the RISC-V platform"
>> +	def_bool y if RISCV
>> +	depends on RISCV
>> +	help
>> +	  This enables a clocksource based on the RISC-V SBI timer, which is
>> +	  built in to all RISC-V systems.
>> +
>>  endmenu
>> diff --git a/drivers/clocksource/Makefile b/drivers/clocksource/Makefile
>> index 2b5b56a6f00f..408ed9d314dc 100644
>> --- a/drivers/clocksource/Makefile
>> +++ b/drivers/clocksource/Makefile
>> @@ -73,3 +73,4 @@ obj-$(CONFIG_H8300_TMR16)		+= h8300_timer16.o
>>  obj-$(CONFIG_H8300_TPU)			+= h8300_tpu.o
>>  obj-$(CONFIG_CLKSRC_ST_LPC)		+= clksrc_st_lpc.o
>>  obj-$(CONFIG_X86_NUMACHIP)		+= numachip.o
>> +obj-$(CONFIG_CLKSRC_RISCV)		+= timer-riscv.o
>> diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c
>> new file mode 100644
>> index 000000000000..04ef7b9130b3
>> --- /dev/null
>> +++ b/drivers/clocksource/timer-riscv.c
>> @@ -0,0 +1,118 @@
>> +/*
>> + * Copyright (C) 2012 Regents of the University of California
>> + * Copyright (C) 2017 SiFive
>> + *
>> + *   This program is free software; you can redistribute it and/or
>> + *   modify it under the terms of the GNU General Public License
>> + *   as published by the Free Software Foundation, version 2.
>> + *
>> + *   This program is distributed in the hope that it will be useful,
>> + *   but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + *   GNU General Public License for more details.
>> + */
>> +
>> +#include <linux/clocksource.h>
>> +#include <linux/clockchips.h>
>> +#include <linux/interrupt.h>
>> +#include <linux/irq.h>
>> +#include <linux/delay.h>
>> +#include <linux/of.h>
>> +
>> +#include <asm/irq.h>
>> +#include <asm/csr.h>
>> +#include <asm/sbi.h>
>> +#include <asm/delay.h>
>> +
>> +unsigned long riscv_timebase;
>> +
>> +static DEFINE_PER_CPU(struct clock_event_device, clock_event);
>> +
>> +static int riscv_timer_set_next_event(unsigned long delta,
>> +	struct clock_event_device *evdev)
>> +{
>> +	sbi_set_timer(get_cycles() + delta);
>> +	return 0;
>> +}
>> +
>> +static int riscv_timer_set_oneshot(struct clock_event_device *evt)
>> +{
>> +	/* no-op; only one mode */
>> +	return 0;
>> +}
>> +
>> +static int riscv_timer_set_shutdown(struct clock_event_device *evt)
>> +{
>> +	/* can't stop the clock! */
>> +	return 0;
>> +}
>> +
>> +static u64 riscv_rdtime(struct clocksource *cs)
>> +{
>> +	return get_cycles();
>> +}
>> +
>> +static struct clocksource riscv_clocksource = {
>> +	.name = "riscv_clocksource",
>> +	.rating = 300,
>> +	.read = riscv_rdtime,
>> +#ifdef CONFIG_64BITS
>> +	.mask = CLOCKSOURCE_MASK(64),
>> +#else
>> +	.mask = CLOCKSOURCE_MASK(32),
>> +#endif /* CONFIG_64BITS */
>> +	.flags = CLOCK_SOURCE_IS_CONTINUOUS,
>> +};
>> +
>> +void riscv_timer_interrupt(void)
>> +{
>> +	int cpu = smp_processor_id();
>> +	struct clock_event_device *evdev = &per_cpu(clock_event, cpu);
>> +
>> +	evdev->event_handler(evdev);
>> +}
>> +
>> +void __init init_clockevent(void)
>> +{
>> +	int cpu = smp_processor_id();
>> +	struct clock_event_device *ce = &per_cpu(clock_event, cpu);
>> +
>> +	*ce = (struct clock_event_device){
>> +		.name = "riscv_timer_clockevent",
>> +		.features = CLOCK_EVT_FEAT_ONESHOT,
>> +		.rating = 300,
>> +		.cpumask = cpumask_of(cpu),
>> +		.set_next_event = riscv_timer_set_next_event,
>> +		.set_state_oneshot  = riscv_timer_set_oneshot,
>> +		.set_state_shutdown = riscv_timer_set_shutdown,
>> +	};
>> +
>> +	/* Enable timer interrupts */
>> +	csr_set(sie, SIE_STIE);
>> +
>> +	clockevents_config_and_register(ce, riscv_timebase, 100, 0x7fffffff);
>> +}
>> +
>> +static unsigned long __init of_timebase(void)
>> +{
>> +	struct device_node *cpu;
>> +	const __be32 *prop;
>> +
>> +	cpu = of_find_node_by_path("/cpus");
>> +	if (cpu) {
>> +		prop = of_get_property(cpu, "timebase-frequency", NULL);
>> +		if (prop)
>> +			return be32_to_cpu(*prop);
>
> Consider using of_property_read_u32() instead.

Thanks, that's much cleaner.

>
>> +	}
>> +
>> +	return 10000000;
>
> Is this an architectural guarantee? Or something that is implementation
> specific?

It was just a holdover from before we converted our port to device tree; it's
been fixed.

>
>> +}
>> +
>> +void __init time_init(void)
>> +{
>> +	riscv_timebase = of_timebase();
>> +	lpj_fine = riscv_timebase / HZ;
>> +
>> +	clocksource_register_hz(&riscv_clocksource, riscv_timebase);
>> +	init_clockevent();
>> +}
>>

Thanks, I'll put these in the v3 patch set.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-24  2:01         ` Palmer Dabbelt
  (?)
@ 2017-06-24 15:42         ` Benjamin Herrenschmidt
  2017-06-24 21:32             ` Palmer Dabbelt
  -1 siblings, 1 reply; 283+ messages in thread
From: Benjamin Herrenschmidt @ 2017-06-24 15:42 UTC (permalink / raw)
  To: Palmer Dabbelt, Arnd Bergmann
  Cc: linux-arch, linux-kernel, Olof Johansson, albert, patches

On Fri, 2017-06-23 at 19:01 -0700, Palmer Dabbelt wrote:
> > > +#define mmiowb()       __asm__ __volatile__ ("fence io,io" : : : "memory");

I forgot if we already mentioned that but mmiowb is primarily intended
to order MMIO stores vs. a subsequent spin_unlock.

I'm not sure an IO only fence is sufficient here.

Note that I've never trusted drivers to get that right, it's a rather
bad abstraction to begin with, so on powerpc, instead, I just set a
per-cpu flag on every non-relaxed MMIO write and test it in spin_unlock
in order to "beef up" the barrier in there if necessary.

Cheers,
Ben.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [patches] Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-24 15:42         ` Benjamin Herrenschmidt
@ 2017-06-24 21:32             ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-24 21:32 UTC (permalink / raw)
  To: benh
  Cc: Arnd Bergmann, linux-arch, linux-kernel, Olof Johansson, albert, patches

On Sat, 24 Jun 2017 08:42:05 PDT (-0700), benh@kernel.crashing.org wrote:
> On Fri, 2017-06-23 at 19:01 -0700, Palmer Dabbelt wrote:
>> > > +#define mmiowb()       __asm__ __volatile__ ("fence io,io" : : : "memory");
>
> I forgot if we already mentioned that but mmiowb is primarily intended
> to order MMIO stores vs. a subsequent spin_unlock.
>
> I'm not sure an IO only fence is sufficient here.
>
> Note that I've never trusted drivers to get that right, it's a rather
> bad abstraction to begin with, so on powerpc, instead, I just set a
> per-cpu flag on every non-relaxed MMIO write and test it in spin_unlock
> in order to "beef up" the barrier in there if necessary.

Sorry about that -- I thought I'd included a note somewhere that the atomics
and barriers weren't ready to go yet, as we'd found a bunch of problems with
them in the first review and I needed to go through them all.  Arnd suggested
copying the PowerPC approach to mmiowb and I like that better, so we're going
to use it.
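
Roughly, the PowerPC-style scheme looks like this (a sketch with illustrative
names, not the code we'll actually commit):

  static DEFINE_PER_CPU(int, mmio_write_pending);

  /* every non-relaxed MMIO write notes that the next unlock may need a barrier */
  #define writel(v, c)                                              \
  ({                                                                \
          __asm__ __volatile__ ("fence w,o" : : : "memory");        \
          writel_relaxed((v), (c));                                 \
          this_cpu_write(mmio_write_pending, 1);                    \
  })

  /* called from the arch spin_unlock() path */
  static inline void mmiowb_if_pending(void)
  {
          if (this_cpu_read(mmio_write_pending)) {
                  this_cpu_write(mmio_write_pending, 0);
                  /* order prior MMIO writes before the unlock store */
                  __asm__ __volatile__ ("fence o,w" : : : "memory");
          }
  }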

Thanks!

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [patches] Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-24 21:32             ` Palmer Dabbelt
  (?)
@ 2017-06-25  3:01             ` Benjamin Herrenschmidt
  -1 siblings, 0 replies; 283+ messages in thread
From: Benjamin Herrenschmidt @ 2017-06-25  3:01 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Arnd Bergmann, linux-arch, linux-kernel, Olof Johansson, albert, patches

On Sat, 2017-06-24 at 14:32 -0700, Palmer Dabbelt wrote:
> On Sat, 24 Jun 2017 08:42:05 PDT (-0700), benh@kernel.crashing.org wrote:
> > On Fri, 2017-06-23 at 19:01 -0700, Palmer Dabbelt wrote:
> > > > > +#define mmiowb()       __asm__ __volatile__ ("fence io,io" : : : "memory");
> > 
> > I forgot if we already mentioned that but mmiowb is primarily intended
> > to order MMIO stores vs. a subsequent spin_unlock.
> > 
> > I'm not sure an IO only fence is sufficient here.
> > 
> > Note that I've never trusted drivers to get that right, it's a rather
> > bad abstraction to begin with, so on powerpc, instead, I just set a
> > per-cpu flag on every non-relaxed MMIO write and test it in spin_unlock
> > in order to "beef up" the barrier in there if necessary.
> 
> Sorry about that -- I thought I'd included a note somewhere that the atomics
> and barriers weren't ready to go yet, as we'd found a bunch of problems with
> them in the first review and I needed to go through them all.  Arnd suggested
> copying the PowerPC approach to mmiowb and I like that better, so we're going
> to use it.

Ah yes, I did see your note, I just wasn't sure we had clarified the
mmiowb case and thought it was worth mentioning.

Cheers,
Ben,

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 10/17] irqchip: New RISC-V PLIC Driver
  2017-06-07 10:52     ` Marc Zyngier
@ 2017-06-25 20:49         ` Palmer Dabbelt
  2017-06-25 20:49         ` Palmer Dabbelt
  1 sibling, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-25 20:49 UTC (permalink / raw)
  To: marc.zyngier
  Cc: linux-arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert,
	patches, will.deacon, peterz

On Wed, 07 Jun 2017 03:52:10 PDT (-0700), marc.zyngier@arm.com wrote:
> Hi Palmer,
>
> On 07/06/17 00:00, Palmer Dabbelt wrote:
>> This patch adds a driver for the Platform Level Interrupt Controller
>> (PLIC) specified as part of the RISC-V supervisor level ISA manual.
>> The PLIC connects global interrupt sources to the local interrupt
>> controller on each hart.  A PLIC is present on all RISC-V systems.
>>
>> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
>> ---
>>  drivers/irqchip/Kconfig          |  12 ++
>>  drivers/irqchip/Makefile         |   1 +
>>  drivers/irqchip/irq-riscv-plic.c | 253 +++++++++++++++++++++++++++++++++++++++
>>  3 files changed, 266 insertions(+)
>>  create mode 100644 drivers/irqchip/irq-riscv-plic.c
>>
>> diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
>> index 478f8ace2664..2906d63934ef 100644
>> --- a/drivers/irqchip/Kconfig
>> +++ b/drivers/irqchip/Kconfig
>> @@ -301,3 +301,15 @@ config QCOM_IRQ_COMBINER
>>  	help
>>  	  Say yes here to add support for the IRQ combiner devices embedded
>>  	  in Qualcomm Technologies chips.
>> +
>> +config RISCV_PLIC
>> +        bool "Platform-Level Interrupt Controller"
>> +	depends on RISCV
>> +        default y
>> +        help
>> +	   This enables support for the PLIC chip found in standard RISC-V
>> +	   systems. The PLIC is the top-most interrupt controller found in
>
> nit: this seems to slightly contradict what is being said in patch #8,
> where the HLIC is the top-level interrupt controller.

Sorry, I guess that was a bit confusing: by "top-level interrupt controller
found in the system" I meant the stuff outside the core complex -- in other
words, all the devices.  How does this sound instead?

    This enables support for the PLIC chip found in standard RISC-V
    systems.  The PLIC controls device interrupts and connects them to
    each core's local interrupt controller.  Aside from timer and
    software interrupts, all other interrupt sources (MSI, GPIO, etc)
    are subordinate to the PLIC.

https://github.com/riscv/riscv-linux/commit/2395b17a36c3cf5aabd9b019e1f8fc7efe67e9a0

>> +	   the system, connected directly to the core complex. All other
>> +	   interrupt sources (MSI, GPIO, etc) are subordinate to the PLIC.
>> +
>> +	   If you don't know what to do here, say Y.
>> diff --git a/drivers/irqchip/Makefile b/drivers/irqchip/Makefile
>> index b64c59b838a0..bed94cc89146 100644
>> --- a/drivers/irqchip/Makefile
>> +++ b/drivers/irqchip/Makefile
>> @@ -76,3 +76,4 @@ obj-$(CONFIG_EZNPS_GIC)			+= irq-eznps.o
>>  obj-$(CONFIG_ARCH_ASPEED)		+= irq-aspeed-vic.o
>>  obj-$(CONFIG_STM32_EXTI) 		+= irq-stm32-exti.o
>>  obj-$(CONFIG_QCOM_IRQ_COMBINER)		+= qcom-irq-combiner.o
>> +obj-$(CONFIG_RISCV_PLIC)		+= irq-riscv-plic.o
>> diff --git a/drivers/irqchip/irq-riscv-plic.c b/drivers/irqchip/irq-riscv-plic.c
>> new file mode 100644
>> index 000000000000..906c8a62a911
>> --- /dev/null
>> +++ b/drivers/irqchip/irq-riscv-plic.c
>> @@ -0,0 +1,253 @@
>> +/*
>> + * Copyright (C) 2017 SiFive
>> + *
>> + *   This program is free software; you can redistribute it and/or
>> + *   modify it under the terms of the GNU General Public License
>> + *   as published by the Free Software Foundation, version 2.
>> + *
>> + *   This program is distributed in the hope that it will be useful,
>> + *   but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + *   GNU General Public License for more details.
>> + */
>> +
>> +#include <linux/interrupt.h>
>> +#include <linux/io.h>
>> +#include <linux/irq.h>
>> +#include <linux/irqchip.h>
>> +#include <linux/irqchip/chained_irq.h>
>> +#include <linux/irqdomain.h>
>> +#include <linux/module.h>
>> +#include <linux/of.h>
>> +#include <linux/of_address.h>
>> +#include <linux/of_irq.h>
>> +#include <linux/platform_device.h>
>> +
>> +/* From the RISC-V Privileged Spec v1.10:
>> + *
>> + * Global interrupt sources are assigned small unsigned integer identifiers,
>> + * beginning at the value 1.  An interrupt ID of 0 is reserved to mean “no
>> + * interrupt”.  Interrupt identifiers are also used to break ties when two or
>> + * more interrupt sources have the same assigned priority. Smaller values of
>> + * interrupt ID take precedence over larger values of interrupt ID.
>> + *
>> + * It's not defined what the largest device ID is, so we're just fixing
>> + * MAX_DEVICES right here (which is named oddly, as there will never be a
>> + * device 0).
>> + */
>> +#define MAX_DEVICES	1024
>> +#define MAX_CONTEXTS	15872
>
> If those are HW properties, they should probably come from the device
> tree instead of being hardcoded here.

Sorry, this is a bit confusing: the RISC-V supervisor spec (which is what this
comment was about) doesn't actually specify much about the PLIC, while this
driver is for the concrete 'riscv,plic0' device (which is SiFive's PLIC
implementation, and will be part of the platform specification).  Maybe this is
a better comment?

 * While the RISC-V supervisor spec doesn't define the maximum number of
 * devices supported by the PLIC, the largest number supported by devices
 * marked as 'riscv,plic0' (which is the only device type this driver supports,
 * and is the only extant PLIC as of now) is 1024.  As mentioned above, device
 * 0 is defined to be non-existent, so the PLIC really only supports 1023
 * devices.

https://github.com/riscv/riscv-linux/commit/ef43982522f84874be3dfba24f5acb57ad0d9186

>> +
>> +#define PRIORITY_BASE	0
>> +#define ENABLE_BASE	0x2000
>> +#define ENABLE_SIZE	0x80
>> +#define HART_BASE	0x200000
>> +#define HART_SIZE	0x1000
>> +
>> +struct plic_hart_context {
>> +	u32 threshold;
>> +	u32 claim;
>> +};
>
> Representing HW layout as a C structure is not something we usually do
> in the kernel. It relies on the ABI (which may change over time), and
> makes it harder to quickly distinguish what is a SW concept from what's
> not. It also leads to some of the mistakes below...

OK, makes sense.  I went ahead and removed them.

>> +
>> +struct plic_enable_context {
>> +	atomic_t mask[32]; // 32-bit * 32-entry
>
> /* */ comments please, placed above the field.
>
>> +};
>> +
>> +struct plic_priority {
>> +	u32 prio[MAX_DEVICES];
>> +};
>> +
>> +struct plic_data {
>> +	struct irq_chip		chip;
>> +	struct irq_domain	*domain;
>> +	u32			ndev;
>> +	void __iomem		*reg;
>> +	int			handlers;
>> +	struct plic_handler	*handler;
>> +	char			name[30];
>
> Why 30? #define?
>
>> +};
>> +
>> +struct plic_handler {
>> +	struct plic_hart_context	*context;
>> +	struct plic_data		*data;
>> +};
>> +
>> +static inline
>> +struct plic_hart_context *plic_hart_context(struct plic_data *data, size_t i)
>
> What is 'i' here?
>
>> +{
>> +	return (struct plic_hart_context *)((char *)data->reg + HART_BASE + HART_SIZE*i);
>
> This looks dodgy on a number of levels:
> - the various levels of casts are useless
> - what you return is still an __iomem
>
> Sparse would certainly shout at you if you ran it on this file.
>
> Same comment for the two functions below.
>
>> +}
>> +
>> +static inline
>> +struct plic_enable_context *plic_enable_context(struct plic_data *data, size_t i)
>> +{
>> +	return (struct plic_enable_context *)((char *)data->reg + ENABLE_BASE + ENABLE_SIZE*i);
>> +}
>> +
>> +static inline
>> +struct plic_priority *plic_priority(struct plic_data *data)
>> +{
>> +	return (struct plic_priority *)((char *)data->reg + PRIORITY_BASE);
>> +}
>> +
>> +static void plic_disable(struct plic_data *data, int i, int hwirq)
>> +{
>> +	struct plic_enable_context *enable = plic_enable_context(data, i);
>> +
>> +	atomic_and(~(1 << (hwirq % 32)), &enable->mask[hwirq / 32]);
>
> This is still a device access, right? What does it mean to use the
> atomic primitives on that? What are you racing against? I thought the
> various context were private to an execution context...
>
> Adding Will and PeterZ to the CC list because they will probably have
> their own views on this...
>
>> +}
>> +
>> +static void plic_enable(struct plic_data *data, int i, int hwirq)
>> +{
>> +	struct plic_enable_context *enable = plic_enable_context(data, i);
>> +
>> +	atomic_or((1 << (hwirq % 32)), &enable->mask[hwirq / 32]);
>> +}
>> +
>> +// There is no need to mask/unmask PLIC interrupts
>> +// They are "masked" by reading claim and "unmasked" when writing it back.
>
> Hmmm. What you describe here is the FastEOI flow.
>
>> +static void plic_irq_mask(struct irq_data *d) { }
>> +static void plic_irq_unmask(struct irq_data *d) { }
>> +
>> +static void plic_irq_enable(struct irq_data *d)
>> +{
>> +	struct plic_data *data = irq_data_get_irq_chip_data(d);
>> +	struct plic_priority *priority = plic_priority(data);
>> +	int i;
>> +
>> +	iowrite32(1, &priority->prio[d->hwirq]);
>
> Using iowrite is only meaningful if your architecture has an actual I/O
> address space that is distinct from the "normal" address space. Using
> the write{b,w,l}{_relaxed} accessors would make more sense if you're not
> in that case.
>
>> +	for (i = 0; i < data->handlers; ++i)
>> +		if (data->handler[i].context)
>> +			plic_enable(data, i, d->hwirq);
>> +}
>> +
>> +static void plic_irq_disable(struct irq_data *d)
>> +{
>> +	struct plic_data *data = irq_data_get_irq_chip_data(d);
>> +	struct plic_priority *priority = plic_priority(data);
>> +	int i;
>> +
>> +	iowrite32(0, &priority->prio[d->hwirq]);
>> +	for (i = 0; i < data->handlers; ++i)
>> +		if (data->handler[i].context)
>> +			plic_disable(data, i, d->hwirq);
>> +}
>> +
>> +static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq,
>> +			      irq_hw_number_t hwirq)
>> +{
>> +	struct plic_data *data = d->host_data;
>> +
>> +	irq_set_chip_and_handler(irq, &data->chip, handle_simple_irq);
>> +	irq_set_chip_data(irq, data);
>> +	irq_set_noprobe(irq);
>> +
>> +	return 0;
>> +}
>> +
>> +static const struct irq_domain_ops plic_irqdomain_ops = {
>> +	.map	= plic_irqdomain_map,
>> +	.xlate	= irq_domain_xlate_onecell,
>> +};
>> +
>> +static void plic_chained_handle_irq(struct irq_desc *desc)
>> +{
>> +	struct plic_handler *handler = irq_desc_get_handler_data(desc);
>> +	struct irq_chip *chip = irq_desc_get_chip(desc);
>> +	struct irq_domain *domain = handler->data->domain;
>> +	u32 what;
>> +
>> +	chained_irq_enter(chip, desc);
>> +
>> +	while ((what = ioread32(&handler->context->claim))) {
>> +		int irq = irq_find_mapping(domain, what);
>> +
>> +		if (irq > 0)
>> +			generic_handle_irq(irq);
>> +		else
>> +			handle_bad_irq(desc);
>> +		iowrite32(what, &handler->context->claim);
>
> So this is your EOI. It should be represented as such, instead of
> abusing the handle_simple_irq flow.

OK, I think that makes sense.  From my understanding of this, when using the
FastEOI flow we'll get a callback to our irq_eoi at the end of handling the IRQ
via generic_handle_irq.  If I've understood everything correctly then I think
this should do it

  https://github.com/riscv/riscv-linux/commit/c2cf592609eb927e3eef60876ca78f73538af69a

Note that I haven't tested it yet, so I probably managed to screw something up...
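
For reference, here's roughly the shape I have in mind, as an untested sketch
rather than the exact contents of that commit.  It assumes the struct-based
register layout is gone and that each plic_handler now carries a
"void __iomem *claim" pointer to its hart's claim/complete register;
"plic_handlers" is an illustrative per-cpu pointer that plic_init() would fill
in (none of these names are necessarily what ends up in the tree):

    #include <linux/percpu.h>

    /* One handler pointer per hart; filled in from plic_init() (not shown). */
    static DEFINE_PER_CPU(struct plic_handler *, plic_handlers);

    static void plic_irq_eoi(struct irq_data *d)
    {
            struct plic_handler *handler = __this_cpu_read(plic_handlers);

            /* Writing the claimed ID back completes the interrupt. */
            writel(d->hwirq, handler->claim);
    }

    static void plic_chained_handle_irq(struct irq_desc *desc)
    {
            struct plic_handler *handler = irq_desc_get_handler_data(desc);
            struct irq_chip *chip = irq_desc_get_chip(desc);
            u32 hwirq;

            chained_irq_enter(chip, desc);
            while ((hwirq = readl(handler->claim))) {
                    int irq = irq_find_mapping(handler->data->domain, hwirq);

                    if (irq > 0) {
                            /* The fasteoi flow calls plic_irq_eoi() for us. */
                            generic_handle_irq(irq);
                    } else {
                            /* No mapping: complete it here so it can't wedge. */
                            writel(hwirq, handler->claim);
                    }
            }
            chained_irq_exit(chip, desc);
    }

    /* ... and in plic_irqdomain_map() / plic_init():
     *         irq_set_chip_and_handler(irq, &data->chip, handle_fasteoi_irq);
     *         data->chip.irq_eoi = plic_irq_eoi;
     */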

The one thing I'm still confused about is: what is the difference between
masking an IRQ and disabling an IRQ, and are our empty mask/unmask functions
obsolete in the FastEOI flow?

>> +	}
>> +
>> +	chained_irq_exit(chip, desc);
>> +}
>> +
>> +static int plic_init(struct device_node *node, struct device_node *parent)
>> +{
>> +	struct plic_data *data;
>> +	struct resource resource;
>> +	int i, ok = 0;
>> +
>> +	data = kzalloc(sizeof(*data), GFP_KERNEL);
>> +	if (WARN_ON(!data))
>> +		return -ENOMEM;
>> +
>> +	data->reg = of_iomap(node, 0);
>> +	if (WARN_ON(!data->reg))
>> +		return -EIO;
>> +
>> +	of_property_read_u32(node, "riscv,ndev", &data->ndev);
>> +	if (WARN_ON(!data->ndev))
>> +		return -EINVAL;
>> +
>> +	data->handlers = of_irq_count(node);
>> +	if (WARN_ON(!data->handlers))
>> +		return -EINVAL;
>> +
>> +	data->handler =
>> +		kcalloc(data->handlers, sizeof(*data->handler), GFP_KERNEL);
>> +	if (WARN_ON(!data->handler))
>> +		return -ENOMEM;
>> +
>> +	data->domain = irq_domain_add_linear(node, data->ndev+1, &plic_irqdomain_ops, data);
>> +	if (WARN_ON(!data->domain))
>> +		return -ENOMEM;
>
> Memory leak of data->handler and data, data->reg is still mapped...

Sorry, I should have caught that earlier.  I think this should fix it

  https://github.com/riscv/riscv-linux/commit/ce8fdac83558c1e2cfd759b9a090a6f4657e2951

>> +
>> +	of_address_to_resource(node, 0, &resource);
>> +	snprintf(data->name, sizeof(data->name),
>> +		 "riscv,plic0,%llx", resource.start);
>> +	data->chip.name = data->name;
>> +	data->chip.irq_mask = plic_irq_mask;
>> +	data->chip.irq_unmask = plic_irq_unmask;
>> +	data->chip.irq_enable = plic_irq_enable;
>> +	data->chip.irq_disable = plic_irq_disable;
>> +
>> +	for (i = 0; i < data->handlers; ++i) {
>> +		struct plic_handler *handler = &data->handler[i];
>> +		struct of_phandle_args parent;
>> +		int parent_irq, hwirq;
>> +
>> +		if (of_irq_parse_one(node, i, &parent))
>> +			continue;
>> +		// skip context holes
>> +		if (parent.args[0] == -1)
>> +			continue;
>> +
>> +		// skip any contexts that lead to inactive harts
>> +		if (of_device_is_compatible(parent.np, "riscv,cpu-intc") &&
>> +		    parent.np->parent &&
>> +		    riscv_of_processor_hart(parent.np->parent) < 0)
>> +			continue;
>> +
>> +		parent_irq = irq_create_of_mapping(&parent);
>> +		if (!parent_irq)
>> +			continue;
>> +
>> +		handler->context = plic_hart_context(data, i);
>> +		handler->data = data;
>> +		// hwirq prio must be > this to trigger an interrupt
>> +		iowrite32(0, &handler->context->threshold);
>> +
>> +		for (hwirq = 1; hwirq <= data->ndev; ++hwirq)
>> +			plic_disable(data, i, hwirq);
>> +		irq_set_chained_handler_and_data(parent_irq, plic_chained_handle_irq, handler);
>> +		++ok;
>> +	}
>> +
>> +	printk(KERN_INFO "%s: mapped %d interrupts to %d/%d handlers\n",
>> +	       data->name, data->ndev, ok, data->handlers);
>> +	WARN_ON(!ok);
>> +	return 0;
>> +}
>> +
>> +IRQCHIP_DECLARE(plic0, "riscv,plic0", plic_init);
>>
>
> In the future, please CC me (as well as Thomas Gleixner and Jason
> Cooper) on all the interrupt-controller related patches.

OK.  We'd just been keeping the RISC-V port all as one big patch set in order
to simplify our lives (for example, this v2 is the first time we had anything
outside of arch/riscv).  The big piece of feedback has been that this is the wrong way to do
it, so I'm going to start splitting everything up to try to get things to the
correct parties.

Once I get a chance to test all this I'll go and submit a patch set with our
IRQ drivers.

Thanks for reviewing our patches!

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-07 13:17     ` Peter Zijlstra
@ 2017-06-26 20:07         ` Palmer Dabbelt
  2017-06-26 20:07         ` Palmer Dabbelt
  1 sibling, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-26 20:07 UTC (permalink / raw)
  To: peterz
  Cc: linux-arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert,
	patches, will.deacon

On Wed, 07 Jun 2017 06:17:27 PDT (-0700), peterz@infradead.org wrote:
> On Tue, Jun 06, 2017 at 04:00:03PM -0700, Palmer Dabbelt wrote:
>> diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
>> new file mode 100644
>> index 000000000000..9736f5714e54
>> --- /dev/null
>> +++ b/arch/riscv/include/asm/spinlock.h
>> @@ -0,0 +1,155 @@
>> +/*
>> + * Copyright (C) 2015 Regents of the University of California
>> + *
>> + *   This program is free software; you can redistribute it and/or
>> + *   modify it under the terms of the GNU General Public License
>> + *   as published by the Free Software Foundation, version 2.
>> + *
>> + *   This program is distributed in the hope that it will be useful,
>> + *   but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + *   GNU General Public License for more details.
>> + */
>> +
>> +#ifndef _ASM_RISCV_SPINLOCK_H
>> +#define _ASM_RISCV_SPINLOCK_H
>> +
>> +#include <linux/kernel.h>
>> +#include <asm/current.h>
>> +
>> +/*
>> + * Simple spin lock operations.  These provide no fairness guarantees.
>> + */
>
> Any reason to use a test-and-set spinlock at all?

Just simplicity.  I looked at the MIPS ticket lock and I think we can implement
that while still maintaining the forward progress constraints on our LR/SC
sequences.  I added a FIXME about it, which I'll try to get around to

  https://github.com/riscv/riscv-linux/commit/a75d28c849e695639b7909ffa88ce571abfb0c76

but I'm going to try and get a v3 patch set out first.
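
To give a flavor of the direction (this is only a sketch, not the code behind
that FIXME): on RISC-V the ticket grab can be a single amoadd, so the LR/SC
forward-progress rules don't come into play at all.  The 16/16 field split and
the names below are made up for illustration, and would replace the current
trivial lock word:

    #include <linux/types.h>
    #include <asm/barrier.h>        /* smp_load_acquire(), smp_store_release() */
    #include <asm/processor.h>      /* cpu_relax() */

    typedef struct {
            union {
                    u32 lock;
                    struct {
                            u16 owner;      /* ticket currently being served */
                            u16 next;       /* next ticket to hand out */
                    };
            };
    } arch_spinlock_t;

    static inline void arch_spin_lock(arch_spinlock_t *lock)
    {
            u32 inc = 1 << 16;      /* bumps the 'next' half (little endian) */
            u32 old;

            /* Take a ticket with a single AMO -- no LR/SC loop involved. */
            __asm__ __volatile__ (
                    "amoadd.w %0, %2, %1"
                    : "=r" (old), "+A" (lock->lock)
                    : "r" (inc)
                    : "memory");

            /*
             * Our ticket is the old 'next' value; spin until it becomes the
             * owner.  The acquire load orders the critical section after the
             * final observation of the owner field.
             */
            while (smp_load_acquire(&lock->owner) != (u16)(old >> 16))
                    cpu_relax();
    }

    static inline void arch_spin_unlock(arch_spinlock_t *lock)
    {
            /* Hand the lock to the next ticket holder. */
            smp_store_release(&lock->owner, lock->owner + 1);
    }

arch_spin_trylock() would then become a cmpxchg on the whole word that only
succeeds when owner == next.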

>> +
>> +#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
>> +#define arch_spin_is_locked(x)	((x)->lock != 0)
>> +#define arch_spin_unlock_wait(x) \
>> +		do { cpu_relax(); } while ((x)->lock)
>
> Hehe, yeah, no ;-) There are ordering constraints on that.

OK.  I copied the ordering guarantees from MIPS here

  https://github.com/riscv/riscv-linux/commit/7d16920452c6bd14c847aade2d51c56d2a1ae457
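
For reference, the shape of that ordering fix is roughly the following (a
sketch only, assuming the lock word stays a simple test-and-set; it may not be
exactly what the commit above does):

    static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
    {
            /*
             * Spin until the lock is observed to be free, with acquire
             * semantics on the final load so that accesses after the wait
             * can't be reordered before it.
             */
            smp_cond_load_acquire(&lock->lock, !VAL);
    }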

>> +
>> +static inline void arch_spin_unlock(arch_spinlock_t *lock)
>> +{
>> +	__asm__ __volatile__ (
>> +		"amoswap.w.rl x0, x0, %0"
>> +		: "=A" (lock->lock)
>> +		:: "memory");
>> +}
>> +
>> +static inline int arch_spin_trylock(arch_spinlock_t *lock)
>> +{
>> +	int tmp = 1, busy;
>> +
>> +	__asm__ __volatile__ (
>> +		"amoswap.w.aq %0, %2, %1"
>> +		: "=r" (busy), "+A" (lock->lock)
>> +		: "r" (tmp)
>> +		: "memory");
>> +
>> +	return !busy;
>> +}
>> +
>> +static inline void arch_spin_lock(arch_spinlock_t *lock)
>> +{
>> +	while (1) {
>> +		if (arch_spin_is_locked(lock))
>> +			continue;
>> +
>> +		if (arch_spin_trylock(lock))
>> +			break;
>> +	}
>> +}

Thanks for all the comments!

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-07 12:58         ` Peter Zijlstra
@ 2017-06-26 20:07             ` Palmer Dabbelt
  2017-06-07 16:35           ` Peter Zijlstra
  2017-06-26 20:07             ` Palmer Dabbelt
  2 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-26 20:07 UTC (permalink / raw)
  To: peterz
  Cc: linux-arch, linux-kernel, Arnd Bergmann, Olof Johansson, albert,
	patches, will.deacon

On Wed, 07 Jun 2017 05:58:50 PDT (-0700), peterz@infradead.org wrote:
> On Wed, Jun 07, 2017 at 02:36:27PM +0200, Peter Zijlstra wrote:
>> Which (pending the sub confusion) will generate the entire set of:
>>
>>  atomic_add, atomic_add_return{_relaxed,_acquire,_release,} atomic_fetch_add{_relaxed,_acquire,_release,}
>>  atomic_sub, atomic_sub_return{_relaxed,_acquire,_release,} atomic_fetch_sub{_relaxed,_acquire,_release,}
>>
>>  atomic_and, atomic_fetch_and{_relaxed,_acquire,_release,}
>>  atomic_or,  atomic_fetch_or{_relaxed,_acquire,_release,}
>>  atomic_xor, atomic_fetch_xor{_relaxed,_acquire,_release,}
>>
>
> Another approach would be to override __atomic_op_{acquire,release} and
> use things like:
>
> 	"FENCE r,rw" -- (load) ACQUIRE
> 	"FENCE rw,w" -- (store) RELEASE
>
> And then you only need to provide _relaxed atomics.
>
> Also, and I didn't check for that, you need to provide:
>
> smp_load_acquire(), smp_store_release(), atomic_read_acquire(),
> atomic_store_release().

OK, thanks for looking so deeply into this.  Sorry it was such a mess -- I
thought I'd included a note somewhere that this all needed to be redone; I just
wanted to get a v2 out first, as that split all the drivers out.  I've gone
ahead and completely rewritten atomic.h using your suggestions in a slightly
modified way.  It includes

 * _relaxed, _acquire, and _release versions of everything via a bunch of macros (rough sketch below).
 * What I believe to be correct aqrl bits on every op.
 * 64-bit and 32-bit atomics (as opposed to just copying everything)
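
To make the first bullet concrete: with only the _relaxed forms implemented
directly on AMOs, the acquire/release variants can come from overriding the
generic wrappers with explicit fences, roughly like this (a sketch of the
approach you described, not the literal code in the rewrite):

    #define __atomic_op_acquire(op, args...)                               \
    ({                                                                     \
            typeof(op##_relaxed(args)) __ret = op##_relaxed(args);         \
            __asm__ __volatile__ ("fence r, rw" : : : "memory");           \
            __ret;                                                         \
    })

    #define __atomic_op_release(op, args...)                               \
    ({                                                                     \
            __asm__ __volatile__ ("fence rw, w" : : : "memory");           \
            op##_relaxed(args);                                            \
    })

linux/atomic.h then generates the full set of variants from those.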

I didn't implement try_cmpxchg yet.  I'm going to go ahead and sort through our
memory barriers, look at the few remaining CR comments from our v2, and then
submit a v3 patch set.

I'm only replying to this message, but I believe I'll have taken into account
all your comments for the v3.

Thanks, again, for your time!

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-07 13:16           ` Will Deacon
@ 2017-06-26 20:07               ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-26 20:07 UTC (permalink / raw)
  To: will.deacon, dlustig
  Cc: peterz, linux-arch, linux-kernel, Arnd Bergmann, Olof Johansson,
	albert, patches

On Wed, 07 Jun 2017 06:16:11 PDT (-0700), will.deacon@arm.com wrote:
> [sorry, jumping in here because it's the only mail I have relating to
>  patch 13]
>
> On Wed, Jun 07, 2017 at 02:58:50PM +0200, Peter Zijlstra wrote:
>> On Wed, Jun 07, 2017 at 02:36:27PM +0200, Peter Zijlstra wrote:
>> > Which (pending the sub confusion) will generate the entire set of:
>> >
>> >  atomic_add, atomic_add_return{_relaxed,_acquire,_release,} atomic_fetch_add{_relaxed,_acquire,_release,}
>> >  atomic_sub, atomic_sub_return{_relaxed,_acquire,_release,} atomic_fetch_sub{_relaxed,_acquire,_release,}
>> >
>> >  atomic_and, atomic_fetch_and{_relaxed,_acquire,_release,}
>> >  atomic_or,  atomic_fetch_or{_relaxed,_acquire,_release,}
>> >  atomic_xor, atomic_fetch_xor{_relaxed,_acquire,_release,}
>> >
>>
>> Another approach would be to override __atomic_op_{acquire,release} and
>> use things like:
>>
>> 	"FENCE r,rw" -- (load) ACQUIRE
>> 	"FENCE rw,w" -- (store) RELEASE
>>
>> And then you only need to provide _relaxed atomics.
>>
>> Also, and I didn't check for that, you need to provide:
>>
>> smp_load_acquire(), smp_store_release(), atomic_read_acquire(),
>> atomic_store_release().
>
> Is there an up-to-date specification for the RISC-V memory model? I looked
> at:
>
> https://github.com/riscv/riscv-isa-manual/releases/download/riscv-user-2.2/riscv-spec-v2.2.pdf

That's the most up to date spec.

> but it says:
>
> | 2.7 Memory Model
> | This section is out of date as the RISC-V memory model is
> | currently under revision to ensure it can efficiently support current
> | programming language memory models. The revised base mem- ory model will
> | contain further ordering constraints, including at least that loads to the
> | same address from the same hart cannot be reordered, and that syntactic data
> | dependencies between instructions are respected
>
> which, on the one hand is reassuring (because ignoring dependency ordering is
> plain broken), but on the other it doesn't go quite far enough in defining
> exactly what constitutes a "syntactic data dependency". The cumulativity of
> your fences also needs defining, because I think this was up in the air at some
> point and the document above doesn't seem to tackle it (it doesn't seem to
> describe what constitutes being a memory of the predecessor or successor sets)

Unfortunately I'm not really a formal memory model guy, so I probably know a
lot less about what a "syntactic data dependency" is than you do.  I believe
the plan (like for most of RISC-V) is to avoid doing anything weird, to support
existing software systems, and to allow for implementation flexibility where
possible.

> Could you shed some light on this please? We've started relying on RW control
> dependencies in semi-recent history, so it's important to get this nailed down.

This has been a sticking point for a while.  The result of lacking a proper
memory model has been that implementations have been extremely conservative,
so there aren't any issues in practice -- but that itself is a bad thing.

> Thanks,
>
> Will
>
> P.S. You should also totally get your architects to write a formal model ;)

The RISC-V organization has a working group defining a formal memory model.
Here's the original posting about the working group

  https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/Oxm_IvfYItY

To the best of my understanding we hope to have a formal memory model defined
by the end of the year.  I've added Daniel Lustig, the chair of the working
group, to the thread.  He knows a lot more about this than I do.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* RE: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-26 20:07               ` Palmer Dabbelt
  (?)
@ 2017-06-27  0:07               ` Daniel Lustig
  2017-06-27  8:48                 ` Will Deacon
  -1 siblings, 1 reply; 283+ messages in thread
From: Daniel Lustig @ 2017-06-27  0:07 UTC (permalink / raw)
  To: Palmer Dabbelt, will.deacon
  Cc: peterz, linux-arch, linux-kernel, Arnd Bergmann, Olof Johansson,
	albert, patches

> > https://github.com/riscv/riscv-isa-manual/releases/download/riscv-user
> > -2.2/riscv-spec-v2.2.pdf
> 
> That's the most up to date spec.

Yes, that's the most up to date public spec.  Internally, the RISC-V memory
model task group has been working on fixing the memory model spec for the
past couple of months now.  We're aiming to release it for public review
well before the end of the year.  Hopefully in the coming weeks even.

> > which, on the one hand is reassuring (because ignoring dependency
> > ordering is plain broken), but on the other it doesn't go quite far
> > enough in defining exactly what constitutes a "syntactic data
> > dependency". The cumulativity of your fences also needs defining,
> > because I think this was up in the air at some point and the document
> > above doesn't seem to tackle it (it doesn't seem to describe what
> > constitutes being a memory of the predecessor or successor sets)

That will all be covered in the new spec.

> > P.S. You should also totally get your architects to write a formal
> > model ;)

Also in progress :)

Were there any more specific questions I can answer in the meantime?  Or
any specific concern you'd like to point me to?

Dan

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 10/17] irqchip: New RISC-V PLIC Driver
  2017-06-09 13:47       ` Will Deacon
@ 2017-06-27  1:09           ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-27  1:09 UTC (permalink / raw)
  To: will.deacon
  Cc: marc.zyngier, linux-arch, linux-kernel, Arnd Bergmann,
	Olof Johansson, albert, patches, peterz

On Fri, 09 Jun 2017 06:47:48 PDT (-0700), will.deacon@arm.com wrote:
> On Wed, Jun 07, 2017 at 11:52:10AM +0100, Marc Zyngier wrote:
>> On 07/06/17 00:00, Palmer Dabbelt wrote:
>> > +static void plic_disable(struct plic_data *data, int i, int hwirq)
>> > +{
>> > +	struct plic_enable_context *enable = plic_enable_context(data, i);
>> > +
>> > +	atomic_and(~(1 << (hwirq % 32)), &enable->mask[hwirq / 32]);
>>
>> This is still a device access, right? What does it mean to use the
>> atomic primitives on that? What are you racing against? I thought the
>> various context were private to an execution context...
>>
>> Adding Will and PeterZ to the CC list because they will probably have
>> their own views on this...
>
> atomic_* accesses to MMIO is almost certainly a bad idea. Is this atomic
> because you want to allow the function to run concurrently, or is it atomic
> because you want some guarantees from the endpoint's view?

Concurrency: while most operations on the PLIC are per-hart, in order to
disable an interrupt you have to go clear a bit for every hart.  The AMOs ensure
the read-modify-write cycle doesn't drop some other hart disabling a different
interrupt (as the bits are all in the same word).

I've done a big cleanup on this driver to avoid memory-mapped structures, use a
lock instead of AMOs, and use the FastEOI flow.  I'm getting close to getting
through all the code reviews for my v2 RISC-V patch set (which included this,
our arch support, and all our other driver patches).  I'm going to split out
the drivers for the next patch set, as it appears that's the better way to do
it.
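
Concretely, the enable-bit update in the cleaned-up version looks something
like this (a sketch only: plic_toggle() and the single global lock are
illustrative, and the offset math assumes the ENABLE_BASE/ENABLE_SIZE layout
from the original patch):

    #include <linux/io.h>
    #include <linux/spinlock.h>

    static DEFINE_RAW_SPINLOCK(plic_toggle_lock);

    /* Serialize read-modify-write of the shared 32-bit enable words with a
     * lock instead of AMOs on MMIO. */
    static void plic_toggle(void __iomem *enable_base, int hwirq, int enable)
    {
            u32 __iomem *reg = enable_base + (hwirq / 32) * sizeof(u32);
            u32 mask = 1U << (hwirq % 32);
            unsigned long flags;

            raw_spin_lock_irqsave(&plic_toggle_lock, flags);
            if (enable)
                    writel(readl(reg) | mask, reg);
            else
                    writel(readl(reg) & ~mask, reg);
            raw_spin_unlock_irqrestore(&plic_toggle_lock, flags);
    }

where enable_base would be data->reg + ENABLE_BASE + ENABLE_SIZE * context for
the context in question.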

Hopefully I can get the cleaned up patches out tonight.

Thanks!

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 13/17] RISC-V: Add include subdirectory
  2017-06-27  0:07               ` Daniel Lustig
@ 2017-06-27  8:48                 ` Will Deacon
  0 siblings, 0 replies; 283+ messages in thread
From: Will Deacon @ 2017-06-27  8:48 UTC (permalink / raw)
  To: Daniel Lustig
  Cc: Palmer Dabbelt, peterz, linux-arch, linux-kernel, Arnd Bergmann,
	Olof Johansson, albert, patches

Hi Dan,

On Tue, Jun 27, 2017 at 12:07:20AM +0000, Daniel Lustig wrote:
> > > https://github.com/riscv/riscv-isa-manual/releases/download/riscv-user
> > > -2.2/riscv-spec-v2.2.pdf
> > 
> > That's the most up to date spec.
> 
> Yes, that's the most up to date public spec.  Internally, the RISC-V memory
> model task group has been working on fixing the memory model spec for the
> past couple of months now.  We're aiming to release it for public review
> well before the end of the year.  Hopefully in the coming weeks even.

Excellent, cheers for the update.

> > > which, on the one hand is reassuring (because ignoring dependency
> > > ordering is plain broken), but on the other it doesn't go quite far
> > > enough in defining exactly what constitutes a "syntactic data
> > > dependency". The cumulativity of your fences also needs defining,
> > > because I think this was up in the air at some point and the document
> > > above doesn't seem to tackle it (it doesn't seem to describe what
> > > constitutes being a memory of the predecessor or successor sets)
> 
> That will all covered in the new spec.
> 
> > > P.S. You should also totally get your architects to write a formal
> > > model ;)
> 
> Also in progress :)

3/3 :)

> Were there any more specific questions I can answer in the meantime?  Or
> any specific concern you'd like to point me to?

Nothing specific, but we won't be able to review the
memory-ordering/atomics/locking parts of this patch series until we have
a spec.

Will

^ permalink raw reply	[flat|nested] 283+ messages in thread

* RISC-V Linux Port v3
  2017-06-06 22:59   ` Palmer Dabbelt
@ 2017-06-28 18:55     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert

Thanks to everyone who has participated in the review process so far.  We've
made a handful of changes since the v2 port, and at this point aside from a
handful of FIXMEs floating around the code I don't think there's anything left
I know about that's still missing for a minimal port.  I believe I've addressed
every review comment so far; if I've missed yours then I'm sorry, and I'd like
to request that you resend it.

Highlights of the changes since the v1 patch set include:

 * We've split out all our drivers into separate patch sets, which I've already
   sent out to the relevant maintainers.  I haven't included those patches in
   this patch set, but some of them are necessary to build our port.  A git
   tree that contains all our patch sets merged together lives at
   <https://github.com/riscv/riscv-linux/tree/riscv-for-submission-v3>.

 * The patch set is now split up differently: rather than being split per
   directory it is split per topic.  Hopefully this will make it easier to
   review the port on the mailing list.  The split is a bit rough, so you
   probably still want to look at the patch set as a whole.

 * atomic.h has been completely rewritten and is hopefully now correct.  I've
   attempted to sanitize the various other memory model related code as well,
   and I think it should all be sane now aside from a handful of FIXMEs
   commented in the code.

 * We've changed the cmpexchg syscall to always exist and to not be
   multiplexed.  There is also a VDSO entry for compare and exchange, which
   allows kernels with the A extension to execute user code without the A
   extension reasonably fast.

 * Our user-visible register state now contains enough space for the Q
   extension for 128-bit floating point, as well as a few words to allow
   extensibility to future ISA extensions like the eventual V extension for
   vectors.

 * A handful of driver cleanups, but these have been split into separate patch
   sets now so I won't duplicate them here.

The full list of patches is below

  [PATCH 1/9] RISC-V: Init and Halt Code
  [PATCH 2/9] RISC-V: Atomic and Locking Code
  [PATCH 3/9] RISC-V: Generic library routines and assembly
  [PATCH 4/9] RISC-V: ELF and module implementation
  [PATCH 5/9] RISC-V: Task implementation
  [PATCH 6/9] RISC-V: Device, timer, IRQs, and the SBI
  [PATCH 7/9] RISC-V: Paging and MMU
  [PATCH 8/9] RISC-V: User-facing API
  [PATCH 9/9] RISC-V: Build Infrastructure

In case one gets eaten by the mailing list, this is also available as a git
tree on our GitHub

  https://github.com/riscv/riscv-linux/tree/riscv-for-submission-arch-v3

This patch set contains just the arch code; various drivers are also
required to build and boot a RISC-V system.  A tree that contains this patch
set merged with all our other patch sets lives at

  https://github.com/riscv/riscv-linux/tree/riscv-for-submission-v3

  commit 319a127e0685ed294996e0e6b25b229f42ec1d6e
  Merge: a980edd4a4b7 e67734c51bc9
  Author: Palmer Dabbelt <palmer@dabbelt.com>
  Date:   Wed Jun 28 10:45:14 2017 -0700
  
      Merge branch 'riscv-for-submission-arch-v3' into riscv-for-submission-v3

If you're going to try to build or boot the kernel, I'd recommend using that.

Thanks to everyone who has helped review our port!

^ permalink raw reply	[flat|nested] 283+ messages in thread

* [PATCH 1/9] RISC-V: Init and Halt Code
  2017-06-28 18:55     ` Palmer Dabbelt
@ 2017-06-28 18:55       ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This contains the various __init C functions, the initial assembly
kernel entry point, and the code to reset the system.  When a file was
init-related, it contains

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/bug.h   |  88 +++++++++++++++
 arch/riscv/include/asm/cache.h |  22 ++++
 arch/riscv/include/asm/smp.h   |  41 +++++++
 arch/riscv/kernel/cacheinfo.c  | 105 ++++++++++++++++++
 arch/riscv/kernel/cpu.c        |  89 +++++++++++++++
 arch/riscv/kernel/head.S       | 147 +++++++++++++++++++++++++
 arch/riscv/kernel/irq.c        |  20 ++++
 arch/riscv/kernel/reset.c      |  36 +++++++
 arch/riscv/kernel/setup.c      | 240 +++++++++++++++++++++++++++++++++++++++++
 arch/riscv/kernel/smp.c        | 110 +++++++++++++++++++
 arch/riscv/kernel/smpboot.c    | 103 ++++++++++++++++++
 arch/riscv/kernel/time.c       |  61 +++++++++++
 arch/riscv/kernel/traps.c      | 183 +++++++++++++++++++++++++++++++
 arch/riscv/kernel/vdso.c       | 125 +++++++++++++++++++++
 arch/riscv/mm/init.c           |  70 ++++++++++++
 15 files changed, 1440 insertions(+)
 create mode 100644 arch/riscv/include/asm/bug.h
 create mode 100644 arch/riscv/include/asm/cache.h
 create mode 100644 arch/riscv/include/asm/smp.h
 create mode 100644 arch/riscv/kernel/cacheinfo.c
 create mode 100644 arch/riscv/kernel/cpu.c
 create mode 100644 arch/riscv/kernel/head.S
 create mode 100644 arch/riscv/kernel/irq.c
 create mode 100644 arch/riscv/kernel/reset.c
 create mode 100644 arch/riscv/kernel/setup.c
 create mode 100644 arch/riscv/kernel/smp.c
 create mode 100644 arch/riscv/kernel/smpboot.c
 create mode 100644 arch/riscv/kernel/time.c
 create mode 100644 arch/riscv/kernel/traps.c
 create mode 100644 arch/riscv/kernel/vdso.c
 create mode 100644 arch/riscv/mm/init.c

diff --git a/arch/riscv/include/asm/bug.h b/arch/riscv/include/asm/bug.h
new file mode 100644
index 000000000000..e2f690c20729
--- /dev/null
+++ b/arch/riscv/include/asm/bug.h
@@ -0,0 +1,88 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_BUG_H
+#define _ASM_RISCV_BUG_H
+
+#include <linux/compiler.h>
+#include <linux/const.h>
+#include <linux/types.h>
+
+#include <asm/asm.h>
+
+#ifdef CONFIG_GENERIC_BUG
+#define __BUG_INSN	_AC(0x00100073, UL) /* sbreak */
+
+#ifndef __ASSEMBLY__
+typedef u32 bug_insn_t;
+
+#ifdef CONFIG_GENERIC_BUG_RELATIVE_POINTERS
+#define __BUG_ENTRY_ADDR	INT " 1b - 2b"
+#define __BUG_ENTRY_FILE	INT " %0 - 2b"
+#else
+#define __BUG_ENTRY_ADDR	RISCV_PTR " 1b"
+#define __BUG_ENTRY_FILE	RISCV_PTR " %0"
+#endif
+
+#ifdef CONFIG_DEBUG_BUGVERBOSE
+#define __BUG_ENTRY			\
+	__BUG_ENTRY_ADDR "\n\t"		\
+	__BUG_ENTRY_FILE "\n\t"		\
+	SHORT " %1"
+#else
+#define __BUG_ENTRY			\
+	__BUG_ENTRY_ADDR
+#endif
+
+#define BUG()							\
+do {								\
+	__asm__ __volatile__ (					\
+		"1:\n\t"					\
+			"sbreak\n"				\
+			".pushsection __bug_table,\"a\"\n\t"	\
+		"2:\n\t"					\
+			__BUG_ENTRY "\n\t"			\
+			".org 2b + %2\n\t"			\
+			".popsection"				\
+		:						\
+		: "i" (__FILE__), "i" (__LINE__),		\
+		  "i" (sizeof(struct bug_entry)));		\
+	unreachable();						\
+} while (0)
+#endif /* !__ASSEMBLY__ */
+#else /* CONFIG_GENERIC_BUG */
+#ifndef __ASSEMBLY__
+#define BUG()							\
+do {								\
+	__asm__ __volatile__ ("sbreak\n");			\
+	unreachable();						\
+} while (0)
+#endif /* !__ASSEMBLY__ */
+#endif /* CONFIG_GENERIC_BUG */
+
+#define HAVE_ARCH_BUG
+
+#include <asm-generic/bug.h>
+
+#ifndef __ASSEMBLY__
+
+struct pt_regs;
+struct task_struct;
+
+extern void die(struct pt_regs *regs, const char *str);
+extern void do_trap(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, struct task_struct *tsk);
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_BUG_H */
diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h
new file mode 100644
index 000000000000..e8f0d1110d74
--- /dev/null
+++ b/arch/riscv/include/asm/cache.h
@@ -0,0 +1,22 @@
+/*
+ * Copyright (C) 2017 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CACHE_H
+#define _ASM_RISCV_CACHE_H
+
+#define L1_CACHE_SHIFT		6
+
+#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)
+
+#endif /* _ASM_RISCV_CACHE_H */
diff --git a/arch/riscv/include/asm/smp.h b/arch/riscv/include/asm/smp.h
new file mode 100644
index 000000000000..f61f8c25f95b
--- /dev/null
+++ b/arch/riscv/include/asm/smp.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SMP_H
+#define _ASM_RISCV_SMP_H
+
+#include <linux/cpumask.h>
+#include <linux/irqreturn.h>
+
+#ifdef CONFIG_SMP
+
+/* SMP initialization hook for setup_arch */
+void __init init_clockevent(void);
+
+/* SMP initialization hook for setup_arch */
+void __init setup_smp(void);
+
+/* Hook for the generic smp_call_function_many() routine. */
+void arch_send_call_function_ipi_mask(struct cpumask *mask);
+
+/* Hook for the generic smp_call_function_single() routine. */
+void arch_send_call_function_single_ipi(int cpu);
+
+#define raw_smp_processor_id() (current_thread_info()->cpu)
+
+/* Interprocessor interrupt handler */
+irqreturn_t handle_ipi(void);
+
+#endif /* CONFIG_SMP */
+
+#endif /* _ASM_RISCV_SMP_H */
diff --git a/arch/riscv/kernel/cacheinfo.c b/arch/riscv/kernel/cacheinfo.c
new file mode 100644
index 000000000000..10ed2749e246
--- /dev/null
+++ b/arch/riscv/kernel/cacheinfo.c
@@ -0,0 +1,105 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/cacheinfo.h>
+#include <linux/cpu.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+
+static void ci_leaf_init(struct cacheinfo *this_leaf,
+			 struct device_node *node,
+			 enum cache_type type, unsigned int level)
+{
+	this_leaf->of_node = node;
+	this_leaf->level = level;
+	this_leaf->type = type;
+	/* not a sector cache */
+	this_leaf->physical_line_partition = 1;
+	/* TODO: Add to DTS */
+	this_leaf->attributes =
+		CACHE_WRITE_BACK
+		| CACHE_READ_ALLOCATE
+		| CACHE_WRITE_ALLOCATE;
+}
+
+static int __init_cache_level(unsigned int cpu)
+{
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+	struct device_node *np = of_cpu_device_node_get(cpu);
+	int levels = 0, leaves = 0, level;
+
+	if (of_property_read_bool(np, "cache-size"))
+		++leaves;
+	if (of_property_read_bool(np, "i-cache-size"))
+		++leaves;
+	if (of_property_read_bool(np, "d-cache-size"))
+		++leaves;
+	if (leaves > 0)
+		levels = 1;
+
+	while ((np = of_find_next_cache_node(np))) {
+		if (!of_device_is_compatible(np, "cache"))
+			break;
+		if (of_property_read_u32(np, "cache-level", &level))
+			break;
+		if (level <= levels)
+			break;
+		if (of_property_read_bool(np, "cache-size"))
+			++leaves;
+		if (of_property_read_bool(np, "i-cache-size"))
+			++leaves;
+		if (of_property_read_bool(np, "d-cache-size"))
+			++leaves;
+		levels = level;
+	}
+
+	this_cpu_ci->num_levels = levels;
+	this_cpu_ci->num_leaves = leaves;
+	return 0;
+}
+
+static int __populate_cache_leaves(unsigned int cpu)
+{
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+	struct cacheinfo *this_leaf = this_cpu_ci->info_list;
+	struct device_node *np = of_cpu_device_node_get(cpu);
+	int levels = 1, level = 1;
+
+	if (of_property_read_bool(np, "cache-size"))
+		ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
+	if (of_property_read_bool(np, "i-cache-size"))
+		ci_leaf_init(this_leaf++, np, CACHE_TYPE_INST, level);
+	if (of_property_read_bool(np, "d-cache-size"))
+		ci_leaf_init(this_leaf++, np, CACHE_TYPE_DATA, level);
+
+	while ((np = of_find_next_cache_node(np))) {
+		if (!of_device_is_compatible(np, "cache"))
+			break;
+		if (of_property_read_u32(np, "cache-level", &level))
+			break;
+		if (level <= levels)
+			break;
+		if (of_property_read_bool(np, "cache-size"))
+			ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
+		if (of_property_read_bool(np, "i-cache-size"))
+			ci_leaf_init(this_leaf++, np, CACHE_TYPE_INST, level);
+		if (of_property_read_bool(np, "d-cache-size"))
+			ci_leaf_init(this_leaf++, np, CACHE_TYPE_DATA, level);
+		levels = level;
+	}
+
+	return 0;
+}
+
+DEFINE_SMP_CALL_CACHE_FUNCTION(init_cache_level)
+DEFINE_SMP_CALL_CACHE_FUNCTION(populate_cache_leaves)
diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
new file mode 100644
index 000000000000..20004bd7a216
--- /dev/null
+++ b/arch/riscv/kernel/cpu.c
@@ -0,0 +1,89 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/seq_file.h>
+#include <linux/of.h>
+
+/* Return -ENODEV if not a valid hart */
+int riscv_of_processor_hart(struct device_node *node)
+{
+	const char *isa, *status;
+	u32 hart;
+
+	if (!of_device_is_compatible(node, "riscv"))
+		return -(ENODEV);
+	if (of_property_read_u32(node, "reg", &hart)
+	    || hart >= NR_CPUS)
+		return -(ENODEV);
+	if (of_property_read_string(node, "status", &status)
+	    || strcmp(status, "okay"))
+		return -(ENODEV);
+	if (of_property_read_string(node, "riscv,isa", &isa)
+	    || isa[0] != 'r'
+	    || isa[1] != 'v')
+		return -(ENODEV);
+
+	return hart;
+}
+
+#ifdef CONFIG_PROC_FS
+
+static void *c_start(struct seq_file *m, loff_t *pos)
+{
+	*pos = cpumask_next(*pos - 1, cpu_online_mask);
+	if ((*pos) < nr_cpu_ids)
+		return (void *)(uintptr_t)(1 + *pos);
+	return NULL;
+}
+
+static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+{
+	(*pos)++;
+	return c_start(m, pos);
+}
+
+static void c_stop(struct seq_file *m, void *v)
+{
+}
+
+static int c_show(struct seq_file *m, void *v)
+{
+	unsigned long hart_id = (unsigned long)v - 1;
+	struct device_node *node = of_get_cpu_node(hart_id, NULL);
+	const char *compat, *isa, *mmu;
+
+	seq_printf(m, "hart\t: %lu\n", hart_id);
+	if (!of_property_read_string(node, "riscv,isa", &isa)
+	    && isa[0] == 'r'
+	    && isa[1] == 'v')
+		seq_printf(m, "isa\t: %s\n", isa);
+	if (!of_property_read_string(node, "mmu-type", &mmu)
+	    && !strncmp(mmu, "riscv,", 6))
+		seq_printf(m, "mmu\t: %s\n", mmu+6);
+	if (!of_property_read_string(node, "compatible", &compat)
+	    && strcmp(compat, "riscv"))
+		seq_printf(m, "uarch\t: %s\n", compat);
+	seq_puts(m, "\n");
+
+	return 0;
+}
+
+const struct seq_operations cpuinfo_op = {
+	.start	= c_start,
+	.next	= c_next,
+	.stop	= c_stop,
+	.show	= c_show
+};
+
+#endif /* CONFIG_PROC_FS */
diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
new file mode 100644
index 000000000000..608e57d4531f
--- /dev/null
+++ b/arch/riscv/kernel/head.S
@@ -0,0 +1,147 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+#include <asm/asm.h>
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <asm/thread_info.h>
+#include <asm/page.h>
+#include <asm/csr.h>
+
+__INIT
+ENTRY(_start)
+	/* Mask all interrupts */
+	csrw sie, zero
+
+	/* Disable FPU to detect illegal usage of
+	   floating point in kernel space */
+	li t0, SR_FS
+	csrc sstatus, t0
+
+#ifndef CONFIG_RV_PUM
+	/* Allow access to user memory */
+	li t0, SR_SUM
+	csrs sstatus, t0
+#endif
+
+#ifdef CONFIG_ISA_A
+	/* Pick one hart to run the main boot sequence */
+	la a3, hart_lottery
+	li a2, 1
+	amoadd.w a3, a2, (a3)
+	bnez a3, .Lsecondary_start
+#else
+	/* We don't have atomic support, so the boot hart must be picked
+	 * statically.  Hart 0 is the only sane choice.
+	 */
+	bnez a0, .Lsecondary_park
+#endif
+
+	/* Save hart ID and DTB physical address */
+	mv s0, a0
+	mv s1, a1
+
+	/* Initialize page tables and relocate to virtual addresses */
+	la sp, init_thread_union + THREAD_SIZE
+	call setup_vm
+	call relocate
+
+	/* Restore C environment */
+	la tp, init_task
+
+	la sp, init_thread_union
+	li a0, ASM_THREAD_SIZE
+	add sp, sp, a0
+
+	/* Start the kernel */
+	mv a0, s0
+	mv a1, s1
+	call sbi_save
+	tail start_kernel
+
+relocate:
+	/* Relocate return address */
+	li a1, PAGE_OFFSET
+	la a0, _start
+	sub a1, a1, a0
+	add ra, ra, a1
+
+	/* Point stvec to virtual address of instruction after sptbr write */
+	la a0, 1f
+	add a0, a0, a1
+	csrw stvec, a0
+
+	/* Compute sptbr for kernel page tables, but don't load it yet */
+	la a2, swapper_pg_dir
+	srl a2, a2, PAGE_SHIFT
+	li a1, SPTBR_MODE
+	or a2, a2, a1
+
+	/* Load trampoline page directory, which will cause us to trap to
+	   stvec if VA != PA, or simply fall through if VA == PA */
+	la a0, trampoline_pg_dir
+	srl a0, a0, PAGE_SHIFT
+	or a0, a0, a1
+	sfence.vma
+	csrw sptbr, a0
+1:
+	/* Set trap vector to spin forever to help debug */
+	la a0, .Lsecondary_park
+	csrw stvec, a0
+
+	/* Load the global pointer */
+	la gp, __global_pointer$
+
+	/* Switch to kernel page tables */
+	csrw sptbr, a2
+
+	ret
+
+.Lsecondary_start:
+#ifdef CONFIG_SMP
+	li a1, CONFIG_NR_CPUS
+	bgeu a0, a1, .Lsecondary_park
+
+	la a1, __cpu_up_stack_pointer
+	slli a0, a0, LGREG
+	add a0, a0, a1
+
+.Lwait_for_cpu_up:
+	REG_L sp, (a0)
+	beqz sp, .Lwait_for_cpu_up
+	fence
+
+	/* Enable virtual memory and relocate to virtual address */
+	call relocate
+
+	/* Initialize task_struct pointer */
+	li tp, -THREAD_SIZE
+	add tp, tp, sp
+
+	tail smp_callin
+#endif
+
+.Lsecondary_park:
+	/* We lack SMP support or have too many harts, so park this hart */
+	wfi
+	j .Lsecondary_park
+END(_start)
+
+__PAGE_ALIGNED_BSS
+	/* Empty zero page */
+	.balign PAGE_SIZE
+ENTRY(empty_zero_page)
+	.fill (empty_zero_page + PAGE_SIZE) - ., 1, 0x00
+END(empty_zero_page)
diff --git a/arch/riscv/kernel/irq.c b/arch/riscv/kernel/irq.c
new file mode 100644
index 000000000000..737d7cce2c6d
--- /dev/null
+++ b/arch/riscv/kernel/irq.c
@@ -0,0 +1,20 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/irqchip.h>
+
+void __init init_IRQ(void)
+{
+	irqchip_init();
+}
diff --git a/arch/riscv/kernel/reset.c b/arch/riscv/kernel/reset.c
new file mode 100644
index 000000000000..2a53d26ffdd6
--- /dev/null
+++ b/arch/riscv/kernel/reset.c
@@ -0,0 +1,36 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/reboot.h>
+#include <linux/export.h>
+#include <asm/sbi.h>
+
+void (*pm_power_off)(void) = machine_power_off;
+EXPORT_SYMBOL(pm_power_off);
+
+void machine_restart(char *cmd)
+{
+	do_kernel_restart(cmd);
+	while (1);
+}
+
+void machine_halt(void)
+{
+	machine_power_off();
+}
+
+void machine_power_off(void)
+{
+	sbi_shutdown();
+	while (1);
+}
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
new file mode 100644
index 000000000000..9ed70e84d74e
--- /dev/null
+++ b/arch/riscv/kernel/setup.c
@@ -0,0 +1,240 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/memblock.h>
+#include <linux/sched.h>
+#include <linux/initrd.h>
+#include <linux/console.h>
+#include <linux/screen_info.h>
+#include <linux/of_fdt.h>
+#include <linux/of_platform.h>
+#include <linux/sched/task.h>
+
+#include <asm/setup.h>
+#include <asm/sections.h>
+#include <asm/pgtable.h>
+#include <asm/smp.h>
+#include <asm/sbi.h>
+#include <asm/tlbflush.h>
+#include <asm/thread_info.h>
+
+#ifdef CONFIG_DUMMY_CONSOLE
+struct screen_info screen_info = {
+	.orig_video_lines	= 30,
+	.orig_video_cols	= 80,
+	.orig_video_mode	= 0,
+	.orig_video_ega_bx	= 0,
+	.orig_video_isVGA	= 1,
+	.orig_video_points	= 8
+};
+#endif
+
+#ifdef CONFIG_CMDLINE_BOOL
+static char __initdata builtin_cmdline[COMMAND_LINE_SIZE] = CONFIG_CMDLINE;
+#endif /* CONFIG_CMDLINE_BOOL */
+
+unsigned long va_pa_offset;
+unsigned long pfn_base;
+
+/* The lucky hart to first increment this variable will boot the other cores */
+atomic_t hart_lottery;
+
+#ifdef CONFIG_BLK_DEV_INITRD
+static void __init setup_initrd(void)
+{
+	extern char __initramfs_start[];
+	extern unsigned long __initramfs_size;
+	unsigned long size;
+
+	if (__initramfs_size > 0) {
+		initrd_start = (unsigned long)(&__initramfs_start);
+		initrd_end = initrd_start + __initramfs_size;
+	}
+
+	if (initrd_start >= initrd_end) {
+		printk(KERN_INFO "initrd not found or empty");
+		goto disable;
+	}
+	if (__pa(initrd_end) > PFN_PHYS(max_low_pfn)) {
+		printk(KERN_ERR "initrd extends beyond end of memory");
+		goto disable;
+	}
+
+	size =  initrd_end - initrd_start;
+	memblock_reserve(__pa(initrd_start), size);
+	initrd_below_start_ok = 1;
+
+	printk(KERN_INFO "Initial ramdisk at: 0x%p (%lu bytes)\n",
+		(void *)(initrd_start), size);
+	return;
+disable:
+	printk(KERN_CONT " - disabling initrd\n");
+	initrd_start = 0;
+	initrd_end = 0;
+}
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+pgd_t trampoline_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
+
+#ifndef __PAGETABLE_PMD_FOLDED
+#define NUM_SWAPPER_PMDS ((uintptr_t)-PAGE_OFFSET >> PGDIR_SHIFT)
+pmd_t swapper_pmd[PTRS_PER_PMD*((-PAGE_OFFSET)/PGDIR_SIZE)] __page_aligned_bss;
+pmd_t trampoline_pmd[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
+#endif
+
+asmlinkage void __init setup_vm(void)
+{
+	extern char _start;
+	uintptr_t i;
+	uintptr_t pa = (uintptr_t) &_start;
+	pgprot_t prot = __pgprot(pgprot_val(PAGE_KERNEL) | _PAGE_EXEC);
+
+	va_pa_offset = PAGE_OFFSET - pa;
+	pfn_base = PFN_DOWN(pa);
+
+	/* Sanity check alignment and size */
+	BUG_ON((PAGE_OFFSET % PGDIR_SIZE) != 0);
+	BUG_ON((pa % (PAGE_SIZE * PTRS_PER_PTE)) != 0);
+
+#ifndef __PAGETABLE_PMD_FOLDED
+	trampoline_pg_dir[(PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD] =
+		pfn_pgd(PFN_DOWN((uintptr_t)trampoline_pmd),
+			__pgprot(_PAGE_TABLE));
+	trampoline_pmd[0] = pfn_pmd(PFN_DOWN(pa), prot);
+
+	for (i = 0; i < (-PAGE_OFFSET)/PGDIR_SIZE; ++i) {
+		size_t o = (PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD + i;
+		swapper_pg_dir[o] =
+			pfn_pgd(PFN_DOWN((uintptr_t)swapper_pmd) + i,
+				__pgprot(_PAGE_TABLE));
+	}
+	for (i = 0; i < ARRAY_SIZE(swapper_pmd); i++)
+		swapper_pmd[i] = pfn_pmd(PFN_DOWN(pa + i * PMD_SIZE), prot);
+#else
+	trampoline_pg_dir[(PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD] =
+		pfn_pgd(PFN_DOWN(pa), prot);
+
+	for (i = 0; i < (-PAGE_OFFSET)/PGDIR_SIZE; ++i) {
+		size_t o = (PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD + i;
+		swapper_pg_dir[o] =
+			pfn_pgd(PFN_DOWN(pa + i * PGDIR_SIZE), prot);
+	}
+#endif
+}
+
+void __init sbi_save(unsigned int hartid, void *dtb)
+{
+	early_init_dt_scan(__va(dtb));
+}
+
+/* Allow the user to manually add a memory region (in case DTS is broken); "mem_end=nn[KkMmGg]" */
+static int __init mem_end_override(char *p)
+{
+	resource_size_t base, end;
+
+	if (!p)
+		return -EINVAL;
+	base = (uintptr_t) __pa(PAGE_OFFSET);
+	end = memparse(p, &p) & PMD_MASK;
+	if (end == 0)
+		return -EINVAL;
+	memblock_add(base, end - base);
+	return 0;
+}
+early_param("mem_end", mem_end_override);
+
+static void __init setup_bootmem(void)
+{
+	struct memblock_region *reg;
+	phys_addr_t mem_size = 0;
+
+	/* Find the memory region containing the kernel */
+	for_each_memblock(memory, reg) {
+		phys_addr_t vmlinux_end = __pa(_end);
+		phys_addr_t end = reg->base + reg->size;
+
+		if (reg->base <= vmlinux_end && vmlinux_end <= end) {
+			/* Reserve from the start of the region to the end of
+			 * the kernel
+			 */
+			memblock_reserve(reg->base, vmlinux_end - reg->base);
+			mem_size = min(reg->size, (phys_addr_t)-PAGE_OFFSET);
+		}
+	}
+	BUG_ON(mem_size == 0);
+
+	set_max_mapnr(PFN_DOWN(mem_size));
+	max_low_pfn = pfn_base + PFN_DOWN(mem_size);
+
+#ifdef CONFIG_BLK_DEV_INITRD
+	setup_initrd();
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+	early_init_fdt_reserve_self();
+	early_init_fdt_scan_reserved_mem();
+	memblock_allow_resize();
+	memblock_dump_all();
+}
+
+void __init setup_arch(char **cmdline_p)
+{
+#ifdef CONFIG_CMDLINE_BOOL
+#ifdef CONFIG_CMDLINE_OVERRIDE
+	strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+#else
+	if (builtin_cmdline[0] != '\0') {
+		/* Append bootloader command line to built-in */
+		strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
+		strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+		strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+	}
+#endif /* CONFIG_CMDLINE_OVERRIDE */
+#endif /* CONFIG_CMDLINE_BOOL */
+	*cmdline_p = boot_command_line;
+
+	parse_early_param();
+
+	init_mm.start_code = (unsigned long) _stext;
+	init_mm.end_code   = (unsigned long) _etext;
+	init_mm.end_data   = (unsigned long) _edata;
+	init_mm.brk        = (unsigned long) _end;
+
+	setup_bootmem();
+	paging_init();
+	unflatten_device_tree();
+
+#ifdef CONFIG_SMP
+	setup_smp();
+#endif
+
+#ifdef CONFIG_DUMMY_CONSOLE
+	conswitchp = &dummy_con;
+#endif
+}
+
+static int __init riscv_device_init(void)
+{
+	return of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
+}
+subsys_initcall_sync(riscv_device_init);
diff --git a/arch/riscv/kernel/smp.c b/arch/riscv/kernel/smp.c
new file mode 100644
index 000000000000..b4a71ec5906f
--- /dev/null
+++ b/arch/riscv/kernel/smp.c
@@ -0,0 +1,110 @@
+/*
+ * SMP initialisation and IPI support
+ * Based on arch/arm64/kernel/smp.c
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/interrupt.h>
+#include <linux/smp.h>
+#include <linux/sched.h>
+
+#include <asm/sbi.h>
+#include <asm/tlbflush.h>
+#include <asm/cacheflush.h>
+
+/* A collection of single bit ipi messages.  */
+static struct {
+	unsigned long bits ____cacheline_aligned;
+} ipi_data[NR_CPUS] __cacheline_aligned;
+
+enum ipi_message_type {
+	IPI_RESCHEDULE,
+	IPI_CALL_FUNC,
+	IPI_MAX
+};
+
+irqreturn_t handle_ipi(void)
+{
+	unsigned long *pending_ipis = &ipi_data[smp_processor_id()].bits;
+
+	/* Clear pending IPI */
+	csr_clear(sip, SIE_SSIE);
+
+	while (true) {
+		unsigned long ops;
+
+		/* Order bit clearing and data access. */
+		mb();
+
+		ops = xchg(pending_ipis, 0);
+		if (ops == 0)
+			return IRQ_HANDLED;
+
+		if (ops & (1 << IPI_RESCHEDULE))
+			scheduler_ipi();
+
+		if (ops & (1 << IPI_CALL_FUNC))
+			generic_smp_call_function_interrupt();
+
+		BUG_ON((ops >> IPI_MAX) != 0);
+
+		/* Order data access and bit testing. */
+		mb();
+	}
+
+	return IRQ_HANDLED;
+}
+
+static void
+send_ipi_message(const struct cpumask *to_whom, enum ipi_message_type operation)
+{
+	int i;
+
+	mb();
+	for_each_cpu(i, to_whom)
+		set_bit(operation, &ipi_data[i].bits);
+
+	mb();
+	sbi_send_ipi(cpumask_bits(to_whom));
+}
+
+void arch_send_call_function_ipi_mask(struct cpumask *mask)
+{
+	send_ipi_message(mask, IPI_CALL_FUNC);
+}
+
+void arch_send_call_function_single_ipi(int cpu)
+{
+	send_ipi_message(cpumask_of(cpu), IPI_CALL_FUNC);
+}
+
+static void ipi_stop(void *unused)
+{
+	while (1)
+		wait_for_interrupt();
+}
+
+void smp_send_stop(void)
+{
+	on_each_cpu(ipi_stop, NULL, 1);
+}
+
+void smp_send_reschedule(int cpu)
+{
+	send_ipi_message(cpumask_of(cpu), IPI_RESCHEDULE);
+}
diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
new file mode 100644
index 000000000000..7043bbbfbc1e
--- /dev/null
+++ b/arch/riscv/kernel/smpboot.c
@@ -0,0 +1,103 @@
+/*
+ * SMP initialisation and IPI support
+ * Based on arch/arm64/kernel/smp.c
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/kernel_stat.h>
+#include <linux/notifier.h>
+#include <linux/cpu.h>
+#include <linux/percpu.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/irq.h>
+#include <linux/of.h>
+#include <linux/sched/task_stack.h>
+#include <asm/mmu_context.h>
+#include <asm/tlbflush.h>
+#include <asm/sections.h>
+#include <asm/sbi.h>
+
+void *__cpu_up_stack_pointer[NR_CPUS];
+
+void __init smp_prepare_boot_cpu(void)
+{
+}
+
+void __init smp_prepare_cpus(unsigned int max_cpus)
+{
+}
+
+void __init setup_smp(void)
+{
+	struct device_node *dn = NULL;
+	int hart, im_okay_therefore_i_am = 0;
+
+	while ((dn = of_find_node_by_type(dn, "cpu"))) {
+		hart = riscv_of_processor_hart(dn);
+		if (hart >= 0) {
+			set_cpu_possible(hart, true);
+			set_cpu_present(hart, true);
+			if (hart == smp_processor_id()) {
+				BUG_ON(im_okay_therefore_i_am);
+				im_okay_therefore_i_am = 1;
+			}
+		}
+	}
+
+	BUG_ON(!im_okay_therefore_i_am);
+}
+
+int __cpu_up(unsigned int cpu, struct task_struct *tidle)
+{
+	/* Signal cpu to start */
+	mb();
+	__cpu_up_stack_pointer[cpu] = task_stack_page(tidle) + THREAD_SIZE;
+
+	while (!cpu_online(cpu))
+		cpu_relax();
+
+	return 0;
+}
+
+void __init smp_cpus_done(unsigned int max_cpus)
+{
+}
+
+/*
+ * C entry point for a secondary processor.
+ */
+asmlinkage void __init smp_callin(void)
+{
+	struct mm_struct *mm = &init_mm;
+
+	/* All kernel threads share the same mm context.  */
+	atomic_inc(&mm->mm_count);
+	current->active_mm = mm;
+
+	trap_init();
+	init_clockevent();
+	notify_cpu_starting(smp_processor_id());
+	set_cpu_online(smp_processor_id(), 1);
+	local_flush_tlb_all();
+	local_irq_enable();
+	preempt_disable();
+	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+}
diff --git a/arch/riscv/kernel/time.c b/arch/riscv/kernel/time.c
new file mode 100644
index 000000000000..6bba2e6f2a84
--- /dev/null
+++ b/arch/riscv/kernel/time.c
@@ -0,0 +1,61 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/clocksource.h>
+#include <linux/clockchips.h>
+#include <linux/delay.h>
+#include <linux/timer_riscv.h>
+
+#include <asm/sbi.h>
+
+unsigned long riscv_timebase;
+
+static int next_event(unsigned long delta, struct clock_event_device *ce)
+{
+	BUG_ON(ce != timer_riscv_device(smp_processor_id()));
+	sbi_set_timer(get_cycles64() + delta);
+	return 0;
+}
+
+static unsigned long long rdtime(struct clocksource *cs)
+{
+	BUG_ON(cs != timer_riscv_source(smp_processor_id()));
+	return get_cycles64();
+}
+
+void riscv_timer_interrupt(void)
+{
+	int cpu = smp_processor_id();
+	struct clock_event_device *evdev = timer_riscv_device(cpu);
+
+	evdev->event_handler(evdev);
+}
+
+void __init time_init(void)
+{
+	struct device_node *cpu;
+	u32 prop;
+	int cpu_id = smp_processor_id();
+
+	cpu = of_find_node_by_path("/cpus");
+	if (!cpu || of_property_read_u32(cpu, "timebase-frequency", &prop))
+		panic("RISC-V system with no 'timebase-frequency' in DTS\n");
+	riscv_timebase = prop;
+
+	lpj_fine = riscv_timebase / HZ;
+
+	timer_riscv_init(cpu_id, riscv_timebase, &rdtime, &next_event);
+
+	csr_set(sie, SIE_STIE);
+}
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
new file mode 100644
index 000000000000..3a4698199ecf
--- /dev/null
+++ b/arch/riscv/kernel/traps.c
@@ -0,0 +1,183 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
+#include <linux/sched/signal.h>
+#include <linux/signal.h>
+#include <linux/kdebug.h>
+#include <linux/uaccess.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/irq.h>
+
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/csr.h>
+
+int show_unhandled_signals = 1;
+
+extern asmlinkage void handle_exception(void);
+
+static DEFINE_SPINLOCK(die_lock);
+
+void die(struct pt_regs *regs, const char *str)
+{
+	static int die_counter;
+	int ret;
+
+	oops_enter();
+
+	spin_lock_irq(&die_lock);
+	console_verbose();
+	bust_spinlocks(1);
+
+	pr_emerg("%s [#%d]\n", str, ++die_counter);
+	print_modules();
+	show_regs(regs);
+
+	ret = notify_die(DIE_OOPS, str, regs, 0, regs->scause, SIGSEGV);
+
+	bust_spinlocks(0);
+	add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
+	spin_unlock_irq(&die_lock);
+	oops_exit();
+
+	if (in_interrupt())
+		panic("Fatal exception in interrupt");
+	if (panic_on_oops)
+		panic("Fatal exception");
+	if (ret != NOTIFY_STOP)
+		do_exit(SIGSEGV);
+}
+
+static inline void do_trap_siginfo(int signo, int code,
+	unsigned long addr, struct task_struct *tsk)
+{
+	siginfo_t info;
+
+	info.si_signo = signo;
+	info.si_errno = 0;
+	info.si_code = code;
+	info.si_addr = (void __user *)addr;
+	force_sig_info(signo, &info, tsk);
+}
+
+void do_trap(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, struct task_struct *tsk)
+{
+	if (show_unhandled_signals && unhandled_signal(tsk, signo)
+	    && printk_ratelimit()) {
+		pr_info("%s[%d]: unhandled signal %d code 0x%x at 0x" REG_FMT,
+			tsk->comm, task_pid_nr(tsk), signo, code, addr);
+		print_vma_addr(KERN_CONT " in ", GET_IP(regs));
+		pr_cont("\n");
+		show_regs(regs);
+	}
+
+	do_trap_siginfo(signo, code, addr, tsk);
+}
+
+static void do_trap_error(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, const char *str)
+{
+	if (user_mode(regs)) {
+		do_trap(regs, signo, code, addr, current);
+	} else {
+		if (!fixup_exception(regs))
+			die(regs, str);
+	}
+}
+
+#define DO_ERROR_INFO(name, signo, code, str)				\
+asmlinkage void name(struct pt_regs *regs)				\
+{									\
+	do_trap_error(regs, signo, code, regs->sepc, "Oops - " str);	\
+}
+
+DO_ERROR_INFO(do_trap_unknown,
+	SIGILL, ILL_ILLTRP, "unknown exception");
+DO_ERROR_INFO(do_trap_insn_misaligned,
+	SIGBUS, BUS_ADRALN, "instruction address misaligned");
+DO_ERROR_INFO(do_trap_insn_fault,
+	SIGBUS, BUS_ADRALN, "instruction access fault");
+DO_ERROR_INFO(do_trap_insn_illegal,
+	SIGILL, ILL_ILLOPC, "illegal instruction");
+DO_ERROR_INFO(do_trap_load_misaligned,
+	SIGBUS, BUS_ADRALN, "load address misaligned");
+DO_ERROR_INFO(do_trap_load_fault,
+	SIGSEGV, SEGV_ACCERR, "load access fault");
+DO_ERROR_INFO(do_trap_store_misaligned,
+	SIGBUS, BUS_ADRALN, "store (or AMO) address misaligned");
+DO_ERROR_INFO(do_trap_store_fault,
+	SIGSEGV, SEGV_ACCERR, "store (or AMO) access fault");
+DO_ERROR_INFO(do_trap_ecall_u,
+	SIGILL, ILL_ILLTRP, "environment call from U-mode");
+DO_ERROR_INFO(do_trap_ecall_s,
+	SIGILL, ILL_ILLTRP, "environment call from S-mode");
+DO_ERROR_INFO(do_trap_ecall_m,
+	SIGILL, ILL_ILLTRP, "environment call from M-mode");
+
+asmlinkage void do_trap_break(struct pt_regs *regs)
+{
+#ifdef CONFIG_GENERIC_BUG
+	if (!user_mode(regs)) {
+		enum bug_trap_type type;
+
+		type = report_bug(regs->sepc, regs);
+		switch (type) {
+		case BUG_TRAP_TYPE_NONE:
+			break;
+		case BUG_TRAP_TYPE_WARN:
+			regs->sepc += sizeof(bug_insn_t);
+			return;
+		case BUG_TRAP_TYPE_BUG:
+			die(regs, "Kernel BUG");
+		}
+	}
+#endif /* CONFIG_GENERIC_BUG */
+
+	do_trap_siginfo(SIGTRAP, TRAP_BRKPT, regs->sepc, current);
+	regs->sepc += 0x4;
+}
+
+#ifdef CONFIG_GENERIC_BUG
+int is_valid_bugaddr(unsigned long pc)
+{
+	bug_insn_t insn;
+
+	if (pc < PAGE_OFFSET)
+		return 0;
+	if (probe_kernel_address((bug_insn_t __user *)pc, insn))
+		return 0;
+	return (insn == __BUG_INSN);
+}
+#endif /* CONFIG_GENERIC_BUG */
+
+void __init trap_init(void)
+{
+	int hart = smp_processor_id();
+
+	/* Set sup0 scratch register to 0, indicating to exception vector
+	 * that we are presently executing in the kernel
+	 */
+	csr_write(sscratch, 0);
+	/* Set the exception vector address */
+	csr_write(stvec, &handle_exception);
+	/* Enable software interrupts and setup initial mask */
+	csr_write(sie,
+		  SIE_SSIE | atomic_long_read(&per_cpu(riscv_early_sie, hart))
+		);
+}
diff --git a/arch/riscv/kernel/vdso.c b/arch/riscv/kernel/vdso.c
new file mode 100644
index 000000000000..e8a178df8144
--- /dev/null
+++ b/arch/riscv/kernel/vdso.c
@@ -0,0 +1,125 @@
+/*
+ * Copyright (C) 2004 Benjamin Herrenschmidt, IBM Corp.
+ *                    <benh@kernel.crashing.org>
+ * Copyright (C) 2012 ARM Limited
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/binfmts.h>
+#include <linux/err.h>
+
+#include <asm/vdso.h>
+
+extern char vdso_start[], vdso_end[];
+
+static unsigned int vdso_pages;
+static struct page **vdso_pagelist;
+
+/*
+ * The vDSO data page.
+ */
+static union {
+	struct vdso_data	data;
+	u8			page[PAGE_SIZE];
+} vdso_data_store __page_aligned_data;
+struct vdso_data *vdso_data = &vdso_data_store.data;
+
+static int __init vdso_init(void)
+{
+	unsigned int i;
+
+	vdso_pages = (vdso_end - vdso_start) >> PAGE_SHIFT;
+	vdso_pagelist =
+		kcalloc(vdso_pages + 1, sizeof(struct page *), GFP_KERNEL);
+	if (unlikely(vdso_pagelist == NULL)) {
+		pr_err("vdso: pagelist allocation failed\n");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < vdso_pages; i++) {
+		struct page *pg;
+
+		pg = virt_to_page(vdso_start + (i << PAGE_SHIFT));
+		ClearPageReserved(pg);
+		vdso_pagelist[i] = pg;
+	}
+	vdso_pagelist[i] = virt_to_page(vdso_data);
+
+	return 0;
+}
+arch_initcall(vdso_init);
+
+int arch_setup_additional_pages(struct linux_binprm *bprm,
+	int uses_interp)
+{
+	struct mm_struct *mm = current->mm;
+	unsigned long vdso_base, vdso_len;
+	int ret;
+
+	vdso_len = (vdso_pages + 1) << PAGE_SHIFT;
+
+	down_write(&mm->mmap_sem);
+	vdso_base = get_unmapped_area(NULL, 0, vdso_len, 0, 0);
+	if (unlikely(IS_ERR_VALUE(vdso_base))) {
+		ret = vdso_base;
+		goto end;
+	}
+
+	/*
+	 * Put vDSO base into mm struct. We need to do this before calling
+	 * install_special_mapping or the perf counter mmap tracking code
+	 * will fail to recognise it as a vDSO (since arch_vma_name fails).
+	 */
+	mm->context.vdso = (void *)vdso_base;
+
+	ret = install_special_mapping(mm, vdso_base, vdso_len,
+		(VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC),
+		vdso_pagelist);
+
+	if (unlikely(ret))
+		mm->context.vdso = NULL;
+
+end:
+	up_write(&mm->mmap_sem);
+	return ret;
+}
+
+const char *arch_vma_name(struct vm_area_struct *vma)
+{
+	if (vma->vm_mm && (vma->vm_start == (long)vma->vm_mm->context.vdso))
+		return "[vdso]";
+	return NULL;
+}
+
+/*
+ * Function stubs to prevent linker errors when AT_SYSINFO_EHDR is defined
+ */
+
+int in_gate_area_no_mm(unsigned long addr)
+{
+	return 0;
+}
+
+int in_gate_area(struct mm_struct *mm, unsigned long addr)
+{
+	return 0;
+}
+
+struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
+{
+	return NULL;
+}
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
new file mode 100644
index 000000000000..9f4bee5e51fd
--- /dev/null
+++ b/arch/riscv/mm/init.c
@@ -0,0 +1,70 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/bootmem.h>
+#include <linux/initrd.h>
+#include <linux/memblock.h>
+#include <linux/swap.h>
+
+#include <asm/tlbflush.h>
+#include <asm/sections.h>
+#include <asm/pgtable.h>
+#include <asm/io.h>
+
+static void __init zone_sizes_init(void)
+{
+	unsigned long zones_size[MAX_NR_ZONES];
+
+	memset(zones_size, 0, sizeof(zones_size));
+	zones_size[ZONE_NORMAL] = max_mapnr;
+	free_area_init_node(0, zones_size, pfn_base, NULL);
+}
+
+void setup_zero_page(void)
+{
+	memset((void *)empty_zero_page, 0, PAGE_SIZE);
+}
+
+void __init paging_init(void)
+{
+	init_mm.pgd = (pgd_t *)pfn_to_virt(csr_read(sptbr));
+
+	setup_zero_page();
+	local_flush_tlb_all();
+	zone_sizes_init();
+}
+
+void __init mem_init(void)
+{
+#ifdef CONFIG_FLATMEM
+	BUG_ON(!mem_map);
+#endif /* CONFIG_FLATMEM */
+
+	high_memory = (void *)(__va(PFN_PHYS(max_low_pfn)));
+	free_all_bootmem();
+
+	mem_init_print_info(NULL);
+}
+
+void free_initmem(void)
+{
+	free_initmem_default(0);
+}
+
+#ifdef CONFIG_BLK_DEV_INITRD
+void free_initrd_mem(unsigned long start, unsigned long end)
+{
+}
+#endif /* CONFIG_BLK_DEV_INITRD */
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 1/9] RISC-V: Init and Halt Code
  2017-06-28 18:55     ` Palmer Dabbelt
  (?)
  (?)
@ 2017-06-28 18:55     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This contains the various __init C functions, the initial assembly
kernel entry point, and the code to reset the system.  When a file was
init-related, it contains

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/bug.h   |  88 +++++++++++++++
 arch/riscv/include/asm/cache.h |  22 ++++
 arch/riscv/include/asm/smp.h   |  41 +++++++
 arch/riscv/kernel/cacheinfo.c  | 105 ++++++++++++++++++
 arch/riscv/kernel/cpu.c        |  89 +++++++++++++++
 arch/riscv/kernel/head.S       | 147 +++++++++++++++++++++++++
 arch/riscv/kernel/irq.c        |  20 ++++
 arch/riscv/kernel/reset.c      |  36 +++++++
 arch/riscv/kernel/setup.c      | 240 +++++++++++++++++++++++++++++++++++++++++
 arch/riscv/kernel/smp.c        | 110 +++++++++++++++++++
 arch/riscv/kernel/smpboot.c    | 103 ++++++++++++++++++
 arch/riscv/kernel/time.c       |  61 +++++++++++
 arch/riscv/kernel/traps.c      | 183 +++++++++++++++++++++++++++++++
 arch/riscv/kernel/vdso.c       | 125 +++++++++++++++++++++
 arch/riscv/mm/init.c           |  70 ++++++++++++
 15 files changed, 1440 insertions(+)
 create mode 100644 arch/riscv/include/asm/bug.h
 create mode 100644 arch/riscv/include/asm/cache.h
 create mode 100644 arch/riscv/include/asm/smp.h
 create mode 100644 arch/riscv/kernel/cacheinfo.c
 create mode 100644 arch/riscv/kernel/cpu.c
 create mode 100644 arch/riscv/kernel/head.S
 create mode 100644 arch/riscv/kernel/irq.c
 create mode 100644 arch/riscv/kernel/reset.c
 create mode 100644 arch/riscv/kernel/setup.c
 create mode 100644 arch/riscv/kernel/smp.c
 create mode 100644 arch/riscv/kernel/smpboot.c
 create mode 100644 arch/riscv/kernel/time.c
 create mode 100644 arch/riscv/kernel/traps.c
 create mode 100644 arch/riscv/kernel/vdso.c
 create mode 100644 arch/riscv/mm/init.c

diff --git a/arch/riscv/include/asm/bug.h b/arch/riscv/include/asm/bug.h
new file mode 100644
index 000000000000..e2f690c20729
--- /dev/null
+++ b/arch/riscv/include/asm/bug.h
@@ -0,0 +1,88 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_BUG_H
+#define _ASM_RISCV_BUG_H
+
+#include <linux/compiler.h>
+#include <linux/const.h>
+#include <linux/types.h>
+
+#include <asm/asm.h>
+
+#ifdef CONFIG_GENERIC_BUG
+#define __BUG_INSN	_AC(0x00100073, UL) /* sbreak */
+
+#ifndef __ASSEMBLY__
+typedef u32 bug_insn_t;
+
+#ifdef CONFIG_GENERIC_BUG_RELATIVE_POINTERS
+#define __BUG_ENTRY_ADDR	INT " 1b - 2b"
+#define __BUG_ENTRY_FILE	INT " %0 - 2b"
+#else
+#define __BUG_ENTRY_ADDR	RISCV_PTR " 1b"
+#define __BUG_ENTRY_FILE	RISCV_PTR " %0"
+#endif
+
+#ifdef CONFIG_DEBUG_BUGVERBOSE
+#define __BUG_ENTRY			\
+	__BUG_ENTRY_ADDR "\n\t"		\
+	__BUG_ENTRY_FILE "\n\t"		\
+	SHORT " %1"
+#else
+#define __BUG_ENTRY			\
+	__BUG_ENTRY_ADDR
+#endif
+
+#define BUG()							\
+do {								\
+	__asm__ __volatile__ (					\
+		"1:\n\t"					\
+			"sbreak\n"				\
+			".pushsection __bug_table,\"a\"\n\t"	\
+		"2:\n\t"					\
+			__BUG_ENTRY "\n\t"			\
+			".org 2b + %2\n\t"			\
+			".popsection"				\
+		:						\
+		: "i" (__FILE__), "i" (__LINE__),		\
+		  "i" (sizeof(struct bug_entry)));		\
+	unreachable();						\
+} while (0)
+#endif /* !__ASSEMBLY__ */
+#else /* CONFIG_GENERIC_BUG */
+#ifndef __ASSEMBLY__
+#define BUG()							\
+do {								\
+	__asm__ __volatile__ ("sbreak\n");			\
+	unreachable();						\
+} while (0)
+#endif /* !__ASSEMBLY__ */
+#endif /* CONFIG_GENERIC_BUG */
+
+#define HAVE_ARCH_BUG
+
+#include <asm-generic/bug.h>
+
+#ifndef __ASSEMBLY__
+
+struct pt_regs;
+struct task_struct;
+
+extern void die(struct pt_regs *regs, const char *str);
+extern void do_trap(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, struct task_struct *tsk);
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_BUG_H */
diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h
new file mode 100644
index 000000000000..e8f0d1110d74
--- /dev/null
+++ b/arch/riscv/include/asm/cache.h
@@ -0,0 +1,22 @@
+/*
+ * Copyright (C) 2017 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CACHE_H
+#define _ASM_RISCV_CACHE_H
+
+#define L1_CACHE_SHIFT		6
+
+#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)
+
+#endif /* _ASM_RISCV_CACHE_H */
diff --git a/arch/riscv/include/asm/smp.h b/arch/riscv/include/asm/smp.h
new file mode 100644
index 000000000000..f61f8c25f95b
--- /dev/null
+++ b/arch/riscv/include/asm/smp.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SMP_H
+#define _ASM_RISCV_SMP_H
+
+#include <linux/cpumask.h>
+#include <linux/irqreturn.h>
+
+#ifdef CONFIG_SMP
+
+/* SMP initialization hook for setup_arch */
+void __init init_clockevent(void);
+
+/* SMP initialization hook for setup_arch */
+void __init setup_smp(void);
+
+/* Hook for the generic smp_call_function_many() routine. */
+void arch_send_call_function_ipi_mask(struct cpumask *mask);
+
+/* Hook for the generic smp_call_function_single() routine. */
+void arch_send_call_function_single_ipi(int cpu);
+
+#define raw_smp_processor_id() (current_thread_info()->cpu)
+
+/* Interprocessor interrupt handler */
+irqreturn_t handle_ipi(void);
+
+#endif /* CONFIG_SMP */
+
+#endif /* _ASM_RISCV_SMP_H */
diff --git a/arch/riscv/kernel/cacheinfo.c b/arch/riscv/kernel/cacheinfo.c
new file mode 100644
index 000000000000..10ed2749e246
--- /dev/null
+++ b/arch/riscv/kernel/cacheinfo.c
@@ -0,0 +1,105 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/cacheinfo.h>
+#include <linux/cpu.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+
+static void ci_leaf_init(struct cacheinfo *this_leaf,
+			 struct device_node *node,
+			 enum cache_type type, unsigned int level)
+{
+	this_leaf->of_node = node;
+	this_leaf->level = level;
+	this_leaf->type = type;
+	/* not a sector cache */
+	this_leaf->physical_line_partition = 1;
+	/* TODO: Add to DTS */
+	this_leaf->attributes =
+		CACHE_WRITE_BACK
+		| CACHE_READ_ALLOCATE
+		| CACHE_WRITE_ALLOCATE;
+}
+
+static int __init_cache_level(unsigned int cpu)
+{
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+	struct device_node *np = of_cpu_device_node_get(cpu);
+	int levels = 0, leaves = 0, level;
+
+	if (of_property_read_bool(np, "cache-size"))
+		++leaves;
+	if (of_property_read_bool(np, "i-cache-size"))
+		++leaves;
+	if (of_property_read_bool(np, "d-cache-size"))
+		++leaves;
+	if (leaves > 0)
+		levels = 1;
+
+	while ((np = of_find_next_cache_node(np))) {
+		if (!of_device_is_compatible(np, "cache"))
+			break;
+		if (of_property_read_u32(np, "cache-level", &level))
+			break;
+		if (level <= levels)
+			break;
+		if (of_property_read_bool(np, "cache-size"))
+			++leaves;
+		if (of_property_read_bool(np, "i-cache-size"))
+			++leaves;
+		if (of_property_read_bool(np, "d-cache-size"))
+			++leaves;
+		levels = level;
+	}
+
+	this_cpu_ci->num_levels = levels;
+	this_cpu_ci->num_leaves = leaves;
+	return 0;
+}
+
+static int __populate_cache_leaves(unsigned int cpu)
+{
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+	struct cacheinfo *this_leaf = this_cpu_ci->info_list;
+	struct device_node *np = of_cpu_device_node_get(cpu);
+	int levels = 1, level = 1;
+
+	if (of_property_read_bool(np, "cache-size"))
+		ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
+	if (of_property_read_bool(np, "i-cache-size"))
+		ci_leaf_init(this_leaf++, np, CACHE_TYPE_INST, level);
+	if (of_property_read_bool(np, "d-cache-size"))
+		ci_leaf_init(this_leaf++, np, CACHE_TYPE_DATA, level);
+
+	while ((np = of_find_next_cache_node(np))) {
+		if (!of_device_is_compatible(np, "cache"))
+			break;
+		if (of_property_read_u32(np, "cache-level", &level))
+			break;
+		if (level <= levels)
+			break;
+		if (of_property_read_bool(np, "cache-size"))
+			ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
+		if (of_property_read_bool(np, "i-cache-size"))
+			ci_leaf_init(this_leaf++, np, CACHE_TYPE_INST, level);
+		if (of_property_read_bool(np, "d-cache-size"))
+			ci_leaf_init(this_leaf++, np, CACHE_TYPE_DATA, level);
+		levels = level;
+	}
+
+	return 0;
+}
+
+DEFINE_SMP_CALL_CACHE_FUNCTION(init_cache_level)
+DEFINE_SMP_CALL_CACHE_FUNCTION(populate_cache_leaves)
diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
new file mode 100644
index 000000000000..20004bd7a216
--- /dev/null
+++ b/arch/riscv/kernel/cpu.c
@@ -0,0 +1,89 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/seq_file.h>
+#include <linux/of.h>
+
+/* Return -1 if not a valid hart */
+int riscv_of_processor_hart(struct device_node *node)
+{
+	const char *isa, *status;
+	u32 hart;
+
+	if (!of_device_is_compatible(node, "riscv"))
+		return -(ENODEV);
+	if (of_property_read_u32(node, "reg", &hart)
+	    || hart >= NR_CPUS)
+		return -(ENODEV);
+	if (of_property_read_string(node, "status", &status)
+	    || strcmp(status, "okay"))
+		return -(ENODEV);
+	if (of_property_read_string(node, "riscv,isa", &isa)
+	    || isa[0] != 'r'
+	    || isa[1] != 'v')
+		return -(ENODEV);
+
+	return hart;
+}
+
+#ifdef CONFIG_PROC_FS
+
+static void *c_start(struct seq_file *m, loff_t *pos)
+{
+	*pos = cpumask_next(*pos - 1, cpu_online_mask);
+	if ((*pos) < nr_cpu_ids)
+		return (void *)(uintptr_t)(1 + *pos);
+	return NULL;
+}
+
+static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+{
+	(*pos)++;
+	return c_start(m, pos);
+}
+
+static void c_stop(struct seq_file *m, void *v)
+{
+}
+
+static int c_show(struct seq_file *m, void *v)
+{
+	unsigned long hart_id = (unsigned long)v - 1;
+	struct device_node *node = of_get_cpu_node(hart_id, NULL);
+	const char *compat, *isa, *mmu;
+
+	seq_printf(m, "hart\t: %lu\n", hart_id);
+	if (!of_property_read_string(node, "riscv,isa", &isa)
+	    && isa[0] == 'r'
+	    && isa[1] == 'v')
+		seq_printf(m, "isa\t: %s\n", isa);
+	if (!of_property_read_string(node, "mmu-type", &mmu)
+	    && !strncmp(mmu, "riscv,", 6))
+		seq_printf(m, "mmu\t: %s\n", mmu+6);
+	if (!of_property_read_string(node, "compatible", &compat)
+	    && strcmp(compat, "riscv"))
+		seq_printf(m, "uarch\t: %s\n", compat);
+	seq_puts(m, "\n");
+
+	return 0;
+}
+
+const struct seq_operations cpuinfo_op = {
+	.start	= c_start,
+	.next	= c_next,
+	.stop	= c_stop,
+	.show	= c_show
+};
+
+#endif /* CONFIG_PROC_FS */
diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
new file mode 100644
index 000000000000..608e57d4531f
--- /dev/null
+++ b/arch/riscv/kernel/head.S
@@ -0,0 +1,147 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+#include <asm/asm.h>
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <asm/thread_info.h>
+#include <asm/page.h>
+#include <asm/csr.h>
+
+__INIT
+ENTRY(_start)
+	/* Mask all interrupts */
+	csrw sie, zero
+
+	/* Disable FPU to detect illegal usage of
+	   floating point in kernel space */
+	li t0, SR_FS
+	csrc sstatus, t0
+
+#ifndef CONFIG_RV_PUM
+	/* Allow access to user memory */
+	li t0, SR_SUM
+	csrs sstatus, t0
+#endif
+
+#ifdef CONFIG_ISA_A
+	/* Pick one hart to run the main boot sequence */
+	la a3, hart_lottery
+	li a2, 1
+	amoadd.w a3, a2, (a3)
+	bnez a3, .Lsecondary_start
+#else
+	/* We don't have atomic support, so the boot hart must be picked
+	 * statically.  Hart 0 is the only sane choice.
+	 */
+	bnez a0, .Lsecondary_park
+#endif
+
+	/* Save hart ID and DTB physical address */
+	mv s0, a0
+	mv s1, a1
+
+	/* Initialize page tables and relocate to virtual addresses */
+	la sp, init_thread_union + THREAD_SIZE
+	call setup_vm
+	call relocate
+
+	/* Restore C environment */
+	la tp, init_task
+
+	la sp, init_thread_union
+	li a0, ASM_THREAD_SIZE
+	add sp, sp, a0
+
+	/* Start the kernel */
+	mv a0, s0
+	mv a1, s1
+	call sbi_save
+	tail start_kernel
+
+relocate:
+	/* Relocate return address */
+	li a1, PAGE_OFFSET
+	la a0, _start
+	sub a1, a1, a0
+	add ra, ra, a1
+
+	/* Point stvec to virtual address of instruction after sptbr write */
+	la a0, 1f
+	add a0, a0, a1
+	csrw stvec, a0
+
+	/* Compute sptbr for kernel page tables, but don't load it yet */
+	la a2, swapper_pg_dir
+	srl a2, a2, PAGE_SHIFT
+	li a1, SPTBR_MODE
+	or a2, a2, a1
+
+	/* Load trampoline page directory, which will cause us to trap to
+	   stvec if VA != PA, or simply fall through if VA == PA */
+	la a0, trampoline_pg_dir
+	srl a0, a0, PAGE_SHIFT
+	or a0, a0, a1
+	sfence.vma
+	csrw sptbr, a0
+1:
+	/* Set trap vector to spin forever to help debug */
+	la a0, .Lsecondary_park
+	csrw stvec, a0
+
+	/* Load the global pointer */
+	la gp, __global_pointer$
+
+	/* Switch to kernel page tables */
+	csrw sptbr, a2
+
+	ret
+
+.Lsecondary_start:
+#ifdef CONFIG_SMP
+	li a1, CONFIG_NR_CPUS
+	bgeu a0, a1, .Lsecondary_park
+
+	la a1, __cpu_up_stack_pointer
+	slli a0, a0, LGREG
+	add a0, a0, a1
+
+.Lwait_for_cpu_up:
+	REG_L sp, (a0)
+	beqz sp, .Lwait_for_cpu_up
+	fence
+
+	/* Enable virtual memory and relocate to virtual address */
+	call relocate
+
+	/* Initialize task_struct pointer */
+	li tp, -THREAD_SIZE
+	add tp, tp, sp
+
+	tail smp_callin
+#endif
+
+.Lsecondary_park:
+	/* We lack SMP support or have too many harts, so park this hart */
+	wfi
+	j .Lsecondary_park
+END(_start)
+
+__PAGE_ALIGNED_BSS
+	/* Empty zero page */
+	.balign PAGE_SIZE
+ENTRY(empty_zero_page)
+	.fill (empty_zero_page + PAGE_SIZE) - ., 1, 0x00
+END(empty_zero_page)
diff --git a/arch/riscv/kernel/irq.c b/arch/riscv/kernel/irq.c
new file mode 100644
index 000000000000..737d7cce2c6d
--- /dev/null
+++ b/arch/riscv/kernel/irq.c
@@ -0,0 +1,20 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/irqchip.h>
+
+void __init init_IRQ(void)
+{
+	irqchip_init();
+}
diff --git a/arch/riscv/kernel/reset.c b/arch/riscv/kernel/reset.c
new file mode 100644
index 000000000000..2a53d26ffdd6
--- /dev/null
+++ b/arch/riscv/kernel/reset.c
@@ -0,0 +1,36 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/reboot.h>
+#include <linux/export.h>
+#include <asm/sbi.h>
+
+void (*pm_power_off)(void) = machine_power_off;
+EXPORT_SYMBOL(pm_power_off);
+
+void machine_restart(char *cmd)
+{
+	do_kernel_restart(cmd);
+	while (1);
+}
+
+void machine_halt(void)
+{
+	machine_power_off();
+}
+
+void machine_power_off(void)
+{
+	sbi_shutdown();
+	while (1);
+}
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
new file mode 100644
index 000000000000..9ed70e84d74e
--- /dev/null
+++ b/arch/riscv/kernel/setup.c
@@ -0,0 +1,240 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/memblock.h>
+#include <linux/sched.h>
+#include <linux/initrd.h>
+#include <linux/console.h>
+#include <linux/screen_info.h>
+#include <linux/of_fdt.h>
+#include <linux/of_platform.h>
+#include <linux/sched/task.h>
+
+#include <asm/setup.h>
+#include <asm/sections.h>
+#include <asm/pgtable.h>
+#include <asm/smp.h>
+#include <asm/sbi.h>
+#include <asm/tlbflush.h>
+#include <asm/thread_info.h>
+
+#ifdef CONFIG_DUMMY_CONSOLE
+struct screen_info screen_info = {
+	.orig_video_lines	= 30,
+	.orig_video_cols	= 80,
+	.orig_video_mode	= 0,
+	.orig_video_ega_bx	= 0,
+	.orig_video_isVGA	= 1,
+	.orig_video_points	= 8
+};
+#endif
+
+#ifdef CONFIG_CMDLINE_BOOL
+static char __initdata builtin_cmdline[COMMAND_LINE_SIZE] = CONFIG_CMDLINE;
+#endif /* CONFIG_CMDLINE_BOOL */
+
+unsigned long va_pa_offset;
+unsigned long pfn_base;
+
+/* The lucky hart to first increment this variable will boot the other cores */
+atomic_t hart_lottery;
+
+#ifdef CONFIG_BLK_DEV_INITRD
+static void __init setup_initrd(void)
+{
+	extern char __initramfs_start[];
+	extern unsigned long __initramfs_size;
+	unsigned long size;
+
+	if (__initramfs_size > 0) {
+		initrd_start = (unsigned long)(&__initramfs_start);
+		initrd_end = initrd_start + __initramfs_size;
+	}
+
+	if (initrd_start >= initrd_end) {
+		printk(KERN_INFO "initrd not found or empty");
+		goto disable;
+	}
+	if (__pa(initrd_end) > PFN_PHYS(max_low_pfn)) {
+		printk(KERN_ERR "initrd extends beyond end of memory");
+		goto disable;
+	}
+
+	size =  initrd_end - initrd_start;
+	memblock_reserve(__pa(initrd_start), size);
+	initrd_below_start_ok = 1;
+
+	printk(KERN_INFO "Initial ramdisk at: 0x%p (%lu bytes)\n",
+		(void *)(initrd_start), size);
+	return;
+disable:
+	printk(KERN_CONT " - disabling initrd\n");
+	initrd_start = 0;
+	initrd_end = 0;
+}
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+pgd_t trampoline_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
+
+#ifndef __PAGETABLE_PMD_FOLDED
+#define NUM_SWAPPER_PMDS ((uintptr_t)-PAGE_OFFSET >> PGDIR_SHIFT)
+pmd_t swapper_pmd[PTRS_PER_PMD*((-PAGE_OFFSET)/PGDIR_SIZE)] __page_aligned_bss;
+pmd_t trampoline_pmd[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
+#endif
+
+asmlinkage void __init setup_vm(void)
+{
+	extern char _start;
+	uintptr_t i;
+	uintptr_t pa = (uintptr_t) &_start;
+	pgprot_t prot = __pgprot(pgprot_val(PAGE_KERNEL) | _PAGE_EXEC);
+
+	va_pa_offset = PAGE_OFFSET - pa;
+	pfn_base = PFN_DOWN(pa);
+
+	/* Sanity check alignment and size */
+	BUG_ON((PAGE_OFFSET % PGDIR_SIZE) != 0);
+	BUG_ON((pa % (PAGE_SIZE * PTRS_PER_PTE)) != 0);
+
+#ifndef __PAGETABLE_PMD_FOLDED
+	trampoline_pg_dir[(PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD] =
+		pfn_pgd(PFN_DOWN((uintptr_t)trampoline_pmd),
+			__pgprot(_PAGE_TABLE));
+	trampoline_pmd[0] = pfn_pmd(PFN_DOWN(pa), prot);
+
+	for (i = 0; i < (-PAGE_OFFSET)/PGDIR_SIZE; ++i) {
+		size_t o = (PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD + i;
+		swapper_pg_dir[o] =
+			pfn_pgd(PFN_DOWN((uintptr_t)swapper_pmd) + i,
+				__pgprot(_PAGE_TABLE));
+	}
+	for (i = 0; i < ARRAY_SIZE(swapper_pmd); i++)
+		swapper_pmd[i] = pfn_pmd(PFN_DOWN(pa + i * PMD_SIZE), prot);
+#else
+	trampoline_pg_dir[(PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD] =
+		pfn_pgd(PFN_DOWN(pa), prot);
+
+	for (i = 0; i < (-PAGE_OFFSET)/PGDIR_SIZE; ++i) {
+		size_t o = (PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD + i;
+		swapper_pg_dir[o] =
+			pfn_pgd(PFN_DOWN(pa + i * PGDIR_SIZE), prot);
+	}
+#endif
+}
+
+void __init sbi_save(unsigned int hartid, void *dtb)
+{
+	early_init_dt_scan(__va(dtb));
+}
+
+/* Allow the user to manually add a memory region (in case DTS is broken); "mem_end=nn[KkMmGg]" */
+static int __init mem_end_override(char *p)
+{
+	resource_size_t base, end;
+
+	if (!p)
+		return -EINVAL;
+	base = (uintptr_t) __pa(PAGE_OFFSET);
+	end = memparse(p, &p) & PMD_MASK;
+	if (end == 0)
+		return -EINVAL;
+	memblock_add(base, end - base);
+	return 0;
+}
+early_param("mem_end", mem_end_override);
+
+static void __init setup_bootmem(void)
+{
+	struct memblock_region *reg;
+	phys_addr_t mem_size = 0;
+
+	/* Find the memory region containing the kernel */
+	for_each_memblock(memory, reg) {
+		phys_addr_t vmlinux_end = __pa(_end);
+		phys_addr_t end = reg->base + reg->size;
+
+		if (reg->base <= vmlinux_end && vmlinux_end <= end) {
+			/* Reserve from the start of the region to the end of
+			 * the kernel
+			 */
+			memblock_reserve(reg->base, vmlinux_end - reg->base);
+			mem_size = min(reg->size, (phys_addr_t)-PAGE_OFFSET);
+		}
+	}
+	BUG_ON(mem_size == 0);
+
+	set_max_mapnr(PFN_DOWN(mem_size));
+	max_low_pfn = pfn_base + PFN_DOWN(mem_size);
+
+#ifdef CONFIG_BLK_DEV_INITRD
+	setup_initrd();
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+	early_init_fdt_reserve_self();
+	early_init_fdt_scan_reserved_mem();
+	memblock_allow_resize();
+	memblock_dump_all();
+}
+
+void __init setup_arch(char **cmdline_p)
+{
+#ifdef CONFIG_CMDLINE_BOOL
+#ifdef CONFIG_CMDLINE_OVERRIDE
+	strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+#else
+	if (builtin_cmdline[0] != '\0') {
+		/* Append bootloader command line to built-in */
+		strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
+		strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+		strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+	}
+#endif /* CONFIG_CMDLINE_OVERRIDE */
+#endif /* CONFIG_CMDLINE_BOOL */
+	*cmdline_p = boot_command_line;
+
+	parse_early_param();
+
+	init_mm.start_code = (unsigned long) _stext;
+	init_mm.end_code   = (unsigned long) _etext;
+	init_mm.end_data   = (unsigned long) _edata;
+	init_mm.brk        = (unsigned long) _end;
+
+	setup_bootmem();
+	paging_init();
+	unflatten_device_tree();
+
+#ifdef CONFIG_SMP
+	setup_smp();
+#endif
+
+#ifdef CONFIG_DUMMY_CONSOLE
+	conswitchp = &dummy_con;
+#endif
+}
+
+static int __init riscv_device_init(void)
+{
+	return of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
+}
+subsys_initcall_sync(riscv_device_init);
diff --git a/arch/riscv/kernel/smp.c b/arch/riscv/kernel/smp.c
new file mode 100644
index 000000000000..b4a71ec5906f
--- /dev/null
+++ b/arch/riscv/kernel/smp.c
@@ -0,0 +1,110 @@
+/*
+ * SMP initialisation and IPI support
+ * Based on arch/arm64/kernel/smp.c
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/interrupt.h>
+#include <linux/smp.h>
+#include <linux/sched.h>
+
+#include <asm/sbi.h>
+#include <asm/tlbflush.h>
+#include <asm/cacheflush.h>
+
+/* A collection of single bit ipi messages.  */
+static struct {
+	unsigned long bits ____cacheline_aligned;
+} ipi_data[NR_CPUS] __cacheline_aligned;
+
+enum ipi_message_type {
+	IPI_RESCHEDULE,
+	IPI_CALL_FUNC,
+	IPI_MAX
+};
+
+irqreturn_t handle_ipi(void)
+{
+	unsigned long *pending_ipis = &ipi_data[smp_processor_id()].bits;
+
+	/* Clear pending IPI */
+	csr_clear(sip, SIE_SSIE);
+
+	while (true) {
+		unsigned long ops;
+
+		/* Order bit clearing and data access. */
+		mb();
+
+		ops = xchg(pending_ipis, 0);
+		if (ops == 0)
+			return IRQ_HANDLED;
+
+		if (ops & (1 << IPI_RESCHEDULE))
+			scheduler_ipi();
+
+		if (ops & (1 << IPI_CALL_FUNC))
+			generic_smp_call_function_interrupt();
+
+		BUG_ON((ops >> IPI_MAX) != 0);
+
+		/* Order data access and bit testing. */
+		mb();
+	}
+
+	return IRQ_HANDLED;
+}
+
+static void
+send_ipi_message(const struct cpumask *to_whom, enum ipi_message_type operation)
+{
+	int i;
+
+	mb();
+	for_each_cpu(i, to_whom)
+		set_bit(operation, &ipi_data[i].bits);
+
+	mb();
+	sbi_send_ipi(cpumask_bits(to_whom));
+}
+
+void arch_send_call_function_ipi_mask(struct cpumask *mask)
+{
+	send_ipi_message(mask, IPI_CALL_FUNC);
+}
+
+void arch_send_call_function_single_ipi(int cpu)
+{
+	send_ipi_message(cpumask_of(cpu), IPI_CALL_FUNC);
+}
+
+static void ipi_stop(void *unused)
+{
+	while (1)
+		wait_for_interrupt();
+}
+
+void smp_send_stop(void)
+{
+	on_each_cpu(ipi_stop, NULL, 1);
+}
+
+void smp_send_reschedule(int cpu)
+{
+	send_ipi_message(cpumask_of(cpu), IPI_RESCHEDULE);
+}
diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
new file mode 100644
index 000000000000..7043bbbfbc1e
--- /dev/null
+++ b/arch/riscv/kernel/smpboot.c
@@ -0,0 +1,103 @@
+/*
+ * SMP initialisation and IPI support
+ * Based on arch/arm64/kernel/smp.c
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/kernel_stat.h>
+#include <linux/notifier.h>
+#include <linux/cpu.h>
+#include <linux/percpu.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/irq.h>
+#include <linux/of.h>
+#include <linux/sched/task_stack.h>
+#include <asm/mmu_context.h>
+#include <asm/tlbflush.h>
+#include <asm/sections.h>
+#include <asm/sbi.h>
+
+void *__cpu_up_stack_pointer[NR_CPUS];
+
+void __init smp_prepare_boot_cpu(void)
+{
+}
+
+void __init smp_prepare_cpus(unsigned int max_cpus)
+{
+}
+
+void __init setup_smp(void)
+{
+	struct device_node *dn = NULL;
+	int hart, im_okay_therefore_i_am = 0;
+
+	while ((dn = of_find_node_by_type(dn, "cpu"))) {
+		hart = riscv_of_processor_hart(dn);
+		if (hart >= 0) {
+			set_cpu_possible(hart, true);
+			set_cpu_present(hart, true);
+			if (hart == smp_processor_id()) {
+				BUG_ON(im_okay_therefore_i_am);
+				im_okay_therefore_i_am = 1;
+			}
+		}
+	}
+
+	BUG_ON(!im_okay_therefore_i_am);
+}
+
+int __cpu_up(unsigned int cpu, struct task_struct *tidle)
+{
+	/* Signal cpu to start */
+	mb();
+	__cpu_up_stack_pointer[cpu] = task_stack_page(tidle) + THREAD_SIZE;
+
+	while (!cpu_online(cpu))
+		cpu_relax();
+
+	return 0;
+}
+
+void __init smp_cpus_done(unsigned int max_cpus)
+{
+}
+
+/*
+ * C entry point for a secondary processor.
+ */
+asmlinkage void __init smp_callin(void)
+{
+	struct mm_struct *mm = &init_mm;
+
+	/* All kernel threads share the same mm context.  */
+	atomic_inc(&mm->mm_count);
+	current->active_mm = mm;
+
+	trap_init();
+	init_clockevent();
+	notify_cpu_starting(smp_processor_id());
+	set_cpu_online(smp_processor_id(), 1);
+	local_flush_tlb_all();
+	local_irq_enable();
+	preempt_disable();
+	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+}
diff --git a/arch/riscv/kernel/time.c b/arch/riscv/kernel/time.c
new file mode 100644
index 000000000000..6bba2e6f2a84
--- /dev/null
+++ b/arch/riscv/kernel/time.c
@@ -0,0 +1,61 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/clocksource.h>
+#include <linux/clockchips.h>
+#include <linux/delay.h>
+#include <linux/timer_riscv.h>
+
+#include <asm/sbi.h>
+
+unsigned long riscv_timebase;
+
+static int next_event(unsigned long delta, struct clock_event_device *ce)
+{
+	BUG_ON(ce != timer_riscv_device(smp_processor_id()));
+	sbi_set_timer(get_cycles64() + delta);
+	return 0;
+}
+
+static unsigned long long rdtime(struct clocksource *cs)
+{
+	BUG_ON(cs != timer_riscv_source(smp_processor_id()));
+	return get_cycles64();
+}
+
+void riscv_timer_interrupt(void)
+{
+	int cpu = smp_processor_id();
+	struct clock_event_device *evdev = timer_riscv_device(cpu);
+
+	evdev->event_handler(evdev);
+}
+
+void __init time_init(void)
+{
+	struct device_node *cpu;
+	u32 prop;
+	int cpu_id = smp_processor_id();
+
+	cpu = of_find_node_by_path("/cpus");
+	if (!cpu || of_property_read_u32(cpu, "timebase-frequency", &prop))
+		panic("RISC-V system with no 'timebase-frequency' in DTS\n");
+	riscv_timebase = prop;
+
+	lpj_fine = riscv_timebase / HZ;
+
+	timer_riscv_init(cpu_id, riscv_timebase, &rdtime, &next_event);
+
+	csr_set(sie, SIE_STIE);
+}
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
new file mode 100644
index 000000000000..3a4698199ecf
--- /dev/null
+++ b/arch/riscv/kernel/traps.c
@@ -0,0 +1,183 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
+#include <linux/sched/signal.h>
+#include <linux/signal.h>
+#include <linux/kdebug.h>
+#include <linux/uaccess.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/irq.h>
+
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/csr.h>
+
+int show_unhandled_signals = 1;
+
+extern asmlinkage void handle_exception(void);
+
+static DEFINE_SPINLOCK(die_lock);
+
+void die(struct pt_regs *regs, const char *str)
+{
+	static int die_counter;
+	int ret;
+
+	oops_enter();
+
+	spin_lock_irq(&die_lock);
+	console_verbose();
+	bust_spinlocks(1);
+
+	pr_emerg("%s [#%d]\n", str, ++die_counter);
+	print_modules();
+	show_regs(regs);
+
+	ret = notify_die(DIE_OOPS, str, regs, 0, regs->scause, SIGSEGV);
+
+	bust_spinlocks(0);
+	add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
+	spin_unlock_irq(&die_lock);
+	oops_exit();
+
+	if (in_interrupt())
+		panic("Fatal exception in interrupt");
+	if (panic_on_oops)
+		panic("Fatal exception");
+	if (ret != NOTIFY_STOP)
+		do_exit(SIGSEGV);
+}
+
+static inline void do_trap_siginfo(int signo, int code,
+	unsigned long addr, struct task_struct *tsk)
+{
+	siginfo_t info;
+
+	info.si_signo = signo;
+	info.si_errno = 0;
+	info.si_code = code;
+	info.si_addr = (void __user *)addr;
+	force_sig_info(signo, &info, tsk);
+}
+
+void do_trap(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, struct task_struct *tsk)
+{
+	if (show_unhandled_signals && unhandled_signal(tsk, signo)
+	    && printk_ratelimit()) {
+		pr_info("%s[%d]: unhandled signal %d code 0x%x at 0x" REG_FMT,
+			tsk->comm, task_pid_nr(tsk), signo, code, addr);
+		print_vma_addr(KERN_CONT " in ", GET_IP(regs));
+		pr_cont("\n");
+		show_regs(regs);
+	}
+
+	do_trap_siginfo(signo, code, addr, tsk);
+}
+
+static void do_trap_error(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, const char *str)
+{
+	if (user_mode(regs)) {
+		do_trap(regs, signo, code, addr, current);
+	} else {
+		if (!fixup_exception(regs))
+			die(regs, str);
+	}
+}
+
+#define DO_ERROR_INFO(name, signo, code, str)				\
+asmlinkage void name(struct pt_regs *regs)				\
+{									\
+	do_trap_error(regs, signo, code, regs->sepc, "Oops - " str);	\
+}
+
+DO_ERROR_INFO(do_trap_unknown,
+	SIGILL, ILL_ILLTRP, "unknown exception");
+DO_ERROR_INFO(do_trap_insn_misaligned,
+	SIGBUS, BUS_ADRALN, "instruction address misaligned");
+DO_ERROR_INFO(do_trap_insn_fault,
+	SIGBUS, BUS_ADRALN, "instruction access fault");
+DO_ERROR_INFO(do_trap_insn_illegal,
+	SIGILL, ILL_ILLOPC, "illegal instruction");
+DO_ERROR_INFO(do_trap_load_misaligned,
+	SIGBUS, BUS_ADRALN, "load address misaligned");
+DO_ERROR_INFO(do_trap_load_fault,
+	SIGSEGV, SEGV_ACCERR, "load access fault");
+DO_ERROR_INFO(do_trap_store_misaligned,
+	SIGBUS, BUS_ADRALN, "store (or AMO) address misaligned");
+DO_ERROR_INFO(do_trap_store_fault,
+	SIGSEGV, SEGV_ACCERR, "store (or AMO) access fault");
+DO_ERROR_INFO(do_trap_ecall_u,
+	SIGILL, ILL_ILLTRP, "environment call from U-mode");
+DO_ERROR_INFO(do_trap_ecall_s,
+	SIGILL, ILL_ILLTRP, "environment call from S-mode");
+DO_ERROR_INFO(do_trap_ecall_m,
+	SIGILL, ILL_ILLTRP, "environment call from M-mode");
+
+asmlinkage void do_trap_break(struct pt_regs *regs)
+{
+#ifdef CONFIG_GENERIC_BUG
+	if (!user_mode(regs)) {
+		enum bug_trap_type type;
+
+		type = report_bug(regs->sepc, regs);
+		switch (type) {
+		case BUG_TRAP_TYPE_NONE:
+			break;
+		case BUG_TRAP_TYPE_WARN:
+			regs->sepc += sizeof(bug_insn_t);
+			return;
+		case BUG_TRAP_TYPE_BUG:
+			die(regs, "Kernel BUG");
+		}
+	}
+#endif /* CONFIG_GENERIC_BUG */
+
+	do_trap_siginfo(SIGTRAP, TRAP_BRKPT, regs->sepc, current);
+	regs->sepc += 0x4;
+}
+
+#ifdef CONFIG_GENERIC_BUG
+int is_valid_bugaddr(unsigned long pc)
+{
+	bug_insn_t insn;
+
+	if (pc < PAGE_OFFSET)
+		return 0;
+	if (probe_kernel_address((bug_insn_t __user *)pc, insn))
+		return 0;
+	return (insn == __BUG_INSN);
+}
+#endif /* CONFIG_GENERIC_BUG */
+
+void __init trap_init(void)
+{
+	int hart = smp_processor_id();
+
+	/* Set sup0 scratch register to 0, indicating to exception vector
+	 * that we are presently executing in the kernel
+	 */
+	csr_write(sscratch, 0);
+	/* Set the exception vector address */
+	csr_write(stvec, &handle_exception);
+	/* Enable software interrupts and setup initial mask */
+	csr_write(sie,
+		  SIE_SSIE | atomic_long_read(&per_cpu(riscv_early_sie, hart))
+		);
+}
diff --git a/arch/riscv/kernel/vdso.c b/arch/riscv/kernel/vdso.c
new file mode 100644
index 000000000000..e8a178df8144
--- /dev/null
+++ b/arch/riscv/kernel/vdso.c
@@ -0,0 +1,125 @@
+/*
+ * Copyright (C) 2004 Benjamin Herrenschmidt, IBM Corp.
+ *                    <benh@kernel.crashing.org>
+ * Copyright (C) 2012 ARM Limited
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/binfmts.h>
+#include <linux/err.h>
+
+#include <asm/vdso.h>
+
+extern char vdso_start[], vdso_end[];
+
+static unsigned int vdso_pages;
+static struct page **vdso_pagelist;
+
+/*
+ * The vDSO data page.
+ */
+static union {
+	struct vdso_data	data;
+	u8			page[PAGE_SIZE];
+} vdso_data_store __page_aligned_data;
+struct vdso_data *vdso_data = &vdso_data_store.data;
+
+static int __init vdso_init(void)
+{
+	unsigned int i;
+
+	vdso_pages = (vdso_end - vdso_start) >> PAGE_SHIFT;
+	vdso_pagelist =
+		kcalloc(vdso_pages + 1, sizeof(struct page *), GFP_KERNEL);
+	if (unlikely(vdso_pagelist == NULL)) {
+		pr_err("vdso: pagelist allocation failed\n");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < vdso_pages; i++) {
+		struct page *pg;
+
+		pg = virt_to_page(vdso_start + (i << PAGE_SHIFT));
+		ClearPageReserved(pg);
+		vdso_pagelist[i] = pg;
+	}
+	vdso_pagelist[i] = virt_to_page(vdso_data);
+
+	return 0;
+}
+arch_initcall(vdso_init);
+
+int arch_setup_additional_pages(struct linux_binprm *bprm,
+	int uses_interp)
+{
+	struct mm_struct *mm = current->mm;
+	unsigned long vdso_base, vdso_len;
+	int ret;
+
+	vdso_len = (vdso_pages + 1) << PAGE_SHIFT;
+
+	down_write(&mm->mmap_sem);
+	vdso_base = get_unmapped_area(NULL, 0, vdso_len, 0, 0);
+	if (unlikely(IS_ERR_VALUE(vdso_base))) {
+		ret = vdso_base;
+		goto end;
+	}
+
+	/*
+	 * Put vDSO base into mm struct. We need to do this before calling
+	 * install_special_mapping or the perf counter mmap tracking code
+	 * will fail to recognise it as a vDSO (since arch_vma_name fails).
+	 */
+	mm->context.vdso = (void *)vdso_base;
+
+	ret = install_special_mapping(mm, vdso_base, vdso_len,
+		(VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC),
+		vdso_pagelist);
+
+	if (unlikely(ret))
+		mm->context.vdso = NULL;
+
+end:
+	up_write(&mm->mmap_sem);
+	return ret;
+}
+
+const char *arch_vma_name(struct vm_area_struct *vma)
+{
+	if (vma->vm_mm && (vma->vm_start == (long)vma->vm_mm->context.vdso))
+		return "[vdso]";
+	return NULL;
+}
+
+/*
+ * Function stubs to prevent linker errors when AT_SYSINFO_EHDR is defined
+ */
+
+int in_gate_area_no_mm(unsigned long addr)
+{
+	return 0;
+}
+
+int in_gate_area(struct mm_struct *mm, unsigned long addr)
+{
+	return 0;
+}
+
+struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
+{
+	return NULL;
+}
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
new file mode 100644
index 000000000000..9f4bee5e51fd
--- /dev/null
+++ b/arch/riscv/mm/init.c
@@ -0,0 +1,70 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/bootmem.h>
+#include <linux/initrd.h>
+#include <linux/memblock.h>
+#include <linux/swap.h>
+
+#include <asm/tlbflush.h>
+#include <asm/sections.h>
+#include <asm/pgtable.h>
+#include <asm/io.h>
+
+static void __init zone_sizes_init(void)
+{
+	unsigned long zones_size[MAX_NR_ZONES];
+
+	memset(zones_size, 0, sizeof(zones_size));
+	zones_size[ZONE_NORMAL] = max_mapnr;
+	free_area_init_node(0, zones_size, pfn_base, NULL);
+}
+
+void setup_zero_page(void)
+{
+	memset((void *)empty_zero_page, 0, PAGE_SIZE);
+}
+
+void __init paging_init(void)
+{
+	init_mm.pgd = (pgd_t *)pfn_to_virt(csr_read(sptbr));
+
+	setup_zero_page();
+	local_flush_tlb_all();
+	zone_sizes_init();
+}
+
+void __init mem_init(void)
+{
+#ifdef CONFIG_FLATMEM
+	BUG_ON(!mem_map);
+#endif /* CONFIG_FLATMEM */
+
+	high_memory = (void *)(__va(PFN_PHYS(max_low_pfn)));
+	free_all_bootmem();
+
+	mem_init_print_info(NULL);
+}
+
+void free_initmem(void)
+{
+	free_initmem_default(0);
+}
+
+#ifdef CONFIG_BLK_DEV_INITRD
+void free_initrd_mem(unsigned long start, unsigned long end)
+{
+}
+#endif /* CONFIG_BLK_DEV_INITRD */
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 1/9] RISC-V: Init and Halt Code
@ 2017-06-28 18:55       ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This contains the various __init C functions, the initial assembly
kernel entry point, and the code to reset the system.  When a file was
predominantly init-related, the entire file is included in this patch.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/bug.h   |  88 +++++++++++++++
 arch/riscv/include/asm/cache.h |  22 ++++
 arch/riscv/include/asm/smp.h   |  41 +++++++
 arch/riscv/kernel/cacheinfo.c  | 105 ++++++++++++++++++
 arch/riscv/kernel/cpu.c        |  89 +++++++++++++++
 arch/riscv/kernel/head.S       | 147 +++++++++++++++++++++++++
 arch/riscv/kernel/irq.c        |  20 ++++
 arch/riscv/kernel/reset.c      |  36 +++++++
 arch/riscv/kernel/setup.c      | 240 +++++++++++++++++++++++++++++++++++++++++
 arch/riscv/kernel/smp.c        | 110 +++++++++++++++++++
 arch/riscv/kernel/smpboot.c    | 103 ++++++++++++++++++
 arch/riscv/kernel/time.c       |  61 +++++++++++
 arch/riscv/kernel/traps.c      | 183 +++++++++++++++++++++++++++++++
 arch/riscv/kernel/vdso.c       | 125 +++++++++++++++++++++
 arch/riscv/mm/init.c           |  70 ++++++++++++
 15 files changed, 1440 insertions(+)
 create mode 100644 arch/riscv/include/asm/bug.h
 create mode 100644 arch/riscv/include/asm/cache.h
 create mode 100644 arch/riscv/include/asm/smp.h
 create mode 100644 arch/riscv/kernel/cacheinfo.c
 create mode 100644 arch/riscv/kernel/cpu.c
 create mode 100644 arch/riscv/kernel/head.S
 create mode 100644 arch/riscv/kernel/irq.c
 create mode 100644 arch/riscv/kernel/reset.c
 create mode 100644 arch/riscv/kernel/setup.c
 create mode 100644 arch/riscv/kernel/smp.c
 create mode 100644 arch/riscv/kernel/smpboot.c
 create mode 100644 arch/riscv/kernel/time.c
 create mode 100644 arch/riscv/kernel/traps.c
 create mode 100644 arch/riscv/kernel/vdso.c
 create mode 100644 arch/riscv/mm/init.c

diff --git a/arch/riscv/include/asm/bug.h b/arch/riscv/include/asm/bug.h
new file mode 100644
index 000000000000..e2f690c20729
--- /dev/null
+++ b/arch/riscv/include/asm/bug.h
@@ -0,0 +1,88 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_BUG_H
+#define _ASM_RISCV_BUG_H
+
+#include <linux/compiler.h>
+#include <linux/const.h>
+#include <linux/types.h>
+
+#include <asm/asm.h>
+
+#ifdef CONFIG_GENERIC_BUG
+#define __BUG_INSN	_AC(0x00100073, UL) /* sbreak */
+
+#ifndef __ASSEMBLY__
+typedef u32 bug_insn_t;
+
+#ifdef CONFIG_GENERIC_BUG_RELATIVE_POINTERS
+#define __BUG_ENTRY_ADDR	INT " 1b - 2b"
+#define __BUG_ENTRY_FILE	INT " %0 - 2b"
+#else
+#define __BUG_ENTRY_ADDR	RISCV_PTR " 1b"
+#define __BUG_ENTRY_FILE	RISCV_PTR " %0"
+#endif
+
+#ifdef CONFIG_DEBUG_BUGVERBOSE
+#define __BUG_ENTRY			\
+	__BUG_ENTRY_ADDR "\n\t"		\
+	__BUG_ENTRY_FILE "\n\t"		\
+	SHORT " %1"
+#else
+#define __BUG_ENTRY			\
+	__BUG_ENTRY_ADDR
+#endif
+
+#define BUG()							\
+do {								\
+	__asm__ __volatile__ (					\
+		"1:\n\t"					\
+			"sbreak\n"				\
+			".pushsection __bug_table,\"a\"\n\t"	\
+		"2:\n\t"					\
+			__BUG_ENTRY "\n\t"			\
+			".org 2b + %2\n\t"			\
+			".popsection"				\
+		:						\
+		: "i" (__FILE__), "i" (__LINE__),		\
+		  "i" (sizeof(struct bug_entry)));		\
+	unreachable();						\
+} while (0)
+#endif /* !__ASSEMBLY__ */
+#else /* CONFIG_GENERIC_BUG */
+#ifndef __ASSEMBLY__
+#define BUG()							\
+do {								\
+	__asm__ __volatile__ ("sbreak\n");			\
+	unreachable();						\
+} while (0)
+#endif /* !__ASSEMBLY__ */
+#endif /* CONFIG_GENERIC_BUG */
+
+#define HAVE_ARCH_BUG
+
+#include <asm-generic/bug.h>
+
+#ifndef __ASSEMBLY__
+
+struct pt_regs;
+struct task_struct;
+
+extern void die(struct pt_regs *regs, const char *str);
+extern void do_trap(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, struct task_struct *tsk);
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_BUG_H */
diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h
new file mode 100644
index 000000000000..e8f0d1110d74
--- /dev/null
+++ b/arch/riscv/include/asm/cache.h
@@ -0,0 +1,22 @@
+/*
+ * Copyright (C) 2017 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CACHE_H
+#define _ASM_RISCV_CACHE_H
+
+#define L1_CACHE_SHIFT		6
+
+#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)
+
+#endif /* _ASM_RISCV_CACHE_H */
diff --git a/arch/riscv/include/asm/smp.h b/arch/riscv/include/asm/smp.h
new file mode 100644
index 000000000000..f61f8c25f95b
--- /dev/null
+++ b/arch/riscv/include/asm/smp.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SMP_H
+#define _ASM_RISCV_SMP_H
+
+#include <linux/cpumask.h>
+#include <linux/irqreturn.h>
+
+#ifdef CONFIG_SMP
+
+/* SMP initialization hook for setup_arch */
+void __init init_clockevent(void);
+
+/* SMP initialization hook for setup_arch */
+void __init setup_smp(void);
+
+/* Hook for the generic smp_call_function_many() routine. */
+void arch_send_call_function_ipi_mask(struct cpumask *mask);
+
+/* Hook for the generic smp_call_function_single() routine. */
+void arch_send_call_function_single_ipi(int cpu);
+
+#define raw_smp_processor_id() (current_thread_info()->cpu)
+
+/* Interprocessor interrupt handler */
+irqreturn_t handle_ipi(void);
+
+#endif /* CONFIG_SMP */
+
+#endif /* _ASM_RISCV_SMP_H */
diff --git a/arch/riscv/kernel/cacheinfo.c b/arch/riscv/kernel/cacheinfo.c
new file mode 100644
index 000000000000..10ed2749e246
--- /dev/null
+++ b/arch/riscv/kernel/cacheinfo.c
@@ -0,0 +1,105 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/cacheinfo.h>
+#include <linux/cpu.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+
+static void ci_leaf_init(struct cacheinfo *this_leaf,
+			 struct device_node *node,
+			 enum cache_type type, unsigned int level)
+{
+	this_leaf->of_node = node;
+	this_leaf->level = level;
+	this_leaf->type = type;
+	/* not a sector cache */
+	this_leaf->physical_line_partition = 1;
+	/* TODO: Add to DTS */
+	this_leaf->attributes =
+		CACHE_WRITE_BACK
+		| CACHE_READ_ALLOCATE
+		| CACHE_WRITE_ALLOCATE;
+}
+
+static int __init_cache_level(unsigned int cpu)
+{
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+	struct device_node *np = of_cpu_device_node_get(cpu);
+	int levels = 0, leaves = 0, level;
+
+	if (of_property_read_bool(np, "cache-size"))
+		++leaves;
+	if (of_property_read_bool(np, "i-cache-size"))
+		++leaves;
+	if (of_property_read_bool(np, "d-cache-size"))
+		++leaves;
+	if (leaves > 0)
+		levels = 1;
+
+	while ((np = of_find_next_cache_node(np))) {
+		if (!of_device_is_compatible(np, "cache"))
+			break;
+		if (of_property_read_u32(np, "cache-level", &level))
+			break;
+		if (level <= levels)
+			break;
+		if (of_property_read_bool(np, "cache-size"))
+			++leaves;
+		if (of_property_read_bool(np, "i-cache-size"))
+			++leaves;
+		if (of_property_read_bool(np, "d-cache-size"))
+			++leaves;
+		levels = level;
+	}
+
+	this_cpu_ci->num_levels = levels;
+	this_cpu_ci->num_leaves = leaves;
+	return 0;
+}
+
+static int __populate_cache_leaves(unsigned int cpu)
+{
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+	struct cacheinfo *this_leaf = this_cpu_ci->info_list;
+	struct device_node *np = of_cpu_device_node_get(cpu);
+	int levels = 1, level = 1;
+
+	if (of_property_read_bool(np, "cache-size"))
+		ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
+	if (of_property_read_bool(np, "i-cache-size"))
+		ci_leaf_init(this_leaf++, np, CACHE_TYPE_INST, level);
+	if (of_property_read_bool(np, "d-cache-size"))
+		ci_leaf_init(this_leaf++, np, CACHE_TYPE_DATA, level);
+
+	while ((np = of_find_next_cache_node(np))) {
+		if (!of_device_is_compatible(np, "cache"))
+			break;
+		if (of_property_read_u32(np, "cache-level", &level))
+			break;
+		if (level <= levels)
+			break;
+		if (of_property_read_bool(np, "cache-size"))
+			ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
+		if (of_property_read_bool(np, "i-cache-size"))
+			ci_leaf_init(this_leaf++, np, CACHE_TYPE_INST, level);
+		if (of_property_read_bool(np, "d-cache-size"))
+			ci_leaf_init(this_leaf++, np, CACHE_TYPE_DATA, level);
+		levels = level;
+	}
+
+	return 0;
+}
+
+DEFINE_SMP_CALL_CACHE_FUNCTION(init_cache_level)
+DEFINE_SMP_CALL_CACHE_FUNCTION(populate_cache_leaves)
diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
new file mode 100644
index 000000000000..20004bd7a216
--- /dev/null
+++ b/arch/riscv/kernel/cpu.c
@@ -0,0 +1,89 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/seq_file.h>
+#include <linux/of.h>
+
+/* Return -1 if not a valid hart */
+int riscv_of_processor_hart(struct device_node *node)
+{
+	const char *isa, *status;
+	u32 hart;
+
+	if (!of_device_is_compatible(node, "riscv"))
+		return -(ENODEV);
+	if (of_property_read_u32(node, "reg", &hart)
+	    || hart >= NR_CPUS)
+		return -(ENODEV);
+	if (of_property_read_string(node, "status", &status)
+	    || strcmp(status, "okay"))
+		return -(ENODEV);
+	if (of_property_read_string(node, "riscv,isa", &isa)
+	    || isa[0] != 'r'
+	    || isa[1] != 'v')
+		return -(ENODEV);
+
+	return hart;
+}
+
+#ifdef CONFIG_PROC_FS
+
+static void *c_start(struct seq_file *m, loff_t *pos)
+{
+	*pos = cpumask_next(*pos - 1, cpu_online_mask);
+	if ((*pos) < nr_cpu_ids)
+		return (void *)(uintptr_t)(1 + *pos);
+	return NULL;
+}
+
+static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+{
+	(*pos)++;
+	return c_start(m, pos);
+}
+
+static void c_stop(struct seq_file *m, void *v)
+{
+}
+
+static int c_show(struct seq_file *m, void *v)
+{
+	unsigned long hart_id = (unsigned long)v - 1;
+	struct device_node *node = of_get_cpu_node(hart_id, NULL);
+	const char *compat, *isa, *mmu;
+
+	seq_printf(m, "hart\t: %lu\n", hart_id);
+	if (!of_property_read_string(node, "riscv,isa", &isa)
+	    && isa[0] == 'r'
+	    && isa[1] == 'v')
+		seq_printf(m, "isa\t: %s\n", isa);
+	if (!of_property_read_string(node, "mmu-type", &mmu)
+	    && !strncmp(mmu, "riscv,", 6))
+		seq_printf(m, "mmu\t: %s\n", mmu+6);
+	if (!of_property_read_string(node, "compatible", &compat)
+	    && strcmp(compat, "riscv"))
+		seq_printf(m, "uarch\t: %s\n", compat);
+	seq_puts(m, "\n");
+
+	return 0;
+}
+
+const struct seq_operations cpuinfo_op = {
+	.start	= c_start,
+	.next	= c_next,
+	.stop	= c_stop,
+	.show	= c_show
+};
+
+#endif /* CONFIG_PROC_FS */
diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
new file mode 100644
index 000000000000..608e57d4531f
--- /dev/null
+++ b/arch/riscv/kernel/head.S
@@ -0,0 +1,147 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+#include <asm/asm.h>
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <asm/thread_info.h>
+#include <asm/page.h>
+#include <asm/csr.h>
+
+__INIT
+ENTRY(_start)
+	/* Mask all interrupts */
+	csrw sie, zero
+
+	/* Disable FPU to detect illegal usage of
+	   floating point in kernel space */
+	li t0, SR_FS
+	csrc sstatus, t0
+
+#ifndef CONFIG_RV_PUM
+	/* Allow access to user memory */
+	li t0, SR_SUM
+	csrs sstatus, t0
+#endif
+
+#ifdef CONFIG_ISA_A
+	/* Pick one hart to run the main boot sequence */
+	la a3, hart_lottery
+	li a2, 1
+	amoadd.w a3, a2, (a3)
+	bnez a3, .Lsecondary_start
+#else
+	/* We don't have atomic support, so the boot hart must be picked
+	 * statically.  Hart 0 is the only sane choice.
+	 */
+	bnez a0, .Lsecondary_park
+#endif
+
+	/* Save hart ID and DTB physical address */
+	mv s0, a0
+	mv s1, a1
+
+	/* Initialize page tables and relocate to virtual addresses */
+	la sp, init_thread_union + THREAD_SIZE
+	call setup_vm
+	call relocate
+
+	/* Restore C environment */
+	la tp, init_task
+
+	la sp, init_thread_union
+	li a0, ASM_THREAD_SIZE
+	add sp, sp, a0
+
+	/* Start the kernel */
+	mv a0, s0
+	mv a1, s1
+	call sbi_save
+	tail start_kernel
+
+relocate:
+	/* Relocate return address */
+	li a1, PAGE_OFFSET
+	la a0, _start
+	sub a1, a1, a0
+	add ra, ra, a1
+
+	/* Point stvec to virtual address of instruction after sptbr write */
+	la a0, 1f
+	add a0, a0, a1
+	csrw stvec, a0
+
+	/* Compute sptbr for kernel page tables, but don't load it yet */
+	la a2, swapper_pg_dir
+	srl a2, a2, PAGE_SHIFT
+	li a1, SPTBR_MODE
+	or a2, a2, a1
+
+	/* Load trampoline page directory, which will cause us to trap to
+	   stvec if VA != PA, or simply fall through if VA == PA */
+	la a0, trampoline_pg_dir
+	srl a0, a0, PAGE_SHIFT
+	or a0, a0, a1
+	sfence.vma
+	csrw sptbr, a0
+1:
+	/* Set trap vector to spin forever to help debug */
+	la a0, .Lsecondary_park
+	csrw stvec, a0
+
+	/* Load the global pointer */
+	la gp, __global_pointer$
+
+	/* Switch to kernel page tables */
+	csrw sptbr, a2
+
+	ret
+
+.Lsecondary_start:
+#ifdef CONFIG_SMP
+	li a1, CONFIG_NR_CPUS
+	bgeu a0, a1, .Lsecondary_park
+
+	la a1, __cpu_up_stack_pointer
+	slli a0, a0, LGREG
+	add a0, a0, a1
+
+.Lwait_for_cpu_up:
+	REG_L sp, (a0)
+	beqz sp, .Lwait_for_cpu_up
+	fence
+
+	/* Enable virtual memory and relocate to virtual address */
+	call relocate
+
+	/* Initialize task_struct pointer */
+	li tp, -THREAD_SIZE
+	add tp, tp, sp
+
+	tail smp_callin
+#endif
+
+.Lsecondary_park:
+	/* We lack SMP support or have too many harts, so park this hart */
+	wfi
+	j .Lsecondary_park
+END(_start)
+
+__PAGE_ALIGNED_BSS
+	/* Empty zero page */
+	.balign PAGE_SIZE
+ENTRY(empty_zero_page)
+	.fill (empty_zero_page + PAGE_SIZE) - ., 1, 0x00
+END(empty_zero_page)
diff --git a/arch/riscv/kernel/irq.c b/arch/riscv/kernel/irq.c
new file mode 100644
index 000000000000..737d7cce2c6d
--- /dev/null
+++ b/arch/riscv/kernel/irq.c
@@ -0,0 +1,20 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/irqchip.h>
+
+void __init init_IRQ(void)
+{
+	irqchip_init();
+}
diff --git a/arch/riscv/kernel/reset.c b/arch/riscv/kernel/reset.c
new file mode 100644
index 000000000000..2a53d26ffdd6
--- /dev/null
+++ b/arch/riscv/kernel/reset.c
@@ -0,0 +1,36 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/reboot.h>
+#include <linux/export.h>
+#include <asm/sbi.h>
+
+void (*pm_power_off)(void) = machine_power_off;
+EXPORT_SYMBOL(pm_power_off);
+
+void machine_restart(char *cmd)
+{
+	do_kernel_restart(cmd);
+	while (1);
+}
+
+void machine_halt(void)
+{
+	machine_power_off();
+}
+
+void machine_power_off(void)
+{
+	sbi_shutdown();
+	while (1);
+}
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
new file mode 100644
index 000000000000..9ed70e84d74e
--- /dev/null
+++ b/arch/riscv/kernel/setup.c
@@ -0,0 +1,240 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/memblock.h>
+#include <linux/sched.h>
+#include <linux/initrd.h>
+#include <linux/console.h>
+#include <linux/screen_info.h>
+#include <linux/of_fdt.h>
+#include <linux/of_platform.h>
+#include <linux/sched/task.h>
+
+#include <asm/setup.h>
+#include <asm/sections.h>
+#include <asm/pgtable.h>
+#include <asm/smp.h>
+#include <asm/sbi.h>
+#include <asm/tlbflush.h>
+#include <asm/thread_info.h>
+
+#ifdef CONFIG_DUMMY_CONSOLE
+struct screen_info screen_info = {
+	.orig_video_lines	= 30,
+	.orig_video_cols	= 80,
+	.orig_video_mode	= 0,
+	.orig_video_ega_bx	= 0,
+	.orig_video_isVGA	= 1,
+	.orig_video_points	= 8
+};
+#endif
+
+#ifdef CONFIG_CMDLINE_BOOL
+static char __initdata builtin_cmdline[COMMAND_LINE_SIZE] = CONFIG_CMDLINE;
+#endif /* CONFIG_CMDLINE_BOOL */
+
+unsigned long va_pa_offset;
+unsigned long pfn_base;
+
+/* The lucky hart to first increment this variable will boot the other cores */
+atomic_t hart_lottery;
+
+#ifdef CONFIG_BLK_DEV_INITRD
+static void __init setup_initrd(void)
+{
+	extern char __initramfs_start[];
+	extern unsigned long __initramfs_size;
+	unsigned long size;
+
+	if (__initramfs_size > 0) {
+		initrd_start = (unsigned long)(&__initramfs_start);
+		initrd_end = initrd_start + __initramfs_size;
+	}
+
+	if (initrd_start >= initrd_end) {
+		printk(KERN_INFO "initrd not found or empty");
+		goto disable;
+	}
+	if (__pa(initrd_end) > PFN_PHYS(max_low_pfn)) {
+		printk(KERN_ERR "initrd extends beyond end of memory");
+		goto disable;
+	}
+
+	size =  initrd_end - initrd_start;
+	memblock_reserve(__pa(initrd_start), size);
+	initrd_below_start_ok = 1;
+
+	printk(KERN_INFO "Initial ramdisk at: 0x%p (%lu bytes)\n",
+		(void *)(initrd_start), size);
+	return;
+disable:
+	printk(KERN_CONT " - disabling initrd\n");
+	initrd_start = 0;
+	initrd_end = 0;
+}
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+pgd_t trampoline_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
+
+#ifndef __PAGETABLE_PMD_FOLDED
+#define NUM_SWAPPER_PMDS ((uintptr_t)-PAGE_OFFSET >> PGDIR_SHIFT)
+pmd_t swapper_pmd[PTRS_PER_PMD*((-PAGE_OFFSET)/PGDIR_SIZE)] __page_aligned_bss;
+pmd_t trampoline_pmd[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
+#endif
+
+asmlinkage void __init setup_vm(void)
+{
+	extern char _start;
+	uintptr_t i;
+	uintptr_t pa = (uintptr_t) &_start;
+	pgprot_t prot = __pgprot(pgprot_val(PAGE_KERNEL) | _PAGE_EXEC);
+
+	va_pa_offset = PAGE_OFFSET - pa;
+	pfn_base = PFN_DOWN(pa);
+
+	/* Sanity check alignment and size */
+	BUG_ON((PAGE_OFFSET % PGDIR_SIZE) != 0);
+	BUG_ON((pa % (PAGE_SIZE * PTRS_PER_PTE)) != 0);
+
+#ifndef __PAGETABLE_PMD_FOLDED
+	trampoline_pg_dir[(PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD] =
+		pfn_pgd(PFN_DOWN((uintptr_t)trampoline_pmd),
+			__pgprot(_PAGE_TABLE));
+	trampoline_pmd[0] = pfn_pmd(PFN_DOWN(pa), prot);
+
+	for (i = 0; i < (-PAGE_OFFSET)/PGDIR_SIZE; ++i) {
+		size_t o = (PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD + i;
+		swapper_pg_dir[o] =
+			pfn_pgd(PFN_DOWN((uintptr_t)swapper_pmd) + i,
+				__pgprot(_PAGE_TABLE));
+	}
+	for (i = 0; i < ARRAY_SIZE(swapper_pmd); i++)
+		swapper_pmd[i] = pfn_pmd(PFN_DOWN(pa + i * PMD_SIZE), prot);
+#else
+	trampoline_pg_dir[(PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD] =
+		pfn_pgd(PFN_DOWN(pa), prot);
+
+	for (i = 0; i < (-PAGE_OFFSET)/PGDIR_SIZE; ++i) {
+		size_t o = (PAGE_OFFSET >> PGDIR_SHIFT) % PTRS_PER_PGD + i;
+		swapper_pg_dir[o] =
+			pfn_pgd(PFN_DOWN(pa + i * PGDIR_SIZE), prot);
+	}
+#endif
+}
+
+void __init sbi_save(unsigned int hartid, void *dtb)
+{
+	early_init_dt_scan(__va(dtb));
+}
+
+/* Allow the user to manually add a memory region (in case DTS is broken); "mem_end=nn[KkMmGg]" */
+static int __init mem_end_override(char *p)
+{
+	resource_size_t base, end;
+
+	if (!p)
+		return -EINVAL;
+	base = (uintptr_t) __pa(PAGE_OFFSET);
+	end = memparse(p, &p) & PMD_MASK;
+	if (end == 0)
+		return -EINVAL;
+	memblock_add(base, end - base);
+	return 0;
+}
+early_param("mem_end", mem_end_override);
+
+static void __init setup_bootmem(void)
+{
+	struct memblock_region *reg;
+	phys_addr_t mem_size = 0;
+
+	/* Find the memory region containing the kernel */
+	for_each_memblock(memory, reg) {
+		phys_addr_t vmlinux_end = __pa(_end);
+		phys_addr_t end = reg->base + reg->size;
+
+		if (reg->base <= vmlinux_end && vmlinux_end <= end) {
+			/* Reserve from the start of the region to the end of
+			 * the kernel
+			 */
+			memblock_reserve(reg->base, vmlinux_end - reg->base);
+			mem_size = min(reg->size, (phys_addr_t)-PAGE_OFFSET);
+		}
+	}
+	BUG_ON(mem_size == 0);
+
+	set_max_mapnr(PFN_DOWN(mem_size));
+	max_low_pfn = pfn_base + PFN_DOWN(mem_size);
+
+#ifdef CONFIG_BLK_DEV_INITRD
+	setup_initrd();
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+	early_init_fdt_reserve_self();
+	early_init_fdt_scan_reserved_mem();
+	memblock_allow_resize();
+	memblock_dump_all();
+}
+
+void __init setup_arch(char **cmdline_p)
+{
+#ifdef CONFIG_CMDLINE_BOOL
+#ifdef CONFIG_CMDLINE_OVERRIDE
+	strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+#else
+	if (builtin_cmdline[0] != '\0') {
+		/* Append bootloader command line to built-in */
+		strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
+		strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+		strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+	}
+#endif /* CONFIG_CMDLINE_OVERRIDE */
+#endif /* CONFIG_CMDLINE_BOOL */
+	*cmdline_p = boot_command_line;
+
+	parse_early_param();
+
+	init_mm.start_code = (unsigned long) _stext;
+	init_mm.end_code   = (unsigned long) _etext;
+	init_mm.end_data   = (unsigned long) _edata;
+	init_mm.brk        = (unsigned long) _end;
+
+	setup_bootmem();
+	paging_init();
+	unflatten_device_tree();
+
+#ifdef CONFIG_SMP
+	setup_smp();
+#endif
+
+#ifdef CONFIG_DUMMY_CONSOLE
+	conswitchp = &dummy_con;
+#endif
+}
+
+static int __init riscv_device_init(void)
+{
+	return of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
+}
+subsys_initcall_sync(riscv_device_init);
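
A note on the "mem_end=" parameter handled above (the example value is
illustrative; the right number depends on the platform's memory map): it takes
a physical end address, so booting with mem_end=0x88000000 would register all
memory from __pa(PAGE_OFFSET) up to that address, rounded down to a PMD
boundary, for the case where the device tree's memory node is missing or
broken.
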
diff --git a/arch/riscv/kernel/smp.c b/arch/riscv/kernel/smp.c
new file mode 100644
index 000000000000..b4a71ec5906f
--- /dev/null
+++ b/arch/riscv/kernel/smp.c
@@ -0,0 +1,110 @@
+/*
+ * SMP initialisation and IPI support
+ * Based on arch/arm64/kernel/smp.c
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/interrupt.h>
+#include <linux/smp.h>
+#include <linux/sched.h>
+
+#include <asm/sbi.h>
+#include <asm/tlbflush.h>
+#include <asm/cacheflush.h>
+
+/* A collection of single bit ipi messages.  */
+static struct {
+	unsigned long bits ____cacheline_aligned;
+} ipi_data[NR_CPUS] __cacheline_aligned;
+
+enum ipi_message_type {
+	IPI_RESCHEDULE,
+	IPI_CALL_FUNC,
+	IPI_MAX
+};
+
+irqreturn_t handle_ipi(void)
+{
+	unsigned long *pending_ipis = &ipi_data[smp_processor_id()].bits;
+
+	/* Clear pending IPI */
+	csr_clear(sip, SIE_SSIE);
+
+	while (true) {
+		unsigned long ops;
+
+		/* Order bit clearing and data access. */
+		mb();
+
+		ops = xchg(pending_ipis, 0);
+		if (ops == 0)
+			return IRQ_HANDLED;
+
+		if (ops & (1 << IPI_RESCHEDULE))
+			scheduler_ipi();
+
+		if (ops & (1 << IPI_CALL_FUNC))
+			generic_smp_call_function_interrupt();
+
+		BUG_ON((ops >> IPI_MAX) != 0);
+
+		/* Order data access and bit testing. */
+		mb();
+	}
+
+	return IRQ_HANDLED;
+}
+
+static void
+send_ipi_message(const struct cpumask *to_whom, enum ipi_message_type operation)
+{
+	int i;
+
+	mb();
+	for_each_cpu(i, to_whom)
+		set_bit(operation, &ipi_data[i].bits);
+
+	mb();
+	sbi_send_ipi(cpumask_bits(to_whom));
+}
+
+void arch_send_call_function_ipi_mask(struct cpumask *mask)
+{
+	send_ipi_message(mask, IPI_CALL_FUNC);
+}
+
+void arch_send_call_function_single_ipi(int cpu)
+{
+	send_ipi_message(cpumask_of(cpu), IPI_CALL_FUNC);
+}
+
+static void ipi_stop(void *unused)
+{
+	while (1)
+		wait_for_interrupt();
+}
+
+void smp_send_stop(void)
+{
+	on_each_cpu(ipi_stop, NULL, 1);
+}
+
+void smp_send_reschedule(int cpu)
+{
+	send_ipi_message(cpumask_of(cpu), IPI_RESCHEDULE);
+}
diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
new file mode 100644
index 000000000000..7043bbbfbc1e
--- /dev/null
+++ b/arch/riscv/kernel/smpboot.c
@@ -0,0 +1,103 @@
+/*
+ * SMP initialisation and IPI support
+ * Based on arch/arm64/kernel/smp.c
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/kernel_stat.h>
+#include <linux/notifier.h>
+#include <linux/cpu.h>
+#include <linux/percpu.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/irq.h>
+#include <linux/of.h>
+#include <linux/sched/task_stack.h>
+#include <asm/mmu_context.h>
+#include <asm/tlbflush.h>
+#include <asm/sections.h>
+#include <asm/sbi.h>
+
+void *__cpu_up_stack_pointer[NR_CPUS];
+
+void __init smp_prepare_boot_cpu(void)
+{
+}
+
+void __init smp_prepare_cpus(unsigned int max_cpus)
+{
+}
+
+void __init setup_smp(void)
+{
+	struct device_node *dn = NULL;
+	int hart, im_okay_therefore_i_am = 0;
+
+	while ((dn = of_find_node_by_type(dn, "cpu"))) {
+		hart = riscv_of_processor_hart(dn);
+		if (hart >= 0) {
+			set_cpu_possible(hart, true);
+			set_cpu_present(hart, true);
+			if (hart == smp_processor_id()) {
+				BUG_ON(im_okay_therefore_i_am);
+				im_okay_therefore_i_am = 1;
+			}
+		}
+	}
+
+	BUG_ON(!im_okay_therefore_i_am);
+}
+
+int __cpu_up(unsigned int cpu, struct task_struct *tidle)
+{
+	/* Signal cpu to start */
+	mb();
+	__cpu_up_stack_pointer[cpu] = task_stack_page(tidle) + THREAD_SIZE;
+
+	while (!cpu_online(cpu))
+		cpu_relax();
+
+	return 0;
+}
+
+void __init smp_cpus_done(unsigned int max_cpus)
+{
+}
+
+/*
+ * C entry point for a secondary processor.
+ */
+asmlinkage void __init smp_callin(void)
+{
+	struct mm_struct *mm = &init_mm;
+
+	/* All kernel threads share the same mm context.  */
+	atomic_inc(&mm->mm_count);
+	current->active_mm = mm;
+
+	trap_init();
+	init_clockevent();
+	notify_cpu_starting(smp_processor_id());
+	set_cpu_online(smp_processor_id(), 1);
+	local_flush_tlb_all();
+	local_irq_enable();
+	preempt_disable();
+	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+}
diff --git a/arch/riscv/kernel/time.c b/arch/riscv/kernel/time.c
new file mode 100644
index 000000000000..6bba2e6f2a84
--- /dev/null
+++ b/arch/riscv/kernel/time.c
@@ -0,0 +1,61 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/clocksource.h>
+#include <linux/clockchips.h>
+#include <linux/delay.h>
+#include <linux/timer_riscv.h>
+
+#include <asm/sbi.h>
+
+unsigned long riscv_timebase;
+
+static int next_event(unsigned long delta, struct clock_event_device *ce)
+{
+	BUG_ON(ce != timer_riscv_device(smp_processor_id()));
+	sbi_set_timer(get_cycles64() + delta);
+	return 0;
+}
+
+static unsigned long long rdtime(struct clocksource *cs)
+{
+	BUG_ON(cs != timer_riscv_source(smp_processor_id()));
+	return get_cycles64();
+}
+
+void riscv_timer_interrupt(void)
+{
+	int cpu = smp_processor_id();
+	struct clock_event_device *evdev = timer_riscv_device(cpu);
+
+	evdev->event_handler(evdev);
+}
+
+void __init time_init(void)
+{
+	struct device_node *cpu;
+	u32 prop;
+	int cpu_id = smp_processor_id();
+
+	cpu = of_find_node_by_path("/cpus");
+	if (!cpu || of_property_read_u32(cpu, "timebase-frequency", &prop))
+		panic("RISC-V system with no 'timebase-frequency' in DTS\n");
+	riscv_timebase = prop;
+
+	lpj_fine = riscv_timebase / HZ;
+
+	timer_riscv_init(cpu_id, riscv_timebase, &rdtime, &next_event);
+
+	csr_set(sie, SIE_STIE);
+}
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
new file mode 100644
index 000000000000..3a4698199ecf
--- /dev/null
+++ b/arch/riscv/kernel/traps.c
@@ -0,0 +1,183 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
+#include <linux/sched/signal.h>
+#include <linux/signal.h>
+#include <linux/kdebug.h>
+#include <linux/uaccess.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/irq.h>
+
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/csr.h>
+
+int show_unhandled_signals = 1;
+
+extern asmlinkage void handle_exception(void);
+
+static DEFINE_SPINLOCK(die_lock);
+
+void die(struct pt_regs *regs, const char *str)
+{
+	static int die_counter;
+	int ret;
+
+	oops_enter();
+
+	spin_lock_irq(&die_lock);
+	console_verbose();
+	bust_spinlocks(1);
+
+	pr_emerg("%s [#%d]\n", str, ++die_counter);
+	print_modules();
+	show_regs(regs);
+
+	ret = notify_die(DIE_OOPS, str, regs, 0, regs->scause, SIGSEGV);
+
+	bust_spinlocks(0);
+	add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
+	spin_unlock_irq(&die_lock);
+	oops_exit();
+
+	if (in_interrupt())
+		panic("Fatal exception in interrupt");
+	if (panic_on_oops)
+		panic("Fatal exception");
+	if (ret != NOTIFY_STOP)
+		do_exit(SIGSEGV);
+}
+
+static inline void do_trap_siginfo(int signo, int code,
+	unsigned long addr, struct task_struct *tsk)
+{
+	siginfo_t info;
+
+	info.si_signo = signo;
+	info.si_errno = 0;
+	info.si_code = code;
+	info.si_addr = (void __user *)addr;
+	force_sig_info(signo, &info, tsk);
+}
+
+void do_trap(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, struct task_struct *tsk)
+{
+	if (show_unhandled_signals && unhandled_signal(tsk, signo)
+	    && printk_ratelimit()) {
+		pr_info("%s[%d]: unhandled signal %d code 0x%x at 0x" REG_FMT,
+			tsk->comm, task_pid_nr(tsk), signo, code, addr);
+		print_vma_addr(KERN_CONT " in ", GET_IP(regs));
+		pr_cont("\n");
+		show_regs(regs);
+	}
+
+	do_trap_siginfo(signo, code, addr, tsk);
+}
+
+static void do_trap_error(struct pt_regs *regs, int signo, int code,
+	unsigned long addr, const char *str)
+{
+	if (user_mode(regs)) {
+		do_trap(regs, signo, code, addr, current);
+	} else {
+		if (!fixup_exception(regs))
+			die(regs, str);
+	}
+}
+
+#define DO_ERROR_INFO(name, signo, code, str)				\
+asmlinkage void name(struct pt_regs *regs)				\
+{									\
+	do_trap_error(regs, signo, code, regs->sepc, "Oops - " str);	\
+}
+
+DO_ERROR_INFO(do_trap_unknown,
+	SIGILL, ILL_ILLTRP, "unknown exception");
+DO_ERROR_INFO(do_trap_insn_misaligned,
+	SIGBUS, BUS_ADRALN, "instruction address misaligned");
+DO_ERROR_INFO(do_trap_insn_fault,
+	SIGBUS, BUS_ADRALN, "instruction access fault");
+DO_ERROR_INFO(do_trap_insn_illegal,
+	SIGILL, ILL_ILLOPC, "illegal instruction");
+DO_ERROR_INFO(do_trap_load_misaligned,
+	SIGBUS, BUS_ADRALN, "load address misaligned");
+DO_ERROR_INFO(do_trap_load_fault,
+	SIGSEGV, SEGV_ACCERR, "load access fault");
+DO_ERROR_INFO(do_trap_store_misaligned,
+	SIGBUS, BUS_ADRALN, "store (or AMO) address misaligned");
+DO_ERROR_INFO(do_trap_store_fault,
+	SIGSEGV, SEGV_ACCERR, "store (or AMO) access fault");
+DO_ERROR_INFO(do_trap_ecall_u,
+	SIGILL, ILL_ILLTRP, "environment call from U-mode");
+DO_ERROR_INFO(do_trap_ecall_s,
+	SIGILL, ILL_ILLTRP, "environment call from S-mode");
+DO_ERROR_INFO(do_trap_ecall_m,
+	SIGILL, ILL_ILLTRP, "environment call from M-mode");
+
+asmlinkage void do_trap_break(struct pt_regs *regs)
+{
+#ifdef CONFIG_GENERIC_BUG
+	if (!user_mode(regs)) {
+		enum bug_trap_type type;
+
+		type = report_bug(regs->sepc, regs);
+		switch (type) {
+		case BUG_TRAP_TYPE_NONE:
+			break;
+		case BUG_TRAP_TYPE_WARN:
+			regs->sepc += sizeof(bug_insn_t);
+			return;
+		case BUG_TRAP_TYPE_BUG:
+			die(regs, "Kernel BUG");
+		}
+	}
+#endif /* CONFIG_GENERIC_BUG */
+
+	do_trap_siginfo(SIGTRAP, TRAP_BRKPT, regs->sepc, current);
+	regs->sepc += 0x4;
+}
+
+#ifdef CONFIG_GENERIC_BUG
+int is_valid_bugaddr(unsigned long pc)
+{
+	bug_insn_t insn;
+
+	if (pc < PAGE_OFFSET)
+		return 0;
+	if (probe_kernel_address((bug_insn_t __user *)pc, insn))
+		return 0;
+	return (insn == __BUG_INSN);
+}
+#endif /* CONFIG_GENERIC_BUG */
+
+void __init trap_init(void)
+{
+	int hart = smp_processor_id();
+
+	/* Set sup0 scratch register to 0, indicating to exception vector
+	 * that we are presently executing in the kernel
+	 */
+	csr_write(sscratch, 0);
+	/* Set the exception vector address */
+	csr_write(stvec, &handle_exception);
+	/* Enable software interrupts and setup initial mask */
+	csr_write(sie,
+		  SIE_SSIE | atomic_long_read(&per_cpu(riscv_early_sie, hart))
+		);
+}
diff --git a/arch/riscv/kernel/vdso.c b/arch/riscv/kernel/vdso.c
new file mode 100644
index 000000000000..e8a178df8144
--- /dev/null
+++ b/arch/riscv/kernel/vdso.c
@@ -0,0 +1,125 @@
+/*
+ * Copyright (C) 2004 Benjamin Herrenschmidt, IBM Corp.
+ *                    <benh@kernel.crashing.org>
+ * Copyright (C) 2012 ARM Limited
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/binfmts.h>
+#include <linux/err.h>
+
+#include <asm/vdso.h>
+
+extern char vdso_start[], vdso_end[];
+
+static unsigned int vdso_pages;
+static struct page **vdso_pagelist;
+
+/*
+ * The vDSO data page.
+ */
+static union {
+	struct vdso_data	data;
+	u8			page[PAGE_SIZE];
+} vdso_data_store __page_aligned_data;
+struct vdso_data *vdso_data = &vdso_data_store.data;
+
+static int __init vdso_init(void)
+{
+	unsigned int i;
+
+	vdso_pages = (vdso_end - vdso_start) >> PAGE_SHIFT;
+	vdso_pagelist =
+		kcalloc(vdso_pages + 1, sizeof(struct page *), GFP_KERNEL);
+	if (unlikely(vdso_pagelist == NULL)) {
+		pr_err("vdso: pagelist allocation failed\n");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < vdso_pages; i++) {
+		struct page *pg;
+
+		pg = virt_to_page(vdso_start + (i << PAGE_SHIFT));
+		ClearPageReserved(pg);
+		vdso_pagelist[i] = pg;
+	}
+	vdso_pagelist[i] = virt_to_page(vdso_data);
+
+	return 0;
+}
+arch_initcall(vdso_init);
+
+int arch_setup_additional_pages(struct linux_binprm *bprm,
+	int uses_interp)
+{
+	struct mm_struct *mm = current->mm;
+	unsigned long vdso_base, vdso_len;
+	int ret;
+
+	vdso_len = (vdso_pages + 1) << PAGE_SHIFT;
+
+	down_write(&mm->mmap_sem);
+	vdso_base = get_unmapped_area(NULL, 0, vdso_len, 0, 0);
+	if (unlikely(IS_ERR_VALUE(vdso_base))) {
+		ret = vdso_base;
+		goto end;
+	}
+
+	/*
+	 * Put vDSO base into mm struct. We need to do this before calling
+	 * install_special_mapping or the perf counter mmap tracking code
+	 * will fail to recognise it as a vDSO (since arch_vma_name fails).
+	 */
+	mm->context.vdso = (void *)vdso_base;
+
+	ret = install_special_mapping(mm, vdso_base, vdso_len,
+		(VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC),
+		vdso_pagelist);
+
+	if (unlikely(ret))
+		mm->context.vdso = NULL;
+
+end:
+	up_write(&mm->mmap_sem);
+	return ret;
+}
+
+const char *arch_vma_name(struct vm_area_struct *vma)
+{
+	if (vma->vm_mm && (vma->vm_start == (long)vma->vm_mm->context.vdso))
+		return "[vdso]";
+	return NULL;
+}
+
+/*
+ * Function stubs to prevent linker errors when AT_SYSINFO_EHDR is defined
+ */
+
+int in_gate_area_no_mm(unsigned long addr)
+{
+	return 0;
+}
+
+int in_gate_area(struct mm_struct *mm, unsigned long addr)
+{
+	return 0;
+}
+
+struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
+{
+	return NULL;
+}
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
new file mode 100644
index 000000000000..9f4bee5e51fd
--- /dev/null
+++ b/arch/riscv/mm/init.c
@@ -0,0 +1,70 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/bootmem.h>
+#include <linux/initrd.h>
+#include <linux/memblock.h>
+#include <linux/swap.h>
+
+#include <asm/tlbflush.h>
+#include <asm/sections.h>
+#include <asm/pgtable.h>
+#include <asm/io.h>
+
+static void __init zone_sizes_init(void)
+{
+	unsigned long zones_size[MAX_NR_ZONES];
+
+	memset(zones_size, 0, sizeof(zones_size));
+	zones_size[ZONE_NORMAL] = max_mapnr;
+	free_area_init_node(0, zones_size, pfn_base, NULL);
+}
+
+void setup_zero_page(void)
+{
+	memset((void *)empty_zero_page, 0, PAGE_SIZE);
+}
+
+void __init paging_init(void)
+{
+	init_mm.pgd = (pgd_t *)pfn_to_virt(csr_read(sptbr));
+
+	setup_zero_page();
+	local_flush_tlb_all();
+	zone_sizes_init();
+}
+
+void __init mem_init(void)
+{
+#ifdef CONFIG_FLATMEM
+	BUG_ON(!mem_map);
+#endif /* CONFIG_FLATMEM */
+
+	high_memory = (void *)(__va(PFN_PHYS(max_low_pfn)));
+	free_all_bootmem();
+
+	mem_init_print_info(NULL);
+}
+
+void free_initmem(void)
+{
+	free_initmem_default(0);
+}
+
+#ifdef CONFIG_BLK_DEV_INITRD
+void free_initrd_mem(unsigned long start, unsigned long end)
+{
+}
+#endif /* CONFIG_BLK_DEV_INITRD */
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 2/9] RISC-V: Atomic and Locking Code
  2017-06-28 18:55     ` Palmer Dabbelt
@ 2017-06-28 18:55       ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This contains all the code that directly interfaces with the RISC-V
memory model.  While this code conforms to the current RISC-V ISA
specifications (user 2.2 and priv 1.10), the memory model is somewhat
underspecified in those documents.  There is a working group that hopes
to produce a formal memory model by the end of the year, but my
understanding is that the basic definitions we're relying on here won't
change significantly.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
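A note for reviewers (hand-expanded here as a reading aid, so treat it as
illustrative rather than authoritative): the fetch-free atomics generated by
ATOMIC_OP() in atomic.h below boil down to a single AMO with no ordering
bits, e.g. ATOMIC_OP(add, add, +, i, w, int, ) expands to roughly:

  static __always_inline void atomic_add(int i, atomic_t *v)
  {
  	__asm__ __volatile__ (
  		"amoadd.w zero, %1, %0"
  		: "+A" (v->counter)
  		: "r" (i));
  }
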
 arch/riscv/include/asm/atomic.h         | 330 ++++++++++++++++++++++++++++++++
 arch/riscv/include/asm/barrier.h        | 139 ++++++++++++++
 arch/riscv/include/asm/bitops.h         | 228 ++++++++++++++++++++++
 arch/riscv/include/asm/cacheflush.h     |  39 ++++
 arch/riscv/include/asm/cmpxchg.h        | 138 +++++++++++++
 arch/riscv/include/asm/io.h             | 178 +++++++++++++++++
 arch/riscv/include/asm/spinlock.h       | 167 ++++++++++++++++
 arch/riscv/include/asm/spinlock_types.h |  33 ++++
 arch/riscv/include/asm/tlb.h            |  24 +++
 arch/riscv/include/asm/tlbflush.h       |  64 +++++++
 10 files changed, 1340 insertions(+)
 create mode 100644 arch/riscv/include/asm/atomic.h
 create mode 100644 arch/riscv/include/asm/barrier.h
 create mode 100644 arch/riscv/include/asm/bitops.h
 create mode 100644 arch/riscv/include/asm/cacheflush.h
 create mode 100644 arch/riscv/include/asm/cmpxchg.h
 create mode 100644 arch/riscv/include/asm/io.h
 create mode 100644 arch/riscv/include/asm/spinlock.h
 create mode 100644 arch/riscv/include/asm/spinlock_types.h
 create mode 100644 arch/riscv/include/asm/tlb.h
 create mode 100644 arch/riscv/include/asm/tlbflush.h

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
new file mode 100644
index 000000000000..7c5bfd18fe4e
--- /dev/null
+++ b/arch/riscv/include/asm/atomic.h
@@ -0,0 +1,330 @@
+/*
+ * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+
+#ifndef _ASM_RISCV_ATOMIC_H
+#define _ASM_RISCV_ATOMIC_H
+
+#ifdef CONFIG_GENERIC_ATOMIC64
+# include <asm-generic/atomic64.h>
+#else
+# if (__riscv_xlen < 64)
+#  error "64-bit atomics require XLEN to be at least 64"
+# endif
+#endif
+
+#ifdef CONFIG_ISA_A
+
+#include <asm/cmpxchg.h>
+#include <asm/barrier.h>
+
+#define ATOMIC_INIT(i)	{ (i) }
+static __always_inline int atomic_read(const atomic_t *v)
+{
+	return READ_ONCE(v->counter);
+}
+static __always_inline void atomic_set(atomic_t *v, int i)
+{
+	WRITE_ONCE(v->counter, i);
+}
+
+#ifndef CONFIG_GENERIC_ATOMIC64
+#define ATOMIC64_INIT(i) { (i) }
+static __always_inline long atomic64_read(const atomic64_t *v)
+{
+	return READ_ONCE(v->counter);
+}
+static __always_inline void atomic64_set(atomic64_t *v, long i)
+{
+	WRITE_ONCE(v->counter, i);
+}
+#endif
+
+/* First, the atomic ops that have no ordering constraints and therefore don't
+ * have the AQ or RL bits set.  These don't return anything, so there's only
+ * one version to worry about.
+ */
+#define ATOMIC_OP(op, asm_op, c_op, I, asm_type, c_type, prefix)				\
+static __always_inline void atomic##prefix##_##op(c_type i, atomic##prefix##_t *v)		\
+{												\
+	__asm__ __volatile__ (									\
+		"amo" #asm_op "." #asm_type " zero, %1, %0"					\
+		: "+A" (v->counter)								\
+		: "r" (I));									\
+}
+
+#ifdef CONFIG_GENERIC_ATOMIC64
+#define ATOMIC_OPS(op, asm_op, c_op, I)			\
+        ATOMIC_OP (op, asm_op, c_op, I, w,  int,   )
+#else
+#define ATOMIC_OPS(op, asm_op, c_op, I)			\
+        ATOMIC_OP (op, asm_op, c_op, I, w,  int,   )	\
+        ATOMIC_OP (op, asm_op, c_op, I, d, long, 64)
+#endif
+
+ATOMIC_OPS(add, add, +,  i)
+ATOMIC_OPS(sub, add, +, -i)
+/* FIXME: I could only find documentation that atomic_{add,sub,inc,dec} are
+ * barrier-free.  I'm assuming that and/or/xor have the same constraints as the
+ * others.
+ */
+ATOMIC_OPS(and, and, &,  i)
+ATOMIC_OPS( or,  or, |,  i)
+ATOMIC_OPS(xor, xor, ^,  i)
+
+#undef ATOMIC_OP
+#undef ATOMIC_OPS
+
+/* Atomic ops that have ordered, relaxed, acquire, and release variants.
+ * There are two flavors of these: the arithmetic ops have both fetch and
+ * return versions, while the logical ops only have fetch versions.
+ */
+#define ATOMIC_FETCH_OP(op, asm_op, c_op, I, asm_or, c_or, asm_type, c_type, prefix)			\
+static __always_inline c_type atomic##prefix##_fetch_##op##c_or(c_type i, atomic##prefix##_t *v)	\
+{													\
+	register c_type ret;										\
+	__asm__ __volatile__ (										\
+		"amo" #asm_op "." #asm_type #asm_or " %2, %1, %0"					\
+		: "+A" (v->counter), "=r" (ret)								\
+		: "r" (I));										\
+	return ret;											\
+}
+
+#define ATOMIC_OP_RETURN(op, asm_op, c_op, I, asm_or, c_or, asm_type, c_type, prefix)			\
+static __always_inline c_type atomic##prefix##_##op##_return##c_or(c_type i, atomic##prefix##_t *v)	\
+{													\
+        return atomic##prefix##_fetch_##op##c_or(i, v) c_op I;						\
+}
+
+#ifdef CONFIG_GENERIC_ATOMIC64
+#define ATOMIC_OPS(op, asm_op, c_op, I, asm_or, c_or)				\
+        ATOMIC_FETCH_OP (op, asm_op, c_op, I, asm_or, c_or, w,  int,   )	\
+        ATOMIC_OP_RETURN(op, asm_op, c_op, I, asm_or, c_or, w,  int,   )
+#else
+#define ATOMIC_OPS(op, asm_op, c_op, I, asm_or, c_or)				\
+        ATOMIC_FETCH_OP (op, asm_op, c_op, I, asm_or, c_or, w,  int,   )	\
+        ATOMIC_OP_RETURN(op, asm_op, c_op, I, asm_or, c_or, w,  int,   )	\
+        ATOMIC_FETCH_OP (op, asm_op, c_op, I, asm_or, c_or, d, long, 64)	\
+        ATOMIC_OP_RETURN(op, asm_op, c_op, I, asm_or, c_or, d, long, 64)
+#endif
+
+ATOMIC_OPS(add, add, +,  i,      , _relaxed)
+ATOMIC_OPS(add, add, +,  i, .aq  , _acquire)
+ATOMIC_OPS(add, add, +,  i, .rl  , _release)
+ATOMIC_OPS(add, add, +,  i, .aqrl,         )
+
+ATOMIC_OPS(sub, add, +, -i,      , _relaxed)
+ATOMIC_OPS(sub, add, +, -i, .aq  , _acquire)
+ATOMIC_OPS(sub, add, +, -i, .rl  , _release)
+ATOMIC_OPS(sub, add, +, -i, .aqrl,         )
+
+#undef ATOMIC_OPS
+
+#ifdef CONFIG_GENERIC_ATOMIC64
+#define ATOMIC_OPS(op, asm_op, c_op, I, asm_or, c_or)				\
+        ATOMIC_FETCH_OP(op, asm_op, c_op, I, asm_or, c_or, w,  int,   )
+#else
+#define ATOMIC_OPS(op, asm_op, c_op, I, asm_or, c_or)				\
+        ATOMIC_FETCH_OP(op, asm_op, c_op, I, asm_or, c_or, w,  int,   )		\
+        ATOMIC_FETCH_OP(op, asm_op, c_op, I, asm_or, c_or, d, long, 64)
+#endif
+
+ATOMIC_OPS(and, and, &,  i,      , _relaxed)
+ATOMIC_OPS(and, and, &,  i, .aq  , _acquire)
+ATOMIC_OPS(and, and, &,  i, .rl  , _release)
+ATOMIC_OPS(and, and, &,  i, .aqrl,         )
+
+ATOMIC_OPS( or,  or, |,  i,      , _relaxed)
+ATOMIC_OPS( or,  or, |,  i, .aq  , _acquire)
+ATOMIC_OPS( or,  or, |,  i, .rl  , _release)
+ATOMIC_OPS( or,  or, |,  i, .aqrl,         )
+
+ATOMIC_OPS(xor, xor, ^,  i,      , _relaxed)
+ATOMIC_OPS(xor, xor, ^,  i, .aq  , _acquire)
+ATOMIC_OPS(xor, xor, ^,  i, .rl  , _release)
+ATOMIC_OPS(xor, xor, ^,  i, .aqrl,         )
+
+#undef ATOMIC_OPS
+
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_OP_RETURN
+
+/* The extra atomic operations that are constructed from one of the core
+ * AMO-based operations above (aside from sub, which is easier to fit above).
+ * These are required to perform a barrier, but they're OK this way because
+ * atomic_*_return is also required to perform a barrier.
+ */
+#define ATOMIC_OP(op, func_op, comp_op, I, c_type, prefix)			\
+static __always_inline bool atomic##prefix##_##op(c_type i, atomic##prefix##_t *v) \
+{										\
+	return atomic##prefix##_##func_op##_return(i, v) comp_op I;		\
+}
+
+#ifdef CONFIG_GENERIC_ATOMIC64
+#define ATOMIC_OPS(op, func_op, comp_op, I)			\
+        ATOMIC_OP (op, func_op, comp_op, I,  int,   )
+#else
+#define ATOMIC_OPS(op, func_op, comp_op, I)			\
+        ATOMIC_OP (op, func_op, comp_op, I,  int,   )		\
+        ATOMIC_OP (op, func_op, comp_op, I, long, 64)
+#endif
+
+ATOMIC_OPS(add_and_test, add, ==, 0)
+ATOMIC_OPS(sub_and_test, sub, ==, 0)
+ATOMIC_OPS(add_negative, add,  <, 0)
+
+#undef ATOMIC_OP
+#undef ATOMIC_OPS
+
+#define ATOMIC_OP(op, func_op, c_op, I, prefix)					\
+static __always_inline void atomic##prefix##_##op(atomic##prefix##_t *v)	\
+{										\
+	atomic##prefix##_##func_op(I, v);					\
+}
+
+#define ATOMIC_FETCH_OP(op, func_op, c_op, I, prefix)				\
+static __always_inline int atomic##prefix##_fetch_##op(atomic##prefix##_t *v)	\
+{										\
+	return atomic##prefix##_fetch_##func_op(I, v);				\
+}
+
+#define ATOMIC_OP_RETURN(op, asm_op, c_op, I, prefix)				\
+static __always_inline int atomic##prefix##_##op##_return(atomic##prefix##_t *v) \
+{										\
+        return atomic##prefix##_fetch_##op(v) c_op I;				\
+}
+
+#ifdef CONFIG_GENERIC_ATOMIC64
+#define ATOMIC_OPS(op, asm_op, c_op, I)						\
+        ATOMIC_OP       (op, asm_op, c_op, I,   )				\
+        ATOMIC_FETCH_OP (op, asm_op, c_op, I,   )				\
+        ATOMIC_OP_RETURN(op, asm_op, c_op, I,   )
+#else
+#define ATOMIC_OPS(op, asm_op, c_op, I)						\
+        ATOMIC_OP       (op, asm_op, c_op, I,   )				\
+        ATOMIC_FETCH_OP (op, asm_op, c_op, I,   )				\
+        ATOMIC_OP_RETURN(op, asm_op, c_op, I,   )				\
+        ATOMIC_OP       (op, asm_op, c_op, I, 64)				\
+        ATOMIC_FETCH_OP (op, asm_op, c_op, I, 64)				\
+        ATOMIC_OP_RETURN(op, asm_op, c_op, I, 64)
+#endif
+
+ATOMIC_OPS(inc, add, +,  1)
+ATOMIC_OPS(dec, add, +, -1)
+
+#undef ATOMIC_OPS
+#undef ATOMIC_OP
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_OP_RETURN
+
+#define ATOMIC_OP(op, func_op, comp_op, I, prefix)				\
+static __always_inline bool atomic##prefix##_##op(atomic##prefix##_t *v)	\
+{										\
+	return atomic##prefix##_##func_op##_return(v) comp_op I;		\
+}
+
+ATOMIC_OP(inc_and_test, inc, ==, 0,   )
+ATOMIC_OP(dec_and_test, dec, ==, 0,   )
+#ifndef CONFIG_GENERIC_ATOMIC64
+ATOMIC_OP(inc_and_test, inc, ==, 0, 64)
+ATOMIC_OP(dec_and_test, dec, ==, 0, 64)
+#endif
+
+#undef ATOMIC_OP
+
+/* This is required to provide a barrier on success. */
+static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
+{
+       register int prev, rc;
+
+	__asm__ __volatile__ (
+		"0:\n\t"
+		"lr.w.aqrl %0, %2\n\t"
+		"beq       %0, %4, 1f\n\t"
+		"add       %1, %0, %3\n\t"
+		"sc.w.aqrl %1, %1, %2\n\t"
+		"bnez      %1, 0b\n\t"
+		"1:"
+		: "=&r" (prev), "=&r" (rc), "+A" (v->counter)
+		: "r" (a), "r" (u));
+	return prev;
+}
+
+#ifndef CONFIG_GENERIC_ATOMIC64
+static __always_inline long atomic64_add_unless(atomic64_t *v, long a, long u)
+{
+       register long prev, rc;
+
+	__asm__ __volatile__ (
+		"0:\n\t"
+		"lr.d.aqrl %0, %2\n\t"
+		"beq       %0, %4, 1f\n\t"
+		"add       %1, %0, %3\n\t"
+		"sc.d.aqrl %1, %1, %2\n\t"
+		"bnez      %1, 0b\n\t"
+		"1:"
+		: "=&r" (prev), "=&r" (rc), "+A" (v->counter)
+		: "r" (a), "r" (u));
+	return prev;
+}
+#endif
+
+/* The extra atomic operations that are constructed from one of the core
+ * LR/SC-based operations above.
+ */
+static __always_inline int atomic_inc_not_zero(atomic_t *v)
+{
+        return __atomic_add_unless(v, 1, 0);
+}
+
+#ifndef CONFIG_GENERIC_ATOMIC64
+static __always_inline int atomic64_inc_not_zero(atomic64_t *v)
+{
+        return atomic64_add_unless(v, 1, 0);
+}
+#endif
+
+/* atomic_{cmp,}xchg is required to have exactly the same ordering semantics as
+ * {cmp,}xchg and the operations that return, so they need a barrier.  We just
+ * use the other implementations directly.
+ */
+#define ATOMIC_OP(c_t, prefix, c_or, size, asm_or)						\
+static __always_inline c_t atomic##prefix##_cmpxchg##c_or(atomic##prefix##_t *v, c_t o, c_t n) 	\
+{												\
+	return __cmpxchg(&(v->counter), o, n, size, asm_or, asm_or);				\
+}												\
+static __always_inline c_t atomic##prefix##_xchg##c_or(atomic##prefix##_t *v, c_t n) 		\
+{												\
+	return __xchg(n, &(v->counter), size, asm_or);						\
+}
+
+#ifdef CONFIG_GENERIC_ATOMIC64
+#define ATOMIC_OPS(c_or, asm_or)			\
+	ATOMIC_OP( int,   , c_or, 4, asm_or)
+#else
+#define ATOMIC_OPS(c_or, asm_or)			\
+	ATOMIC_OP( int,   , c_or, 4, asm_or)		\
+	ATOMIC_OP(long, 64, c_or, 8, asm_or)
+#endif
+
+ATOMIC_OPS(        , .aqrl)
+ATOMIC_OPS(_acquire,   .aq)
+ATOMIC_OPS(_release,   .rl)
+ATOMIC_OPS(_relaxed,      )
+
+#undef ATOMIC_OPS
+#undef ATOMIC_OP
+
+#else /* !CONFIG_ISA_A */
+
+#include <asm-generic/atomic.h>
+
+#endif /* CONFIG_ISA_A */
+
+#endif /* _ASM_RISCV_ATOMIC_H */
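
For illustration only, a sketch of how the LR/SC-based helpers above are meant
to be used (the structure and field names here are made up, not part of the
patch): atomic_inc_not_zero() provides the usual "take a reference unless the
object is already dead" pattern, with the barrier on success coming from the
.aqrl LR/SC sequence in __atomic_add_unless().

  struct my_obj {
  	atomic_t refcount;
  };

  /* Returns true if a reference was taken, false if refcount was already 0. */
  static bool my_obj_get(struct my_obj *obj)
  {
  	return atomic_inc_not_zero(&obj->refcount) != 0;
  }
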
diff --git a/arch/riscv/include/asm/barrier.h b/arch/riscv/include/asm/barrier.h
new file mode 100644
index 000000000000..fe9c26f49e70
--- /dev/null
+++ b/arch/riscv/include/asm/barrier.h
@@ -0,0 +1,139 @@
+/*
+ * Based on arch/arm/include/asm/barrier.h
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2013 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _ASM_RISCV_BARRIER_H
+#define _ASM_RISCV_BARRIER_H
+
+#ifndef __ASSEMBLY__
+
+#define nop()		__asm__ __volatile__ ("nop")
+
+#define RISCV_FENCE(p, s) \
+	__asm__ __volatile__ ("fence " #p "," #s : : : "memory")
+
+/* These barriers need to enforce ordering on both devices and memory. */
+#define mb()		RISCV_FENCE(iorw,iorw)
+#define rmb()		RISCV_FENCE(ir,ir)
+#define wmb()		RISCV_FENCE(ow,ow)
+
+/* These barriers do not need to enforce ordering on devices, just memory. */
+#define smp_mb()	RISCV_FENCE(rw,rw)
+#define smp_rmb()	RISCV_FENCE(r,r)
+#define smp_wmb()	RISCV_FENCE(w,w)
+
+/* Our atomic operations set the AQ and RL bits and therefore we don't need to
+ * fence around atomics.
+ */
+#define __smp_mb__before_atomic()	barrier()
+#define __smp_mb__after_atomic()	barrier()
+
+/* These barriers are meant to prevent memory operations inside a spinlock from
+ * moving outside of that spinlock.  Since we set the AQ and RL bits when
+ * entering or leaving spinlocks, no additional fence needs to be performed.
+ */
+#define smp_mb__before_spinlock()	barrier()
+#define smp_mb__after_spinlock()	barrier()
+
+/* FIXME: I don't think RISC-V is allowed to perform a speculative load. */
+#define smp_acquire__after_ctrl_dep()	barrier()
+
+/* The RISC-V ISA doesn't support byte or half-word AMOs, so we fall back to a
+ * regular store and a fence here.  Otherwise we emit an AMO with an AQ or RL
+ * bit set and allow the microarchitecture to avoid the other half of the AMO.
+ */
+#define __smp_store_release(p, v)					\
+do {									\
+	union { typeof(*p) __val; char __c[1]; } __u =			\
+		{ .__val = (__force typeof(*p)) (v) };			\
+	compiletime_assert_atomic_type(*p);				\
+	switch (sizeof(*p)) {						\
+	case 1:								\
+	case 2:								\
+		smp_mb();						\
+		WRITE_ONCE(*p, __u.__val);				\
+		break;							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			"amoswap.w.rl zero, %1, %0"			\
+			: "+A" (*p)					\
+			: "r" (__u.__val)				\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			"amoswap.d.rl zero, %1, %0"			\
+			: "+A" (*p)					\
+			: "r" (__u.__val)				\
+			: "memory");					\
+		break;							\
+	}								\
+} while (0)
+
+#define __smp_load_acquire(p)						\
+({									\
+	union { typeof(*p) __val; char __c[1]; } __u;			\
+	compiletime_assert_atomic_type(*p);				\
+	switch (sizeof(*p)) {						\
+	case 1:								\
+	case 2:								\
+		__u.__val = READ_ONCE(*p);				\
+		smp_mb();						\
+		break;							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			"amoor.w.aq %1, zero, %0"			\
+			: "+A" (*p), "=r" (__u.__val)			\
+			:						\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			"amoor.d.aq %1, zero, %0"			\
+			: "+A" (*p), "=r" (__u.__val)			\
+			:						\
+			: "memory");					\
+		break;							\
+	}								\
+	__u.__val;							\
+})
+
+/* The default implementation of this uses READ_ONCE and
+ * smp_acquire__after_ctrl_dep, but since we can directly do an ACQUIRE load we
+ * can avoid the extra barrier.
+ */
+#define smp_cond_load_acquire(ptr, cond_expr) ({			\
+	typeof(ptr) __PTR = (ptr);					\
+	typeof(*ptr) VAL;						\
+	for (;;) {							\
+		VAL = __smp_load_acquire(__PTR);			\
+		if (cond_expr)						\
+			break;						\
+		cpu_relax();						\
+	}								\
+	smp_acquire__after_ctrl_dep();					\
+	VAL;								\
+})
+
+#include <asm-generic/barrier.h>
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_BARRIER_H */
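
As an illustrative usage sketch (the function and variable names are made up):
smp_cond_load_acquire() above is intended for wait loops like the following,
where VAL is bound by the macro itself and the load that observes the
condition carries acquire semantics.

  /* Spin until another CPU publishes a nonzero sequence number. */
  static inline u32 wait_for_seq(u32 *seq)
  {
  	return smp_cond_load_acquire(seq, VAL != 0);
  }
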
diff --git a/arch/riscv/include/asm/bitops.h b/arch/riscv/include/asm/bitops.h
new file mode 100644
index 000000000000..27e47858c6b1
--- /dev/null
+++ b/arch/riscv/include/asm/bitops.h
@@ -0,0 +1,228 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_BITOPS_H
+#define _ASM_RISCV_BITOPS_H
+
+#ifndef _LINUX_BITOPS_H
+#error "Only <linux/bitops.h> can be included directly"
+#endif /* _LINUX_BITOPS_H */
+
+#ifdef __KERNEL__
+
+#include <linux/compiler.h>
+#include <linux/irqflags.h>
+#include <asm/barrier.h>
+#include <asm/bitsperlong.h>
+
+#ifdef CONFIG_ISA_A
+
+#ifndef smp_mb__before_clear_bit
+#define smp_mb__before_clear_bit()  smp_mb()
+#define smp_mb__after_clear_bit()   smp_mb()
+#endif /* smp_mb__before_clear_bit */
+
+#include <asm-generic/bitops/__ffs.h>
+#include <asm-generic/bitops/ffz.h>
+#include <asm-generic/bitops/fls.h>
+#include <asm-generic/bitops/__fls.h>
+#include <asm-generic/bitops/fls64.h>
+#include <asm-generic/bitops/find.h>
+#include <asm-generic/bitops/sched.h>
+#include <asm-generic/bitops/ffs.h>
+
+#include <asm-generic/bitops/hweight.h>
+
+#if (BITS_PER_LONG == 64)
+#define __AMO(op)	"amo" #op ".d"
+#elif (BITS_PER_LONG == 32)
+#define __AMO(op)	"amo" #op ".w"
+#else
+#error "Unexpected BITS_PER_LONG"
+#endif
+
+#define __test_and_op_bit(op, mod, nr, addr)			\
+({								\
+	unsigned long __res, __mask;				\
+	__mask = BIT_MASK(nr);					\
+	__asm__ __volatile__ (					\
+		__AMO(op) " %0, %2, %1"				\
+		: "=r" (__res), "+A" (addr[BIT_WORD(nr)])	\
+		: "r" (mod(__mask)));				\
+	((__res & __mask) != 0);				\
+})
+
+#define __op_bit(op, mod, nr, addr)				\
+	__asm__ __volatile__ (					\
+		__AMO(op) " zero, %1, %0"			\
+		: "+A" (addr[BIT_WORD(nr)])			\
+		: "r" (mod(BIT_MASK(nr))))
+
+/* Bitmask modifiers */
+#define __NOP(x)	(x)
+#define __NOT(x)	(~(x))
+
+/**
+ * test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It may be reordered on other architectures than x86.
+ * It also implies a memory barrier.
+ */
+static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
+{
+	return __test_and_op_bit(or, __NOP, nr, addr);
+}
+
+/**
+ * test_and_clear_bit - Clear a bit and return its old value
+ * @nr: Bit to clear
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It can be reordered on architectures other than x86.
+ * It also implies a memory barrier.
+ */
+static inline int test_and_clear_bit(int nr, volatile unsigned long *addr)
+{
+	return __test_and_op_bit(and, __NOT, nr, addr);
+}
+
+/**
+ * test_and_change_bit - Change a bit and return its old value
+ * @nr: Bit to change
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It also implies a memory barrier.
+ */
+static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
+{
+	return __test_and_op_bit(xor, __NOP, nr, addr);
+}
+
+/**
+ * set_bit - Atomically set a bit in memory
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This function is atomic and may not be reordered.  See __set_bit()
+ * if you do not require the atomic guarantees.
+ *
+ * Note: there are no guarantees that this function will not be reordered
+ * on non x86 architectures, so if you are writing portable code,
+ * make sure not to rely on its reordering guarantees.
+ *
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ */
+static inline void set_bit(int nr, volatile unsigned long *addr)
+{
+	__op_bit(or, __NOP, nr, addr);
+}
+
+/**
+ * clear_bit - Clears a bit in memory
+ * @nr: Bit to clear
+ * @addr: Address to start counting from
+ *
+ * clear_bit() is atomic and may not be reordered.  However, it does
+ * not contain a memory barrier, so if it is used for locking purposes,
+ * you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
+ * in order to ensure changes are visible on other processors.
+ */
+static inline void clear_bit(int nr, volatile unsigned long *addr)
+{
+	__op_bit(and, __NOT, nr, addr);
+}
+
+/**
+ * change_bit - Toggle a bit in memory
+ * @nr: Bit to change
+ * @addr: Address to start counting from
+ *
+ * change_bit() is atomic and may not be reordered. It may be
+ * reordered on other architectures than x86.
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ */
+static inline void change_bit(int nr, volatile unsigned long *addr)
+{
+	__op_bit(xor, __NOP, nr, addr);
+}
+
+/**
+ * test_and_set_bit_lock - Set a bit and return its old value, for lock
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is atomic and provides acquire barrier semantics.
+ * It can be used to implement bit locks.
+ */
+static inline int test_and_set_bit_lock(
+	unsigned long nr, volatile unsigned long *addr)
+{
+	return test_and_set_bit(nr, addr);
+}
+
+/**
+ * clear_bit_unlock - Clear a bit in memory, for unlock
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This operation is atomic and provides release barrier semantics.
+ */
+static inline void clear_bit_unlock(
+	unsigned long nr, volatile unsigned long *addr)
+{
+	clear_bit(nr, addr);
+}
+
+/**
+ * __clear_bit_unlock - Clear a bit in memory, for unlock
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This operation is like clear_bit_unlock, however it is not atomic.
+ * It does provide release barrier semantics so it can be used to unlock
+ * a bit lock, however it would only be used if no other CPU can modify
+ * any bits in the memory until the lock is released (a good example is
+ * if the bit lock itself protects access to the other bits in the word).
+ */
+static inline void __clear_bit_unlock(
+	unsigned long nr, volatile unsigned long *addr)
+{
+	clear_bit(nr, addr);
+}
+
+#undef __test_and_op_bit
+#undef __op_bit
+#undef __NOP
+#undef __NOT
+#undef __AMO
+
+#include <asm-generic/bitops/non-atomic.h>
+#include <asm-generic/bitops/le.h>
+#include <asm-generic/bitops/ext2-atomic.h>
+
+#else /* !CONFIG_ISA_A */
+
+#include <asm-generic/bitops.h>
+
+#endif /* CONFIG_ISA_A */
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_BITOPS_H */
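
For reference, a hand expansion of test_and_set_bit() for the 64-bit case
(illustrative only; the real definition is the __test_and_op_bit() macro
above):

  static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
  {
  	unsigned long __res, __mask = BIT_MASK(nr);

  	__asm__ __volatile__ (
  		"amoor.d %0, %2, %1"
  		: "=r" (__res), "+A" (addr[BIT_WORD(nr)])
  		: "r" (__mask));
  	return ((__res & __mask) != 0);
  }
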
diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
new file mode 100644
index 000000000000..0595585013b0
--- /dev/null
+++ b/arch/riscv/include/asm/cacheflush.h
@@ -0,0 +1,39 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CACHEFLUSH_H
+#define _ASM_RISCV_CACHEFLUSH_H
+
+#include <asm-generic/cacheflush.h>
+
+#undef flush_icache_range
+#undef flush_icache_user_range
+
+static inline void local_flush_icache_all(void)
+{
+	asm volatile ("fence.i" ::: "memory");
+}
+
+#ifndef CONFIG_SMP
+
+#define flush_icache_range(start, end) local_flush_icache_all()
+#define flush_icache_user_range(vma, pg, addr, len) local_flush_icache_all()
+
+#else /* CONFIG_SMP */
+
+#define flush_icache_range(start, end) sbi_remote_fence_i(0)
+#define flush_icache_user_range(vma, pg, addr, len) sbi_remote_fence_i(0)
+
+#endif /* CONFIG_SMP */
+
+#endif /* _ASM_RISCV_CACHEFLUSH_H */
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
new file mode 100644
index 000000000000..81025c056412
--- /dev/null
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -0,0 +1,138 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CMPXCHG_H
+#define _ASM_RISCV_CMPXCHG_H
+
+#include <linux/bug.h>
+
+#ifdef CONFIG_ISA_A
+
+#include <asm/barrier.h>
+
+#define __xchg(new, ptr, size, asm_or)				\
+({								\
+	__typeof__(ptr) __ptr = (ptr);				\
+	__typeof__(new) __new = (new);				\
+	__typeof__(*(ptr)) __ret;				\
+	switch (size) {						\
+	case 4:							\
+		__asm__ __volatile__ (				\
+			"amoswap.w" #asm_or " %0, %2, %1"	\
+			: "=r" (__ret), "+A" (*__ptr)		\
+			: "r" (__new));				\
+		break;						\
+	case 8:							\
+		__asm__ __volatile__ (				\
+			"amoswap.d" #asm_or " %0, %2, %1"	\
+			: "=r" (__ret), "+A" (*__ptr)		\
+			: "r" (__new));				\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+	}							\
+	__ret;							\
+})
+
+#define xchg(ptr, x)    (__xchg((x), (ptr), sizeof(*(ptr)), .aqrl))
+
+#define xchg32(ptr, x)				\
+({						\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 4);	\
+	xchg((ptr), (x));			\
+})
+
+#define xchg64(ptr, x)				\
+({						\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);	\
+	xchg((ptr), (x));			\
+})
+
+/*
+ * Atomic compare and exchange.  Compare OLD with MEM, if identical,
+ * store NEW in MEM.  Return the initial value in MEM.  Success is
+ * indicated by comparing RETURN with OLD.
+ */
+#define __cmpxchg(ptr, old, new, size, lrb, scb)			\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(old) __old = (old);					\
+	__typeof__(new) __new = (new);					\
+	__typeof__(*(ptr)) __ret;					\
+	register unsigned int __rc;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+		"0:"							\
+			"lr.w" #scb " %0, %2\n"				\
+			"bne         %0, %z3, 1f\n"			\
+			"sc.w" #lrb " %1, %z4, %2\n"			\
+			"bnez        %1, 0b\n"				\
+		"1:"							\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new));			\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+		"0:"							\
+			"lr.d" #scb " %0, %2\n"				\
+			"bne         %0, %z3, 1f\n"			\
+			"sc.d" #lrb " %1, %z4, %2\n"			\
+			"bnez        %1, 0b\n"				\
+		"1:"							\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new));			\
+		break;							\
+	default:							\
+		BUILD_BUG();						\
+	}								\
+	__ret;								\
+})
+
+#define cmpxchg(ptr, o, n) \
+	(__cmpxchg((ptr), (o), (n), sizeof(*(ptr)), .aqrl, .aqrl))
+
+#define cmpxchg_local(ptr, o, n) \
+	(__cmpxchg((ptr), (o), (n), sizeof(*(ptr)), , ))
+
+#define cmpxchg32(ptr, o, n)			\
+({						\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 4);	\
+	cmpxchg((ptr), (o), (n));		\
+})
+
+#define cmpxchg32_local(ptr, o, n)		\
+({						\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 4);	\
+	cmpxchg_local((ptr), (o), (n));		\
+})
+
+#define cmpxchg64(ptr, o, n)			\
+({						\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);	\
+	cmpxchg((ptr), (o), (n));		\
+})
+
+#define cmpxchg64_local(ptr, o, n)		\
+({						\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);	\
+	cmpxchg_local((ptr), (o), (n));		\
+})
+
+#else /* !CONFIG_ISA_A */
+
+#include <asm-generic/cmpxchg.h>
+
+#endif /* CONFIG_ISA_A */
+
+#endif /* _ASM_RISCV_CMPXCHG_H */
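
An illustrative usage sketch of cmpxchg() (the function and variable names are
made up): the classic compare-and-swap retry loop, here recording a running
maximum.

  static inline void update_max(unsigned long *max, unsigned long val)
  {
  	unsigned long old = READ_ONCE(*max);

  	while (val > old) {
  		unsigned long prev = cmpxchg(max, old, val);

  		if (prev == old)
  			break;
  		old = prev;
  	}
  }
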
diff --git a/arch/riscv/include/asm/io.h b/arch/riscv/include/asm/io.h
new file mode 100644
index 000000000000..9c76ecfd203d
--- /dev/null
+++ b/arch/riscv/include/asm/io.h
@@ -0,0 +1,178 @@
+/*
+ * {read,write}{b,w,l,q} based on arch/arm64/include/asm/io.h
+ *   which was based on arch/arm/include/io.h
+ *
+ * Copyright (C) 1996-2000 Russell King
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_IO_H
+#define _ASM_RISCV_IO_H
+
+#ifdef __KERNEL__
+
+#ifdef CONFIG_MMU
+
+extern void __iomem *ioremap(phys_addr_t offset, unsigned long size);
+
+/* The RISC-V ISA doesn't yet specify how to query or modify PMAs, so we can't
+ * change the properties of memory regions.  This should be fixed by the
+ * upcoming platform spec.
+ */
+#define ioremap_nocache(addr, size) ioremap((addr), (size))
+#define ioremap_wc(addr, size) ioremap((addr), (size))
+#define ioremap_wt(addr, size) ioremap((addr), (size))
+
+extern void iounmap(void __iomem *addr);
+
+#endif /* CONFIG_MMU */
+
+/* Generic IO read/write.  These perform native-endian accesses. */
+#define __raw_writeb __raw_writeb
+static inline void __raw_writeb(u8 val, volatile void __iomem *addr)
+{
+	asm volatile("sb %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+
+#define __raw_writew __raw_writew
+static inline void __raw_writew(u16 val, volatile void __iomem *addr)
+{
+	asm volatile("sh %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+
+#define __raw_writel __raw_writel
+static inline void __raw_writel(u32 val, volatile void __iomem *addr)
+{
+	asm volatile("sw %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+
+#ifdef __riscv64
+#define __raw_writeq __raw_writeq
+static inline void __raw_writeq(u64 val, volatile void __iomem *addr)
+{
+	asm volatile("sd %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+#endif
+
+#define __raw_readb __raw_readb
+static inline u8 __raw_readb(const volatile void __iomem *addr)
+{
+	u8 val;
+
+	asm volatile("lb %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+
+#define __raw_readw __raw_readw
+static inline u16 __raw_readw(const volatile void __iomem *addr)
+{
+	u16 val;
+
+	asm volatile("lh %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+
+#define __raw_readl __raw_readl
+static inline u32 __raw_readl(const volatile void __iomem *addr)
+{
+	u32 val;
+
+	asm volatile("lw %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+
+#ifdef __riscv64
+#define __raw_readq __raw_readq
+static inline u64 __raw_readq(const volatile void __iomem *addr)
+{
+	u64 val;
+
+	asm volatile("ld %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+#endif
+
+/* FIXME: I'm flip-flopping on whether or not we should keep this or enforce
+ * the ordering with I/O on spinlocks.  The worry is that drivers won't get
+ * this correct, but I also don't want to introduce a fence into the lock code
+ * that otherwise only uses AMOs and LR/SC.   For now I'm leaving this here:
+ * "w,o" is sufficient to ensure that all writes to the device has completed
+ * before the write to the spinlock is allowed to commit.  I surmised this from
+ * reading "ACQUIRES VS I/O ACCESSES" in memory-barries.txt.
+ */
+#define mmiowb()	__asm__ __volatile__ ("fence o,w" : : : "memory");
+
+/* Unordered I/O memory access primitives.  These are even more relaxed than
+ * the relaxed versions, as they don't even order accesses between successive
+ * operations to the I/O regions.
+ */
+#define readb_cpu(c)		({ u8  __r = __raw_readb(c); __r; })
+#define readw_cpu(c)		({ u16 __r = le16_to_cpu((__force __le16)__raw_readw(c)); __r; })
+#define readl_cpu(c)		({ u32 __r = le32_to_cpu((__force __le32)__raw_readl(c)); __r; })
+#define readq_cpu(c)		({ u64 __r = le64_to_cpu((__force __le64)__raw_readq(c)); __r; })
+
+#define writeb_cpu(v,c)		((void)__raw_writeb((v),(c)))
+#define writew_cpu(v,c)		((void)__raw_writew((__force u16)cpu_to_le16(v),(c)))
+#define writel_cpu(v,c)		((void)__raw_writel((__force u32)cpu_to_le32(v),(c)))
+#define writeq_cpu(v,c)		((void)__raw_writeq((__force u64)cpu_to_le64(v),(c)))
+
+/* Relaxed I/O memory access primitives. These follow the Device memory
+ * ordering rules but do not guarantee any ordering relative to Normal memory
+ * accesses.  The memory barriers here are necessary as RISC-V doesn't define
+ * any ordering constraints on accesses to the device I/O space.  These are
+ * defined to order the indicated access (either a read or write) with all
+ * other I/O memory accesses.
+ */
+/* FIXME: The platform spec will define the default Linux-capable platform to
+ * have some extra IO ordering constraints that will make these fences
+ * unnecessary.
+ */
+#define __iorrmb()	__asm__ __volatile__ ("fence i,io" : : : "memory");
+#define __iorwmb()	__asm__ __volatile__ ("fence io,o" : : : "memory");
+
+#define readb_relaxed(c)	({ u8  __v = readb_cpu(c); __iorrmb(); __v; })
+#define readw_relaxed(c)	({ u16 __v = readw_cpu(c); __iorrmb(); __v; })
+#define readl_relaxed(c)	({ u32 __v = readl_cpu(c); __iorrmb(); __v; })
+#define readq_relaxed(c)	({ u64 __v = readq_cpu(c); __iorrmb(); __v; })
+
+#define writeb_relaxed(v,c)	({ __iorwmb(); writeb_cpu((v),(c)); })
+#define writew_relaxed(v,c)	({ __iorwmb(); writew_cpu((v),(c)); })
+#define writel_relaxed(v,c)	({ __iorwmb(); writel_cpu((v),(c)); })
+#define writeq_relaxed(v,c)	({ __iorwmb(); writeq_cpu((v),(c)); })
+
+/* I/O memory access primitives. Reads are ordered relative to any
+ * following Normal memory access. Writes are ordered relative to any prior
+ * Normal memory access.  The memory barriers here are necessary as RISC-V
+ * doesn't define any ordering between the memory space and the I/O space.
+ * They may be stronger than necessary ("i,r" and "w,o" might be sufficient),
+ * but I feel kind of queasy making these weaker in any manner than the relaxed
+ * versions above.
+ */
+#define __iormb()	__asm__ __volatile__ ("fence i,ior" : : : "memory");
+#define __iowmb()	__asm__ __volatile__ ("fence iow,o" : : : "memory");
+
+#define readb(c)		({ u8  __v = readb_cpu(c); __iormb(); __v; })
+#define readw(c)		({ u16 __v = readw_cpu(c); __iormb(); __v; })
+#define readl(c)		({ u32 __v = readl_cpu(c); __iormb(); __v; })
+#define readq(c)		({ u64 __v = readq_cpu(c); __iormb(); __v; })
+
+#define writeb(v,c)		({ __iowmb(); writeb_cpu((v),(c)); })
+#define writew(v,c)		({ __iowmb(); writew_cpu((v),(c)); })
+#define writel(v,c)		({ __iowmb(); writel_cpu((v),(c)); })
+#define writeq(v,c)		({ __iowmb(); writeq_cpu((v),(c)); })
+
+#include <asm-generic/io.h>
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_IO_H */
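
To make the mmiowb() discussion above concrete, here is the driver-side
pattern it is protecting.  This is an illustrative sketch only, not part of
the patch: the "foo" device and its FOO_DOORBELL offset are hypothetical,
while writel(), mmiowb() and the spinlock calls are the real kernel APIs.

/* Illustrative sketch: ordering an MMIO doorbell write against a
 * subsequent spinlock release with mmiowb().
 */
#include <linux/io.h>
#include <linux/spinlock.h>

struct foo_dev {
	void __iomem *regs;
	spinlock_t lock;
};

#define FOO_DOORBELL	0x10	/* hypothetical register offset */

static void foo_kick(struct foo_dev *foo, u32 val)
{
	unsigned long flags;

	spin_lock_irqsave(&foo->lock, flags);
	writel(val, foo->regs + FOO_DOORBELL);
	/* "fence o,w": the doorbell write must complete before the store
	 * that releases the lock can be observed by another CPU.
	 */
	mmiowb();
	spin_unlock_irqrestore(&foo->lock, flags);
}
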
diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
new file mode 100644
index 000000000000..54ed73bfa972
--- /dev/null
+++ b/arch/riscv/include/asm/spinlock.h
@@ -0,0 +1,167 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SPINLOCK_H
+#define _ASM_RISCV_SPINLOCK_H
+
+#include <linux/kernel.h>
+#include <asm/current.h>
+
+/*
+ * Simple spin lock operations.  These provide no fairness guarantees.
+ */
+
+/* FIXME: Replace this with a ticket lock, like MIPS. */
+
+#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
+#define arch_spin_is_locked(x)	((x)->lock != 0)
+
+static inline void arch_spin_unlock(arch_spinlock_t *lock)
+{
+	__asm__ __volatile__ (
+		"amoswap.w.rl x0, x0, %0"
+		: "=A" (lock->lock)
+		:: "memory");
+}
+
+static inline int arch_spin_trylock(arch_spinlock_t *lock)
+{
+	int tmp = 1, busy;
+
+	__asm__ __volatile__ (
+		"amoswap.w.aq %0, %2, %1"
+		: "=r" (busy), "+A" (lock->lock)
+		: "r" (tmp)
+		: "memory");
+
+	return !busy;
+}
+
+static inline void arch_spin_lock(arch_spinlock_t *lock)
+{
+	while (1) {
+		if (arch_spin_is_locked(lock))
+			continue;
+
+		if (arch_spin_trylock(lock))
+			break;
+	}
+}
+
+static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
+{
+	smp_rmb();
+	do {
+		cpu_relax();
+	} while (arch_spin_is_locked(lock));
+	smp_acquire__after_ctrl_dep();
+}
+
+/***********************************************************/
+
+static inline int arch_read_can_lock(arch_rwlock_t *lock)
+{
+	return lock->lock >= 0;
+}
+
+static inline int arch_write_can_lock(arch_rwlock_t *lock)
+{
+	return lock->lock == 0;
+}
+
+static inline void arch_read_lock(arch_rwlock_t *lock)
+{
+	int tmp;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bltz	%1, 1b\n"
+		"	addi	%1, %1, 1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		: "+A" (lock->lock), "=&r" (tmp)
+		:: "memory");
+}
+
+static inline void arch_write_lock(arch_rwlock_t *lock)
+{
+	int tmp;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bnez	%1, 1b\n"
+		"	li	%1, -1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		: "+A" (lock->lock), "=&r" (tmp)
+		:: "memory");
+}
+
+static inline int arch_read_trylock(arch_rwlock_t *lock)
+{
+	int busy;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bltz	%1, 1f\n"
+		"	addi	%1, %1, 1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		"1:\n"
+		: "+A" (lock->lock), "=&r" (busy)
+		:: "memory");
+
+	return !busy;
+}
+
+static inline int arch_write_trylock(arch_rwlock_t *lock)
+{
+	int busy;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bnez	%1, 1f\n"
+		"	li	%1, -1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		"1:\n"
+		: "+A" (lock->lock), "=&r" (busy)
+		:: "memory");
+
+	return !busy;
+}
+
+static inline void arch_read_unlock(arch_rwlock_t *lock)
+{
+	__asm__ __volatile__(
+		"amoadd.w.rl x0, %1, %0"
+		: "+A" (lock->lock)
+		: "r" (-1)
+		: "memory");
+}
+
+static inline void arch_write_unlock(arch_rwlock_t *lock)
+{
+	__asm__ __volatile__ (
+		"amoswap.w.rl x0, x0, %0"
+		: "=A" (lock->lock)
+		:: "memory");
+}
+
+#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
+#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
+
+#endif /* _ASM_RISCV_SPINLOCK_H */
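
For readers less familiar with the A extension: the lock above is a plain
test-and-set lock, where trylock is an amoswap.w.aq of 1 into the lock word
and unlock is an amoswap.w.rl of 0.  A minimal user-space model of the same
algorithm, written with GCC/Clang __atomic builtins purely to illustrate the
acquire/release pairing (a sketch, not kernel code):

/* Illustrative user-space model of the test-and-set lock above. */
struct tas_lock { unsigned int lock; };

static int tas_trylock(struct tas_lock *l)
{
	/* mirrors amoswap.w.aq: swap in 1 with acquire ordering;
	 * reading back 0 means we took the lock */
	return __atomic_exchange_n(&l->lock, 1, __ATOMIC_ACQUIRE) == 0;
}

static void tas_lock_acquire(struct tas_lock *l)
{
	while (!tas_trylock(l)) {
		/* spin on plain loads until the lock looks free */
		while (__atomic_load_n(&l->lock, __ATOMIC_RELAXED))
			;
	}
}

static void tas_unlock(struct tas_lock *l)
{
	/* mirrors amoswap.w.rl: store 0 with release ordering */
	__atomic_store_n(&l->lock, 0, __ATOMIC_RELEASE);
}
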
diff --git a/arch/riscv/include/asm/spinlock_types.h b/arch/riscv/include/asm/spinlock_types.h
new file mode 100644
index 000000000000..83ac4ac9e2ac
--- /dev/null
+++ b/arch/riscv/include/asm/spinlock_types.h
@@ -0,0 +1,33 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SPINLOCK_TYPES_H
+#define _ASM_RISCV_SPINLOCK_TYPES_H
+
+#ifndef __LINUX_SPINLOCK_TYPES_H
+# error "please don't include this file directly"
+#endif
+
+typedef struct {
+	volatile unsigned int lock;
+} arch_spinlock_t;
+
+#define __ARCH_SPIN_LOCK_UNLOCKED	{ 0 }
+
+typedef struct {
+	volatile unsigned int lock;
+} arch_rwlock_t;
+
+#define __ARCH_RW_LOCK_UNLOCKED		{ 0 }
+
+#endif
diff --git a/arch/riscv/include/asm/tlb.h b/arch/riscv/include/asm/tlb.h
new file mode 100644
index 000000000000..c229509288ea
--- /dev/null
+++ b/arch/riscv/include/asm/tlb.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_TLB_H
+#define _ASM_RISCV_TLB_H
+
+#include <asm-generic/tlb.h>
+
+static inline void tlb_flush(struct mmu_gather *tlb)
+{
+	flush_tlb_mm(tlb->mm);
+}
+
+#endif /* _ASM_RISCV_TLB_H */
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
new file mode 100644
index 000000000000..5ee4ae370b5e
--- /dev/null
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -0,0 +1,64 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_TLBFLUSH_H
+#define _ASM_RISCV_TLBFLUSH_H
+
+#ifdef CONFIG_MMU
+
+/* Flush entire local TLB */
+static inline void local_flush_tlb_all(void)
+{
+	__asm__ __volatile__ ("sfence.vma" : : : "memory");
+}
+
+/* Flush one page from local TLB */
+static inline void local_flush_tlb_page(unsigned long addr)
+{
+	__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
+}
+
+#ifndef CONFIG_SMP
+
+#define flush_tlb_all() local_flush_tlb_all()
+#define flush_tlb_page(vma, addr) local_flush_tlb_page(addr)
+#define flush_tlb_range(vma, start, end) local_flush_tlb_all()
+
+#else /* CONFIG_SMP */
+
+#include <asm/sbi.h>
+
+#define flush_tlb_all() sbi_remote_sfence_vma(0, 0, -1)
+#define flush_tlb_page(vma, addr) flush_tlb_range(vma, addr, (addr) + PAGE_SIZE)
+#define flush_tlb_range(vma, start, end) \
+	sbi_remote_sfence_vma(0, start, (end) - (start))
+
+#endif /* CONFIG_SMP */
+
+/* Flush the TLB entries of the specified mm context */
+static inline void flush_tlb_mm(struct mm_struct *mm)
+{
+	flush_tlb_all();
+}
+
+/* Flush a range of kernel pages */
+static inline void flush_tlb_kernel_range(unsigned long start,
+	unsigned long end)
+{
+	flush_tlb_all();
+}
+
+#endif /* CONFIG_MMU */
+
+#endif /* _ASM_RISCV_TLBFLUSH_H */
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 2/9] RISC-V: Atomic and Locking Code
@ 2017-06-28 18:55       ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This contains all the code that directly interfaces with the RISC-V
memory model.  While this code conforms to the current RISC-V ISA
specifications (user 2.2 and priv 1.10), the memory model is somewhat
underspecified in those documents.  There is a working group that hopes
to produce a formal memory model by the end of the year, but my
understanding is that the basic definitions we're relying on here won't
change significantly.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/atomic.h         | 330 ++++++++++++++++++++++++++++++++
 arch/riscv/include/asm/barrier.h        | 139 ++++++++++++++
 arch/riscv/include/asm/bitops.h         | 228 ++++++++++++++++++++++
 arch/riscv/include/asm/cacheflush.h     |  39 ++++
 arch/riscv/include/asm/cmpxchg.h        | 138 +++++++++++++
 arch/riscv/include/asm/io.h             | 178 +++++++++++++++++
 arch/riscv/include/asm/spinlock.h       | 167 ++++++++++++++++
 arch/riscv/include/asm/spinlock_types.h |  33 ++++
 arch/riscv/include/asm/tlb.h            |  24 +++
 arch/riscv/include/asm/tlbflush.h       |  64 +++++++
 10 files changed, 1340 insertions(+)
 create mode 100644 arch/riscv/include/asm/atomic.h
 create mode 100644 arch/riscv/include/asm/barrier.h
 create mode 100644 arch/riscv/include/asm/bitops.h
 create mode 100644 arch/riscv/include/asm/cacheflush.h
 create mode 100644 arch/riscv/include/asm/cmpxchg.h
 create mode 100644 arch/riscv/include/asm/io.h
 create mode 100644 arch/riscv/include/asm/spinlock.h
 create mode 100644 arch/riscv/include/asm/spinlock_types.h
 create mode 100644 arch/riscv/include/asm/tlb.h
 create mode 100644 arch/riscv/include/asm/tlbflush.h

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
new file mode 100644
index 000000000000..7c5bfd18fe4e
--- /dev/null
+++ b/arch/riscv/include/asm/atomic.h
@@ -0,0 +1,330 @@
+/*
+ * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+
+#ifndef _ASM_RISCV_ATOMIC_H
+#define _ASM_RISCV_ATOMIC_H
+
+#ifdef CONFIG_GENERIC_ATOMIC64
+# include <asm-generic/atomic64.h>
+#else
+# if (__riscv_xlen < 64)
+#  error "64-bit atomics require XLEN to be at least 64"
+# endif
+#endif
+
+#ifdef CONFIG_ISA_A
+
+#include <asm/cmpxchg.h>
+#include <asm/barrier.h>
+
+#define ATOMIC_INIT(i)	{ (i) }
+static __always_inline int atomic_read(const atomic_t *v)
+{
+	return READ_ONCE(v->counter);
+}
+static __always_inline void atomic_set(atomic_t *v, int i)
+{
+	WRITE_ONCE(v->counter, i);
+}
+
+#ifndef CONFIG_GENERIC_ATOMIC64
+#define ATOMIC64_INIT(i) { (i) }
+static __always_inline long atomic64_read(const atomic64_t *v)
+{
+	return READ_ONCE(v->counter);
+}
+static __always_inline void atomic64_set(atomic64_t *v, long i)
+{
+	WRITE_ONCE(v->counter, i);
+}
+#endif
+
+/* First, the atomic ops that have no ordering constraints and therefore don't
+ * have the AQ or RL bits set.  These don't return anything, so there's only
+ * one version to worry about.
+ */
+#define ATOMIC_OP(op, asm_op, c_op, I, asm_type, c_type, prefix)				\
+static __always_inline void atomic##prefix##_##op(c_type i, atomic##prefix##_t *v)		\
+{												\
+	__asm__ __volatile__ (									\
+		"amo" #asm_op "." #asm_type " zero, %1, %0"					\
+		: "+A" (v->counter)								\
+		: "r" (I));									\
+}
+
+#ifdef CONFIG_GENERIC_ATOMIC64
+#define ATOMIC_OPS(op, asm_op, c_op, I)			\
+        ATOMIC_OP (op, asm_op, c_op, I, w,  int,   )
+#else
+#define ATOMIC_OPS(op, asm_op, c_op, I)			\
+        ATOMIC_OP (op, asm_op, c_op, I, w,  int,   )	\
+        ATOMIC_OP (op, asm_op, c_op, I, d, long, 64)
+#endif
+
+ATOMIC_OPS(add, add, +,  i)
+ATOMIC_OPS(sub, add, +, -i)
+/* FIXME: I could only find documentation that atomic_{add,sub,inc,dec} are
+ * barrier-free.  I'm assuming that and/or/xor have the same constraints as the
+ * others.
+ */
+ATOMIC_OPS(and, and, &,  i)
+ATOMIC_OPS( or,  or, |,  i)
+ATOMIC_OPS(xor, xor, ^,  i)
+
+#undef ATOMIC_OP
+#undef ATOMIC_OPS
+
+/* Atomic ops that have ordered, relaxed, acquire, and release variants.
+ * There are two flavors of these: the arithmetic ops have both fetch and return
+ * versions, while the logical ops only have fetch versions. */
+#define ATOMIC_FETCH_OP(op, asm_op, c_op, I, asm_or, c_or, asm_type, c_type, prefix)			\
+static __always_inline c_type atomic##prefix##_fetch_##op##c_or(c_type i, atomic##prefix##_t *v)	\
+{													\
+	register c_type ret;										\
+	__asm__ __volatile__ (										\
+		"amo" #asm_op "." #asm_type #asm_or " %2, %1, %0"					\
+		: "+A" (v->counter), "=r" (ret)								\
+		: "r" (I));										\
+	return ret;											\
+}
+
+#define ATOMIC_OP_RETURN(op, asm_op, c_op, I, asm_or, c_or, asm_type, c_type, prefix)			\
+static __always_inline c_type atomic##prefix##_##op##_return##c_or(c_type i, atomic##prefix##_t *v)	\
+{													\
+        return atomic##prefix##_fetch_##op##c_or(i, v) c_op I;						\
+}
+
+#ifdef CONFIG_GENERIC_ATOMIC64
+#define ATOMIC_OPS(op, asm_op, c_op, I, asm_or, c_or)				\
+        ATOMIC_FETCH_OP (op, asm_op, c_op, I, asm_or, c_or, w,  int,   )	\
+        ATOMIC_OP_RETURN(op, asm_op, c_op, I, asm_or, c_or, w,  int,   )
+#else
+#define ATOMIC_OPS(op, asm_op, c_op, I, asm_or, c_or)				\
+        ATOMIC_FETCH_OP (op, asm_op, c_op, I, asm_or, c_or, w,  int,   )	\
+        ATOMIC_OP_RETURN(op, asm_op, c_op, I, asm_or, c_or, w,  int,   )	\
+        ATOMIC_FETCH_OP (op, asm_op, c_op, I, asm_or, c_or, d, long, 64)	\
+        ATOMIC_OP_RETURN(op, asm_op, c_op, I, asm_or, c_or, d, long, 64)
+#endif
+
+ATOMIC_OPS(add, add, +,  i,      , _relaxed)
+ATOMIC_OPS(add, add, +,  i, .aq  , _acquire)
+ATOMIC_OPS(add, add, +,  i, .rl  , _release)
+ATOMIC_OPS(add, add, +,  i, .aqrl,         )
+
+ATOMIC_OPS(sub, add, +, -i,      , _relaxed)
+ATOMIC_OPS(sub, add, +, -i, .aq  , _acquire)
+ATOMIC_OPS(sub, add, +, -i, .rl  , _release)
+ATOMIC_OPS(sub, add, +, -i, .aqrl,         )
+
+#undef ATOMIC_OPS
+
+#ifdef CONFIG_GENERIC_ATOMIC64
+#define ATOMIC_OPS(op, asm_op, c_op, I, asm_or, c_or)				\
+        ATOMIC_FETCH_OP(op, asm_op, c_op, I, asm_or, c_or, w,  int,   )
+#else
+#define ATOMIC_OPS(op, asm_op, c_op, I, asm_or, c_or)				\
+        ATOMIC_FETCH_OP(op, asm_op, c_op, I, asm_or, c_or, w,  int,   )		\
+        ATOMIC_FETCH_OP(op, asm_op, c_op, I, asm_or, c_or, d, long, 64)
+#endif
+
+ATOMIC_OPS(and, and, &,  i,      , _relaxed)
+ATOMIC_OPS(and, and, &,  i, .aq  , _acquire)
+ATOMIC_OPS(and, and, &,  i, .rl  , _release)
+ATOMIC_OPS(and, and, &,  i, .aqrl,         )
+
+ATOMIC_OPS( or,  or, |,  i,      , _relaxed)
+ATOMIC_OPS( or,  or, |,  i, .aq  , _acquire)
+ATOMIC_OPS( or,  or, |,  i, .rl  , _release)
+ATOMIC_OPS( or,  or, |,  i, .aqrl,         )
+
+ATOMIC_OPS(xor, xor, ^,  i,      , _relaxed)
+ATOMIC_OPS(xor, xor, ^,  i, .aq  , _acquire)
+ATOMIC_OPS(xor, xor, ^,  i, .rl  , _release)
+ATOMIC_OPS(xor, xor, ^,  i, .aqrl,         )
+
+#undef ATOMIC_OPS
+
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_OP_RETURN
+
+/* The extra atomic operations that are constructed from one of the core
+ * AMO-based operations above (aside from sub, which is easier to fit above).
+ * These are required to perform a barrier, but they're OK this way because
+ * atomic_*_return is also required to perform a barrier.
+ */
+#define ATOMIC_OP(op, func_op, comp_op, I, c_type, prefix)			\
+static __always_inline bool atomic##prefix##_##op(c_type i, atomic##prefix##_t *v) \
+{										\
+	return atomic##prefix##_##func_op##_return(i, v) comp_op I;		\
+}
+
+#ifdef CONFIG_GENERIC_ATOMIC64
+#define ATOMIC_OPS(op, func_op, comp_op, I)			\
+        ATOMIC_OP (op, func_op, comp_op, I,  int,   )
+#else
+#define ATOMIC_OPS(op, func_op, comp_op, I)			\
+        ATOMIC_OP (op, func_op, comp_op, I,  int,   )		\
+        ATOMIC_OP (op, func_op, comp_op, I, long, 64)
+#endif
+
+ATOMIC_OPS(add_and_test, add, ==, 0)
+ATOMIC_OPS(sub_and_test, sub, ==, 0)
+ATOMIC_OPS(add_negative, add,  <, 0)
+
+#undef ATOMIC_OP
+#undef ATOMIC_OPS
+
+#define ATOMIC_OP(op, func_op, c_op, I, prefix)					\
+static __always_inline void atomic##prefix##_##op(atomic##prefix##_t *v)	\
+{										\
+	atomic##prefix##_##func_op(I, v);					\
+}
+
+#define ATOMIC_FETCH_OP(op, func_op, c_op, I, prefix)				\
+static __always_inline int atomic##prefix##_fetch_##op(atomic##prefix##_t *v)	\
+{										\
+	return atomic##prefix##_fetch_##func_op(I, v);				\
+}
+
+#define ATOMIC_OP_RETURN(op, asm_op, c_op, I, prefix)				\
+static __always_inline int atomic##prefix##_##op##_return(atomic##prefix##_t *v) \
+{										\
+        return atomic##prefix##_fetch_##op(v) c_op I;				\
+}
+
+#ifdef CONFIG_GENERIC_ATOMIC64
+#define ATOMIC_OPS(op, asm_op, c_op, I)						\
+        ATOMIC_OP       (op, asm_op, c_op, I,   )				\
+        ATOMIC_FETCH_OP (op, asm_op, c_op, I,   )				\
+        ATOMIC_OP_RETURN(op, asm_op, c_op, I,   )
+#else
+#define ATOMIC_OPS(op, asm_op, c_op, I)						\
+        ATOMIC_OP       (op, asm_op, c_op, I,   )				\
+        ATOMIC_FETCH_OP (op, asm_op, c_op, I,   )				\
+        ATOMIC_OP_RETURN(op, asm_op, c_op, I,   )				\
+        ATOMIC_OP       (op, asm_op, c_op, I, 64)				\
+        ATOMIC_FETCH_OP (op, asm_op, c_op, I, 64)				\
+        ATOMIC_OP_RETURN(op, asm_op, c_op, I, 64)
+#endif
+
+ATOMIC_OPS(inc, add, +,  1)
+ATOMIC_OPS(dec, add, +, -1)
+
+#undef ATOMIC_OPS
+#undef ATOMIC_OP
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_OP_RETURN
+
+#define ATOMIC_OP(op, func_op, comp_op, I, prefix)				\
+static __always_inline bool atomic##prefix##_##op(atomic##prefix##_t *v)	\
+{										\
+	return atomic##prefix##_##func_op##_return(v) comp_op I;		\
+}
+
+ATOMIC_OP(inc_and_test, inc, ==, 0,   )
+ATOMIC_OP(dec_and_test, dec, ==, 0,   )
+#ifndef CONFIG_GENERIC_ATOMIC64
+ATOMIC_OP(inc_and_test, inc, ==, 0, 64)
+ATOMIC_OP(dec_and_test, dec, ==, 0, 64)
+#endif
+
+#undef ATOMIC_OP
+
+/* This is required to provide a barrier on success. */
+static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
+{
+       register int prev, rc;
+
+	__asm__ __volatile__ (
+		"0:\n\t"
+		"lr.w.aqrl %0, %2\n\t"
+		"beq       %0, %4, 1f\n\t"
+		"add       %1, %0, %3\n\t"
+		"sc.w.aqrl %1, %1, %2\n\t"
+		"bnez      %1, 0b\n\t"
+		"1:"
+		: "=&r" (prev), "=&r" (rc), "+A" (v->counter)
+		: "r" (a), "r" (u));
+	return prev;
+}
+
+#ifndef CONFIG_GENERIC_ATOMIC64
+static __always_inline long atomic64_add_unless(atomic64_t *v, long a, long u)
+{
+       register long prev, rc;
+
+	__asm__ __volatile__ (
+		"0:\n\t"
+		"lr.d.aqrl %0, %2\n\t"
+		"beq       %0, %4, 1f\n\t"
+		"add       %1, %0, %3\n\t"
+		"sc.d.aqrl %1, %1, %2\n\t"
+		"bnez      %1, 0b\n\t"
+		"1:"
+		: "=&r" (prev), "=&r" (rc), "+A" (v->counter)
+		: "r" (a), "r" (u));
+	return prev;
+}
+#endif
+
+/* The extra atomic operations that are constructed from one of the core
+ * LR/SC-based operations above.
+ */
+static __always_inline int atomic_inc_not_zero(atomic_t *v)
+{
+        return __atomic_add_unless(v, 1, 0);
+}
+
+#ifndef CONFIG_GENERIC_ATOMIC64
+static __always_inline int atomic64_inc_not_zero(atomic64_t *v)
+{
+        return atomic64_add_unless(v, 1, 0);
+}
+#endif
+
+/* atomic_{cmp,}xchg is required to have exactly the same ordering semantics as
+ * {cmp,}xchg and the operations that return, so they need a barrier.  We just
+ * use the other implementations directly.
+ */
+#define ATOMIC_OP(c_t, prefix, c_or, size, asm_or)						\
+static __always_inline c_t atomic##prefix##_cmpxchg##c_or(atomic##prefix##_t *v, c_t o, c_t n) 	\
+{												\
+	return __cmpxchg(&(v->counter), o, n, size, asm_or, asm_or);				\
+}												\
+static __always_inline c_t atomic##prefix##_xchg##c_or(atomic##prefix##_t *v, c_t n) 		\
+{												\
+	return __xchg(n, &(v->counter), size, asm_or);						\
+}
+
+#ifdef CONFIG_GENERIC_ATOMIC64
+#define ATOMIC_OPS(c_or, asm_or)			\
+	ATOMIC_OP( int,   , c_or, 4, asm_or)
+#else
+#define ATOMIC_OPS(c_or, asm_or)			\
+	ATOMIC_OP( int,   , c_or, 4, asm_or)		\
+	ATOMIC_OP(long, 64, c_or, 8, asm_or)
+#endif
+
+ATOMIC_OPS(        , .aqrl)
+ATOMIC_OPS(_acquire,   .aq)
+ATOMIC_OPS(_release,   .rl)
+ATOMIC_OPS(_relaxed,      )
+
+#undef ATOMIC_OPS
+#undef ATOMIC_OP
+
+#else /* !CONFIG_ISA_A */
+
+#include <asm-generic/atomic.h>
+
+#endif /* CONFIG_ISA_A */
+
+#endif /* _ASM_RISCV_ATOMIC_H */
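
As a quick sanity check on the macros above, the sketch below lists the AMO
encoding each public variant is expected to emit for a 32-bit atomic_t,
based on the asm_or suffixes passed to ATOMIC_OPS (illustrative only; the
wrapper function is just a container for the calls):

/* Illustrative: expected encodings for the generated variants (word case
 * shown; the 64-bit atomics use amoadd.d with the same suffixes).
 */
#include <linux/atomic.h>

static void atomic_ordering_examples(atomic_t *v)
{
	atomic_add(1, v);			/* amoadd.w, no ordering bits */
	atomic_fetch_add_relaxed(1, v);		/* amoadd.w		*/
	atomic_fetch_add_acquire(1, v);		/* amoadd.w.aq		*/
	atomic_fetch_add_release(1, v);		/* amoadd.w.rl		*/
	atomic_fetch_add(1, v);			/* amoadd.w.aqrl	*/
}
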
diff --git a/arch/riscv/include/asm/barrier.h b/arch/riscv/include/asm/barrier.h
new file mode 100644
index 000000000000..fe9c26f49e70
--- /dev/null
+++ b/arch/riscv/include/asm/barrier.h
@@ -0,0 +1,139 @@
+/*
+ * Based on arch/arm/include/asm/barrier.h
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2013 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _ASM_RISCV_BARRIER_H
+#define _ASM_RISCV_BARRIER_H
+
+#ifndef __ASSEMBLY__
+
+#define nop()		__asm__ __volatile__ ("nop")
+
+#define RISCV_FENCE(p, s) \
+	__asm__ __volatile__ ("fence " #p "," #s : : : "memory")
+
+/* These barriers need to enforce ordering on both devices and memory. */
+#define mb()		RISCV_FENCE(iorw,iorw)
+#define rmb()		RISCV_FENCE(ir,ir)
+#define wmb()		RISCV_FENCE(ow,ow)
+
+/* These barriers do not need to enforce ordering on devices, just memory. */
+#define smp_mb()	RISCV_FENCE(rw,rw)
+#define smp_rmb()	RISCV_FENCE(r,r)
+#define smp_wmb()	RISCV_FENCE(w,w)
+
+/* Our atomic operations set the AQ and RL bits and therefore we don't need to
+ * fence around atomics.
+ */
+#define __smp_mb__before_atomic()	barrier()
+#define __smp_mb__after_atomic()	barrier()
+
+/* These barriers are meant to prevent memory operations inside a spinlock from
+ * moving outside of that spinlock.  Since we set the AQ and RL bits when
+ * entering or leaving spinlocks, no additional fence needs to be performed.
+ */
+#define smp_mb__before_spinlock()	barrier()
+#define smp_mb__after_spinlock()	barrier()
+
+/* FIXME: I don't think RISC-V is allowed to perform a speculative load. */
+#define smp_acquire__after_ctrl_dep()	barrier()
+
+/* The RISC-V ISA doesn't support byte or half-word AMOs, so we fall back to a
+ * regular store and a fence here.  Otherwise we emit an AMO with an AQ or RL
+ * bit set and allow the microarchitecture to avoid the other half of the AMO.
+ */
+#define __smp_store_release(p, v)					\
+do {									\
+	union { typeof(*p) __val; char __c[1]; } __u =			\
+		{ .__val = (__force typeof(*p)) (v) };			\
+	compiletime_assert_atomic_type(*p);				\
+	switch (sizeof(*p)) {						\
+	case 1:								\
+	case 2:								\
+		smp_mb();						\
+		WRITE_ONCE(*p, __u.__val);				\
+		break;							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			"amoswap.w.rl zero, %1, %0"			\
+			: "+A" (*p)					\
+			: "r" (__u.__val)				\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			"amoswap.d.rl zero, %1, %0"			\
+			: "+A" (*p)					\
+			: "r" (__u.__val)				\
+			: "memory");					\
+		break;							\
+	}								\
+} while (0)
+
+#define __smp_load_acquire(p)						\
+({									\
+	union { typeof(*p) __val; char __c[1]; } __u;			\
+	compiletime_assert_atomic_type(*p);				\
+	switch (sizeof(*p)) {						\
+	case 1:								\
+	case 2:								\
+		__u.__val = READ_ONCE(*p);				\
+		smp_mb();						\
+		break;							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			"amoor.w.aq %1, zero, %0"			\
+			: "+A" (*p), "=r" (__u.__val)			\
+			:						\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			"amoor.d.aq %1, zero, %0"			\
+			: "+A" (*p), "=r" (__u.__val)			\
+			:						\
+			: "memory");					\
+		break;							\
+	}								\
+	__u.__val;							\
+})
+
+/* The default implementation of this uses READ_ONCE and
+ * smp_acquire__after_ctrl_dep, but since we can directly do an ACQUIRE load we
+ * can avoid the extra barrier.
+ */
+#define smp_cond_load_acquire(ptr, cond_expr) ({			\
+	typeof(ptr) __PTR = (ptr);					\
+	typeof(*ptr) VAL;						\
+	for (;;) {							\
+		VAL = __smp_load_acquire(__PTR);			\
+		if (cond_expr)						\
+			break;						\
+		cpu_relax();						\
+	}								\
+	smp_acquire__after_ctrl_dep();					\
+	VAL;								\
+})
+
+#include <asm-generic/barrier.h>
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_BARRIER_H */
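
The release/acquire helpers above are easiest to see in the classic
message-passing shape.  A minimal sketch, assuming the generic
smp_store_release()/smp_load_acquire() wrappers from <asm-generic/barrier.h>
pick up the definitions above (illustrative, not part of the patch):

/* Illustrative producer/consumer handoff using the primitives above. */
#include <linux/compiler.h>
#include <asm/barrier.h>
#include <asm/processor.h>	/* cpu_relax() */

static int payload;
static int ready;

static void producer(void)
{
	payload = 42;			/* plain store			*/
	smp_store_release(&ready, 1);	/* amoswap.w.rl on RISC-V	*/
}

static int consumer(void)
{
	while (!smp_load_acquire(&ready))	/* amoor.w.aq on RISC-V */
		cpu_relax();
	return payload;			/* guaranteed to observe 42	*/
}
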
diff --git a/arch/riscv/include/asm/bitops.h b/arch/riscv/include/asm/bitops.h
new file mode 100644
index 000000000000..27e47858c6b1
--- /dev/null
+++ b/arch/riscv/include/asm/bitops.h
@@ -0,0 +1,228 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_BITOPS_H
+#define _ASM_RISCV_BITOPS_H
+
+#ifndef _LINUX_BITOPS_H
+#error "Only <linux/bitops.h> can be included directly"
+#endif /* _LINUX_BITOPS_H */
+
+#ifdef __KERNEL__
+
+#include <linux/compiler.h>
+#include <linux/irqflags.h>
+#include <asm/barrier.h>
+#include <asm/bitsperlong.h>
+
+#ifdef CONFIG_ISA_A
+
+#ifndef smp_mb__before_clear_bit
+#define smp_mb__before_clear_bit()  smp_mb()
+#define smp_mb__after_clear_bit()   smp_mb()
+#endif /* smp_mb__before_clear_bit */
+
+#include <asm-generic/bitops/__ffs.h>
+#include <asm-generic/bitops/ffz.h>
+#include <asm-generic/bitops/fls.h>
+#include <asm-generic/bitops/__fls.h>
+#include <asm-generic/bitops/fls64.h>
+#include <asm-generic/bitops/find.h>
+#include <asm-generic/bitops/sched.h>
+#include <asm-generic/bitops/ffs.h>
+
+#include <asm-generic/bitops/hweight.h>
+
+#if (BITS_PER_LONG == 64)
+#define __AMO(op)	"amo" #op ".d"
+#elif (BITS_PER_LONG == 32)
+#define __AMO(op)	"amo" #op ".w"
+#else
+#error "Unexpected BITS_PER_LONG"
+#endif
+
+#define __test_and_op_bit(op, mod, nr, addr)			\
+({								\
+	unsigned long __res, __mask;				\
+	__mask = BIT_MASK(nr);					\
+	__asm__ __volatile__ (					\
+		__AMO(op) " %0, %2, %1"				\
+		: "=r" (__res), "+A" (addr[BIT_WORD(nr)])	\
+		: "r" (mod(__mask)));				\
+	((__res & __mask) != 0);				\
+})
+
+#define __op_bit(op, mod, nr, addr)				\
+	__asm__ __volatile__ (					\
+		__AMO(op) " zero, %1, %0"			\
+		: "+A" (addr[BIT_WORD(nr)])			\
+		: "r" (mod(BIT_MASK(nr))))
+
+/* Bitmask modifiers */
+#define __NOP(x)	(x)
+#define __NOT(x)	(~(x))
+
+/**
+ * test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It may be reordered on architectures other than x86.
+ * It also implies a memory barrier.
+ */
+static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
+{
+	return __test_and_op_bit(or, __NOP, nr, addr);
+}
+
+/**
+ * test_and_clear_bit - Clear a bit and return its old value
+ * @nr: Bit to clear
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It can be reordered on architectures other than x86.
+ * It also implies a memory barrier.
+ */
+static inline int test_and_clear_bit(int nr, volatile unsigned long *addr)
+{
+	return __test_and_op_bit(and, __NOT, nr, addr);
+}
+
+/**
+ * test_and_change_bit - Change a bit and return its old value
+ * @nr: Bit to change
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It also implies a memory barrier.
+ */
+static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
+{
+	return __test_and_op_bit(xor, __NOP, nr, addr);
+}
+
+/**
+ * set_bit - Atomically set a bit in memory
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This function is atomic and may not be reordered.  See __set_bit()
+ * if you do not require the atomic guarantees.
+ *
+ * Note: there are no guarantees that this function will not be reordered
+ * on non x86 architectures, so if you are writing portable code,
+ * make sure not to rely on its reordering guarantees.
+ *
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ */
+static inline void set_bit(int nr, volatile unsigned long *addr)
+{
+	__op_bit(or, __NOP, nr, addr);
+}
+
+/**
+ * clear_bit - Clears a bit in memory
+ * @nr: Bit to clear
+ * @addr: Address to start counting from
+ *
+ * clear_bit() is atomic and may not be reordered.  However, it does
+ * not contain a memory barrier, so if it is used for locking purposes,
+ * you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
+ * in order to ensure changes are visible on other processors.
+ */
+static inline void clear_bit(int nr, volatile unsigned long *addr)
+{
+	__op_bit(and, __NOT, nr, addr);
+}
+
+/**
+ * change_bit - Toggle a bit in memory
+ * @nr: Bit to change
+ * @addr: Address to start counting from
+ *
+ * change_bit() is atomic and may not be reordered. It may be
+ * reordered on architectures other than x86.
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ */
+static inline void change_bit(int nr, volatile unsigned long *addr)
+{
+	__op_bit(xor, __NOP, nr, addr);
+}
+
+/**
+ * test_and_set_bit_lock - Set a bit and return its old value, for lock
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is atomic and provides acquire barrier semantics.
+ * It can be used to implement bit locks.
+ */
+static inline int test_and_set_bit_lock(
+	unsigned long nr, volatile unsigned long *addr)
+{
+	return test_and_set_bit(nr, addr);
+}
+
+/**
+ * clear_bit_unlock - Clear a bit in memory, for unlock
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This operation is atomic and provides release barrier semantics.
+ */
+static inline void clear_bit_unlock(
+	unsigned long nr, volatile unsigned long *addr)
+{
+	clear_bit(nr, addr);
+}
+
+/**
+ * __clear_bit_unlock - Clear a bit in memory, for unlock
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This operation is like clear_bit_unlock, however it is not atomic.
+ * It does provide release barrier semantics so it can be used to unlock
+ * a bit lock, however it would only be used if no other CPU can modify
+ * any bits in the memory until the lock is released (a good example is
+ * if the bit lock itself protects access to the other bits in the word).
+ */
+static inline void __clear_bit_unlock(
+	unsigned long nr, volatile unsigned long *addr)
+{
+	clear_bit(nr, addr);
+}
+
+#undef __test_and_op_bit
+#undef __op_bit
+#undef __NOP
+#undef __NOT
+#undef __AMO
+
+#include <asm-generic/bitops/non-atomic.h>
+#include <asm-generic/bitops/le.h>
+#include <asm-generic/bitops/ext2-atomic.h>
+
+#else /* !CONFIG_ISA_A */
+
+#include <asm-generic/bitops.h>
+
+#endif /* CONFIG_ISA_A */
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_BITOPS_H */
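
The *_lock/*_unlock bit operations above exist so callers can open-code a
bit lock.  A minimal sketch of that pattern (illustrative only; the state
word and bit number are hypothetical):

/* Illustrative bit-lock usage of the primitives defined above. */
#include <linux/bitops.h>
#include <asm/processor.h>	/* cpu_relax() */

#define FOO_STATE_LOCKED	0	/* hypothetical bit number */

static unsigned long foo_state;

static void foo_bit_lock(void)
{
	/* acquire side: keep trying until the bit was previously clear */
	while (test_and_set_bit_lock(FOO_STATE_LOCKED, &foo_state))
		cpu_relax();
}

static void foo_bit_unlock(void)
{
	/* release side: clear the bit so another locker can proceed */
	clear_bit_unlock(FOO_STATE_LOCKED, &foo_state);
}
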
diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
new file mode 100644
index 000000000000..0595585013b0
--- /dev/null
+++ b/arch/riscv/include/asm/cacheflush.h
@@ -0,0 +1,39 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CACHEFLUSH_H
+#define _ASM_RISCV_CACHEFLUSH_H
+
+#include <asm-generic/cacheflush.h>
+
+#undef flush_icache_range
+#undef flush_icache_user_range
+
+static inline void local_flush_icache_all(void)
+{
+	asm volatile ("fence.i" ::: "memory");
+}
+
+#ifndef CONFIG_SMP
+
+#define flush_icache_range(start, end) local_flush_icache_all()
+#define flush_icache_user_range(vma, pg, addr, len) local_flush_icache_all()
+
+#else /* CONFIG_SMP */
+
+#define flush_icache_range(start, end) sbi_remote_fence_i(0)
+#define flush_icache_user_range(vma, pg, addr, len) sbi_remote_fence_i(0)
+
+#endif /* CONFIG_SMP */
+
+#endif /* _ASM_RISCV_CACHEFLUSH_H */
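
fence.i only affects the local hart, which is why the SMP configuration has
to go through the SBI above.  The caller-side pattern is the usual one; a
minimal sketch, with the destination buffer purely hypothetical:

/* Illustrative: make newly written instructions visible to fetch. */
#include <linux/string.h>
#include <asm/cacheflush.h>

static void publish_code(void *dst, const void *src, unsigned long len)
{
	memcpy(dst, src, len);		/* write out the new instructions */
	flush_icache_range((unsigned long)dst,
			   (unsigned long)dst + len);	/* fence.i / SBI */
}
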
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
new file mode 100644
index 000000000000..81025c056412
--- /dev/null
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -0,0 +1,138 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CMPXCHG_H
+#define _ASM_RISCV_CMPXCHG_H
+
+#include <linux/bug.h>
+
+#ifdef CONFIG_ISA_A
+
+#include <asm/barrier.h>
+
+#define __xchg(new, ptr, size, asm_or)				\
+({								\
+	__typeof__(ptr) __ptr = (ptr);				\
+	__typeof__(new) __new = (new);				\
+	__typeof__(*(ptr)) __ret;				\
+	switch (size) {						\
+	case 4:							\
+		__asm__ __volatile__ (				\
+			"amoswap.w" #asm_or " %0, %2, %1"	\
+			: "=r" (__ret), "+A" (*__ptr)		\
+			: "r" (__new));				\
+		break;						\
+	case 8:							\
+		__asm__ __volatile__ (				\
+			"amoswap.d" #asm_or " %0, %2, %1"	\
+			: "=r" (__ret), "+A" (*__ptr)		\
+			: "r" (__new));				\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+	}							\
+	__ret;							\
+})
+
+#define xchg(ptr, x)    (__xchg((x), (ptr), sizeof(*(ptr)), .aqrl))
+
+#define xchg32(ptr, x)				\
+({						\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 4);	\
+	xchg((ptr), (x));			\
+})
+
+#define xchg64(ptr, x)				\
+({						\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);	\
+	xchg((ptr), (x));			\
+})
+
+/*
+ * Atomic compare and exchange.  Compare OLD with MEM, if identical,
+ * store NEW in MEM.  Return the initial value in MEM.  Success is
+ * indicated by comparing RETURN with OLD.
+ */
+#define __cmpxchg(ptr, old, new, size, lrb, scb)			\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(old) __old = (old);					\
+	__typeof__(new) __new = (new);					\
+	__typeof__(*(ptr)) __ret;					\
+	register unsigned int __rc;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+		"0:"							\
+			"lr.w" #scb " %0, %2\n"				\
+			"bne         %0, %z3, 1f\n"			\
+			"sc.w" #lrb " %1, %z4, %2\n"			\
+			"bnez        %1, 0b\n"				\
+		"1:"							\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new));			\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+		"0:"							\
+			"lr.d" #scb " %0, %2\n"				\
+			"bne         %0, %z3, 1f\n"			\
+			"sc.d" #lrb " %1, %z4, %2\n"			\
+			"bnez        %1, 0b\n"				\
+		"1:"							\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new));			\
+		break;							\
+	default:							\
+		BUILD_BUG();						\
+	}								\
+	__ret;								\
+})
+
+#define cmpxchg(ptr, o, n) \
+	(__cmpxchg((ptr), (o), (n), sizeof(*(ptr)), .aqrl, .aqrl))
+
+#define cmpxchg_local(ptr, o, n) \
+	(__cmpxchg((ptr), (o), (n), sizeof(*(ptr)), , ))
+
+#define cmpxchg32(ptr, o, n)			\
+({						\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 4);	\
+	cmpxchg((ptr), (o), (n));		\
+})
+
+#define cmpxchg32_local(ptr, o, n)		\
+({						\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 4);	\
+	cmpxchg_local((ptr), (o), (n));		\
+})
+
+#define cmpxchg64(ptr, o, n)			\
+({						\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);	\
+	cmpxchg((ptr), (o), (n));		\
+})
+
+#define cmpxchg64_local(ptr, o, n)		\
+({						\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);	\
+	cmpxchg_local((ptr), (o), (n));		\
+})
+
+#else /* !CONFIG_ISA_A */
+
+#include <asm-generic/cmpxchg.h>
+
+#endif /* CONFIG_ISA_A */
+
+#endif /* _ASM_RISCV_CMPXCHG_H */
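
cmpxchg() above is almost always used in an optimistic read-modify-write
loop; a minimal sketch of that shape (illustrative; the counter and the
saturation limit are hypothetical):

/* Illustrative lock-free update loop built on the cmpxchg() above. */
#include <linux/compiler.h>
#include <asm/cmpxchg.h>

static unsigned long counter;

static unsigned long saturating_inc(unsigned long limit)
{
	unsigned long old, new;

	do {
		old = READ_ONCE(counter);
		if (old == limit)
			return old;		/* already saturated */
		new = old + 1;
		/* retry if another CPU updated counter underneath us */
	} while (cmpxchg(&counter, old, new) != old);

	return new;
}
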
diff --git a/arch/riscv/include/asm/io.h b/arch/riscv/include/asm/io.h
new file mode 100644
index 000000000000..9c76ecfd203d
--- /dev/null
+++ b/arch/riscv/include/asm/io.h
@@ -0,0 +1,178 @@
+/*
+ * {read,write}{b,w,l,q} based on arch/arm64/include/asm/io.h
+ *   which was based on arch/arm/include/io.h
+ *
+ * Copyright (C) 1996-2000 Russell King
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_IO_H
+#define _ASM_RISCV_IO_H
+
+#ifdef __KERNEL__
+
+#ifdef CONFIG_MMU
+
+extern void __iomem *ioremap(phys_addr_t offset, unsigned long size);
+
+/* The RISC-V ISA doesn't yet specify how to query or modify PMAs, so we can't
+ * change the properties of memory regions.  This should be fixed by the
+ * upcoming platform spec.
+ */
+#define ioremap_nocache(addr, size) ioremap((addr), (size))
+#define ioremap_wc(addr, size) ioremap((addr), (size))
+#define ioremap_wt(addr, size) ioremap((addr), (size))
+
+extern void iounmap(void __iomem *addr);
+
+#endif /* CONFIG_MMU */
+
+/* Generic IO read/write.  These perform native-endian accesses. */
+#define __raw_writeb __raw_writeb
+static inline void __raw_writeb(u8 val, volatile void __iomem *addr)
+{
+	asm volatile("sb %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+
+#define __raw_writew __raw_writew
+static inline void __raw_writew(u16 val, volatile void __iomem *addr)
+{
+	asm volatile("sh %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+
+#define __raw_writel __raw_writel
+static inline void __raw_writel(u32 val, volatile void __iomem *addr)
+{
+	asm volatile("sw %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+
+#ifdef __riscv64
+#define __raw_writeq __raw_writeq
+static inline void __raw_writeq(u64 val, volatile void __iomem *addr)
+{
+	asm volatile("sd %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+#endif
+
+#define __raw_readb __raw_readb
+static inline u8 __raw_readb(const volatile void __iomem *addr)
+{
+	u8 val;
+
+	asm volatile("lb %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+
+#define __raw_readw __raw_readw
+static inline u16 __raw_readw(const volatile void __iomem *addr)
+{
+	u16 val;
+
+	asm volatile("lh %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+
+#define __raw_readl __raw_readl
+static inline u32 __raw_readl(const volatile void __iomem *addr)
+{
+	u32 val;
+
+	asm volatile("lw %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+
+#ifdef __riscv64
+#define __raw_readq __raw_readq
+static inline u64 __raw_readq(const volatile void __iomem *addr)
+{
+	u64 val;
+
+	asm volatile("ld %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+#endif
+
+/* FIXME: I'm flip-flopping on whether or not we should keep this or enforce
+ * the ordering with I/O on spinlocks.  The worry is that drivers won't get
+ * this correct, but I also don't want to introduce a fence into the lock code
+ * that otherwise only uses AMOs and LR/SC.   For now I'm leaving this here:
+ * "w,o" is sufficient to ensure that all writes to the device has completed
+ * before the write to the spinlock is allowed to commit.  I surmised this from
+ * reading "ACQUIRES VS I/O ACCESSES" in memory-barries.txt.
+ */
+#define mmiowb()	__asm__ __volatile__ ("fence o,w" : : : "memory");
+
+/* Unordered I/O memory access primitives.  These are even more relaxed than
+ * the relaxed versions, as they don't even order accesses between successive
+ * operations to the I/O regions.
+ */
+#define readb_cpu(c)		({ u8  __r = __raw_readb(c); __r; })
+#define readw_cpu(c)		({ u16 __r = le16_to_cpu((__force __le16)__raw_readw(c)); __r; })
+#define readl_cpu(c)		({ u32 __r = le32_to_cpu((__force __le32)__raw_readl(c)); __r; })
+#define readq_cpu(c)		({ u64 __r = le64_to_cpu((__force __le64)__raw_readq(c)); __r; })
+
+#define writeb_cpu(v,c)		((void)__raw_writeb((v),(c)))
+#define writew_cpu(v,c)		((void)__raw_writew((__force u16)cpu_to_le16(v),(c)))
+#define writel_cpu(v,c)		((void)__raw_writel((__force u32)cpu_to_le32(v),(c)))
+#define writeq_cpu(v,c)		((void)__raw_writeq((__force u64)cpu_to_le64(v),(c)))
+
+/* Relaxed I/O memory access primitives. These follow the Device memory
+ * ordering rules but do not guarantee any ordering relative to Normal memory
+ * accesses.  The memory barriers here are necessary as RISC-V doesn't define
+ * any ordering constraints on accesses to the device I/O space.  These are
+ * defined to order the indicated access (either a read or write) with all
+ * other I/O memory accesses.
+ */
+/* FIXME: The platform spec will define the default Linux-capable platform to
+ * have some extra IO ordering constraints that will make these fences
+ * unnecessary.
+ */
+#define __iorrmb()	__asm__ __volatile__ ("fence i,io" : : : "memory");
+#define __iorwmb()	__asm__ __volatile__ ("fence io,o" : : : "memory");
+
+#define readb_relaxed(c)	({ u8  __v = readb_cpu(c); __iorrmb(); __v; })
+#define readw_relaxed(c)	({ u16 __v = readw_cpu(c); __iorrmb(); __v; })
+#define readl_relaxed(c)	({ u32 __v = readl_cpu(c); __iorrmb(); __v; })
+#define readq_relaxed(c)	({ u64 __v = readq_cpu(c); __iorrmb(); __v; })
+
+#define writeb_relaxed(v,c)	({ __iorwmb(); writeb_cpu((v),(c)); })
+#define writew_relaxed(v,c)	({ __iorwmb(); writew_cpu((v),(c)); })
+#define writel_relaxed(v,c)	({ __iorwmb(); writel_cpu((v),(c)); })
+#define writeq_relaxed(v,c)	({ __iorwmb(); writeq_cpu((v),(c)); })
+
+/* I/O memory access primitives. Reads are ordered relative to any
+ * following Normal memory access. Writes are ordered relative to any prior
+ * Normal memory access.  The memory barriers here are necessary as RISC-V
+ * doesn't define any ordering between the memory space and the I/O space.
+ * They may be stronger than necessary ("i,r" and "w,o" might be sufficient),
+ * but I feel kind of queasy making these weaker in any manner than the relaxed
+ * versions above.
+ */
+#define __iormb()	__asm__ __volatile__ ("fence i,ior" : : : "memory");
+#define __iowmb()	__asm__ __volatile__ ("fence iow,o" : : : "memory");
+
+#define readb(c)		({ u8  __v = readb_cpu(c); __iormb(); __v; })
+#define readw(c)		({ u16 __v = readw_cpu(c); __iormb(); __v; })
+#define readl(c)		({ u32 __v = readl_cpu(c); __iormb(); __v; })
+#define readq(c)		({ u64 __v = readq_cpu(c); __iormb(); __v; })
+
+#define writeb(v,c)		({ __iowmb(); writeb_cpu((v),(c)); })
+#define writew(v,c)		({ __iowmb(); writew_cpu((v),(c)); })
+#define writel(v,c)		({ __iowmb(); writel_cpu((v),(c)); })
+#define writeq(v,c)		({ __iowmb(); writeq_cpu((v),(c)); })
+
+#include <asm-generic/io.h>
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_IO_H */
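
The three accessor families above differ only in the fences they append; a
sketch of how a driver would choose between the fully ordered and the
relaxed forms (illustrative; the "baz" device and its offsets are
hypothetical):

/* Illustrative: fully ordered vs. relaxed MMIO reads from above. */
#include <linux/io.h>

#define BAZ_STATUS	0x00	/* hypothetical */
#define BAZ_DATA	0x04	/* hypothetical */

static int baz_drain(void __iomem *regs, u32 *buf, int n)
{
	int i;

	/* readl(): ordered against following Normal memory accesses,
	 * per the comment above, so the status check happens first.
	 */
	if (!(readl(regs + BAZ_STATUS) & 0x1))
		return 0;

	/* readl_relaxed(): only ordered against other I/O accesses,
	 * which is cheaper in a tight register-draining loop.
	 */
	for (i = 0; i < n; i++)
		buf[i] = readl_relaxed(regs + BAZ_DATA);

	return i;
}
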
diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
new file mode 100644
index 000000000000..54ed73bfa972
--- /dev/null
+++ b/arch/riscv/include/asm/spinlock.h
@@ -0,0 +1,167 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SPINLOCK_H
+#define _ASM_RISCV_SPINLOCK_H
+
+#include <linux/kernel.h>
+#include <asm/current.h>
+
+/*
+ * Simple spin lock operations.  These provide no fairness guarantees.
+ */
+
+/* FIXME: Replace this with a ticket lock, like MIPS. */
+
+#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
+#define arch_spin_is_locked(x)	((x)->lock != 0)
+
+static inline void arch_spin_unlock(arch_spinlock_t *lock)
+{
+	__asm__ __volatile__ (
+		"amoswap.w.rl x0, x0, %0"
+		: "=A" (lock->lock)
+		:: "memory");
+}
+
+static inline int arch_spin_trylock(arch_spinlock_t *lock)
+{
+	int tmp = 1, busy;
+
+	__asm__ __volatile__ (
+		"amoswap.w.aq %0, %2, %1"
+		: "=r" (busy), "+A" (lock->lock)
+		: "r" (tmp)
+		: "memory");
+
+	return !busy;
+}
+
+static inline void arch_spin_lock(arch_spinlock_t *lock)
+{
+	while (1) {
+		if (arch_spin_is_locked(lock))
+			continue;
+
+		if (arch_spin_trylock(lock))
+			break;
+	}
+}
+
+static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
+{
+	smp_rmb();
+	do {
+		cpu_relax();
+	} while (arch_spin_is_locked(lock));
+	smp_acquire__after_ctrl_dep();
+}
+
+/***********************************************************/
+
+static inline int arch_read_can_lock(arch_rwlock_t *lock)
+{
+	return lock->lock >= 0;
+}
+
+static inline int arch_write_can_lock(arch_rwlock_t *lock)
+{
+	return lock->lock == 0;
+}
+
+static inline void arch_read_lock(arch_rwlock_t *lock)
+{
+	int tmp;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bltz	%1, 1b\n"
+		"	addi	%1, %1, 1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		: "+A" (lock->lock), "=&r" (tmp)
+		:: "memory");
+}
+
+static inline void arch_write_lock(arch_rwlock_t *lock)
+{
+	int tmp;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bnez	%1, 1b\n"
+		"	li	%1, -1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		: "+A" (lock->lock), "=&r" (tmp)
+		:: "memory");
+}
+
+static inline int arch_read_trylock(arch_rwlock_t *lock)
+{
+	int busy;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bltz	%1, 1f\n"
+		"	addi	%1, %1, 1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		"1:\n"
+		: "+A" (lock->lock), "=&r" (busy)
+		:: "memory");
+
+	return !busy;
+}
+
+static inline int arch_write_trylock(arch_rwlock_t *lock)
+{
+	int busy;
+
+	__asm__ __volatile__(
+		"1:	lr.w	%1, %0\n"
+		"	bnez	%1, 1f\n"
+		"	li	%1, -1\n"
+		"	sc.w.aq	%1, %1, %0\n"
+		"	bnez	%1, 1b\n"
+		"1:\n"
+		: "+A" (lock->lock), "=&r" (busy)
+		:: "memory");
+
+	return !busy;
+}
+
+static inline void arch_read_unlock(arch_rwlock_t *lock)
+{
+	__asm__ __volatile__(
+		"amoadd.w.rl x0, %1, %0"
+		: "+A" (lock->lock)
+		: "r" (-1)
+		: "memory");
+}
+
+static inline void arch_write_unlock(arch_rwlock_t *lock)
+{
+	__asm__ __volatile__ (
+		"amoswap.w.rl x0, x0, %0"
+		: "=A" (lock->lock)
+		:: "memory");
+}
+
+#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
+#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
+
+#endif /* _ASM_RISCV_SPINLOCK_H */
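
The rwlock above packs its whole state into one signed word: 0 means
unlocked, a positive value counts active readers, and -1 marks a writer.  A
small user-space model of the reader path, written with __atomic builtins
purely to restate what the LR/SC loops do (a sketch, not kernel code):

/* Illustrative model of the rwlock word used above. */
static int rw_word;	/* 0 = free, >0 = reader count, -1 = writer */

static int rw_read_trylock(void)
{
	int old = __atomic_load_n(&rw_word, __ATOMIC_RELAXED);

	if (old < 0)		/* "bltz": a writer holds the lock */
		return 0;
	/* "addi" + "sc.w.aq": publish one more reader, acquire ordering */
	return __atomic_compare_exchange_n(&rw_word, &old, old + 1, 0,
					   __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}

static void rw_read_unlock(void)
{
	/* "amoadd.w.rl -1": drop the reader with release ordering */
	__atomic_fetch_add(&rw_word, -1, __ATOMIC_RELEASE);
}
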
diff --git a/arch/riscv/include/asm/spinlock_types.h b/arch/riscv/include/asm/spinlock_types.h
new file mode 100644
index 000000000000..83ac4ac9e2ac
--- /dev/null
+++ b/arch/riscv/include/asm/spinlock_types.h
@@ -0,0 +1,33 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SPINLOCK_TYPES_H
+#define _ASM_RISCV_SPINLOCK_TYPES_H
+
+#ifndef __LINUX_SPINLOCK_TYPES_H
+# error "please don't include this file directly"
+#endif
+
+typedef struct {
+	volatile unsigned int lock;
+} arch_spinlock_t;
+
+#define __ARCH_SPIN_LOCK_UNLOCKED	{ 0 }
+
+typedef struct {
+	volatile unsigned int lock;
+} arch_rwlock_t;
+
+#define __ARCH_RW_LOCK_UNLOCKED		{ 0 }
+
+#endif
diff --git a/arch/riscv/include/asm/tlb.h b/arch/riscv/include/asm/tlb.h
new file mode 100644
index 000000000000..c229509288ea
--- /dev/null
+++ b/arch/riscv/include/asm/tlb.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_TLB_H
+#define _ASM_RISCV_TLB_H
+
+#include <asm-generic/tlb.h>
+
+static inline void tlb_flush(struct mmu_gather *tlb)
+{
+	flush_tlb_mm(tlb->mm);
+}
+
+#endif /* _ASM_RISCV_TLB_H */
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
new file mode 100644
index 000000000000..5ee4ae370b5e
--- /dev/null
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -0,0 +1,64 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_TLBFLUSH_H
+#define _ASM_RISCV_TLBFLUSH_H
+
+#ifdef CONFIG_MMU
+
+/* Flush entire local TLB */
+static inline void local_flush_tlb_all(void)
+{
+	__asm__ __volatile__ ("sfence.vma" : : : "memory");
+}
+
+/* Flush one page from local TLB */
+static inline void local_flush_tlb_page(unsigned long addr)
+{
+	__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
+}
+
+#ifndef CONFIG_SMP
+
+#define flush_tlb_all() local_flush_tlb_all()
+#define flush_tlb_page(vma, addr) local_flush_tlb_page(addr)
+#define flush_tlb_range(vma, start, end) local_flush_tlb_all()
+
+#else /* CONFIG_SMP */
+
+#include <asm/sbi.h>
+
+#define flush_tlb_all() sbi_remote_sfence_vma(0, 0, -1)
+#define flush_tlb_page(vma, addr) flush_tlb_range(vma, addr, (addr) + PAGE_SIZE)
+#define flush_tlb_range(vma, start, end) \
+	sbi_remote_sfence_vma(0, start, (end) - (start))
+
+#endif /* CONFIG_SMP */
+
+/* Flush the TLB entries of the specified mm context */
+static inline void flush_tlb_mm(struct mm_struct *mm)
+{
+	flush_tlb_all();
+}
+
+/* Flush a range of kernel pages */
+static inline void flush_tlb_kernel_range(unsigned long start,
+	unsigned long end)
+{
+	flush_tlb_all();
+}
+
+#endif /* CONFIG_MMU */
+
+#endif /* _ASM_RISCV_TLBFLUSH_H */
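
On an SMP configuration everything above funnels into the SBI; a short
sketch of what the two common call sites expand to under these macros
(illustrative only, and assuming the flush_tlb_page() definition above):

/* Illustrative only: expansions of the macros above on CONFIG_SMP. */
#include <linux/mm_types.h>
#include <asm/tlbflush.h>

static void example_flushes(struct vm_area_struct *vma, unsigned long addr,
			    unsigned long start, unsigned long end)
{
	/* one page:  sbi_remote_sfence_vma(0, addr, PAGE_SIZE) */
	flush_tlb_page(vma, addr);

	/* any range: sbi_remote_sfence_vma(0, start, end - start) */
	flush_tlb_range(vma, start, end);
}
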
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 3/9] RISC-V: Generic library routines and assembly
  2017-06-28 18:55     ` Palmer Dabbelt
@ 2017-06-28 18:55       ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This patch contains code that is more specific to the RISC-V ISA than it
is to Linux.  It contains string and math operations, C wrappers for
various assembly instructions, stack walking code, and uaccess.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/asm.h            |  76 ++++++
 arch/riscv/include/asm/csr.h            | 125 +++++++++
 arch/riscv/include/asm/linkage.h        |  20 ++
 arch/riscv/include/asm/string.h         |  30 +++
 arch/riscv/include/asm/uaccess.h        | 440 ++++++++++++++++++++++++++++++++
 arch/riscv/include/asm/word-at-a-time.h |  55 ++++
 arch/riscv/kernel/stacktrace.c          | 177 +++++++++++++
 arch/riscv/lib/memcpy.S                 |  98 +++++++
 arch/riscv/lib/memset.S                 | 118 +++++++++
 arch/riscv/lib/uaccess.S                | 125 +++++++++
 arch/riscv/lib/udivdi3.S                |  38 +++
 11 files changed, 1302 insertions(+)
 create mode 100644 arch/riscv/include/asm/asm.h
 create mode 100644 arch/riscv/include/asm/csr.h
 create mode 100644 arch/riscv/include/asm/linkage.h
 create mode 100644 arch/riscv/include/asm/string.h
 create mode 100644 arch/riscv/include/asm/uaccess.h
 create mode 100644 arch/riscv/include/asm/word-at-a-time.h
 create mode 100644 arch/riscv/kernel/stacktrace.c
 create mode 100644 arch/riscv/lib/memcpy.S
 create mode 100644 arch/riscv/lib/memset.S
 create mode 100644 arch/riscv/lib/uaccess.S
 create mode 100644 arch/riscv/lib/udivdi3.S

diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
new file mode 100644
index 000000000000..6cbbb6a68d76
--- /dev/null
+++ b/arch/riscv/include/asm/asm.h
@@ -0,0 +1,76 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_ASM_H
+#define _ASM_RISCV_ASM_H
+
+#ifdef __ASSEMBLY__
+#define __ASM_STR(x)	x
+#else
+#define __ASM_STR(x)	#x
+#endif
+
+#if __riscv_xlen == 64
+#define __REG_SEL(a, b)	__ASM_STR(a)
+#elif __riscv_xlen == 32
+#define __REG_SEL(a, b)	__ASM_STR(b)
+#else
+#error "Unexpected __riscv_xlen"
+#endif
+
+#define REG_L		__REG_SEL(ld, lw)
+#define REG_S		__REG_SEL(sd, sw)
+#define SZREG		__REG_SEL(8, 4)
+#define LGREG		__REG_SEL(3, 2)
+
+#if __SIZEOF_POINTER__ == 8
+#ifdef __ASSEMBLY__
+#define RISCV_PTR		.dword
+#define RISCV_SZPTR		8
+#define RISCV_LGPTR		3
+#else
+#define RISCV_PTR		".dword"
+#define RISCV_SZPTR		"8"
+#define RISCV_LGPTR		"3"
+#endif
+#elif __SIZEOF_POINTER__ == 4
+#ifdef __ASSEMBLY__
+#define RISCV_PTR		.word
+#define RISCV_SZPTR		4
+#define RISCV_LGPTR		2
+#else
+#define RISCV_PTR		".word"
+#define RISCV_SZPTR		"4"
+#define RISCV_LGPTR		"2"
+#endif
+#else
+#error "Unexpected __SIZEOF_POINTER__"
+#endif
+
+#if (__SIZEOF_INT__ == 4)
+#define INT		__ASM_STR(.word)
+#define SZINT		__ASM_STR(4)
+#define LGINT		__ASM_STR(2)
+#else
+#error "Unexpected __SIZEOF_INT__"
+#endif
+
+#if (__SIZEOF_SHORT__ == 2)
+#define SHORT		__ASM_STR(.half)
+#define SZSHORT		__ASM_STR(2)
+#define LGSHORT		__ASM_STR(1)
+#else
+#error "Unexpected __SIZEOF_SHORT__"
+#endif
+
+#endif /* _ASM_RISCV_ASM_H */
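
The REG_* and RISCV_* selectors let one source tree serve both base ISAs:
in assembly they expand to bare mnemonics and directives, while in C they
expand to strings that can be spliced into inline asm.  A minimal
illustration (not part of the patch) of loading one register-width value
portably across RV32 and RV64:

  #include <asm/asm.h>

  /* REG_L expands to "ld" on RV64 and "lw" on RV32. */
  static inline unsigned long load_xlen(const unsigned long *p)
  {
  	unsigned long v;

  	__asm__ (REG_L " %0, %1" : "=r" (v) : "m" (*p));
  	return v;
  }
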
diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
new file mode 100644
index 000000000000..387d0dbf0073
--- /dev/null
+++ b/arch/riscv/include/asm/csr.h
@@ -0,0 +1,125 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CSR_H
+#define _ASM_RISCV_CSR_H
+
+#include <linux/const.h>
+
+/* Status register flags */
+#define SR_IE   _AC(0x00000002, UL) /* Interrupt Enable */
+#define SR_PIE  _AC(0x00000020, UL) /* Previous IE */
+#define SR_PS   _AC(0x00000100, UL) /* Previously Supervisor */
+#define SR_SUM  _AC(0x00040000, UL) /* Supervisor may access User Memory */
+
+#define SR_FS           _AC(0x00006000, UL) /* Floating-point Status */
+#define SR_FS_OFF       _AC(0x00000000, UL)
+#define SR_FS_INITIAL   _AC(0x00002000, UL)
+#define SR_FS_CLEAN     _AC(0x00004000, UL)
+#define SR_FS_DIRTY     _AC(0x00006000, UL)
+
+#define SR_XS           _AC(0x00018000, UL) /* Extension Status */
+#define SR_XS_OFF       _AC(0x00000000, UL)
+#define SR_XS_INITIAL   _AC(0x00008000, UL)
+#define SR_XS_CLEAN     _AC(0x00010000, UL)
+#define SR_XS_DIRTY     _AC(0x00018000, UL)
+
+#ifndef CONFIG_64BIT
+#define SR_SD   _AC(0x80000000, UL) /* FS/XS dirty */
+#else
+#define SR_SD   _AC(0x8000000000000000, UL) /* FS/XS dirty */
+#endif
+
+/* SPTBR flags */
+#if __riscv_xlen == 32
+#define SPTBR_PPN     _AC(0x003FFFFF, UL)
+#define SPTBR_MODE_32 _AC(0x80000000, UL)
+#define SPTBR_MODE    SPTBR_MODE_32
+#else
+#define SPTBR_PPN     _AC(0x00000FFFFFFFFFFF, UL)
+#define SPTBR_MODE_39 _AC(0x8000000000000000, UL)
+#define SPTBR_MODE    SPTBR_MODE_39
+#endif
+
+/* Interrupt Enable and Interrupt Pending flags */
+#define SIE_SSIE _AC(0x00000002, UL) /* Software Interrupt Enable */
+#define SIE_STIE _AC(0x00000020, UL) /* Timer Interrupt Enable */
+
+#define EXC_INST_MISALIGNED     0
+#define EXC_INST_ACCESS         1
+#define EXC_BREAKPOINT          3
+#define EXC_LOAD_ACCESS         5
+#define EXC_STORE_ACCESS        7
+#define EXC_SYSCALL             8
+#define EXC_INST_PAGE_FAULT     12
+#define EXC_LOAD_PAGE_FAULT     13
+#define EXC_STORE_PAGE_FAULT    15
+
+#ifndef __ASSEMBLY__
+
+#define csr_swap(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrw %0, " #csr ", %1"		\
+			      : "=r" (__v) : "rK" (__v));	\
+	__v;							\
+})
+
+#define csr_read(csr)						\
+({								\
+	register unsigned long __v;				\
+	__asm__ __volatile__ ("csrr %0, " #csr			\
+			      : "=r" (__v));			\
+	__v;							\
+})
+
+#define csr_write(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrw " #csr ", %0"		\
+			      : : "rK" (__v));			\
+})
+
+#define csr_read_set(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrs %0, " #csr ", %1"		\
+			      : "=r" (__v) : "rK" (__v));	\
+	__v;							\
+})
+
+#define csr_set(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrs " #csr ", %0"		\
+			      : : "rK" (__v));			\
+})
+
+#define csr_read_clear(csr, val)				\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrc %0, " #csr ", %1"		\
+			      : "=r" (__v) : "rK" (__v));	\
+	__v;							\
+})
+
+#define csr_clear(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrc " #csr ", %0"		\
+			      : : "rK" (__v));			\
+})
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_CSR_H */
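
The csr_* wrappers are how the rest of the port touches control and status
registers; the uaccess code, for example, toggles SR_SUM around accesses to
user memory.  A minimal sketch of that pattern (illustration only, using
just the definitions in this header):

  #include <asm/csr.h>

  /* Open and close a window in which user pages may be accessed. */
  static inline void user_access_window_example(void)
  {
  	csr_set(sstatus, SR_SUM);	/* allow supervisor access to user memory */
  	/* ... loads/stores to user addresses go here ... */
  	csr_clear(sstatus, SR_SUM);	/* and forbid it again */
  }
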
diff --git a/arch/riscv/include/asm/linkage.h b/arch/riscv/include/asm/linkage.h
new file mode 100644
index 000000000000..b7b304ca89c4
--- /dev/null
+++ b/arch/riscv/include/asm/linkage.h
@@ -0,0 +1,20 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_LINKAGE_H
+#define _ASM_RISCV_LINKAGE_H
+
+#define __ALIGN		.balign 4
+#define __ALIGN_STR	".balign 4"
+
+#endif /* _ASM_RISCV_LINKAGE_H */
diff --git a/arch/riscv/include/asm/string.h b/arch/riscv/include/asm/string.h
new file mode 100644
index 000000000000..1c59cf4e8ccc
--- /dev/null
+++ b/arch/riscv/include/asm/string.h
@@ -0,0 +1,30 @@
+/*
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_STRING_H
+#define _ASM_RISCV_STRING_H
+
+#ifdef __KERNEL__
+
+#include <linux/types.h>
+#include <linux/linkage.h>
+
+#define __HAVE_ARCH_MEMSET
+extern asmlinkage void *memset(void *, int, size_t);
+
+#define __HAVE_ARCH_MEMCPY
+extern asmlinkage void *memcpy(void *, const void *, size_t);
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_STRING_H */
diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
new file mode 100644
index 000000000000..f3ece74219d7
--- /dev/null
+++ b/arch/riscv/include/asm/uaccess.h
@@ -0,0 +1,440 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ *
+ * This file was copied from include/asm-generic/uaccess.h
+ */
+
+#ifndef _ASM_RISCV_UACCESS_H
+#define _ASM_RISCV_UACCESS_H
+
+/*
+ * User space memory access functions
+ */
+#include <linux/errno.h>
+#include <linux/compiler.h>
+#include <linux/thread_info.h>
+#include <asm/byteorder.h>
+#include <asm/asm.h>
+
+#ifdef CONFIG_RV_PUM
+#define __enable_user_access()							\
+	__asm__ __volatile__ ("csrs sstatus, %0" : : "r" (SR_SUM) : "memory")
+#define __disable_user_access()							\
+	__asm__ __volatile__ ("csrc sstatus, %0" : : "r" (SR_SUM) : "memory")
+#else
+#define __enable_user_access()
+#define __disable_user_access()
+#endif
+
+/*
+ * The fs value determines whether argument validity checking should be
+ * performed or not.  If get_fs() == USER_DS, checking is performed; with
+ * get_fs() == KERNEL_DS, checking is bypassed.
+ *
+ * For historical reasons, these macros are grossly misnamed.
+ */
+
+#define KERNEL_DS	(~0UL)
+#define USER_DS		(TASK_SIZE)
+
+#define get_ds()	(KERNEL_DS)
+#define get_fs()	(current_thread_info()->addr_limit)
+
+static inline void set_fs(mm_segment_t fs)
+{
+	current_thread_info()->addr_limit = fs;
+}
+
+#define segment_eq(a, b) ((a) == (b))
+
+#define user_addr_max()	(get_fs())
+
+
+#define VERIFY_READ	0
+#define VERIFY_WRITE	1
+
+/**
+ * access_ok: - Checks if a user space pointer is valid
+ * @type: Type of access: %VERIFY_READ or %VERIFY_WRITE.  Note that
+ *        %VERIFY_WRITE is a superset of %VERIFY_READ - if it is safe
+ *        to write to a block, it is always safe to read from it.
+ * @addr: User space pointer to start of block to check
+ * @size: Size of block to check
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * Checks if a pointer to a block of memory in user space is valid.
+ *
+ * Returns true (nonzero) if the memory block may be valid, false (zero)
+ * if it is definitely invalid.
+ *
+ * Note that, depending on architecture, this function probably just
+ * checks that the pointer is in the user space range - after calling
+ * this function, memory access functions may still return -EFAULT.
+ */
+#define access_ok(type, addr, size) ({					\
+	__chk_user_ptr(addr);						\
+	likely(__access_ok((unsigned long __force)(addr), (size)));	\
+})
+
+/* Ensure that the range [addr, addr+size) is within the process's
+ * address space
+ */
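+/* Writing the check as addr <= fs - size, rather than addr + size <= fs,
+ * avoids wrapping when addr is near the top of the address space; the
+ * size <= fs test keeps the subtraction from underflowing.
+ */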
+static inline int __access_ok(unsigned long addr, unsigned long size)
+{
+	const mm_segment_t fs = get_fs();
+
+	return (size <= fs) && (addr <= (fs - size));
+}
+
+/*
+ * The exception table consists of pairs of addresses: the first is the
+ * address of an instruction that is allowed to fault, and the second is
+ * the address at which the program should continue.  No registers are
+ * modified, so it is entirely up to the continuation code to figure out
+ * what to do.
+ *
+ * All the routines below use bits of fixup code that are out of line
+ * with the main instruction path.  This means when everything is well,
+ * we don't even have to jump over them.  Further, they do not intrude
+ * on our cache or tlb entries.
+ */
+
+struct exception_table_entry {
+	unsigned long insn, fixup;
+};
+
+extern int fixup_exception(struct pt_regs *state);
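
  /* Editorial sketch, not part of this patch: fixup_exception() itself is
   * supplied by the MM patch later in the series.  A minimal consumer of
   * the table would look roughly like the following, assuming pt_regs
   * exposes the trap PC as regs->sepc.
   */
  #include <linux/extable.h>	/* search_exception_tables() */

  int fixup_exception(struct pt_regs *regs)
  {
  	const struct exception_table_entry *fixup;

  	fixup = search_exception_tables(regs->sepc);
  	if (fixup) {
  		regs->sepc = fixup->fixup;	/* resume at the fixup code */
  		return 1;
  	}
  	return 0;
  }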
+
+#if defined(__LITTLE_ENDIAN)
+#define __MSW	1
+#define __LSW	0
+#elif defined(__BIG_ENDIAN)
+#define __MSW	0
+#define __LSW	1
+#else
+#error "Unknown endianness"
+#endif
+
+/*
+ * The "__xxx" versions of the user access functions do not verify the address
+ * space - it must have been done previously with a separate "access_ok()"
+ * call.
+ */
+
+#ifdef CONFIG_MMU
+#define __get_user_asm(insn, x, ptr, err)			\
+do {								\
+	uintptr_t __tmp;					\
+	__typeof__(x) __x;					\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	" insn " %1, %3\n"			\
+		"2:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"3:\n"						\
+		"	li %0, %4\n"				\
+		"	li %1, 0\n"				\
+		"	jump 2b, %2\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " RISCV_SZPTR "\n"			\
+		"	" RISCV_PTR " 1b, 3b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=&r" (__x), "=r" (__tmp)		\
+		: "m" (*(ptr)), "i" (-EFAULT));			\
+	__disable_user_access();				\
+	(x) = __x;						\
+} while (0)
+#endif /* CONFIG_MMU */
+
+#ifdef CONFIG_64BIT
+#define __get_user_8(x, ptr, err) \
+	__get_user_asm("ld", x, ptr, err)
+#else /* !CONFIG_64BIT */
+#ifdef CONFIG_MMU
+#define __get_user_8(x, ptr, err)				\
+do {								\
+	u32 __user *__ptr = (u32 __user *)(ptr);		\
+	u32 __lo, __hi;						\
+	uintptr_t __tmp;					\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	lw %1, %4\n"				\
+		"2:\n"						\
+		"	lw %2, %5\n"				\
+		"3:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"4:\n"						\
+		"	li %0, %6\n"				\
+		"	li %1, 0\n"				\
+		"	li %2, 0\n"				\
+		"	jump 3b, %3\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " RISCV_SZPTR "\n"			\
+		"	" RISCV_PTR " 1b, 4b\n"			\
+		"	" RISCV_PTR " 2b, 4b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=&r" (__lo), "=r" (__hi),	\
+			"=r" (__tmp)				\
+		: "m" (__ptr[__LSW]), "m" (__ptr[__MSW]),	\
+			"i" (-EFAULT));				\
+	__disable_user_access();				\
+	(x) = (__typeof__(x))((__typeof__((x)-(x)))(		\
+		(((u64)__hi << 32) | __lo)));			\
+} while (0)
+#endif /* CONFIG_MMU */
+#endif /* CONFIG_64BIT */
+
+
+/**
+ * __get_user: - Get a simple variable from user space, with less checking.
+ * @x:   Variable to store result.
+ * @ptr: Source address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple variable from user space to kernel
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and the result of
+ * dereferencing @ptr must be assignable to @x without a cast.
+ *
+ * Caller must check the pointer with access_ok() before calling this
+ * function.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ * On error, the variable @x is set to zero.
+ */
+#define __get_user(x, ptr)					\
+({								\
+	register long __gu_err = 0;				\
+	const __typeof__(*(ptr)) __user *__gu_ptr = (ptr);	\
+	__chk_user_ptr(__gu_ptr);				\
+	switch (sizeof(*__gu_ptr)) {				\
+	case 1:							\
+		__get_user_asm("lb", (x), __gu_ptr, __gu_err);	\
+		break;						\
+	case 2:							\
+		__get_user_asm("lh", (x), __gu_ptr, __gu_err);	\
+		break;						\
+	case 4:							\
+		__get_user_asm("lw", (x), __gu_ptr, __gu_err);	\
+		break;						\
+	case 8:							\
+		__get_user_8((x), __gu_ptr, __gu_err);	\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+	}							\
+	__gu_err;						\
+})
+
+/**
+ * get_user: - Get a simple variable from user space.
+ * @x:   Variable to store result.
+ * @ptr: Source address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple variable from user space to kernel
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and the result of
+ * dereferencing @ptr must be assignable to @x without a cast.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ * On error, the variable @x is set to zero.
+ */
+#define get_user(x, ptr)					\
+({								\
+	const __typeof__(*(ptr)) __user *__p = (ptr);		\
+	might_fault();						\
+	access_ok(VERIFY_READ, __p, sizeof(*__p)) ?		\
+		__get_user((x), __p) :				\
+		((x) = 0, -EFAULT);				\
+})
+
+
+#ifdef CONFIG_MMU
+#define __put_user_asm(insn, x, ptr, err)			\
+do {								\
+	uintptr_t __tmp;					\
+	__typeof__(*(ptr)) __x = x;				\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	" insn " %z3, %2\n"			\
+		"2:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"3:\n"						\
+		"	li %0, %4\n"				\
+		"	jump 2b, %1\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " RISCV_SZPTR "\n"			\
+		"	" RISCV_PTR " 1b, 3b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=r" (__tmp), "=m" (*(ptr))	\
+		: "rJ" (__x), "i" (-EFAULT));			\
+	__disable_user_access();				\
+} while (0)
+#endif /* CONFIG_MMU */
+
+
+#ifdef CONFIG_64BIT
+#define __put_user_8(x, ptr, err) \
+	__put_user_asm("sd", x, ptr, err)
+#else /* !CONFIG_64BIT */
+#ifdef CONFIG_MMU
+#define __put_user_8(x, ptr, err)				\
+do {								\
+	u32 __user *__ptr = (u32 __user *)(ptr);		\
+	u64 __x = (__typeof__((x)-(x)))(x);			\
+	uintptr_t __tmp;					\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	sw %z4, %2\n"				\
+		"2:\n"						\
+		"	sw %z5, %3\n"				\
+		"3:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"4:\n"						\
+		"	li %0, %6\n"				\
+		"	jump 2b, %1\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " RISCV_SZPTR "\n"			\
+		"	" RISCV_PTR " 1b, 4b\n"			\
+		"	" RISCV_PTR " 2b, 4b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=r" (__tmp),			\
+			"=m" (__ptr[__LSW]),			\
+			"=m" (__ptr[__MSW])			\
+		: "rJ" (__x), "rJ" (__x >> 32), "i" (-EFAULT));	\
+	__disable_user_access();				\
+} while (0)
+#endif /* CONFIG_MMU */
+#endif /* CONFIG_64BIT */
+
+
+/**
+ * __put_user: - Write a simple value into user space, with less checking.
+ * @x:   Value to copy to user space.
+ * @ptr: Destination address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple value from kernel space to user
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and @x must be assignable
+ * to the result of dereferencing @ptr.
+ *
+ * Caller must check the pointer with access_ok() before calling this
+ * function.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ */
+#define __put_user(x, ptr)					\
+({								\
+	register long __pu_err = 0;				\
+	__typeof__(*(ptr)) __user *__gu_ptr = (ptr);		\
+	__chk_user_ptr(__gu_ptr);				\
+	switch (sizeof(*__gu_ptr)) {				\
+	case 1:							\
+		__put_user_asm("sb", (x), __gu_ptr, __pu_err);	\
+		break;						\
+	case 2:							\
+		__put_user_asm("sh", (x), __gu_ptr, __pu_err);	\
+		break;						\
+	case 4:							\
+		__put_user_asm("sw", (x), __gu_ptr, __pu_err);	\
+		break;						\
+	case 8:							\
+		__put_user_8((x), __gu_ptr, __pu_err);	\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+	}							\
+	__pu_err;						\
+})
+
+/**
+ * put_user: - Write a simple value into user space.
+ * @x:   Value to copy to user space.
+ * @ptr: Destination address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple value from kernel space to user
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and @x must be assignable
+ * to the result of dereferencing @ptr.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ */
+#define put_user(x, ptr)					\
+({								\
+	__typeof__(*(ptr)) __user *__p = (ptr);			\
+	might_fault();						\
+	access_ok(VERIFY_WRITE, __p, sizeof(*__p)) ?		\
+		__put_user((x), __p) :				\
+		-EFAULT;					\
+})
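
  /* Illustration only (made-up function): a typical caller pairs the two
   * checking accessors like this.
   */
  static int double_user_value(int __user *uptr)
  {
  	int val;

  	if (get_user(val, uptr))
  		return -EFAULT;
  	return put_user(val * 2, uptr);
  }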
+
+
+extern unsigned long __must_check __copy_user(void __user *to,
+	const void __user *from, unsigned long n);
+
+static inline unsigned long
+raw_copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+	return __copy_user(to, from, n);
+}
+
+static inline unsigned long
+raw_copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+	return __copy_user(to, from, n);
+}
+
+extern long strncpy_from_user(char *dest, const char __user *src, long count);
+
+extern long __must_check strlen_user(const char __user *str);
+extern long __must_check strnlen_user(const char __user *str, long n);
+
+extern
+unsigned long __must_check __clear_user(void __user *addr, unsigned long n);
+
+static inline
+unsigned long __must_check clear_user(void __user *to, unsigned long n)
+{
+	might_fault();
+	return access_ok(VERIFY_WRITE, to, n) ?
+		__clear_user(to, n) : n;
+}
+
+#endif /* _ASM_RISCV_UACCESS_H */
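
The raw copy routines return the number of bytes left uncopied, with zero
meaning success; the generic copy_{from,to}_user() wrappers layer the
access_ok() check and zero-filling of the uncopied tail on top of them.
A short illustration with made-up names:

  #include <linux/types.h>
  #include <linux/uaccess.h>

  struct example_req {
  	u32 flags;
  	u64 addr;
  };

  static int fetch_example_req(struct example_req *req,
  			      const void __user *uptr)
  {
  	if (copy_from_user(req, uptr, sizeof(*req)))
  		return -EFAULT;	/* some bytes could not be copied */
  	return 0;
  }
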
diff --git a/arch/riscv/include/asm/word-at-a-time.h b/arch/riscv/include/asm/word-at-a-time.h
new file mode 100644
index 000000000000..aa6238791d3e
--- /dev/null
+++ b/arch/riscv/include/asm/word-at-a-time.h
@@ -0,0 +1,55 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ * Derived from arch/x86/include/asm/word-at-a-time.h
+ */
+
+#ifndef _ASM_RISCV_WORD_AT_A_TIME_H
+#define _ASM_RISCV_WORD_AT_A_TIME_H
+
+
+#include <linux/kernel.h>
+
+struct word_at_a_time {
+	const unsigned long one_bits, high_bits;
+};
+
+#define WORD_AT_A_TIME_CONSTANTS { REPEAT_BYTE(0x01), REPEAT_BYTE(0x80) }
+
+static inline unsigned long has_zero(unsigned long val,
+	unsigned long *bits, const struct word_at_a_time *c)
+{
+	unsigned long mask = ((val - c->one_bits) & ~val) & c->high_bits;
+	*bits = mask;
+	return mask;
+}
+
+static inline unsigned long prep_zero_mask(unsigned long val,
+	unsigned long bits, const struct word_at_a_time *c)
+{
+	return bits;
+}
+
+static inline unsigned long create_zero_mask(unsigned long bits)
+{
+	bits = (bits - 1) & ~bits;
+	return bits >> 7;
+}
+
+static inline unsigned long find_zero(unsigned long mask)
+{
+	return fls64(mask) >> 3;
+}
+
+/* The mask we created is directly usable as a bytemask */
+#define zero_bytemask(mask) (mask)
+
+#endif /* _ASM_RISCV_WORD_AT_A_TIME_H */
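
These helpers implement the usual (v - 0x01..01) & ~v & 0x80..80 zero-byte
trick; the word-at-a-time string paths (dcache name hashing, strnlen_user)
combine them roughly as follows (illustration only):

  #include <asm/word-at-a-time.h>

  /* Return the byte index of the first NUL in val, or sizeof(val) if none. */
  static inline unsigned long first_zero_byte(unsigned long val)
  {
  	const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
  	unsigned long bits;

  	if (!has_zero(val, &bits, &constants))
  		return sizeof(val);
  	bits = prep_zero_mask(val, bits, &constants);
  	return find_zero(create_zero_mask(bits));
  }
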
diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
new file mode 100644
index 000000000000..109f5120d5c7
--- /dev/null
+++ b/arch/riscv/kernel/stacktrace.c
@@ -0,0 +1,177 @@
+/*
+ * Copyright (C) 2008 ARM Limited
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/export.h>
+#include <linux/kallsyms.h>
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
+#include <linux/sched/task_stack.h>
+#include <linux/stacktrace.h>
+
+#ifdef CONFIG_FRAME_POINTER
+
+struct stackframe {
+	unsigned long fp;
+	unsigned long ra;
+};
+
+static void notrace walk_stackframe(struct task_struct *task,
+	struct pt_regs *regs, bool (*fn)(unsigned long, void *), void *arg)
+{
+	unsigned long fp, sp, pc;
+
+	if (regs) {
+		fp = GET_FP(regs);
+		sp = GET_USP(regs);
+		pc = GET_IP(regs);
+	} else if (task == NULL || task == current) {
+		const register unsigned long current_sp __asm__ ("sp");
+		fp = (unsigned long)__builtin_frame_address(0);
+		sp = current_sp;
+		pc = (unsigned long)walk_stackframe;
+	} else {
+		/* task blocked in __switch_to */
+		fp = task->thread.s[0];
+		sp = task->thread.sp;
+		pc = task->thread.ra;
+	}
+
+	for (;;) {
+		unsigned long low, high;
+		struct stackframe *frame;
+
+		if (unlikely(!__kernel_text_address(pc) || fn(pc, arg)))
+			break;
+
+		/* Validate frame pointer */
+		low = sp + sizeof(struct stackframe);
+		high = ALIGN(sp, THREAD_SIZE);
+		if (unlikely(fp < low || fp > high || fp & 0x7))
+			break;
+		/* Unwind stack frame */
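+		/*
+		 * The prologue stores ra and the caller's fp immediately
+		 * below the new frame pointer, so the record sits at
+		 * fp - sizeof(struct stackframe).
+		 */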
+		frame = (struct stackframe *)fp - 1;
+		sp = fp;
+		fp = frame->fp;
+		pc = frame->ra - 0x4;
+	}
+}
+
+#else /* !CONFIG_FRAME_POINTER */
+
+static void notrace walk_stackframe(struct task_struct *task,
+	struct pt_regs *regs, bool (*fn)(unsigned long, void *), void *arg)
+{
+	unsigned long sp, pc;
+	unsigned long *ksp;
+
+	if (regs) {
+		sp = GET_USP(regs);
+		pc = GET_IP(regs);
+	} else if (task == NULL || task == current) {
+		const register unsigned long current_sp __asm__ ("sp");
+		sp = current_sp;
+		pc = (unsigned long)walk_stackframe;
+	} else {
+		/* task blocked in __switch_to */
+		sp = task->thread.sp;
+		pc = task->thread.ra;
+	}
+
+	if (unlikely(sp & 0x7))
+		return;
+
+	ksp = (unsigned long *)sp;
+	while (!kstack_end(ksp)) {
+		if (__kernel_text_address(pc) && unlikely(fn(pc, arg)))
+			break;
+		pc = (*ksp++) - 0x4;
+	}
+}
+
+#endif /* CONFIG_FRAME_POINTER */
+
+
+static bool print_trace_address(unsigned long pc, void *arg)
+{
+	print_ip_sym(pc);
+	return false;
+}
+
+void show_stack(struct task_struct *task, unsigned long *sp)
+{
+	printk("Call Trace:\n");
+	walk_stackframe(task, NULL, print_trace_address, NULL);
+}
+
+
+static bool save_wchan(unsigned long pc, void *arg)
+{
+	if (!in_sched_functions(pc)) {
+		unsigned long *p = arg;
+		*p = pc;
+		return true;
+	}
+	return false;
+}
+
+unsigned long get_wchan(struct task_struct *task)
+{
+	unsigned long pc = 0;
+
+	if (likely(task && task != current && task->state != TASK_RUNNING))
+		walk_stackframe(task, NULL, save_wchan, &pc);
+	return pc;
+}
+
+
+#ifdef CONFIG_STACKTRACE
+
+static bool __save_trace(unsigned long pc, void *arg, bool nosched)
+{
+	struct stack_trace *trace = arg;
+
+	if (unlikely(nosched && in_sched_functions(pc)))
+		return false;
+	if (unlikely(trace->skip > 0)) {
+		trace->skip--;
+		return false;
+	}
+
+	trace->entries[trace->nr_entries++] = pc;
+	return (trace->nr_entries >= trace->max_entries);
+}
+
+static bool save_trace(unsigned long pc, void *arg)
+{
+	return __save_trace(pc, arg, false);
+}
+
+/*
+ * Save stack-backtrace addresses into a stack_trace buffer.
+ */
+void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
+{
+	walk_stackframe(tsk, NULL, save_trace, trace);
+	if (trace->nr_entries < trace->max_entries)
+		trace->entries[trace->nr_entries++] = ULONG_MAX;
+}
+EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
+
+void save_stack_trace(struct stack_trace *trace)
+{
+	save_stack_trace_tsk(NULL, trace);
+}
+EXPORT_SYMBOL_GPL(save_stack_trace);
+
+#endif /* CONFIG_STACKTRACE */
diff --git a/arch/riscv/lib/memcpy.S b/arch/riscv/lib/memcpy.S
new file mode 100644
index 000000000000..5b576cf041eb
--- /dev/null
+++ b/arch/riscv/lib/memcpy.S
@@ -0,0 +1,98 @@
+/*
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/linkage.h>
+#include <asm/asm.h>
+
+/* void *memcpy(void *, const void *, size_t) */
+ENTRY(memcpy)
+	move t6, a0  /* Preserve return value */
+
+	/* Defer to byte-oriented copy for small sizes */
+	sltiu a3, a2, 128
+	bnez a3, 4f
+	/* Use word-oriented copy only if low-order bits match */
+	andi a3, t6, SZREG-1
+	andi a4, a1, SZREG-1
+	bne a3, a4, 4f
+
+	beqz a3, 2f  /* Skip if already aligned */
+	/* Round to nearest XLEN-aligned address
+	   greater than or equal to start address */
+	andi a3, a1, ~(SZREG-1)
+	addi a3, a3, SZREG
+	/* Handle initial misalignment */
+	sub a4, a3, a1
+1:
+	lb a5, 0(a1)
+	addi a1, a1, 1
+	sb a5, 0(t6)
+	addi t6, t6, 1
+	bltu a1, a3, 1b
+	sub a2, a2, a4  /* Update count */
+
+2:
+	andi a4, a2, ~((16*SZREG)-1)
+	beqz a4, 4f
+	add a3, a1, a4
+3:
+	REG_L a4,       0(a1)
+	REG_L a5,   SZREG(a1)
+	REG_L a6, 2*SZREG(a1)
+	REG_L a7, 3*SZREG(a1)
+	REG_L t0, 4*SZREG(a1)
+	REG_L t1, 5*SZREG(a1)
+	REG_L t2, 6*SZREG(a1)
+	REG_L t3, 7*SZREG(a1)
+	REG_L t4, 8*SZREG(a1)
+	REG_L t5, 9*SZREG(a1)
+	REG_S a4,       0(t6)
+	REG_S a5,   SZREG(t6)
+	REG_S a6, 2*SZREG(t6)
+	REG_S a7, 3*SZREG(t6)
+	REG_S t0, 4*SZREG(t6)
+	REG_S t1, 5*SZREG(t6)
+	REG_S t2, 6*SZREG(t6)
+	REG_S t3, 7*SZREG(t6)
+	REG_S t4, 8*SZREG(t6)
+	REG_S t5, 9*SZREG(t6)
+	REG_L a4, 10*SZREG(a1)
+	REG_L a5, 11*SZREG(a1)
+	REG_L a6, 12*SZREG(a1)
+	REG_L a7, 13*SZREG(a1)
+	REG_L t0, 14*SZREG(a1)
+	REG_L t1, 15*SZREG(a1)
+	addi a1, a1, 16*SZREG
+	REG_S a4, 10*SZREG(t6)
+	REG_S a5, 11*SZREG(t6)
+	REG_S a6, 12*SZREG(t6)
+	REG_S a7, 13*SZREG(t6)
+	REG_S t0, 14*SZREG(t6)
+	REG_S t1, 15*SZREG(t6)
+	addi t6, t6, 16*SZREG
+	bltu a1, a3, 3b
+	andi a2, a2, (16*SZREG)-1  /* Update count */
+
+4:
+	/* Handle trailing misalignment */
+	beqz a2, 6f
+	add a3, a1, a2
+5:
+	lb a4, 0(a1)
+	addi a1, a1, 1
+	sb a4, 0(t6)
+	addi t6, t6, 1
+	bltu a1, a3, 5b
+6:
+	ret
+END(memcpy)
diff --git a/arch/riscv/lib/memset.S b/arch/riscv/lib/memset.S
new file mode 100644
index 000000000000..d83c38099653
--- /dev/null
+++ b/arch/riscv/lib/memset.S
@@ -0,0 +1,118 @@
+/*
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#include <linux/linkage.h>
+#include <asm/asm.h>
+
+/* void *memset(void *, int, size_t) */
+ENTRY(memset)
+	move t0, a0  /* Preserve return value */
+
+	/* Defer to byte-oriented fill for small sizes */
+	sltiu a3, a2, 16
+	bnez a3, 4f
+
+	/* Round to nearest XLEN-aligned address
+	   greater than or equal to start address */
+	addi a3, t0, SZREG-1
+	andi a3, a3, ~(SZREG-1)
+	beq a3, t0, 2f  /* Skip if already aligned */
+	/* Handle initial misalignment */
+	sub a4, a3, t0
+1:
+	sb a1, 0(t0)
+	addi t0, t0, 1
+	bltu t0, a3, 1b
+	sub a2, a2, a4  /* Update count */
+
+2: /* Duff's device with 32 XLEN stores per iteration */
+	/* Broadcast value into all bytes */
+	andi a1, a1, 0xff
+	slli a3, a1, 8
+	or a1, a3, a1
+	slli a3, a1, 16
+	or a1, a3, a1
+#ifdef CONFIG_64BIT
+	slli a3, a1, 32
+	or a1, a3, a1
+#endif
+
+	/* Calculate end address */
+	andi a4, a2, ~(SZREG-1)
+	add a3, t0, a4
+
+	andi a4, a4, 31*SZREG  /* Calculate remainder */
+	beqz a4, 3f            /* Shortcut if no remainder */
+	neg a4, a4
+	addi a4, a4, 32*SZREG  /* Calculate initial offset */
+
+	/* Adjust start address with offset */
+	sub t0, t0, a4
+
+	/* Jump into loop body */
+	/* Assumes 32-bit instruction lengths */
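+	/* a4 holds the number of data bytes to skip; each REG_S below covers
+	 * SZREG bytes of data but occupies 4 bytes of code, so the code
+	 * offset is a4 * 4 / SZREG (a4 / 2 on RV64, a4 unchanged on RV32). */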
+	la a5, 3f
+#ifdef CONFIG_64BIT
+	srli a4, a4, 1
+#endif
+	add a5, a5, a4
+	jr a5
+3:
+	REG_S a1,        0(t0)
+	REG_S a1,    SZREG(t0)
+	REG_S a1,  2*SZREG(t0)
+	REG_S a1,  3*SZREG(t0)
+	REG_S a1,  4*SZREG(t0)
+	REG_S a1,  5*SZREG(t0)
+	REG_S a1,  6*SZREG(t0)
+	REG_S a1,  7*SZREG(t0)
+	REG_S a1,  8*SZREG(t0)
+	REG_S a1,  9*SZREG(t0)
+	REG_S a1, 10*SZREG(t0)
+	REG_S a1, 11*SZREG(t0)
+	REG_S a1, 12*SZREG(t0)
+	REG_S a1, 13*SZREG(t0)
+	REG_S a1, 14*SZREG(t0)
+	REG_S a1, 15*SZREG(t0)
+	REG_S a1, 16*SZREG(t0)
+	REG_S a1, 17*SZREG(t0)
+	REG_S a1, 18*SZREG(t0)
+	REG_S a1, 19*SZREG(t0)
+	REG_S a1, 20*SZREG(t0)
+	REG_S a1, 21*SZREG(t0)
+	REG_S a1, 22*SZREG(t0)
+	REG_S a1, 23*SZREG(t0)
+	REG_S a1, 24*SZREG(t0)
+	REG_S a1, 25*SZREG(t0)
+	REG_S a1, 26*SZREG(t0)
+	REG_S a1, 27*SZREG(t0)
+	REG_S a1, 28*SZREG(t0)
+	REG_S a1, 29*SZREG(t0)
+	REG_S a1, 30*SZREG(t0)
+	REG_S a1, 31*SZREG(t0)
+	addi t0, t0, 32*SZREG
+	bltu t0, a3, 3b
+	andi a2, a2, SZREG-1  /* Update count */
+
+4:
+	/* Handle trailing misalignment */
+	beqz a2, 6f
+	add a3, t0, a2
+5:
+	sb a1, 0(t0)
+	addi t0, t0, 1
+	bltu t0, a3, 5b
+6:
+	ret
+END(memset)
diff --git a/arch/riscv/lib/uaccess.S b/arch/riscv/lib/uaccess.S
new file mode 100644
index 000000000000..cba994696aff
--- /dev/null
+++ b/arch/riscv/lib/uaccess.S
@@ -0,0 +1,125 @@
+#include <linux/linkage.h>
+#include <asm/asm.h>
+#include <asm/csr.h>
+
+	.altmacro
+	.macro fixup op reg addr lbl
+	LOCAL _epc
+_epc:
+	\op \reg, \addr
+	.section __ex_table,"a"
+	.balign RISCV_SZPTR
+	RISCV_PTR _epc, \lbl
+	.previous
+	.endm
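+
+/* Each `fixup` expansion emits the access at a local label and records a
+ * (faulting address, fixup address) pair in __ex_table, so a fault while
+ * touching user memory resumes at the label passed in (10f below) instead
+ * of oopsing. */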
+
+ENTRY(__copy_user)
+
+#ifdef CONFIG_RV_PUM
+	/* Enable access to user memory */
+	li t6, SR_SUM
+	csrs sstatus, t6
+#endif
+
+	add a3, a1, a2
+	/* Use word-oriented copy only if low-order bits match */
+	andi t0, a0, SZREG-1
+	andi t1, a1, SZREG-1
+	bne t0, t1, 2f
+
+	addi t0, a1, SZREG-1
+	andi t1, a3, ~(SZREG-1)
+	andi t0, t0, ~(SZREG-1)
+	/* a3: terminal address of source region
+	 * t0: lowest XLEN-aligned address in source
+	 * t1: highest XLEN-aligned address in source
+	 */
+	bgeu t0, t1, 2f
+	bltu a1, t0, 4f
+1:
+	fixup REG_L, t2, (a1), 10f
+	fixup REG_S, t2, (a0), 10f
+	addi a1, a1, SZREG
+	addi a0, a0, SZREG
+	bltu a1, t1, 1b
+2:
+	bltu a1, a3, 5f
+
+3:
+#ifdef CONFIG_RV_PUM
+	/* Disable access to user memory */
+	csrc sstatus, t6
+#endif
+	li a0, 0
+	ret
+4: /* Edge case: unalignment */
+	fixup lbu, t2, (a1), 10f
+	fixup sb, t2, (a0), 10f
+	addi a1, a1, 1
+	addi a0, a0, 1
+	bltu a1, t0, 4b
+	j 1b
+5: /* Edge case: remainder */
+	fixup lbu, t2, (a1), 10f
+	fixup sb, t2, (a0), 10f
+	addi a1, a1, 1
+	addi a0, a0, 1
+	bltu a1, a3, 5b
+	j 3b
+ENDPROC(__copy_user)
+
+
+ENTRY(__clear_user)
+
+#ifdef CONFIG_RV_PUM
+	/* Enable access to user memory */
+	li t6, SR_SUM
+	csrs sstatus, t6
+#endif
+
+	add a3, a0, a1
+	addi t0, a0, SZREG-1
+	andi t1, a3, ~(SZREG-1)
+	andi t0, t0, ~(SZREG-1)
+	/* a3: terminal address of target region
+	 * t0: lowest XLEN-aligned address in target region
+	 * t1: highest XLEN-aligned address in target region
+	 */
+	bgeu t0, t1, 2f
+	bltu a0, t0, 4f
+1:
+	fixup REG_S, zero, (a0), 10f
+	addi a0, a0, SZREG
+	bltu a0, t1, 1b
+2:
+	bltu a0, a3, 5f
+
+3:
+#ifdef CONFIG_RV_PUM
+	/* Disable access to user memory */
+	csrc sstatus, t6
+#endif
+	li a0, 0
+	ret
+4: /* Edge case: unalignment */
+	fixup sb, zero, (a0), 10f
+	addi a0, a0, 1
+	bltu a0, t0, 4b
+	j 1b
+5: /* Edge case: remainder */
+	fixup sb, zero, (a0), 10f
+	addi a0, a0, 1
+	bltu a0, a3, 5b
+	j 3b
+ENDPROC(__clear_user)
+
+	.section .fixup,"ax"
+	.balign 4
+10:
+#ifdef CONFIG_RV_PUM
+	/* Disable access to user memory */
+	csrc sstatus, t6
+#endif
+	sub a0, a3, a0
+	ret
+	.previous
diff --git a/arch/riscv/lib/udivdi3.S b/arch/riscv/lib/udivdi3.S
new file mode 100644
index 000000000000..cb01ae5b181a
--- /dev/null
+++ b/arch/riscv/lib/udivdi3.S
@@ -0,0 +1,38 @@
+/*
+ * Copyright (C) 2016-2017 Free Software Foundation, Inc.
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+  .globl __udivdi3
+__udivdi3:
+  mv    a2, a1
+  mv    a1, a0
+  li    a0, -1
+  beqz  a2, .L5
+  li    a3, 1
+  bgeu  a2, a1, .L2
+.L1:
+  blez  a2, .L2
+  slli  a2, a2, 1
+  slli  a3, a3, 1
+  bgtu  a1, a2, .L1
+.L2:
+  li    a0, 0
+.L3:
+  bltu  a1, a2, .L4
+  sub   a1, a1, a2
+  or    a0, a0, a3
+.L4:
+  srli  a3, a3, 1
+  srli  a2, a2, 1
+  bnez  a3, .L3
+.L5:
+  ret
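
For readers who prefer C, this is the classic shift-and-subtract unsigned
division.  An equivalent sketch (illustration only, including the
assembly's all-ones result for division by zero):

  unsigned long long udivdi3_sketch(unsigned long long num,
  				    unsigned long long den)
  {
  	unsigned long long quot = 0, bit = 1;

  	if (den == 0)
  		return ~0ULL;

  	/* Scale the divisor up until it is just below the dividend. */
  	while (den < num && !(den >> 63)) {
  		den <<= 1;
  		bit <<= 1;
  	}
  	/* Subtract it back out, one quotient bit at a time. */
  	while (bit != 0) {
  		if (num >= den) {
  			num -= den;
  			quot |= bit;
  		}
  		den >>= 1;
  		bit >>= 1;
  	}
  	return quot;
  }
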
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 3/9] RISC-V: Generic library routines and assembly
@ 2017-06-28 18:55       ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This patch contains code that is more specific to the RISC-V ISA than it
is to Linux.  It contains string and math operations, C wrappers for
various assembly instructions, stack walking code, and uaccess.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/asm.h            |  76 ++++++
 arch/riscv/include/asm/csr.h            | 125 +++++++++
 arch/riscv/include/asm/linkage.h        |  20 ++
 arch/riscv/include/asm/string.h         |  30 +++
 arch/riscv/include/asm/uaccess.h        | 440 ++++++++++++++++++++++++++++++++
 arch/riscv/include/asm/word-at-a-time.h |  55 ++++
 arch/riscv/kernel/stacktrace.c          | 177 +++++++++++++
 arch/riscv/lib/memcpy.S                 |  98 +++++++
 arch/riscv/lib/memset.S                 | 118 +++++++++
 arch/riscv/lib/uaccess.S                | 125 +++++++++
 arch/riscv/lib/udivdi3.S                |  38 +++
 11 files changed, 1302 insertions(+)
 create mode 100644 arch/riscv/include/asm/asm.h
 create mode 100644 arch/riscv/include/asm/csr.h
 create mode 100644 arch/riscv/include/asm/linkage.h
 create mode 100644 arch/riscv/include/asm/string.h
 create mode 100644 arch/riscv/include/asm/uaccess.h
 create mode 100644 arch/riscv/include/asm/word-at-a-time.h
 create mode 100644 arch/riscv/kernel/stacktrace.c
 create mode 100644 arch/riscv/lib/memcpy.S
 create mode 100644 arch/riscv/lib/memset.S
 create mode 100644 arch/riscv/lib/uaccess.S
 create mode 100644 arch/riscv/lib/udivdi3.S

diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
new file mode 100644
index 000000000000..6cbbb6a68d76
--- /dev/null
+++ b/arch/riscv/include/asm/asm.h
@@ -0,0 +1,76 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_ASM_H
+#define _ASM_RISCV_ASM_H
+
+#ifdef __ASSEMBLY__
+#define __ASM_STR(x)	x
+#else
+#define __ASM_STR(x)	#x
+#endif
+
+#if __riscv_xlen == 64
+#define __REG_SEL(a, b)	__ASM_STR(a)
+#elif __riscv_xlen == 32
+#define __REG_SEL(a, b)	__ASM_STR(b)
+#else
+#error "Unexpected __riscv_xlen"
+#endif
+
+#define REG_L		__REG_SEL(ld, lw)
+#define REG_S		__REG_SEL(sd, sw)
+#define SZREG		__REG_SEL(8, 4)
+#define LGREG		__REG_SEL(3, 2)
+
+#if __SIZEOF_POINTER__ == 8
+#ifdef __ASSEMBLY__
+#define RISCV_PTR		.dword
+#define RISCV_SZPTR		8
+#define RISCV_LGPTR		3
+#else
+#define RISCV_PTR		".dword"
+#define RISCV_SZPTR		"8"
+#define RISCV_LGPTR		"3"
+#endif
+#elif __SIZEOF_POINTER__ == 4
+#ifdef __ASSEMBLY__
+#define RISCV_PTR		.word
+#define RISCV_SZPTR		4
+#define RISCV_LGPTR		2
+#else
+#define RISCV_PTR		".word"
+#define RISCV_SZPTR		"4"
+#define RISCV_LGPTR		"2"
+#endif
+#else
+#error "Unexpected __SIZEOF_POINTER__"
+#endif
+
+#if (__SIZEOF_INT__ == 4)
+#define INT		__ASM_STR(.word)
+#define SZINT		__ASM_STR(4)
+#define LGINT		__ASM_STR(2)
+#else
+#error "Unexpected __SIZEOF_INT__"
+#endif
+
+#if (__SIZEOF_SHORT__ == 2)
+#define SHORT		__ASM_STR(.half)
+#define SZSHORT		__ASM_STR(2)
+#define LGSHORT		__ASM_STR(1)
+#else
+#error "Unexpected __SIZEOF_SHORT__"
+#endif
+
+#endif /* _ASM_RISCV_ASM_H */
diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
new file mode 100644
index 000000000000..387d0dbf0073
--- /dev/null
+++ b/arch/riscv/include/asm/csr.h
@@ -0,0 +1,125 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CSR_H
+#define _ASM_RISCV_CSR_H
+
+#include <linux/const.h>
+
+/* Status register flags */
+#define SR_IE   _AC(0x00000002, UL) /* Interrupt Enable */
+#define SR_PIE  _AC(0x00000020, UL) /* Previous IE */
+#define SR_PS   _AC(0x00000100, UL) /* Previously Supervisor */
+#define SR_SUM  _AC(0x00040000, UL) /* Supervisor may access User Memory */
+
+#define SR_FS           _AC(0x00006000, UL) /* Floating-point Status */
+#define SR_FS_OFF       _AC(0x00000000, UL)
+#define SR_FS_INITIAL   _AC(0x00002000, UL)
+#define SR_FS_CLEAN     _AC(0x00004000, UL)
+#define SR_FS_DIRTY     _AC(0x00006000, UL)
+
+#define SR_XS           _AC(0x00018000, UL) /* Extension Status */
+#define SR_XS_OFF       _AC(0x00000000, UL)
+#define SR_XS_INITIAL   _AC(0x00008000, UL)
+#define SR_XS_CLEAN     _AC(0x00010000, UL)
+#define SR_XS_DIRTY     _AC(0x00018000, UL)
+
+#ifndef CONFIG_64BIT
+#define SR_SD   _AC(0x80000000, UL) /* FS/XS dirty */
+#else
+#define SR_SD   _AC(0x8000000000000000, UL) /* FS/XS dirty */
+#endif
+
+/* SPTBR flags */
+#if __riscv_xlen == 32
+#define SPTBR_PPN     _AC(0x003FFFFF, UL)
+#define SPTBR_MODE_32 _AC(0x80000000, UL)
+#define SPTBR_MODE    SPTBR_MODE_32
+#else
+#define SPTBR_PPN     _AC(0x00000FFFFFFFFFFF, UL)
+#define SPTBR_MODE_39 _AC(0x8000000000000000, UL)
+#define SPTBR_MODE    SPTBR_MODE_39
+#endif
+
+/* Interrupt Enable and Interrupt Pending flags */
+#define SIE_SSIE _AC(0x00000002, UL) /* Software Interrupt Enable */
+#define SIE_STIE _AC(0x00000020, UL) /* Timer Interrupt Enable */
+
+#define EXC_INST_MISALIGNED     0
+#define EXC_INST_ACCESS         1
+#define EXC_BREAKPOINT          3
+#define EXC_LOAD_ACCESS         5
+#define EXC_STORE_ACCESS        7
+#define EXC_SYSCALL             8
+#define EXC_INST_PAGE_FAULT     12
+#define EXC_LOAD_PAGE_FAULT     13
+#define EXC_STORE_PAGE_FAULT    15
+
+#ifndef __ASSEMBLY__
+
+#define csr_swap(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrw %0, " #csr ", %1"		\
+			      : "=r" (__v) : "rK" (__v));	\
+	__v;							\
+})
+
+#define csr_read(csr)						\
+({								\
+	register unsigned long __v;				\
+	__asm__ __volatile__ ("csrr %0, " #csr			\
+			      : "=r" (__v));			\
+	__v;							\
+})
+
+#define csr_write(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrw " #csr ", %0"		\
+			      : : "rK" (__v));			\
+})
+
+#define csr_read_set(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrs %0, " #csr ", %1"		\
+			      : "=r" (__v) : "rK" (__v));	\
+	__v;							\
+})
+
+#define csr_set(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrs " #csr ", %0"		\
+			      : : "rK" (__v));			\
+})
+
+#define csr_read_clear(csr, val)				\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrc %0, " #csr ", %1"		\
+			      : "=r" (__v) : "rK" (__v));	\
+	__v;							\
+})
+
+#define csr_clear(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrc " #csr ", %0"		\
+			      : : "rK" (__v));			\
+})
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_CSR_H */
diff --git a/arch/riscv/include/asm/linkage.h b/arch/riscv/include/asm/linkage.h
new file mode 100644
index 000000000000..b7b304ca89c4
--- /dev/null
+++ b/arch/riscv/include/asm/linkage.h
@@ -0,0 +1,20 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_LINKAGE_H
+#define _ASM_RISCV_LINKAGE_H
+
+#define __ALIGN		.balign 4
+#define __ALIGN_STR	".balign 4"
+
+#endif /* _ASM_RISCV_LINKAGE_H */
diff --git a/arch/riscv/include/asm/string.h b/arch/riscv/include/asm/string.h
new file mode 100644
index 000000000000..1c59cf4e8ccc
--- /dev/null
+++ b/arch/riscv/include/asm/string.h
@@ -0,0 +1,30 @@
+/*
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_STRING_H
+#define _ASM_RISCV_STRING_H
+
+#ifdef __KERNEL__
+
+#include <linux/types.h>
+#include <linux/linkage.h>
+
+#define __HAVE_ARCH_MEMSET
+extern asmlinkage void *memset(void *, int, size_t);
+
+#define __HAVE_ARCH_MEMCPY
+extern asmlinkage void *memcpy(void *, const void *, size_t);
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_STRING_H */
diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
new file mode 100644
index 000000000000..f3ece74219d7
--- /dev/null
+++ b/arch/riscv/include/asm/uaccess.h
@@ -0,0 +1,440 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ *
+ * This file was copied from include/asm-generic/uaccess.h
+ */
+
+#ifndef _ASM_RISCV_UACCESS_H
+#define _ASM_RISCV_UACCESS_H
+
+/*
+ * User space memory access functions
+ */
+#include <linux/errno.h>
+#include <linux/compiler.h>
+#include <linux/thread_info.h>
+#include <asm/byteorder.h>
+#include <asm/asm.h>
+
+#ifdef CONFIG_RV_PUM
+#define __enable_user_access()							\
+	__asm__ __volatile__ ("csrs sstatus, %0" : : "r" (SR_SUM) : "memory")
+#define __disable_user_access()							\
+	__asm__ __volatile__ ("csrc sstatus, %0" : : "r" (SR_SUM) : "memory")
+#else
+#define __enable_user_access()
+#define __disable_user_access()
+#endif
+
+/*
+ * The fs value determines whether argument validity checking should be
+ * performed or not.  If get_fs() == USER_DS, checking is performed, with
+ * get_fs() == KERNEL_DS, checking is bypassed.
+ *
+ * For historical reasons, these macros are grossly misnamed.
+ */
+
+#define KERNEL_DS	(~0UL)
+#define USER_DS		(TASK_SIZE)
+
+#define get_ds()	(KERNEL_DS)
+#define get_fs()	(current_thread_info()->addr_limit)
+
+static inline void set_fs(mm_segment_t fs)
+{
+	current_thread_info()->addr_limit = fs;
+}
+
+#define segment_eq(a, b) ((a) == (b))
+
+#define user_addr_max()	(get_fs())
+
+
+#define VERIFY_READ	0
+#define VERIFY_WRITE	1
+
+/**
+ * access_ok: - Checks if a user space pointer is valid
+ * @type: Type of access: %VERIFY_READ or %VERIFY_WRITE.  Note that
+ *        %VERIFY_WRITE is a superset of %VERIFY_READ - if it is safe
+ *        to write to a block, it is always safe to read from it.
+ * @addr: User space pointer to start of block to check
+ * @size: Size of block to check
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * Checks if a pointer to a block of memory in user space is valid.
+ *
+ * Returns true (nonzero) if the memory block may be valid, false (zero)
+ * if it is definitely invalid.
+ *
+ * Note that, depending on architecture, this function probably just
+ * checks that the pointer is in the user space range - after calling
+ * this function, memory access functions may still return -EFAULT.
+ */
+#define access_ok(type, addr, size) ({					\
+	__chk_user_ptr(addr);						\
+	likely(__access_ok((unsigned long __force)(addr), (size)));	\
+})
+
+/* Ensure that the range [addr, addr+size) is within the process's
+ * address space
+ */
+static inline int __access_ok(unsigned long addr, unsigned long size)
+{
+	const mm_segment_t fs = get_fs();
+
+	return (size <= fs) && (addr <= (fs - size));
+}
+
+/*
+ * The exception table consists of pairs of addresses: the first is the
+ * address of an instruction that is allowed to fault, and the second is
+ * the address at which the program should continue.  No registers are
+ * modified, so it is entirely up to the continuation code to figure out
+ * what to do.
+ *
+ * All the routines below use bits of fixup code that are out of line
+ * with the main instruction path.  This means when everything is well,
+ * we don't even have to jump over them.  Further, they do not intrude
+ * on our cache or tlb entries.
+ */
+
+struct exception_table_entry {
+	unsigned long insn, fixup;
+};
+
+extern int fixup_exception(struct pt_regs *state);
+
+#if defined(__LITTLE_ENDIAN)
+#define __MSW	1
+#define __LSW	0
+#elif defined(__BIG_ENDIAN)
+#define __MSW	0
+#define	__LSW	1
+#else
+#error "Unknown endianness"
+#endif
+
+/*
+ * The "__xxx" versions of the user access functions do not verify the address
+ * space - it must have been done previously with a separate "access_ok()"
+ * call.
+ */
+
+#ifdef CONFIG_MMU
+#define __get_user_asm(insn, x, ptr, err)			\
+do {								\
+	uintptr_t __tmp;					\
+	__typeof__(x) __x;					\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	" insn " %1, %3\n"			\
+		"2:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"3:\n"						\
+		"	li %0, %4\n"				\
+		"	li %1, 0\n"				\
+		"	jump 2b, %2\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " RISCV_SZPTR "\n"			\
+		"	" RISCV_PTR " 1b, 3b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=&r" (__x), "=r" (__tmp)		\
+		: "m" (*(ptr)), "i" (-EFAULT));			\
+	__disable_user_access();				\
+	(x) = __x;						\
+} while (0)
+#endif /* CONFIG_MMU */
+
+#ifdef CONFIG_64BIT
+#define __get_user_8(x, ptr, err) \
+	__get_user_asm("ld", x, ptr, err)
+#else /* !CONFIG_64BIT */
+#ifdef CONFIG_MMU
+#define __get_user_8(x, ptr, err)				\
+do {								\
+	u32 __user *__ptr = (u32 __user *)(ptr);		\
+	u32 __lo, __hi;						\
+	uintptr_t __tmp;					\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	lw %1, %4\n"				\
+		"2:\n"						\
+		"	lw %2, %5\n"				\
+		"3:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"4:\n"						\
+		"	li %0, %6\n"				\
+		"	li %1, 0\n"				\
+		"	li %2, 0\n"				\
+		"	jump 3b, %3\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " RISCV_SZPTR "\n"			\
+		"	" RISCV_PTR " 1b, 4b\n"			\
+		"	" RISCV_PTR " 2b, 4b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=&r" (__lo), "=r" (__hi),	\
+			"=r" (__tmp)				\
+		: "m" (__ptr[__LSW]), "m" (__ptr[__MSW]),	\
+			"i" (-EFAULT));				\
+	__disable_user_access();				\
+	(x) = (__typeof__(x))((__typeof__((x)-(x)))(		\
+		(((u64)__hi << 32) | __lo)));			\
+} while (0)
+#endif /* CONFIG_MMU */
+#endif /* CONFIG_64BIT */
+
+
+/**
+ * __get_user: - Get a simple variable from user space, with less checking.
+ * @x:   Variable to store result.
+ * @ptr: Source address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple variable from user space to kernel
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and the result of
+ * dereferencing @ptr must be assignable to @x without a cast.
+ *
+ * Caller must check the pointer with access_ok() before calling this
+ * function.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ * On error, the variable @x is set to zero.
+ */
+#define __get_user(x, ptr)					\
+({								\
+	register long __gu_err = 0;				\
+	const __typeof__(*(ptr)) __user *__gu_ptr = (ptr);	\
+	__chk_user_ptr(__gu_ptr);				\
+	switch (sizeof(*__gu_ptr)) {				\
+	case 1:							\
+		__get_user_asm("lb", (x), __gu_ptr, __gu_err);	\
+		break;						\
+	case 2:							\
+		__get_user_asm("lh", (x), __gu_ptr, __gu_err);	\
+		break;						\
+	case 4:							\
+		__get_user_asm("lw", (x), __gu_ptr, __gu_err);	\
+		break;						\
+	case 8:							\
+		__get_user_8((x), __gu_ptr, __gu_err);	\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+	}							\
+	__gu_err;						\
+})
+
+/**
+ * get_user: - Get a simple variable from user space.
+ * @x:   Variable to store result.
+ * @ptr: Source address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple variable from user space to kernel
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and the result of
+ * dereferencing @ptr must be assignable to @x without a cast.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ * On error, the variable @x is set to zero.
+ */
+#define get_user(x, ptr)					\
+({								\
+	const __typeof__(*(ptr)) __user *__p = (ptr);		\
+	might_fault();						\
+	access_ok(VERIFY_READ, __p, sizeof(*__p)) ?		\
+		__get_user((x), __p) :				\
+		((x) = 0, -EFAULT);				\
+})
+
+
+#ifdef CONFIG_MMU
+#define __put_user_asm(insn, x, ptr, err)			\
+do {								\
+	uintptr_t __tmp;					\
+	__typeof__(*(ptr)) __x = x;				\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	" insn " %z3, %2\n"			\
+		"2:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"3:\n"						\
+		"	li %0, %4\n"				\
+		"	jump 2b, %1\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " RISCV_SZPTR "\n"			\
+		"	" RISCV_PTR " 1b, 3b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=r" (__tmp), "=m" (*(ptr))	\
+		: "rJ" (__x), "i" (-EFAULT));			\
+	__disable_user_access();				\
+} while (0)
+#endif /* CONFIG_MMU */
+
+
+#ifdef CONFIG_64BIT
+#define __put_user_8(x, ptr, err) \
+	__put_user_asm("sd", x, ptr, err)
+#else /* !CONFIG_64BIT */
+#ifdef CONFIG_MMU
+#define __put_user_8(x, ptr, err)				\
+do {								\
+	u32 __user *__ptr = (u32 __user *)(ptr);		\
+	u64 __x = (__typeof__((x)-(x)))(x);			\
+	uintptr_t __tmp;					\
+	__enable_user_access();					\
+	__asm__ __volatile__ (					\
+		"1:\n"						\
+		"	sw %z4, %2\n"				\
+		"2:\n"						\
+		"	sw %z5, %3\n"				\
+		"3:\n"						\
+		"	.section .fixup,\"ax\"\n"		\
+		"	.balign 4\n"				\
+		"4:\n"						\
+		"	li %0, %6\n"				\
+		"	jump 2b, %1\n"				\
+		"	.previous\n"				\
+		"	.section __ex_table,\"a\"\n"		\
+		"	.balign " RISCV_SZPTR "\n"			\
+		"	" RISCV_PTR " 1b, 4b\n"			\
+		"	" RISCV_PTR " 2b, 4b\n"			\
+		"	.previous"				\
+		: "+r" (err), "=r" (__tmp),			\
+			"=m" (__ptr[__LSW]),			\
+			"=m" (__ptr[__MSW])			\
+		: "rJ" (__x), "rJ" (__x >> 32), "i" (-EFAULT));	\
+	__disable_user_access();				\
+} while (0)
+#endif /* CONFIG_MMU */
+#endif /* CONFIG_64BIT */
+
+
+/**
+ * __put_user: - Write a simple value into user space, with less checking.
+ * @x:   Value to copy to user space.
+ * @ptr: Destination address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple value from kernel space to user
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and @x must be assignable
+ * to the result of dereferencing @ptr.
+ *
+ * Caller must check the pointer with access_ok() before calling this
+ * function.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ */
+#define __put_user(x, ptr)					\
+({								\
+	register long __pu_err = 0;				\
+	__typeof__(*(ptr)) __user *__gu_ptr = (ptr);		\
+	__chk_user_ptr(__gu_ptr);				\
+	switch (sizeof(*__gu_ptr)) {				\
+	case 1:							\
+		__put_user_asm("sb", (x), __gu_ptr, __pu_err);	\
+		break;						\
+	case 2:							\
+		__put_user_asm("sh", (x), __gu_ptr, __pu_err);	\
+		break;						\
+	case 4:							\
+		__put_user_asm("sw", (x), __gu_ptr, __pu_err);	\
+		break;						\
+	case 8:							\
+		__put_user_8((x), __gu_ptr, __pu_err);	\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+	}							\
+	__pu_err;						\
+})
+
+/**
+ * put_user: - Write a simple value into user space.
+ * @x:   Value to copy to user space.
+ * @ptr: Destination address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple value from kernel space to user
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and @x must be assignable
+ * to the result of dereferencing @ptr.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ */
+#define put_user(x, ptr)					\
+({								\
+	__typeof__(*(ptr)) __user *__p = (ptr);			\
+	might_fault();						\
+	access_ok(VERIFY_WRITE, __p, sizeof(*__p)) ?		\
+		__put_user((x), __p) :				\
+		-EFAULT;					\
+})
+
+
+extern unsigned long __must_check __copy_user(void __user *to,
+	const void __user *from, unsigned long n);
+
+static inline unsigned long
+raw_copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+	return __copy_user(to, from, n);
+}
+
+static inline unsigned long
+raw_copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+	return __copy_user(to, from, n);
+}
+
+extern long strncpy_from_user(char *dest, const char __user *src, long count);
+
+extern long __must_check strlen_user(const char __user *str);
+extern long __must_check strnlen_user(const char __user *str, long n);
+
+extern
+unsigned long __must_check __clear_user(void __user *addr, unsigned long n);
+
+static inline
+unsigned long __must_check clear_user(void __user *to, unsigned long n)
+{
+	might_fault();
+	return access_ok(VERIFY_WRITE, to, n) ?
+		__clear_user(to, n) : n;
+}
+
+#endif /* _ASM_RISCV_UACCESS_H */
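
As a usage sketch of the interface above (the two functions and their
argument layout are illustrative only, not part of this patch), a typical
caller looks like this:

  #include <linux/types.h>
  #include <linux/uaccess.h>
  #include <linux/errno.h>

  /* Copy a single u64 out to user space.  put_user() performs the
   * access_ok() check and returns -EFAULT if the store faults; on RV32
   * the 8-byte case is split into two sw instructions by __put_user_8. */
  static long example_read_counter(u64 __user *uptr, u64 counter)
  {
          return put_user(counter, uptr);
  }

  /* Read a single u64 back in.  get_user() zeroes 'val' and returns
   * -EFAULT if the load faults. */
  static long example_write_counter(u64 __user *uptr, u64 *counter)
  {
          u64 val;

          if (get_user(val, uptr))
                  return -EFAULT;
          *counter = val;
          return 0;
  }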
diff --git a/arch/riscv/include/asm/word-at-a-time.h b/arch/riscv/include/asm/word-at-a-time.h
new file mode 100644
index 000000000000..aa6238791d3e
--- /dev/null
+++ b/arch/riscv/include/asm/word-at-a-time.h
@@ -0,0 +1,55 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ * Derived from arch/x86/include/asm/word-at-a-time.h
+ */
+
+#ifndef _ASM_RISCV_WORD_AT_A_TIME_H
+#define _ASM_RISCV_WORD_AT_A_TIME_H
+
+
+#include <linux/kernel.h>
+
+struct word_at_a_time {
+	const unsigned long one_bits, high_bits;
+};
+
+#define WORD_AT_A_TIME_CONSTANTS { REPEAT_BYTE(0x01), REPEAT_BYTE(0x80) }
+
+static inline unsigned long has_zero(unsigned long val,
+	unsigned long *bits, const struct word_at_a_time *c)
+{
+	unsigned long mask = ((val - c->one_bits) & ~val) & c->high_bits;
+	*bits = mask;
+	return mask;
+}
+
+static inline unsigned long prep_zero_mask(unsigned long val,
+	unsigned long bits, const struct word_at_a_time *c)
+{
+	return bits;
+}
+
+static inline unsigned long create_zero_mask(unsigned long bits)
+{
+	bits = (bits - 1) & ~bits;
+	return bits >> 7;
+}
+
+static inline unsigned long find_zero(unsigned long mask)
+{
+	return fls64(mask) >> 3;
+}
+
+/* The mask we created is directly usable as a bytemask */
+#define zero_bytemask(mask) (mask)
+
+#endif /* _ASM_RISCV_WORD_AT_A_TIME_H */
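
The helpers above are easiest to follow with a concrete value.  A
freestanding little-endian sketch (plain user-space C mirroring the
constants and steps above; not kernel code):

  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>

  int main(void)
  {
          const uint64_t ones  = 0x0101010101010101ULL;  /* REPEAT_BYTE(0x01) */
          const uint64_t highs = 0x8080808080808080ULL;  /* REPEAT_BYTE(0x80) */
          uint64_t val, mask, bytemask;

          memcpy(&val, "abc\0defg", 8);   /* one aligned word of a string */

          /* has_zero(): nonzero iff some byte of val is 0x00 */
          mask = ((val - ones) & ~val) & highs;

          /* create_zero_mask(): all-ones in the bytes below the first NUL */
          bytemask = ((mask - 1) & ~mask) >> 7;

          /* find_zero(): fls64(bytemask) >> 3 gives the index of the NUL.
           * The kernel's fls64() also copes with 0; __builtin_clzll() does
           * not, but this demo word has its NUL at byte 3. */
          printf("first NUL at byte %d\n",
                 (64 - __builtin_clzll(bytemask)) >> 3);
          return 0;
  }

This prints "first NUL at byte 3", which is how strnlen_user() and the
dcache name hashing count string bytes one word at a time.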
diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
new file mode 100644
index 000000000000..109f5120d5c7
--- /dev/null
+++ b/arch/riscv/kernel/stacktrace.c
@@ -0,0 +1,177 @@
+/*
+ * Copyright (C) 2008 ARM Limited
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/export.h>
+#include <linux/kallsyms.h>
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
+#include <linux/sched/task_stack.h>
+#include <linux/stacktrace.h>
+
+#ifdef CONFIG_FRAME_POINTER
+
+struct stackframe {
+	unsigned long fp;
+	unsigned long ra;
+};
+
+static void notrace walk_stackframe(struct task_struct *task,
+	struct pt_regs *regs, bool (*fn)(unsigned long, void *), void *arg)
+{
+	unsigned long fp, sp, pc;
+
+	if (regs) {
+		fp = GET_FP(regs);
+		sp = GET_USP(regs);
+		pc = GET_IP(regs);
+	} else if (task == NULL || task == current) {
+		const register unsigned long current_sp __asm__ ("sp");
+		fp = (unsigned long)__builtin_frame_address(0);
+		sp = current_sp;
+		pc = (unsigned long)walk_stackframe;
+	} else {
+		/* task blocked in __switch_to */
+		fp = task->thread.s[0];
+		sp = task->thread.sp;
+		pc = task->thread.ra;
+	}
+
+	for (;;) {
+		unsigned long low, high;
+		struct stackframe *frame;
+
+		if (unlikely(!__kernel_text_address(pc) || fn(pc, arg)))
+			break;
+
+		/* Validate frame pointer */
+		low = sp + sizeof(struct stackframe);
+		high = ALIGN(sp, THREAD_SIZE);
+		if (unlikely(fp < low || fp > high || fp & 0x7))
+			break;
+		/* Unwind stack frame */
+		frame = (struct stackframe *)fp - 1;
+		sp = fp;
+		fp = frame->fp;
+		pc = frame->ra - 0x4;
+	}
+}
+
+#else /* !CONFIG_FRAME_POINTER */
+
+static void notrace walk_stackframe(struct task_struct *task,
+	struct pt_regs *regs, bool (*fn)(unsigned long, void *), void *arg)
+{
+	unsigned long sp, pc;
+	unsigned long *ksp;
+
+	if (regs) {
+		sp = GET_USP(regs);
+		pc = GET_IP(regs);
+	} else if (task == NULL || task == current) {
+		const register unsigned long current_sp __asm__ ("sp");
+		sp = current_sp;
+		pc = (unsigned long)walk_stackframe;
+	} else {
+		/* task blocked in __switch_to */
+		sp = task->thread.sp;
+		pc = task->thread.ra;
+	}
+
+	if (unlikely(sp & 0x7))
+		return;
+
+	ksp = (unsigned long *)sp;
+	while (!kstack_end(ksp)) {
+		if (__kernel_text_address(pc) && unlikely(fn(pc, arg)))
+			break;
+		pc = (*ksp++) - 0x4;
+	}
+}
+
+#endif /* CONFIG_FRAME_POINTER */
+
+
+static bool print_trace_address(unsigned long pc, void *arg)
+{
+	print_ip_sym(pc);
+	return false;
+}
+
+void show_stack(struct task_struct *task, unsigned long *sp)
+{
+	printk("Call Trace:\n");
+	walk_stackframe(task, NULL, print_trace_address, NULL);
+}
+
+
+static bool save_wchan(unsigned long pc, void *arg)
+{
+	if (!in_sched_functions(pc)) {
+		unsigned long *p = arg;
+		*p = pc;
+		return true;
+	}
+	return false;
+}
+
+unsigned long get_wchan(struct task_struct *task)
+{
+	unsigned long pc = 0;
+
+	if (likely(task && task != current && task->state != TASK_RUNNING))
+		walk_stackframe(task, NULL, save_wchan, &pc);
+	return pc;
+}
+
+
+#ifdef CONFIG_STACKTRACE
+
+static bool __save_trace(unsigned long pc, void *arg, bool nosched)
+{
+	struct stack_trace *trace = arg;
+
+	if (unlikely(nosched && in_sched_functions(pc)))
+		return false;
+	if (unlikely(trace->skip > 0)) {
+		trace->skip--;
+		return false;
+	}
+
+	trace->entries[trace->nr_entries++] = pc;
+	return (trace->nr_entries >= trace->max_entries);
+}
+
+static bool save_trace(unsigned long pc, void *arg)
+{
+	return __save_trace(pc, arg, false);
+}
+
+/*
+ * Save stack-backtrace addresses into a stack_trace buffer.
+ */
+void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
+{
+	walk_stackframe(tsk, NULL, save_trace, trace);
+	if (trace->nr_entries < trace->max_entries)
+		trace->entries[trace->nr_entries++] = ULONG_MAX;
+}
+EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
+
+void save_stack_trace(struct stack_trace *trace)
+{
+	save_stack_trace_tsk(NULL, trace);
+}
+EXPORT_SYMBOL_GPL(save_stack_trace);
+
+#endif /* CONFIG_STACKTRACE */
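
For reference, a minimal (hypothetical) caller of the CONFIG_STACKTRACE
entry points above:

  #include <linux/kernel.h>
  #include <linux/stacktrace.h>

  /* Capture and print the current task's backtrace.  save_stack_trace()
   * lands in walk_stackframe() above; print_stack_trace() is the generic
   * pretty-printer. */
  static void example_dump_current_stack(void)
  {
          unsigned long entries[16];
          struct stack_trace trace = {
                  .entries     = entries,
                  .max_entries = ARRAY_SIZE(entries),
                  .skip        = 1,       /* drop this frame itself */
          };

          save_stack_trace(&trace);
          print_stack_trace(&trace, 0);
  }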
diff --git a/arch/riscv/lib/memcpy.S b/arch/riscv/lib/memcpy.S
new file mode 100644
index 000000000000..5b576cf041eb
--- /dev/null
+++ b/arch/riscv/lib/memcpy.S
@@ -0,0 +1,98 @@
+/*
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/linkage.h>
+#include <asm/asm.h>
+
+/* void *memcpy(void *, const void *, size_t) */
+ENTRY(memcpy)
+	move t6, a0  /* Preserve return value */
+
+	/* Defer to byte-oriented copy for small sizes */
+	sltiu a3, a2, 128
+	bnez a3, 4f
+	/* Use word-oriented copy only if low-order bits match */
+	andi a3, t6, SZREG-1
+	andi a4, a1, SZREG-1
+	bne a3, a4, 4f
+
+	beqz a3, 2f  /* Skip if already aligned */
+	/* Round to nearest double word-aligned address
+	   greater than or equal to start address */
+	andi a3, a1, ~(SZREG-1)
+	addi a3, a3, SZREG
+	/* Handle initial misalignment */
+	sub a4, a3, a1
+1:
+	lb a5, 0(a1)
+	addi a1, a1, 1
+	sb a5, 0(t6)
+	addi t6, t6, 1
+	bltu a1, a3, 1b
+	sub a2, a2, a4  /* Update count */
+
+2:
+	andi a4, a2, ~((16*SZREG)-1)
+	beqz a4, 4f
+	add a3, a1, a4
+3:
+	REG_L a4,       0(a1)
+	REG_L a5,   SZREG(a1)
+	REG_L a6, 2*SZREG(a1)
+	REG_L a7, 3*SZREG(a1)
+	REG_L t0, 4*SZREG(a1)
+	REG_L t1, 5*SZREG(a1)
+	REG_L t2, 6*SZREG(a1)
+	REG_L t3, 7*SZREG(a1)
+	REG_L t4, 8*SZREG(a1)
+	REG_L t5, 9*SZREG(a1)
+	REG_S a4,       0(t6)
+	REG_S a5,   SZREG(t6)
+	REG_S a6, 2*SZREG(t6)
+	REG_S a7, 3*SZREG(t6)
+	REG_S t0, 4*SZREG(t6)
+	REG_S t1, 5*SZREG(t6)
+	REG_S t2, 6*SZREG(t6)
+	REG_S t3, 7*SZREG(t6)
+	REG_S t4, 8*SZREG(t6)
+	REG_S t5, 9*SZREG(t6)
+	REG_L a4, 10*SZREG(a1)
+	REG_L a5, 11*SZREG(a1)
+	REG_L a6, 12*SZREG(a1)
+	REG_L a7, 13*SZREG(a1)
+	REG_L t0, 14*SZREG(a1)
+	REG_L t1, 15*SZREG(a1)
+	addi a1, a1, 16*SZREG
+	REG_S a4, 10*SZREG(t6)
+	REG_S a5, 11*SZREG(t6)
+	REG_S a6, 12*SZREG(t6)
+	REG_S a7, 13*SZREG(t6)
+	REG_S t0, 14*SZREG(t6)
+	REG_S t1, 15*SZREG(t6)
+	addi t6, t6, 16*SZREG
+	bltu a1, a3, 3b
+	andi a2, a2, (16*SZREG)-1  /* Update count */
+
+4:
+	/* Handle trailing misalignment */
+	beqz a2, 6f
+	add a3, a1, a2
+5:
+	lb a4, 0(a1)
+	addi a1, a1, 1
+	sb a4, 0(t6)
+	addi t6, t6, 1
+	bltu a1, a3, 5b
+6:
+	ret
+END(memcpy)
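
A rough C rendering of the copy strategy above, for readers less fluent in
RISC-V assembly (purely illustrative, not the code the kernel builds):

  #include <stddef.h>
  #include <stdint.h>

  /* Byte-copy until both pointers are register-aligned, bulk-copy sixteen
   * registers (16*SZREG bytes) per iteration, then mop up the tail. */
  void *memcpy_sketch(void *dst, const void *src, size_t n)
  {
          unsigned char *d = dst;
          const unsigned char *s = src;
          const size_t w = sizeof(uintptr_t);    /* SZREG */

          /* The word loop only pays off when both pointers share alignment. */
          if (n >= 128 && ((uintptr_t)d & (w - 1)) == ((uintptr_t)s & (w - 1))) {
                  while ((uintptr_t)s & (w - 1)) {        /* head bytes */
                          *d++ = *s++;
                          n--;
                  }
                  while (n >= 16 * w) {                   /* unrolled body */
                          size_t i;

                          for (i = 0; i < 16; i++)
                                  ((uintptr_t *)(void *)d)[i] =
                                          ((const uintptr_t *)(const void *)s)[i];
                          d += 16 * w;
                          s += 16 * w;
                          n -= 16 * w;
                  }
          }
          while (n) {                                     /* tail bytes */
                  *d++ = *s++;
                  n--;
          }
          return dst;
  }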
diff --git a/arch/riscv/lib/memset.S b/arch/riscv/lib/memset.S
new file mode 100644
index 000000000000..d83c38099653
--- /dev/null
+++ b/arch/riscv/lib/memset.S
@@ -0,0 +1,118 @@
+/*
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#include <linux/linkage.h>
+#include <asm/asm.h>
+
+/* void *memset(void *, int, size_t) */
+ENTRY(memset)
+	move t0, a0  /* Preserve return value */
+
+	/* Defer to byte-oriented fill for small sizes */
+	sltiu a3, a2, 16
+	bnez a3, 4f
+
+	/* Round to nearest XLEN-aligned address
+	   greater than or equal to start address */
+	addi a3, t0, SZREG-1
+	andi a3, a3, ~(SZREG-1)
+	beq a3, t0, 2f  /* Skip if already aligned */
+	/* Handle initial misalignment */
+	sub a4, a3, t0
+1:
+	sb a1, 0(t0)
+	addi t0, t0, 1
+	bltu t0, a3, 1b
+	sub a2, a2, a4  /* Update count */
+
+2: /* Duff's device with 32 XLEN stores per iteration */
+	/* Broadcast value into all bytes */
+	andi a1, a1, 0xff
+	slli a3, a1, 8
+	or a1, a3, a1
+	slli a3, a1, 16
+	or a1, a3, a1
+#ifdef CONFIG_64BIT
+	slli a3, a1, 32
+	or a1, a3, a1
+#endif
+
+	/* Calculate end address */
+	andi a4, a2, ~(SZREG-1)
+	add a3, t0, a4
+
+	andi a4, a4, 31*SZREG  /* Calculate remainder */
+	beqz a4, 3f            /* Shortcut if no remainder */
+	neg a4, a4
+	addi a4, a4, 32*SZREG  /* Calculate initial offset */
+
+	/* Adjust start address with offset */
+	sub t0, t0, a4
+
+	/* Jump into loop body */
+	/* Assumes 32-bit instruction lengths */
+	la a5, 3f
+#ifdef CONFIG_64BIT
+	srli a4, a4, 1
+#endif
+	add a5, a5, a4
+	jr a5
+3:
+	REG_S a1,        0(t0)
+	REG_S a1,    SZREG(t0)
+	REG_S a1,  2*SZREG(t0)
+	REG_S a1,  3*SZREG(t0)
+	REG_S a1,  4*SZREG(t0)
+	REG_S a1,  5*SZREG(t0)
+	REG_S a1,  6*SZREG(t0)
+	REG_S a1,  7*SZREG(t0)
+	REG_S a1,  8*SZREG(t0)
+	REG_S a1,  9*SZREG(t0)
+	REG_S a1, 10*SZREG(t0)
+	REG_S a1, 11*SZREG(t0)
+	REG_S a1, 12*SZREG(t0)
+	REG_S a1, 13*SZREG(t0)
+	REG_S a1, 14*SZREG(t0)
+	REG_S a1, 15*SZREG(t0)
+	REG_S a1, 16*SZREG(t0)
+	REG_S a1, 17*SZREG(t0)
+	REG_S a1, 18*SZREG(t0)
+	REG_S a1, 19*SZREG(t0)
+	REG_S a1, 20*SZREG(t0)
+	REG_S a1, 21*SZREG(t0)
+	REG_S a1, 22*SZREG(t0)
+	REG_S a1, 23*SZREG(t0)
+	REG_S a1, 24*SZREG(t0)
+	REG_S a1, 25*SZREG(t0)
+	REG_S a1, 26*SZREG(t0)
+	REG_S a1, 27*SZREG(t0)
+	REG_S a1, 28*SZREG(t0)
+	REG_S a1, 29*SZREG(t0)
+	REG_S a1, 30*SZREG(t0)
+	REG_S a1, 31*SZREG(t0)
+	addi t0, t0, 32*SZREG
+	bltu t0, a3, 3b
+	andi a2, a2, SZREG-1  /* Update count */
+
+4:
+	/* Handle trailing misalignment */
+	beqz a2, 6f
+	add a3, t0, a2
+5:
+	sb a1, 0(t0)
+	addi t0, t0, 1
+	bltu t0, a3, 5b
+6:
+	ret
+END(memset)
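
The computed jump at labels 2/3 (la/jr into the middle of the store block)
is the same trick as the classic C Duff's device, just unrolled by 32
instead of 8.  A sketch of the idea (fill_words() is a hypothetical helper,
not kernel code):

  #include <stddef.h>
  #include <stdint.h>

  static void fill_words(uintptr_t *p, uintptr_t v, size_t nwords)
  {
          size_t passes = (nwords + 7) / 8;

          if (!nwords)
                  return;
          /* The switch lands part-way into the unrolled loop, so the first
           * pass performs nwords % 8 stores; every later pass does all 8.
           * memset.S achieves the same by adding an offset to the address
           * of label 3 and jumping there. */
          switch (nwords % 8) {
          case 0: do {    *p++ = v;
          case 7:         *p++ = v;
          case 6:         *p++ = v;
          case 5:         *p++ = v;
          case 4:         *p++ = v;
          case 3:         *p++ = v;
          case 2:         *p++ = v;
          case 1:         *p++ = v;
                  } while (--passes > 0);
          }
  }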
diff --git a/arch/riscv/lib/uaccess.S b/arch/riscv/lib/uaccess.S
new file mode 100644
index 000000000000..cba994696aff
--- /dev/null
+++ b/arch/riscv/lib/uaccess.S
@@ -0,0 +1,125 @@
+#include <linux/linkage.h>
+#include <asm/asm.h>
+#include <asm/csr.h>
+
+	.altmacro
+	.macro fixup op reg addr lbl
+	LOCAL _epc
+_epc:
+	\op \reg, \addr
+	.section __ex_table,"a"
+	.balign RISCV_SZPTR
+	RISCV_PTR _epc, \lbl
+	.previous
+	.endm
+
+ENTRY(__copy_user)
+
+#ifdef CONFIG_RV_PUM
+	/* Enable access to user memory */
+	li t6, SR_SUM
+	csrs sstatus, t6
+#endif
+
+	add a3, a1, a2
+	/* Use word-oriented copy only if low-order bits match */
+	andi t0, a0, SZREG-1
+	andi t1, a1, SZREG-1
+	bne t0, t1, 2f
+
+	addi t0, a1, SZREG-1
+	andi t1, a3, ~(SZREG-1)
+	andi t0, t0, ~(SZREG-1)
+	/* a3: terminal address of source region
+	 * t0: lowest XLEN-aligned address in source
+	 * t1: highest XLEN-aligned address in source
+	 */
+	bgeu t0, t1, 2f
+	bltu a1, t0, 4f
+1:
+	fixup REG_L, t2, (a1), 10f
+	fixup REG_S, t2, (a0), 10f
+	addi a1, a1, SZREG
+	addi a0, a0, SZREG
+	bltu a1, t1, 1b
+2:
+	bltu a1, a3, 5f
+
+3:
+#ifdef CONFIG_RV_PUM
+	/* Disable access to user memory */
+	csrc sstatus, t6
+#endif
+	li a0, 0
+	ret
+4: /* Edge case: unalignment */
+	fixup lbu, t2, (a1), 10f
+	fixup sb, t2, (a0), 10f
+	addi a1, a1, 1
+	addi a0, a0, 1
+	bltu a1, t0, 4b
+	j 1b
+5: /* Edge case: remainder */
+	fixup lbu, t2, (a1), 10f
+	fixup sb, t2, (a0), 10f
+	addi a1, a1, 1
+	addi a0, a0, 1
+	bltu a1, a3, 5b
+	j 3b
+ENDPROC(__copy_user)
+
+
+ENTRY(__clear_user)
+
+#ifdef CONFIG_RV_PUM
+	/* Enable access to user memory */
+	li t6, SR_SUM
+	csrs sstatus, t6
+#endif
+
+	add a3, a0, a1
+	addi t0, a0, SZREG-1
+	andi t1, a3, ~(SZREG-1)
+	andi t0, t0, ~(SZREG-1)
+	/* a3: terminal address of target region
+	 * t0: lowest XLEN-aligned address in target region
+	 * t1: highest XLEN-aligned address in target region
+	 */
+	bgeu t0, t1, 2f
+	bltu a0, t0, 4f
+1:
+	fixup REG_S, zero, (a0), 10f
+	addi a0, a0, SZREG
+	bltu a0, t1, 1b
+2:
+	bltu a0, a3, 5f
+
+3:
+#ifdef CONFIG_RV_PUM
+	/* Disable access to user memory */
+	csrc sstatus, t6
+#endif
+	li a0, 0
+	ret
+4: /* Edge case: unalignment */
+	fixup sb, zero, (a0), 10f
+	addi a0, a0, 1
+	bltu a0, t0, 4b
+	j 1b
+5: /* Edge case: remainder */
+	fixup sb, zero, (a0), 10f
+	addi a0, a0, 1
+	bltu a0, a3, 5b
+	j 3b
+ENDPROC(__clear_user)
+
+	.section .fixup,"ax"
+	.balign 4
+10:
+#ifdef CONFIG_RV_PUM
+	/* Disable access to user memory */
+	csrc sstatus, t6
+#endif
+	sub a0, a3, a0
+	ret
+	.previous
diff --git a/arch/riscv/lib/udivdi3.S b/arch/riscv/lib/udivdi3.S
new file mode 100644
index 000000000000..cb01ae5b181a
--- /dev/null
+++ b/arch/riscv/lib/udivdi3.S
@@ -0,0 +1,38 @@
+/*
+ * Copyright (C) 2016-2017 Free Software Foundation, Inc.
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+  .globl __udivdi3
+__udivdi3:
+  mv    a2, a1
+  mv    a1, a0
+  li    a0, -1
+  beqz  a2, .L5
+  li    a3, 1
+  bgeu  a2, a1, .L2
+.L1:
+  blez  a2, .L2
+  slli  a2, a2, 1
+  slli  a3, a3, 1
+  bgtu  a1, a2, .L1
+.L2:
+  li    a0, 0
+.L3:
+  bltu  a1, a2, .L4
+  sub   a1, a1, a2
+  or    a0, a0, a3
+.L4:
+  srli  a3, a3, 1
+  srli  a2, a2, 1
+  bnez  a3, .L3
+.L5:
+  ret
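
__udivdi3 above is plain shift-and-subtract (restoring) division.  An
equivalent C sketch, illustrative only:

  #include <stdint.h>

  /* Align the divisor just below the dividend, then walk it back down,
   * subtracting and accumulating quotient bits -- the same structure as
   * the .L1 and .L3 loops above. */
  static uint64_t udivdi3_sketch(uint64_t num, uint64_t den)
  {
          uint64_t quot = 0, bit = 1;

          if (den == 0)
                  return (uint64_t)-1;    /* mirrors the early exit to .L5 */

          while (den < num && !(den & (1ULL << 63))) {
                  den <<= 1;
                  bit <<= 1;
          }
          while (bit) {
                  if (num >= den) {
                          num -= den;
                          quot |= bit;
                  }
                  den >>= 1;
                  bit >>= 1;
          }
          return quot;
  }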
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 4/9] RISC-V: ELF and module implementation
  2017-06-28 18:55     ` Palmer Dabbelt
@ 2017-06-28 18:55       ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This patch contains the code that interfaces with ELF objects on RISC-V
systems, the vast majority of which is present to load kernel modules.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/compat.h | 31 +++++++++++++++
 arch/riscv/include/asm/elf.h    | 85 +++++++++++++++++++++++++++++++++++++++++
 arch/riscv/mm/extable.c         | 37 ++++++++++++++++++
 3 files changed, 153 insertions(+)
 create mode 100644 arch/riscv/include/asm/compat.h
 create mode 100644 arch/riscv/include/asm/elf.h
 create mode 100644 arch/riscv/mm/extable.c

diff --git a/arch/riscv/include/asm/compat.h b/arch/riscv/include/asm/compat.h
new file mode 100644
index 000000000000..b42bf54f42e4
--- /dev/null
+++ b/arch/riscv/include/asm/compat.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_COMPAT_H
+#define __ASM_COMPAT_H
+#ifdef __KERNEL__
+#ifdef CONFIG_COMPAT
+
+#if defined(CONFIG_64BIT)
+#define COMPAT_UTS_MACHINE "riscv64\0\0"
+#elif defined(CONFIG_32BIT)
+#define COMPAT_UTS_MACHINE "riscv32\0\0"
+#else
+#error "Unknown RISC-V base ISA"
+#endif
+
+#endif /*CONFIG_COMPAT*/
+#endif /*__KERNEL__*/
+#endif /*__ASM_COMPAT_H*/
diff --git a/arch/riscv/include/asm/elf.h b/arch/riscv/include/asm/elf.h
new file mode 100644
index 000000000000..add1245690f6
--- /dev/null
+++ b/arch/riscv/include/asm/elf.h
@@ -0,0 +1,85 @@
+/*
+ * Copyright (C) 2003 Matjaz Breskvar <phoenix@bsemi.com>
+ * Copyright (C) 2010-2011 Jonas Bonn <jonas@southpole.se>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef _ASM_RISCV_ELF_H
+#define _ASM_RISCV_ELF_H
+
+#include <uapi/asm/elf.h>
+#include <asm/auxvec.h>
+#include <asm/byteorder.h>
+
+/* TODO: Move definition into include/uapi/linux/elf-em.h */
+#define EM_RISCV	0xF3
+
+/*
+ * These are used to set parameters in the core dumps.
+ */
+#define ELF_ARCH	EM_RISCV
+
+#ifdef CONFIG_64BIT
+#define ELF_CLASS	ELFCLASS64
+#else
+#define ELF_CLASS	ELFCLASS32
+#endif
+
+#if defined(__LITTLE_ENDIAN)
+#define ELF_DATA	ELFDATA2LSB
+#elif defined(__BIG_ENDIAN)
+#define ELF_DATA	ELFDATA2MSB
+#else
+#error "Unknown endianness"
+#endif
+
+/*
+ * This is used to ensure we don't load something for the wrong architecture.
+ */
+#define elf_check_arch(x) ((x)->e_machine == EM_RISCV)
+
+#define CORE_DUMP_USE_REGSET
+#define ELF_EXEC_PAGESIZE	(PAGE_SIZE)
+
+/*
+ * This is the location that an ET_DYN program is loaded if exec'ed.  Typical
+ * use of this is to invoke "./ld.so someprog" to test out a new version of
+ * the loader.  We need to make sure that it is out of the way of the program
+ * that it will "exec", and that there is sufficient room for the brk.
+ */
+#define ELF_ET_DYN_BASE		((TASK_SIZE / 3) * 2)
+
+/*
+ * This yields a mask that user programs can use to figure out what
+ * instruction set this CPU supports.  This could be done in user space,
+ * but it's not easy, and we've already done it here.
+ */
+#define ELF_HWCAP	(0)
+
+/*
+ * This yields a string that ld.so will use to load implementation
+ * specific libraries for optimization.  This is more specific in
+ * intent than poking at uname or /proc/cpuinfo.
+ */
+#define ELF_PLATFORM	(NULL)
+
+#define ARCH_DLINFO						\
+do {								\
+	NEW_AUX_ENT(AT_SYSINFO_EHDR,				\
+		(elf_addr_t)current->mm->context.vdso);		\
+} while (0)
+
+
+#ifdef __KERNEL__
+#define ARCH_HAS_SETUP_ADDITIONAL_PAGES
+struct linux_binprm;
+extern int arch_setup_additional_pages(struct linux_binprm *bprm,
+	int uses_interp);
+#endif
+
+#endif /* _ASM_RISCV_ELF_H */
diff --git a/arch/riscv/mm/extable.c b/arch/riscv/mm/extable.c
new file mode 100644
index 000000000000..11bb9417123b
--- /dev/null
+++ b/arch/riscv/mm/extable.c
@@ -0,0 +1,37 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2013 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+
+#include <linux/extable.h>
+#include <linux/module.h>
+#include <linux/uaccess.h>
+
+int fixup_exception(struct pt_regs *regs)
+{
+	const struct exception_table_entry *fixup;
+
+	fixup = search_exception_tables(regs->sepc);
+	if (fixup) {
+		regs->sepc = fixup->fixup;
+		return 1;
+	}
+	return 0;
+}
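
fixup_exception() is the consumer of the __ex_table entries emitted
elsewhere in the series (for instance by the "fixup" macro in uaccess.S).
A sketch of how a fault handler uses it -- the function below is
illustrative, not part of this patch:

  #include <linux/kernel.h>
  #include <linux/ptrace.h>

  int fixup_exception(struct pt_regs *regs);     /* defined above */

  /* When a kernel-mode access faults, ask whether the faulting sepc has a
   * registered landing pad; if so, resume at the .fixup stub instead of
   * treating the fault as a fatal kernel bug. */
  static void example_kernel_fault(struct pt_regs *regs, unsigned long badaddr)
  {
          if (fixup_exception(regs))
                  return;         /* sepc now points at the fixup code */

          panic("unhandled kernel fault at %lx, sepc=%lx",
                badaddr, regs->sepc);
  }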
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 5/9] RISC-V: Task implementation
  2017-06-28 18:55     ` Palmer Dabbelt
@ 2017-06-28 18:55       ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This patch contains the implementation of tasks on RISC-V, most of which
is involved in task switching.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/asm-offsets.h |   1 +
 arch/riscv/include/asm/current.h     |  43 ++++
 arch/riscv/include/asm/kprobes.h     |  22 ++
 arch/riscv/include/asm/processor.h   | 102 ++++++++
 arch/riscv/include/asm/switch_to.h   |  69 ++++++
 arch/riscv/include/asm/thread_info.h |  96 ++++++++
 arch/riscv/kernel/asm-offsets.c      | 316 +++++++++++++++++++++++++
 arch/riscv/kernel/entry.S            | 436 +++++++++++++++++++++++++++++++++++
 arch/riscv/kernel/process.c          | 131 +++++++++++
 9 files changed, 1216 insertions(+)
 create mode 100644 arch/riscv/include/asm/asm-offsets.h
 create mode 100644 arch/riscv/include/asm/current.h
 create mode 100644 arch/riscv/include/asm/kprobes.h
 create mode 100644 arch/riscv/include/asm/processor.h
 create mode 100644 arch/riscv/include/asm/switch_to.h
 create mode 100644 arch/riscv/include/asm/thread_info.h
 create mode 100644 arch/riscv/kernel/asm-offsets.c
 create mode 100644 arch/riscv/kernel/entry.S
 create mode 100644 arch/riscv/kernel/process.c

diff --git a/arch/riscv/include/asm/asm-offsets.h b/arch/riscv/include/asm/asm-offsets.h
new file mode 100644
index 000000000000..d370ee36a182
--- /dev/null
+++ b/arch/riscv/include/asm/asm-offsets.h
@@ -0,0 +1 @@
+#include <generated/asm-offsets.h>
diff --git a/arch/riscv/include/asm/current.h b/arch/riscv/include/asm/current.h
new file mode 100644
index 000000000000..032a24ba5308
--- /dev/null
+++ b/arch/riscv/include/asm/current.h
@@ -0,0 +1,43 @@
+/*
+ * Based on arm/arm64/include/asm/current.h
+ *
+ * Copyright (C) 2016 ARM
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef __ASM_CURRENT_H
+#define __ASM_CURRENT_H
+
+#include <linux/compiler.h>
+
+#ifndef __ASSEMBLY__
+
+struct task_struct;
+
+/* This only works because "struct thread_info" is at offset 0 from "struct
+ * task_struct".  This constraint seems to be necessary on other architectures
+ * as well, but __switch_to enforces it.  We can't check TASK_TI here because
+ * <asm/asm-offsets.h> includes this, and I can't get the definition of "struct
+ * task_struct" here due to some header ordering problems.
+ */
+static __always_inline struct task_struct *get_current(void)
+{
+	register struct task_struct *tp __asm__("tp");
+	return tp;
+}
+
+#define current get_current()
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __ASM_CURRENT_H */
diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
new file mode 100644
index 000000000000..1190de7a0f74
--- /dev/null
+++ b/arch/riscv/include/asm/kprobes.h
@@ -0,0 +1,22 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef ASM_RISCV_KPROBES_H
+#define ASM_RISCV_KPROBES_H
+
+#ifdef CONFIG_KPROBES
+#error "RISC-V doesn't support CONFIG_KPROBES"
+#endif
+
+#endif
diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
new file mode 100644
index 000000000000..65aa014db9b4
--- /dev/null
+++ b/arch/riscv/include/asm/processor.h
@@ -0,0 +1,102 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PROCESSOR_H
+#define _ASM_RISCV_PROCESSOR_H
+
+#include <linux/const.h>
+
+#include <asm/ptrace.h>
+
+/*
+ * This decides where the kernel will search for a free chunk of vm
+ * space during mmap's.
+ */
+#define TASK_UNMAPPED_BASE	PAGE_ALIGN(TASK_SIZE >> 1)
+
+#ifdef __KERNEL__
+#define STACK_TOP		TASK_SIZE
+#define STACK_TOP_MAX		STACK_TOP
+#define STACK_ALIGN		16
+#endif /* __KERNEL__ */
+
+#ifndef __ASSEMBLY__
+
+struct task_struct;
+struct pt_regs;
+
+/*
+ * Default implementation of macro that returns current
+ * instruction pointer ("program counter").
+ */
+#define current_text_addr()	({ __label__ _l; _l: &&_l; })
+
+/* CPU-specific state of a task */
+struct thread_struct {
+	/* Callee-saved registers */
+	unsigned long ra;
+	unsigned long sp;	/* Kernel mode stack */
+	unsigned long s[12];	/* s[0]: frame pointer */
+	struct __riscv_d_ext_state fstate;
+};
+
+#define INIT_THREAD {					\
+	.sp = sizeof(init_stack) + (long)&init_stack,	\
+}
+
+/* Return saved (kernel) PC of a blocked thread. */
+#define thread_saved_pc(t)	((t)->thread.ra)
+#define thread_saved_sp(t)	((t)->thread.sp)
+#define thread_saved_fp(t)	((t)->thread.s[0])
+
+#define task_pt_regs(tsk)						\
+	((struct pt_regs *)(task_stack_page(tsk) + THREAD_SIZE		\
+			    - ALIGN(sizeof(struct pt_regs), STACK_ALIGN)))
+
+#define KSTK_EIP(tsk)		(task_pt_regs(tsk)->sepc)
+#define KSTK_ESP(tsk)		(task_pt_regs(tsk)->sp)
+
+
+/* Do necessary setup to start up a newly executed thread. */
+extern void start_thread(struct pt_regs *regs,
+			unsigned long pc, unsigned long sp);
+
+/* Free all resources held by a thread. */
+static inline void release_thread(struct task_struct *dead_task)
+{
+}
+
+extern unsigned long get_wchan(struct task_struct *p);
+
+
+static inline void cpu_relax(void)
+{
+#ifdef __riscv_muldiv
+	int dummy;
+	/* In lieu of a halt instruction, induce a long-latency stall. */
+	__asm__ __volatile__ ("div %0, %0, zero" : "=r" (dummy));
+#endif
+	barrier();
+}
+
+static inline void wait_for_interrupt(void)
+{
+	__asm__ __volatile__ ("wfi");
+}
+
+struct device_node;
+extern int riscv_of_processor_hart(struct device_node *node);
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PROCESSOR_H */
diff --git a/arch/riscv/include/asm/switch_to.h b/arch/riscv/include/asm/switch_to.h
new file mode 100644
index 000000000000..dd6b05bff75b
--- /dev/null
+++ b/arch/riscv/include/asm/switch_to.h
@@ -0,0 +1,69 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SWITCH_TO_H
+#define _ASM_RISCV_SWITCH_TO_H
+
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/csr.h>
+
+extern void __fstate_save(struct task_struct *save_to);
+extern void __fstate_restore(struct task_struct *restore_from);
+
+static inline void __fstate_clean(struct pt_regs *regs)
+{
+	regs->sstatus = (regs->sstatus & ~(SR_FS)) | SR_FS_CLEAN;
+}
+
+static inline void fstate_save(struct task_struct *task,
+			       struct pt_regs *regs)
+{
+	if ((regs->sstatus & SR_FS) == SR_FS_DIRTY) {
+		__fstate_save(task);
+		__fstate_clean(regs);
+	}
+}
+
+static inline void fstate_restore(struct task_struct *task,
+				  struct pt_regs *regs)
+{
+	if ((regs->sstatus & SR_FS) != SR_FS_OFF) {
+		__fstate_restore(task);
+		__fstate_clean(regs);
+	}
+}
+
+static inline void __switch_to_aux(struct task_struct *prev,
+				   struct task_struct *next)
+{
+	struct pt_regs *regs;
+
+	regs = task_pt_regs(prev);
+	if (unlikely(regs->sstatus & SR_SD))
+		fstate_save(prev, regs);
+	fstate_restore(next, task_pt_regs(next));
+}
+
+extern struct task_struct *__switch_to(struct task_struct *,
+				       struct task_struct *);
+
+#define switch_to(prev, next, last)			\
+do {							\
+	struct task_struct *__prev = (prev);		\
+	struct task_struct *__next = (next);		\
+	__switch_to_aux(__prev, __next);		\
+	((last) = __switch_to(__prev, __next));		\
+} while (0)
+
+#endif /* _ASM_RISCV_SWITCH_TO_H */
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
new file mode 100644
index 000000000000..f05b7d68c1ad
--- /dev/null
+++ b/arch/riscv/include/asm/thread_info.h
@@ -0,0 +1,96 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_THREAD_INFO_H
+#define _ASM_RISCV_THREAD_INFO_H
+
+#ifdef __KERNEL__
+
+#include <asm/page.h>
+#include <linux/const.h>
+
+/* thread information allocation */
+#define THREAD_SIZE_ORDER	(1)
+#define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)
+
+#ifndef __ASSEMBLY__
+
+#include <asm/processor.h>
+#include <asm/csr.h>
+
+typedef unsigned long mm_segment_t;
+
+/*
+ * low level task data that entry.S needs immediate access to
+ * - this struct should fit entirely inside of one cache line
+ * - if the members of this struct changes, the assembly constants
+ *   in asm-offsets.c must be updated accordingly
+ * - thread_info is included in task_struct at an offset of 0.  This means that
+ *   tp points to both thread_info and task_struct.
+ */
+struct thread_info {
+	unsigned long		flags;		/* low level flags */
+	int                     preempt_count;  /* 0=>preemptible, <0=>BUG */
+	mm_segment_t		addr_limit;
+	/* These stack pointers are overwritten on every system call or
+	 * exception.  SP is also saved to the stack so it can be recovered
+	 * when overwritten.
+	 */
+	long			kernel_sp;	/* Kernel stack pointer */
+	long			user_sp;	/* User stack pointer */
+};
+
+/*
+ * macros/functions for gaining access to the thread information structure
+ *
+ * preempt_count needs to be 1 initially, until the scheduler is functional.
+ */
+#define INIT_THREAD_INFO(tsk)			\
+{						\
+	.flags		= 0,			\
+	.preempt_count	= INIT_PREEMPT_COUNT,	\
+	.addr_limit	= KERNEL_DS,		\
+}
+
+#define init_stack		(init_thread_union.stack)
+
+#endif /* !__ASSEMBLY__ */
+
+/*
+ * thread information flags
+ * - these are process state flags that various assembly files may need to
+ *   access
+ * - pending work-to-be-done flags are in lowest half-word
+ * - other flags in upper half-word(s)
+ */
+#define TIF_SYSCALL_TRACE	0	/* syscall trace active */
+#define TIF_NOTIFY_RESUME	1	/* callback before returning to user */
+#define TIF_SIGPENDING		2	/* signal pending */
+#define TIF_NEED_RESCHED	3	/* rescheduling necessary */
+#define TIF_RESTORE_SIGMASK	4	/* restore signal mask in do_signal() */
+#define TIF_MEMDIE		5	/* is terminating due to OOM killer */
+#define TIF_SYSCALL_TRACEPOINT  6       /* syscall tracepoint instrumentation */
+
+#define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
+#define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
+#define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
+#define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
+
+#define _TIF_WORK_MASK \
+	(_TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NEED_RESCHED)
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_THREAD_INFO_H */
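
For context, the generic helpers that consume these flag bits (sketch only;
nothing here is added by this patch):

  #include <linux/sched.h>
  #include <linux/sched/signal.h>

  /* signal_pending() tests TIF_SIGPENDING and need_resched() tests
   * TIF_NEED_RESCHED on the current task; the return-to-user path checks
   * the same bits in bulk via _TIF_WORK_MASK. */
  static int example_should_break_out(void)
  {
          return signal_pending(current) || need_resched();
  }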
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
new file mode 100644
index 000000000000..2ead5037528c
--- /dev/null
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -0,0 +1,316 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/kbuild.h>
+#include <linux/sched.h>
+#include <asm/thread_info.h>
+#include <asm/ptrace.h>
+
+void asm_offsets(void)
+{
+	OFFSET(TASK_THREAD_RA, task_struct, thread.ra);
+	OFFSET(TASK_THREAD_SP, task_struct, thread.sp);
+	OFFSET(TASK_THREAD_S0, task_struct, thread.s[0]);
+	OFFSET(TASK_THREAD_S1, task_struct, thread.s[1]);
+	OFFSET(TASK_THREAD_S2, task_struct, thread.s[2]);
+	OFFSET(TASK_THREAD_S3, task_struct, thread.s[3]);
+	OFFSET(TASK_THREAD_S4, task_struct, thread.s[4]);
+	OFFSET(TASK_THREAD_S5, task_struct, thread.s[5]);
+	OFFSET(TASK_THREAD_S6, task_struct, thread.s[6]);
+	OFFSET(TASK_THREAD_S7, task_struct, thread.s[7]);
+	OFFSET(TASK_THREAD_S8, task_struct, thread.s[8]);
+	OFFSET(TASK_THREAD_S9, task_struct, thread.s[9]);
+	OFFSET(TASK_THREAD_S10, task_struct, thread.s[10]);
+	OFFSET(TASK_THREAD_S11, task_struct, thread.s[11]);
+	OFFSET(TASK_THREAD_SP, task_struct, thread.sp);
+	OFFSET(TASK_STACK, task_struct, stack);
+	OFFSET(TASK_TI, task_struct, thread_info);
+	OFFSET(TASK_TI_FLAGS, task_struct, thread_info.flags);
+	OFFSET(TASK_TI_KERNEL_SP, task_struct, thread_info.kernel_sp);
+	OFFSET(TASK_TI_USER_SP, task_struct, thread_info.user_sp);
+
+	OFFSET(TASK_THREAD_F0,  task_struct, thread.fstate.f[0]);
+	OFFSET(TASK_THREAD_F1,  task_struct, thread.fstate.f[1]);
+	OFFSET(TASK_THREAD_F2,  task_struct, thread.fstate.f[2]);
+	OFFSET(TASK_THREAD_F3,  task_struct, thread.fstate.f[3]);
+	OFFSET(TASK_THREAD_F4,  task_struct, thread.fstate.f[4]);
+	OFFSET(TASK_THREAD_F5,  task_struct, thread.fstate.f[5]);
+	OFFSET(TASK_THREAD_F6,  task_struct, thread.fstate.f[6]);
+	OFFSET(TASK_THREAD_F7,  task_struct, thread.fstate.f[7]);
+	OFFSET(TASK_THREAD_F8,  task_struct, thread.fstate.f[8]);
+	OFFSET(TASK_THREAD_F9,  task_struct, thread.fstate.f[9]);
+	OFFSET(TASK_THREAD_F10, task_struct, thread.fstate.f[10]);
+	OFFSET(TASK_THREAD_F11, task_struct, thread.fstate.f[11]);
+	OFFSET(TASK_THREAD_F12, task_struct, thread.fstate.f[12]);
+	OFFSET(TASK_THREAD_F13, task_struct, thread.fstate.f[13]);
+	OFFSET(TASK_THREAD_F14, task_struct, thread.fstate.f[14]);
+	OFFSET(TASK_THREAD_F15, task_struct, thread.fstate.f[15]);
+	OFFSET(TASK_THREAD_F16, task_struct, thread.fstate.f[16]);
+	OFFSET(TASK_THREAD_F17, task_struct, thread.fstate.f[17]);
+	OFFSET(TASK_THREAD_F18, task_struct, thread.fstate.f[18]);
+	OFFSET(TASK_THREAD_F19, task_struct, thread.fstate.f[19]);
+	OFFSET(TASK_THREAD_F20, task_struct, thread.fstate.f[20]);
+	OFFSET(TASK_THREAD_F21, task_struct, thread.fstate.f[21]);
+	OFFSET(TASK_THREAD_F22, task_struct, thread.fstate.f[22]);
+	OFFSET(TASK_THREAD_F23, task_struct, thread.fstate.f[23]);
+	OFFSET(TASK_THREAD_F24, task_struct, thread.fstate.f[24]);
+	OFFSET(TASK_THREAD_F25, task_struct, thread.fstate.f[25]);
+	OFFSET(TASK_THREAD_F26, task_struct, thread.fstate.f[26]);
+	OFFSET(TASK_THREAD_F27, task_struct, thread.fstate.f[27]);
+	OFFSET(TASK_THREAD_F28, task_struct, thread.fstate.f[28]);
+	OFFSET(TASK_THREAD_F29, task_struct, thread.fstate.f[29]);
+	OFFSET(TASK_THREAD_F30, task_struct, thread.fstate.f[30]);
+	OFFSET(TASK_THREAD_F31, task_struct, thread.fstate.f[31]);
+	OFFSET(TASK_THREAD_FCSR, task_struct, thread.fstate.fcsr);
+
+	DEFINE(PT_SIZE, sizeof(struct pt_regs));
+	OFFSET(PT_SEPC, pt_regs, sepc);
+	OFFSET(PT_RA, pt_regs, ra);
+	OFFSET(PT_FP, pt_regs, s0);
+	OFFSET(PT_S0, pt_regs, s0);
+	OFFSET(PT_S1, pt_regs, s1);
+	OFFSET(PT_S2, pt_regs, s2);
+	OFFSET(PT_S3, pt_regs, s3);
+	OFFSET(PT_S4, pt_regs, s4);
+	OFFSET(PT_S5, pt_regs, s5);
+	OFFSET(PT_S6, pt_regs, s6);
+	OFFSET(PT_S7, pt_regs, s7);
+	OFFSET(PT_S8, pt_regs, s8);
+	OFFSET(PT_S9, pt_regs, s9);
+	OFFSET(PT_S10, pt_regs, s10);
+	OFFSET(PT_S11, pt_regs, s11);
+	OFFSET(PT_SP, pt_regs, sp);
+	OFFSET(PT_TP, pt_regs, tp);
+	OFFSET(PT_A0, pt_regs, a0);
+	OFFSET(PT_A1, pt_regs, a1);
+	OFFSET(PT_A2, pt_regs, a2);
+	OFFSET(PT_A3, pt_regs, a3);
+	OFFSET(PT_A4, pt_regs, a4);
+	OFFSET(PT_A5, pt_regs, a5);
+	OFFSET(PT_A6, pt_regs, a6);
+	OFFSET(PT_A7, pt_regs, a7);
+	OFFSET(PT_T0, pt_regs, t0);
+	OFFSET(PT_T1, pt_regs, t1);
+	OFFSET(PT_T2, pt_regs, t2);
+	OFFSET(PT_T3, pt_regs, t3);
+	OFFSET(PT_T4, pt_regs, t4);
+	OFFSET(PT_T5, pt_regs, t5);
+	OFFSET(PT_T6, pt_regs, t6);
+	OFFSET(PT_GP, pt_regs, gp);
+	OFFSET(PT_SSTATUS, pt_regs, sstatus);
+	OFFSET(PT_SBADADDR, pt_regs, sbadaddr);
+	OFFSET(PT_SCAUSE, pt_regs, scause);
+
+	/* THREAD_{F,X}* might be larger than a S-type offset can handle, but
+	 * these are used in performance-sensitive assembly so we can't resort
+	 * to loading the long immediate every time.
+	 */
+	DEFINE(TASK_THREAD_RA_RA,
+		  offsetof(struct task_struct, thread.ra)
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_SP_RA,
+		  offsetof(struct task_struct, thread.sp)
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S0_RA,
+		  offsetof(struct task_struct, thread.s[0])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S1_RA,
+		  offsetof(struct task_struct, thread.s[1])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S2_RA,
+		  offsetof(struct task_struct, thread.s[2])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S3_RA,
+		  offsetof(struct task_struct, thread.s[3])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S4_RA,
+		  offsetof(struct task_struct, thread.s[4])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S5_RA,
+		  offsetof(struct task_struct, thread.s[5])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S6_RA,
+		  offsetof(struct task_struct, thread.s[6])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S7_RA,
+		  offsetof(struct task_struct, thread.s[7])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S8_RA,
+		  offsetof(struct task_struct, thread.s[8])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S9_RA,
+		  offsetof(struct task_struct, thread.s[9])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S10_RA,
+		  offsetof(struct task_struct, thread.s[10])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S11_RA,
+		  offsetof(struct task_struct, thread.s[11])
+		- offsetof(struct task_struct, thread.ra)
+	);
+
+	DEFINE(TASK_THREAD_F0_F0,
+		  offsetof(struct task_struct, thread.fstate.f[0])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F1_F0,
+		  offsetof(struct task_struct, thread.fstate.f[1])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F2_F0,
+		  offsetof(struct task_struct, thread.fstate.f[2])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F3_F0,
+		  offsetof(struct task_struct, thread.fstate.f[3])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F4_F0,
+		  offsetof(struct task_struct, thread.fstate.f[4])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F5_F0,
+		  offsetof(struct task_struct, thread.fstate.f[5])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F6_F0,
+		  offsetof(struct task_struct, thread.fstate.f[6])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F7_F0,
+		  offsetof(struct task_struct, thread.fstate.f[7])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F8_F0,
+		  offsetof(struct task_struct, thread.fstate.f[8])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F9_F0,
+		  offsetof(struct task_struct, thread.fstate.f[9])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F10_F0,
+		  offsetof(struct task_struct, thread.fstate.f[10])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F11_F0,
+		  offsetof(struct task_struct, thread.fstate.f[11])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F12_F0,
+		  offsetof(struct task_struct, thread.fstate.f[12])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F13_F0,
+		  offsetof(struct task_struct, thread.fstate.f[13])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F14_F0,
+		  offsetof(struct task_struct, thread.fstate.f[14])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F15_F0,
+		  offsetof(struct task_struct, thread.fstate.f[15])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F16_F0,
+		  offsetof(struct task_struct, thread.fstate.f[16])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F17_F0,
+		  offsetof(struct task_struct, thread.fstate.f[17])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F18_F0,
+		  offsetof(struct task_struct, thread.fstate.f[18])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F19_F0,
+		  offsetof(struct task_struct, thread.fstate.f[19])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F20_F0,
+		  offsetof(struct task_struct, thread.fstate.f[20])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F21_F0,
+		  offsetof(struct task_struct, thread.fstate.f[21])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F22_F0,
+		  offsetof(struct task_struct, thread.fstate.f[22])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F23_F0,
+		  offsetof(struct task_struct, thread.fstate.f[23])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F24_F0,
+		  offsetof(struct task_struct, thread.fstate.f[24])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F25_F0,
+		  offsetof(struct task_struct, thread.fstate.f[25])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F26_F0,
+		  offsetof(struct task_struct, thread.fstate.f[26])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F27_F0,
+		  offsetof(struct task_struct, thread.fstate.f[27])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F28_F0,
+		  offsetof(struct task_struct, thread.fstate.f[28])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F29_F0,
+		  offsetof(struct task_struct, thread.fstate.f[29])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F30_F0,
+		  offsetof(struct task_struct, thread.fstate.f[30])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F31_F0,
+		  offsetof(struct task_struct, thread.fstate.f[31])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_FCSR_F0,
+		  offsetof(struct task_struct, thread.fstate.fcsr)
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+
+	/* The assembler needs access to THREAD_SIZE as well. */
+	DEFINE(ASM_THREAD_SIZE, THREAD_SIZE);
+
+	/* We allocate a pt_regs on the stack when entering the kernel.  This
+	 * ensures the alignment is sane.
+	 */
+	DEFINE(PT_SIZE_ON_STACK, ALIGN(sizeof(struct pt_regs), STACK_ALIGN));
+}
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
new file mode 100644
index 000000000000..aa4593c2eaef
--- /dev/null
+++ b/arch/riscv/kernel/entry.S
@@ -0,0 +1,436 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+
+#include <asm/asm.h>
+#include <asm/csr.h>
+#include <asm/unistd.h>
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+
+	.text
+	.altmacro
+
+/* Prepares to enter a system call or exception by saving all registers to the
+ * stack.
+ */
+	.macro SAVE_ALL
+	LOCAL _restore_kernel_tpsp
+	LOCAL _save_context
+
+	/* If coming from userspace, preserve the user thread pointer and load
+	   the kernel thread pointer.  If we came from the kernel, sscratch
+	   will contain 0, and we should continue on the current TP. */
+	csrrw tp, sscratch, tp
+	bnez tp, _save_context
+
+_restore_kernel_tpsp:
+	csrr tp, sscratch
+	REG_S sp, TASK_TI_KERNEL_SP(tp)
+_save_context:
+	REG_S sp, TASK_TI_USER_SP(tp)
+	REG_L sp, TASK_TI_KERNEL_SP(tp)
+	addi sp, sp, -(PT_SIZE_ON_STACK)
+	REG_S x1,  PT_RA(sp)
+	REG_S x3,  PT_GP(sp)
+	REG_S x5,  PT_T0(sp)
+	REG_S x6,  PT_T1(sp)
+	REG_S x7,  PT_T2(sp)
+	REG_S x8,  PT_S0(sp)
+	REG_S x9,  PT_S1(sp)
+	REG_S x10, PT_A0(sp)
+	REG_S x11, PT_A1(sp)
+	REG_S x12, PT_A2(sp)
+	REG_S x13, PT_A3(sp)
+	REG_S x14, PT_A4(sp)
+	REG_S x15, PT_A5(sp)
+	REG_S x16, PT_A6(sp)
+	REG_S x17, PT_A7(sp)
+	REG_S x18, PT_S2(sp)
+	REG_S x19, PT_S3(sp)
+	REG_S x20, PT_S4(sp)
+	REG_S x21, PT_S5(sp)
+	REG_S x22, PT_S6(sp)
+	REG_S x23, PT_S7(sp)
+	REG_S x24, PT_S8(sp)
+	REG_S x25, PT_S9(sp)
+	REG_S x26, PT_S10(sp)
+	REG_S x27, PT_S11(sp)
+	REG_S x28, PT_T3(sp)
+	REG_S x29, PT_T4(sp)
+	REG_S x30, PT_T5(sp)
+	REG_S x31, PT_T6(sp)
+
+	/* Disable FPU to detect illegal usage of
+	   floating point in kernel space */
+	li t0, SR_FS
+
+	REG_L s0, TASK_TI_USER_SP(tp)
+	csrrc s1, sstatus, t0
+	csrr s2, sepc
+	csrr s3, sbadaddr
+	csrr s4, scause
+	csrr s5, sscratch
+	REG_S s0, PT_SP(sp)
+	REG_S s1, PT_SSTATUS(sp)
+	REG_S s2, PT_SEPC(sp)
+	REG_S s3, PT_SBADADDR(sp)
+	REG_S s4, PT_SCAUSE(sp)
+	REG_S s5, PT_TP(sp)
+	.endm
+
+/* Prepares to return from a system call or exception by restoring all
+ * registers from the stack.
+ */
+	.macro RESTORE_ALL
+	REG_L a0, PT_SSTATUS(sp)
+	REG_L a2, PT_SEPC(sp)
+	csrw sstatus, a0
+	csrw sepc, a2
+
+	REG_L x1,  PT_RA(sp)
+	REG_L x3,  PT_GP(sp)
+	REG_L x4,  PT_TP(sp)
+	REG_L x5,  PT_T0(sp)
+	REG_L x6,  PT_T1(sp)
+	REG_L x7,  PT_T2(sp)
+	REG_L x8,  PT_S0(sp)
+	REG_L x9,  PT_S1(sp)
+	REG_L x10, PT_A0(sp)
+	REG_L x11, PT_A1(sp)
+	REG_L x12, PT_A2(sp)
+	REG_L x13, PT_A3(sp)
+	REG_L x14, PT_A4(sp)
+	REG_L x15, PT_A5(sp)
+	REG_L x16, PT_A6(sp)
+	REG_L x17, PT_A7(sp)
+	REG_L x18, PT_S2(sp)
+	REG_L x19, PT_S3(sp)
+	REG_L x20, PT_S4(sp)
+	REG_L x21, PT_S5(sp)
+	REG_L x22, PT_S6(sp)
+	REG_L x23, PT_S7(sp)
+	REG_L x24, PT_S8(sp)
+	REG_L x25, PT_S9(sp)
+	REG_L x26, PT_S10(sp)
+	REG_L x27, PT_S11(sp)
+	REG_L x28, PT_T3(sp)
+	REG_L x29, PT_T4(sp)
+	REG_L x30, PT_T5(sp)
+	REG_L x31, PT_T6(sp)
+
+	REG_L x2,  PT_SP(sp)
+	.endm
+
+ENTRY(handle_exception)
+	SAVE_ALL
+
+	/* Set sscratch register to 0, so that if a recursive exception
+	   occurs, the exception vector knows it came from the kernel */
+	csrw sscratch, x0
+
+	la gp, __global_pointer$
+
+	la ra, ret_from_exception
+	/* MSB of cause differentiates between
+	   interrupts and exceptions */
+	bge s4, zero, 1f
+
+	/* Handle interrupts */
+	slli a0, s4, 1
+	srli a0, a0, 1
+	move a1, sp /* pt_regs */
+	tail do_IRQ
+1:
+	/* Handle syscalls */
+	li t0, EXC_SYSCALL
+	beq s4, t0, handle_syscall
+
+	/* Handle other exceptions */
+	slli t0, s4, RISCV_LGPTR
+	la t1, excp_vect_table
+	la t2, excp_vect_table_end
+	move a0, sp /* pt_regs */
+	add t0, t1, t0
+	/* Check if exception code lies within bounds */
+	bgeu t0, t2, 1f
+	REG_L t0, 0(t0)
+	jr t0
+1:
+	tail do_trap_unknown
+
+handle_syscall:
+	/* Advance SEPC to avoid executing the original
+	   scall instruction on sret */
+	addi s2, s2, 0x4
+	REG_S s2, PT_SEPC(sp)
+	/* System calls run with interrupts enabled */
+	csrs sstatus, SR_IE
+	/* Trace syscalls, but only if requested by the user. */
+	REG_L t0, TASK_TI_FLAGS(tp)
+	andi t0, t0, _TIF_SYSCALL_TRACE
+	bnez t0, handle_syscall_trace_enter
+check_syscall_nr:
+	/* Check to make sure we don't jump to a bogus syscall number. */
+	li t0, __NR_syscalls
+	la s0, sys_ni_syscall
+	/* Syscall number held in a7 */
+	bgeu a7, t0, 1f
+	la s0, sys_call_table
+	slli t0, a7, RISCV_LGPTR
+	add s0, s0, t0
+	REG_L s0, 0(s0)
+1:
+	jalr s0
+
+ret_from_syscall:
+	/* Set user a0 to kernel a0 */
+	REG_S a0, PT_A0(sp)
+	/* Trace syscalls, but only if requested by the user. */
+	REG_L t0, TASK_TI_FLAGS(tp)
+	andi t0, t0, _TIF_SYSCALL_TRACE
+	bnez t0, handle_syscall_trace_exit
+
+ret_from_exception:
+	REG_L s0, PT_SSTATUS(sp)
+	csrc sstatus, SR_IE
+	andi s0, s0, SR_PS
+	bnez s0, restore_all
+
+resume_userspace:
+	/* Interrupts must be disabled here so flags are checked atomically */
+	REG_L s0, TASK_TI_FLAGS(tp) /* current_thread_info->flags */
+	andi s1, s0, _TIF_WORK_MASK
+	bnez s1, work_pending
+
+	/* Save unwound kernel stack pointer in thread_info */
+	addi s0, sp, PT_SIZE_ON_STACK
+	REG_S s0, TASK_TI_KERNEL_SP(tp)
+
+	/* Save TP into sscratch, so we can find the kernel data structures
+	 * again. */
+	csrw sscratch, tp
+
+restore_all:
+	RESTORE_ALL
+	sret
+
+work_pending:
+	/* Enter slow path for supplementary processing */
+	la ra, ret_from_exception
+	andi s1, s0, _TIF_NEED_RESCHED
+	bnez s1, work_resched
+work_notifysig:
+	/* Handle pending signals and notify-resume requests */
+	csrs sstatus, SR_IE /* Enable interrupts for do_notify_resume() */
+	move a0, sp /* pt_regs */
+	move a1, s0 /* current_thread_info->flags */
+	tail do_notify_resume
+work_resched:
+	tail schedule
+
+/* Slow paths for ptrace. */
+handle_syscall_trace_enter:
+	move a0, sp
+	call do_syscall_trace_enter
+	REG_L a0, PT_A0(sp)
+	REG_L a1, PT_A1(sp)
+	REG_L a2, PT_A2(sp)
+	REG_L a3, PT_A3(sp)
+	REG_L a4, PT_A4(sp)
+	REG_L a5, PT_A5(sp)
+	REG_L a6, PT_A6(sp)
+	REG_L a7, PT_A7(sp)
+	j check_syscall_nr
+handle_syscall_trace_exit:
+	move a0, sp
+	call do_syscall_trace_exit
+	j ret_from_exception
+
+END(handle_exception)
+
+ENTRY(ret_from_fork)
+	la ra, ret_from_exception
+	tail schedule_tail
+ENDPROC(ret_from_fork)
+
+ENTRY(ret_from_kernel_thread)
+	call schedule_tail
+	/* Call fn(arg) */
+	la ra, ret_from_exception
+	move a0, s1
+	jr s0
+ENDPROC(ret_from_kernel_thread)
+
+
+/*
+ * Integer register context switch
+ * The callee-saved registers must be saved and restored.
+ *
+ *   a0: previous task_struct (must be preserved across the switch)
+ *   a1: next task_struct
+ */
+ENTRY(__switch_to)
+	/* Save context into prev->thread */
+	li    a2,  TASK_THREAD_RA
+	add   a0, a0, a2
+	add   a2, a1, a2
+	REG_S ra,  TASK_THREAD_RA_RA(a0)
+	REG_S sp,  TASK_THREAD_SP_RA(a0)
+	REG_S s0,  TASK_THREAD_S0_RA(a0)
+	REG_S s1,  TASK_THREAD_S1_RA(a0)
+	REG_S s2,  TASK_THREAD_S2_RA(a0)
+	REG_S s3,  TASK_THREAD_S3_RA(a0)
+	REG_S s4,  TASK_THREAD_S4_RA(a0)
+	REG_S s5,  TASK_THREAD_S5_RA(a0)
+	REG_S s6,  TASK_THREAD_S6_RA(a0)
+	REG_S s7,  TASK_THREAD_S7_RA(a0)
+	REG_S s8,  TASK_THREAD_S8_RA(a0)
+	REG_S s9,  TASK_THREAD_S9_RA(a0)
+	REG_S s10, TASK_THREAD_S10_RA(a0)
+	REG_S s11, TASK_THREAD_S11_RA(a0)
+	/* Restore context from next->thread */
+	REG_L ra,  TASK_THREAD_RA_RA(a2)
+	REG_L sp,  TASK_THREAD_SP_RA(a2)
+	REG_L s0,  TASK_THREAD_S0_RA(a2)
+	REG_L s1,  TASK_THREAD_S1_RA(a2)
+	REG_L s2,  TASK_THREAD_S2_RA(a2)
+	REG_L s3,  TASK_THREAD_S3_RA(a2)
+	REG_L s4,  TASK_THREAD_S4_RA(a2)
+	REG_L s5,  TASK_THREAD_S5_RA(a2)
+	REG_L s6,  TASK_THREAD_S6_RA(a2)
+	REG_L s7,  TASK_THREAD_S7_RA(a2)
+	REG_L s8,  TASK_THREAD_S8_RA(a2)
+	REG_L s9,  TASK_THREAD_S9_RA(a2)
+	REG_L s10, TASK_THREAD_S10_RA(a2)
+	REG_L s11, TASK_THREAD_S11_RA(a2)
+#if TASK_TI != 0
+#error "TASK_TI != 0: tp will contain a 'struct thread_info', not a 'struct task_struct' so get_current() won't work."
+	addi tp, a1, TASK_TI
+#else
+	move tp, a1
+#endif
+	ret
+ENDPROC(__switch_to)
+
+ENTRY(__fstate_save)
+	li  a2,  TASK_THREAD_F0
+	add a0, a0, a2
+	li t1, SR_FS
+	csrs sstatus, t1
+	frcsr t0
+	fsd f0,  TASK_THREAD_F0_F0(a0)
+	fsd f1,  TASK_THREAD_F1_F0(a0)
+	fsd f2,  TASK_THREAD_F2_F0(a0)
+	fsd f3,  TASK_THREAD_F3_F0(a0)
+	fsd f4,  TASK_THREAD_F4_F0(a0)
+	fsd f5,  TASK_THREAD_F5_F0(a0)
+	fsd f6,  TASK_THREAD_F6_F0(a0)
+	fsd f7,  TASK_THREAD_F7_F0(a0)
+	fsd f8,  TASK_THREAD_F8_F0(a0)
+	fsd f9,  TASK_THREAD_F9_F0(a0)
+	fsd f10, TASK_THREAD_F10_F0(a0)
+	fsd f11, TASK_THREAD_F11_F0(a0)
+	fsd f12, TASK_THREAD_F12_F0(a0)
+	fsd f13, TASK_THREAD_F13_F0(a0)
+	fsd f14, TASK_THREAD_F14_F0(a0)
+	fsd f15, TASK_THREAD_F15_F0(a0)
+	fsd f16, TASK_THREAD_F16_F0(a0)
+	fsd f17, TASK_THREAD_F17_F0(a0)
+	fsd f18, TASK_THREAD_F18_F0(a0)
+	fsd f19, TASK_THREAD_F19_F0(a0)
+	fsd f20, TASK_THREAD_F20_F0(a0)
+	fsd f21, TASK_THREAD_F21_F0(a0)
+	fsd f22, TASK_THREAD_F22_F0(a0)
+	fsd f23, TASK_THREAD_F23_F0(a0)
+	fsd f24, TASK_THREAD_F24_F0(a0)
+	fsd f25, TASK_THREAD_F25_F0(a0)
+	fsd f26, TASK_THREAD_F26_F0(a0)
+	fsd f27, TASK_THREAD_F27_F0(a0)
+	fsd f28, TASK_THREAD_F28_F0(a0)
+	fsd f29, TASK_THREAD_F29_F0(a0)
+	fsd f30, TASK_THREAD_F30_F0(a0)
+	fsd f31, TASK_THREAD_F31_F0(a0)
+	sw t0, TASK_THREAD_FCSR_F0(a0)
+	csrc sstatus, t1
+	ret
+ENDPROC(__fstate_save)
+
+ENTRY(__fstate_restore)
+	li  a2,  TASK_THREAD_F0
+	add a0, a0, a2
+	li t1, SR_FS
+	lw t0, TASK_THREAD_FCSR_F0(a0)
+	csrs sstatus, t1
+	fld f0,  TASK_THREAD_F0_F0(a0)
+	fld f1,  TASK_THREAD_F1_F0(a0)
+	fld f2,  TASK_THREAD_F2_F0(a0)
+	fld f3,  TASK_THREAD_F3_F0(a0)
+	fld f4,  TASK_THREAD_F4_F0(a0)
+	fld f5,  TASK_THREAD_F5_F0(a0)
+	fld f6,  TASK_THREAD_F6_F0(a0)
+	fld f7,  TASK_THREAD_F7_F0(a0)
+	fld f8,  TASK_THREAD_F8_F0(a0)
+	fld f9,  TASK_THREAD_F9_F0(a0)
+	fld f10, TASK_THREAD_F10_F0(a0)
+	fld f11, TASK_THREAD_F11_F0(a0)
+	fld f12, TASK_THREAD_F12_F0(a0)
+	fld f13, TASK_THREAD_F13_F0(a0)
+	fld f14, TASK_THREAD_F14_F0(a0)
+	fld f15, TASK_THREAD_F15_F0(a0)
+	fld f16, TASK_THREAD_F16_F0(a0)
+	fld f17, TASK_THREAD_F17_F0(a0)
+	fld f18, TASK_THREAD_F18_F0(a0)
+	fld f19, TASK_THREAD_F19_F0(a0)
+	fld f20, TASK_THREAD_F20_F0(a0)
+	fld f21, TASK_THREAD_F21_F0(a0)
+	fld f22, TASK_THREAD_F22_F0(a0)
+	fld f23, TASK_THREAD_F23_F0(a0)
+	fld f24, TASK_THREAD_F24_F0(a0)
+	fld f25, TASK_THREAD_F25_F0(a0)
+	fld f26, TASK_THREAD_F26_F0(a0)
+	fld f27, TASK_THREAD_F27_F0(a0)
+	fld f28, TASK_THREAD_F28_F0(a0)
+	fld f29, TASK_THREAD_F29_F0(a0)
+	fld f30, TASK_THREAD_F30_F0(a0)
+	fld f31, TASK_THREAD_F31_F0(a0)
+	fscsr t0
+	csrc sstatus, t1
+	ret
+ENDPROC(__fstate_restore)
+
+
+	.section ".rodata"
+	/* Exception vector table */
+ENTRY(excp_vect_table)
+	RISCV_PTR do_trap_insn_misaligned
+	RISCV_PTR do_trap_insn_fault
+	RISCV_PTR do_trap_insn_illegal
+	RISCV_PTR do_trap_break
+	RISCV_PTR do_trap_load_misaligned
+	RISCV_PTR do_trap_load_fault
+	RISCV_PTR do_trap_store_misaligned
+	RISCV_PTR do_trap_store_fault
+	RISCV_PTR do_trap_ecall_u /* system call, gets intercepted */
+	RISCV_PTR do_trap_ecall_s
+	RISCV_PTR do_trap_unknown
+	RISCV_PTR do_trap_ecall_m
+	RISCV_PTR do_page_fault   /* instruction page fault */
+	RISCV_PTR do_page_fault   /* load page fault */
+	RISCV_PTR do_trap_unknown
+	RISCV_PTR do_page_fault   /* store page fault */
+excp_vect_table_end:
+END(excp_vect_table)
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
new file mode 100644
index 000000000000..b13d3ea3bf79
--- /dev/null
+++ b/arch/riscv/kernel/process.c
@@ -0,0 +1,131 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/sched/task_stack.h>
+#include <linux/tick.h>
+#include <linux/ptrace.h>
+
+#include <asm/unistd.h>
+#include <asm/uaccess.h>
+#include <asm/processor.h>
+#include <asm/csr.h>
+#include <asm/string.h>
+#include <asm/switch_to.h>
+
+extern asmlinkage void ret_from_fork(void);
+extern asmlinkage void ret_from_kernel_thread(void);
+
+void arch_cpu_idle(void)
+{
+	wait_for_interrupt();
+	local_irq_enable();
+}
+
+void show_regs(struct pt_regs *regs)
+{
+	show_regs_print_info(KERN_DEFAULT);
+
+	printk(KERN_CONT "sepc: " REG_FMT " ra : " REG_FMT " sp : " REG_FMT "\n",
+		regs->sepc, regs->ra, regs->sp);
+	printk(KERN_CONT " gp : " REG_FMT " tp : " REG_FMT " t0 : " REG_FMT "\n",
+		regs->gp, regs->tp, regs->t0);
+	printk(KERN_CONT " t1 : " REG_FMT " t2 : " REG_FMT " s0 : " REG_FMT "\n",
+		regs->t1, regs->t2, regs->s0);
+	printk(KERN_CONT " s1 : " REG_FMT " a0 : " REG_FMT " a1 : " REG_FMT "\n",
+		regs->s1, regs->a0, regs->a1);
+	printk(KERN_CONT " a2 : " REG_FMT " a3 : " REG_FMT " a4 : " REG_FMT "\n",
+		regs->a2, regs->a3, regs->a4);
+	printk(KERN_CONT " a5 : " REG_FMT " a6 : " REG_FMT " a7 : " REG_FMT "\n",
+		regs->a5, regs->a6, regs->a7);
+	printk(KERN_CONT " s2 : " REG_FMT " s3 : " REG_FMT " s4 : " REG_FMT "\n",
+		regs->s2, regs->s3, regs->s4);
+	printk(KERN_CONT " s5 : " REG_FMT " s6 : " REG_FMT " s7 : " REG_FMT "\n",
+		regs->s5, regs->s6, regs->s7);
+	printk(KERN_CONT " s8 : " REG_FMT " s9 : " REG_FMT " s10: " REG_FMT "\n",
+		regs->s8, regs->s9, regs->s10);
+	printk(KERN_CONT " s11: " REG_FMT " t3 : " REG_FMT " t4 : " REG_FMT "\n",
+		regs->s11, regs->t3, regs->t4);
+	printk(KERN_CONT " t5 : " REG_FMT " t6 : " REG_FMT "\n",
+		regs->t5, regs->t6);
+
+	printk(KERN_CONT "sstatus: " REG_FMT " sbadaddr: " REG_FMT " scause: " REG_FMT "\n",
+		regs->sstatus, regs->sbadaddr, regs->scause);
+}
+
+void start_thread(struct pt_regs *regs, unsigned long pc,
+	unsigned long sp)
+{
+	regs->sstatus = SR_PIE /* User mode, irqs on */ | SR_FS_INITIAL;
+#ifndef CONFIG_RV_PUM
+	regs->sstatus |= SR_SUM;
+#endif
+	regs->sepc = pc;
+	regs->sp = sp;
+	set_fs(USER_DS);
+}
+
+void flush_thread(void)
+{
+	/* Reset FPU context
+	 *	frm: round to nearest, ties to even (IEEE default)
+	 *	fflags: accrued exceptions cleared
+	 */
+	memset(&current->thread.fstate, 0, sizeof(current->thread.fstate));
+}
+
+int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+{
+	fstate_save(src, task_pt_regs(src));
+	*dst = *src;
+	return 0;
+}
+
+int copy_thread(unsigned long clone_flags, unsigned long usp,
+	unsigned long arg, struct task_struct *p)
+{
+	struct pt_regs *childregs = task_pt_regs(p);
+
+	/* p->thread holds context to be restored by __switch_to() */
+	if (unlikely(p->flags & PF_KTHREAD)) {
+		/* Kernel thread */
+		const register unsigned long gp __asm__ ("gp");
+		memset(childregs, 0, sizeof(struct pt_regs));
+		childregs->gp = gp;
+		childregs->sstatus = SR_PS | SR_PIE; /* Supervisor, irqs on */
+
+		p->thread.ra = (unsigned long)ret_from_kernel_thread;
+		p->thread.s[0] = usp; /* fn */
+		p->thread.s[1] = arg;
+	} else {
+		*childregs = *(current_pt_regs());
+		if (usp) /* User fork */
+			childregs->sp = usp;
+		if (clone_flags & CLONE_SETTLS)
+			childregs->tp = childregs->a5;
+		childregs->a0 = 0; /* Return value of fork() */
+		p->thread.ra = (unsigned long)ret_from_fork;
+	}
+	p->thread.sp = (unsigned long)childregs; /* kernel sp */
+	return 0;
+}
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 5/9] RISC-V: Task implementation
  2017-06-28 18:55     ` Palmer Dabbelt
                       ` (7 preceding siblings ...)
  (?)
@ 2017-06-28 18:55     ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This patch contains the implementation of tasks on RISC-V, most of which
is involved in task switching.
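
As a quick orientation for reviewers (this sketch is not part of the patch,
and the caller name below is only illustrative), the pieces fit together
roughly as follows: the generic scheduler invokes the switch_to() macro from
asm/switch_to.h, which lazily saves/restores the FP state and then swaps the
integer callee-saved registers in __switch_to() from entry.S:

	/* Simplified, hypothetical caller; the real call site is
	 * context_switch() in kernel/sched/core.c. */
	static void context_switch_sketch(struct task_struct *prev,
					  struct task_struct *next)
	{
		struct task_struct *last;

		/* __switch_to_aux() saves prev's FP registers only if the
		 * sstatus FS field marks them dirty and restores next's FP
		 * state; __switch_to() then swaps ra/sp/s0-s11 and points tp
		 * at next's task_struct, which is what get_current() reads. */
		switch_to(prev, next, last);
	}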

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/asm-offsets.h |   1 +
 arch/riscv/include/asm/current.h     |  43 ++++
 arch/riscv/include/asm/kprobes.h     |  22 ++
 arch/riscv/include/asm/processor.h   | 102 ++++++++
 arch/riscv/include/asm/switch_to.h   |  69 ++++++
 arch/riscv/include/asm/thread_info.h |  96 ++++++++
 arch/riscv/kernel/asm-offsets.c      | 316 +++++++++++++++++++++++++
 arch/riscv/kernel/entry.S            | 436 +++++++++++++++++++++++++++++++++++
 arch/riscv/kernel/process.c          | 131 +++++++++++
 9 files changed, 1216 insertions(+)
 create mode 100644 arch/riscv/include/asm/asm-offsets.h
 create mode 100644 arch/riscv/include/asm/current.h
 create mode 100644 arch/riscv/include/asm/kprobes.h
 create mode 100644 arch/riscv/include/asm/processor.h
 create mode 100644 arch/riscv/include/asm/switch_to.h
 create mode 100644 arch/riscv/include/asm/thread_info.h
 create mode 100644 arch/riscv/kernel/asm-offsets.c
 create mode 100644 arch/riscv/kernel/entry.S
 create mode 100644 arch/riscv/kernel/process.c

diff --git a/arch/riscv/include/asm/asm-offsets.h b/arch/riscv/include/asm/asm-offsets.h
new file mode 100644
index 000000000000..d370ee36a182
--- /dev/null
+++ b/arch/riscv/include/asm/asm-offsets.h
@@ -0,0 +1 @@
+#include <generated/asm-offsets.h>
diff --git a/arch/riscv/include/asm/current.h b/arch/riscv/include/asm/current.h
new file mode 100644
index 000000000000..032a24ba5308
--- /dev/null
+++ b/arch/riscv/include/asm/current.h
@@ -0,0 +1,43 @@
+/*
+ * Based on arm/arm64/include/asm/current.h
+ *
+ * Copyright (C) 2016 ARM
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef __ASM_CURRENT_H
+#define __ASM_CURRENT_H
+
+#include <linux/compiler.h>
+
+#ifndef __ASSEMBLY__
+
+struct task_struct;
+
+/* This only works because "struct thread_info" is at offset 0 from "struct
+ * task_struct".  This constraint seems to be necessary on other architectures
+ * as well, but __switch_to enforces it.  We can't check TASK_TI here because
+ * <asm/asm-offsets.h> includes this, and I can't get the definition of "struct
+ * task_struct" here due to some header ordering problems.
+ */
+static __always_inline struct task_struct *get_current(void)
+{
+	register struct task_struct *tp __asm__("tp");
+	return tp;
+}
+
+#define current get_current()
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __ASM_CURRENT_H */
diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
new file mode 100644
index 000000000000..1190de7a0f74
--- /dev/null
+++ b/arch/riscv/include/asm/kprobes.h
@@ -0,0 +1,22 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef ASM_RISCV_KPROBES_H
+#define ASM_RISCV_KPROBES_H
+
+#ifdef CONFIG_KPROBES
+#error "RISC-V doesn't skpport CONFIG_KPROBES"
+#endif
+
+#endif
diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
new file mode 100644
index 000000000000..65aa014db9b4
--- /dev/null
+++ b/arch/riscv/include/asm/processor.h
@@ -0,0 +1,102 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PROCESSOR_H
+#define _ASM_RISCV_PROCESSOR_H
+
+#include <linux/const.h>
+
+#include <asm/ptrace.h>
+
+/*
+ * This decides where the kernel will search for a free chunk of vm
+ * space during mmap's.
+ */
+#define TASK_UNMAPPED_BASE	PAGE_ALIGN(TASK_SIZE >> 1)
+
+#ifdef __KERNEL__
+#define STACK_TOP		TASK_SIZE
+#define STACK_TOP_MAX		STACK_TOP
+#define STACK_ALIGN		16
+#endif /* __KERNEL__ */
+
+#ifndef __ASSEMBLY__
+
+struct task_struct;
+struct pt_regs;
+
+/*
+ * Default implementation of macro that returns current
+ * instruction pointer ("program counter").
+ */
+#define current_text_addr()	({ __label__ _l; _l: &&_l; })
+
+/* CPU-specific state of a task */
+struct thread_struct {
+	/* Callee-saved registers */
+	unsigned long ra;
+	unsigned long sp;	/* Kernel mode stack */
+	unsigned long s[12];	/* s[0]: frame pointer */
+	struct __riscv_d_ext_state fstate;
+};
+
+#define INIT_THREAD {					\
+	.sp = sizeof(init_stack) + (long)&init_stack,	\
+}
+
+/* Return saved (kernel) PC of a blocked thread. */
+#define thread_saved_pc(t)	((t)->thread.ra)
+#define thread_saved_sp(t)	((t)->thread.sp)
+#define thread_saved_fp(t)	((t)->thread.s[0])
+
+#define task_pt_regs(tsk)						\
+	((struct pt_regs *)(task_stack_page(tsk) + THREAD_SIZE		\
+			    - ALIGN(sizeof(struct pt_regs), STACK_ALIGN)))
+
+#define KSTK_EIP(tsk)		(task_pt_regs(tsk)->sepc)
+#define KSTK_ESP(tsk)		(task_pt_regs(tsk)->sp)
+
+
+/* Do necessary setup to start up a newly executed thread. */
+extern void start_thread(struct pt_regs *regs,
+			unsigned long pc, unsigned long sp);
+
+/* Free all resources held by a thread. */
+static inline void release_thread(struct task_struct *dead_task)
+{
+}
+
+extern unsigned long get_wchan(struct task_struct *p);
+
+
+static inline void cpu_relax(void)
+{
+#ifdef __riscv_muldiv
+	int dummy;
+	/* In lieu of a halt instruction, induce a long-latency stall. */
+	__asm__ __volatile__ ("div %0, %0, zero" : "=r" (dummy));
+#endif
+	barrier();
+}
+
+static inline void wait_for_interrupt(void)
+{
+	__asm__ __volatile__ ("wfi");
+}
+
+struct device_node;
+extern int riscv_of_processor_hart(struct device_node *node);
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PROCESSOR_H */
diff --git a/arch/riscv/include/asm/switch_to.h b/arch/riscv/include/asm/switch_to.h
new file mode 100644
index 000000000000..dd6b05bff75b
--- /dev/null
+++ b/arch/riscv/include/asm/switch_to.h
@@ -0,0 +1,69 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SWITCH_TO_H
+#define _ASM_RISCV_SWITCH_TO_H
+
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/csr.h>
+
+extern void __fstate_save(struct task_struct *save_to);
+extern void __fstate_restore(struct task_struct *restore_from);
+
+static inline void __fstate_clean(struct pt_regs *regs)
+{
+	regs->sstatus = (regs->sstatus & ~SR_FS) | SR_FS_CLEAN;
+}
+
+static inline void fstate_save(struct task_struct *task,
+			       struct pt_regs *regs)
+{
+	if ((regs->sstatus & SR_FS) == SR_FS_DIRTY) {
+		__fstate_save(task);
+		__fstate_clean(regs);
+	}
+}
+
+static inline void fstate_restore(struct task_struct *task,
+				  struct pt_regs *regs)
+{
+	if ((regs->sstatus & SR_FS) != SR_FS_OFF) {
+		__fstate_restore(task);
+		__fstate_clean(regs);
+	}
+}
+
+static inline void __switch_to_aux(struct task_struct *prev,
+				   struct task_struct *next)
+{
+	struct pt_regs *regs;
+
+	regs = task_pt_regs(prev);
+	if (unlikely(regs->sstatus & SR_SD))
+		fstate_save(prev, regs);
+	fstate_restore(next, task_pt_regs(next));
+}
+
+extern struct task_struct *__switch_to(struct task_struct *,
+				       struct task_struct *);
+
+#define switch_to(prev, next, last)			\
+do {							\
+	struct task_struct *__prev = (prev);		\
+	struct task_struct *__next = (next);		\
+	__switch_to_aux(__prev, __next);		\
+	((last) = __switch_to(__prev, __next));		\
+} while (0)
+
+#endif /* _ASM_RISCV_SWITCH_TO_H */
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
new file mode 100644
index 000000000000..f05b7d68c1ad
--- /dev/null
+++ b/arch/riscv/include/asm/thread_info.h
@@ -0,0 +1,96 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_THREAD_INFO_H
+#define _ASM_RISCV_THREAD_INFO_H
+
+#ifdef __KERNEL__
+
+#include <asm/page.h>
+#include <linux/const.h>
+
+/* thread information allocation */
+#define THREAD_SIZE_ORDER	(1)
+#define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)
+
+#ifndef __ASSEMBLY__
+
+#include <asm/processor.h>
+#include <asm/csr.h>
+
+typedef unsigned long mm_segment_t;
+
+/*
+ * low level task data that entry.S needs immediate access to
+ * - this struct should fit entirely inside of one cache line
+ * - if the members of this struct change, the assembly constants
+ *   in asm-offsets.c must be updated accordingly
+ * - thread_info is included in task_struct at an offset of 0.  This means that
+ *   tp points to both thread_info and task_struct.
+ */
+struct thread_info {
+	unsigned long		flags;		/* low level flags */
+	int                     preempt_count;  /* 0=>preemptible, <0=>BUG */
+	mm_segment_t		addr_limit;
+	/* These stack pointers are overwritten on every system call or
+	 * exception.  SP is also saved to the stack so it can be recovered when
+	 * overwritten.
+	 */
+	long			kernel_sp;	/* Kernel stack pointer */
+	long			user_sp;	/* User stack pointer */
+};
+
+/*
+ * macros/functions for gaining access to the thread information structure
+ *
+ * preempt_count needs to be 1 initially, until the scheduler is functional.
+ */
+#define INIT_THREAD_INFO(tsk)			\
+{						\
+	.flags		= 0,			\
+	.preempt_count	= INIT_PREEMPT_COUNT,	\
+	.addr_limit	= KERNEL_DS,		\
+}
+
+#define init_stack		(init_thread_union.stack)
+
+#endif /* !__ASSEMBLY__ */
+
+/*
+ * thread information flags
+ * - these are process state flags that various assembly files may need to
+ *   access
+ * - pending work-to-be-done flags are in lowest half-word
+ * - other flags in upper half-word(s)
+ */
+#define TIF_SYSCALL_TRACE	0	/* syscall trace active */
+#define TIF_NOTIFY_RESUME	1	/* callback before returning to user */
+#define TIF_SIGPENDING		2	/* signal pending */
+#define TIF_NEED_RESCHED	3	/* rescheduling necessary */
+#define TIF_RESTORE_SIGMASK	4	/* restore signal mask in do_signal() */
+#define TIF_MEMDIE		5	/* is terminating due to OOM killer */
+#define TIF_SYSCALL_TRACEPOINT  6       /* syscall tracepoint instrumentation */
+
+#define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
+#define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
+#define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
+#define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
+
+#define _TIF_WORK_MASK \
+	(_TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NEED_RESCHED)
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_THREAD_INFO_H */
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
new file mode 100644
index 000000000000..2ead5037528c
--- /dev/null
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -0,0 +1,316 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/kbuild.h>
+#include <linux/sched.h>
+#include <asm/thread_info.h>
+#include <asm/ptrace.h>
+
+void asm_offsets(void)
+{
+	OFFSET(TASK_THREAD_RA, task_struct, thread.ra);
+	OFFSET(TASK_THREAD_SP, task_struct, thread.sp);
+	OFFSET(TASK_THREAD_S0, task_struct, thread.s[0]);
+	OFFSET(TASK_THREAD_S1, task_struct, thread.s[1]);
+	OFFSET(TASK_THREAD_S2, task_struct, thread.s[2]);
+	OFFSET(TASK_THREAD_S3, task_struct, thread.s[3]);
+	OFFSET(TASK_THREAD_S4, task_struct, thread.s[4]);
+	OFFSET(TASK_THREAD_S5, task_struct, thread.s[5]);
+	OFFSET(TASK_THREAD_S6, task_struct, thread.s[6]);
+	OFFSET(TASK_THREAD_S7, task_struct, thread.s[7]);
+	OFFSET(TASK_THREAD_S8, task_struct, thread.s[8]);
+	OFFSET(TASK_THREAD_S9, task_struct, thread.s[9]);
+	OFFSET(TASK_THREAD_S10, task_struct, thread.s[10]);
+	OFFSET(TASK_THREAD_S11, task_struct, thread.s[11]);
+	OFFSET(TASK_THREAD_SP, task_struct, thread.sp);
+	OFFSET(TASK_STACK, task_struct, stack);
+	OFFSET(TASK_TI, task_struct, thread_info);
+	OFFSET(TASK_TI_FLAGS, task_struct, thread_info.flags);
+	OFFSET(TASK_TI_KERNEL_SP, task_struct, thread_info.kernel_sp);
+	OFFSET(TASK_TI_USER_SP, task_struct, thread_info.user_sp);
+
+	OFFSET(TASK_THREAD_F0,  task_struct, thread.fstate.f[0]);
+	OFFSET(TASK_THREAD_F1,  task_struct, thread.fstate.f[1]);
+	OFFSET(TASK_THREAD_F2,  task_struct, thread.fstate.f[2]);
+	OFFSET(TASK_THREAD_F3,  task_struct, thread.fstate.f[3]);
+	OFFSET(TASK_THREAD_F4,  task_struct, thread.fstate.f[4]);
+	OFFSET(TASK_THREAD_F5,  task_struct, thread.fstate.f[5]);
+	OFFSET(TASK_THREAD_F6,  task_struct, thread.fstate.f[6]);
+	OFFSET(TASK_THREAD_F7,  task_struct, thread.fstate.f[7]);
+	OFFSET(TASK_THREAD_F8,  task_struct, thread.fstate.f[8]);
+	OFFSET(TASK_THREAD_F9,  task_struct, thread.fstate.f[9]);
+	OFFSET(TASK_THREAD_F10, task_struct, thread.fstate.f[10]);
+	OFFSET(TASK_THREAD_F11, task_struct, thread.fstate.f[11]);
+	OFFSET(TASK_THREAD_F12, task_struct, thread.fstate.f[12]);
+	OFFSET(TASK_THREAD_F13, task_struct, thread.fstate.f[13]);
+	OFFSET(TASK_THREAD_F14, task_struct, thread.fstate.f[14]);
+	OFFSET(TASK_THREAD_F15, task_struct, thread.fstate.f[15]);
+	OFFSET(TASK_THREAD_F16, task_struct, thread.fstate.f[16]);
+	OFFSET(TASK_THREAD_F17, task_struct, thread.fstate.f[17]);
+	OFFSET(TASK_THREAD_F18, task_struct, thread.fstate.f[18]);
+	OFFSET(TASK_THREAD_F19, task_struct, thread.fstate.f[19]);
+	OFFSET(TASK_THREAD_F20, task_struct, thread.fstate.f[20]);
+	OFFSET(TASK_THREAD_F21, task_struct, thread.fstate.f[21]);
+	OFFSET(TASK_THREAD_F22, task_struct, thread.fstate.f[22]);
+	OFFSET(TASK_THREAD_F23, task_struct, thread.fstate.f[23]);
+	OFFSET(TASK_THREAD_F24, task_struct, thread.fstate.f[24]);
+	OFFSET(TASK_THREAD_F25, task_struct, thread.fstate.f[25]);
+	OFFSET(TASK_THREAD_F26, task_struct, thread.fstate.f[26]);
+	OFFSET(TASK_THREAD_F27, task_struct, thread.fstate.f[27]);
+	OFFSET(TASK_THREAD_F28, task_struct, thread.fstate.f[28]);
+	OFFSET(TASK_THREAD_F29, task_struct, thread.fstate.f[29]);
+	OFFSET(TASK_THREAD_F30, task_struct, thread.fstate.f[30]);
+	OFFSET(TASK_THREAD_F31, task_struct, thread.fstate.f[31]);
+	OFFSET(TASK_THREAD_FCSR, task_struct, thread.fstate.fcsr);
+
+	DEFINE(PT_SIZE, sizeof(struct pt_regs));
+	OFFSET(PT_SEPC, pt_regs, sepc);
+	OFFSET(PT_RA, pt_regs, ra);
+	OFFSET(PT_FP, pt_regs, s0);
+	OFFSET(PT_S0, pt_regs, s0);
+	OFFSET(PT_S1, pt_regs, s1);
+	OFFSET(PT_S2, pt_regs, s2);
+	OFFSET(PT_S3, pt_regs, s3);
+	OFFSET(PT_S4, pt_regs, s4);
+	OFFSET(PT_S5, pt_regs, s5);
+	OFFSET(PT_S6, pt_regs, s6);
+	OFFSET(PT_S7, pt_regs, s7);
+	OFFSET(PT_S8, pt_regs, s8);
+	OFFSET(PT_S9, pt_regs, s9);
+	OFFSET(PT_S10, pt_regs, s10);
+	OFFSET(PT_S11, pt_regs, s11);
+	OFFSET(PT_SP, pt_regs, sp);
+	OFFSET(PT_TP, pt_regs, tp);
+	OFFSET(PT_A0, pt_regs, a0);
+	OFFSET(PT_A1, pt_regs, a1);
+	OFFSET(PT_A2, pt_regs, a2);
+	OFFSET(PT_A3, pt_regs, a3);
+	OFFSET(PT_A4, pt_regs, a4);
+	OFFSET(PT_A5, pt_regs, a5);
+	OFFSET(PT_A6, pt_regs, a6);
+	OFFSET(PT_A7, pt_regs, a7);
+	OFFSET(PT_T0, pt_regs, t0);
+	OFFSET(PT_T1, pt_regs, t1);
+	OFFSET(PT_T2, pt_regs, t2);
+	OFFSET(PT_T3, pt_regs, t3);
+	OFFSET(PT_T4, pt_regs, t4);
+	OFFSET(PT_T5, pt_regs, t5);
+	OFFSET(PT_T6, pt_regs, t6);
+	OFFSET(PT_GP, pt_regs, gp);
+	OFFSET(PT_SSTATUS, pt_regs, sstatus);
+	OFFSET(PT_SBADADDR, pt_regs, sbadaddr);
+	OFFSET(PT_SCAUSE, pt_regs, scause);
+
+	/* THREAD_{F,X}* might be larger than an S-type offset can handle, but
+	 * these are used in performance-sensitive assembly so we can't resort
+	 * to loading the long immediate every time.
+	 */
+	DEFINE(TASK_THREAD_RA_RA,
+		  offsetof(struct task_struct, thread.ra)
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_SP_RA,
+		  offsetof(struct task_struct, thread.sp)
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S0_RA,
+		  offsetof(struct task_struct, thread.s[0])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S1_RA,
+		  offsetof(struct task_struct, thread.s[1])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S2_RA,
+		  offsetof(struct task_struct, thread.s[2])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S3_RA,
+		  offsetof(struct task_struct, thread.s[3])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S4_RA,
+		  offsetof(struct task_struct, thread.s[4])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S5_RA,
+		  offsetof(struct task_struct, thread.s[5])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S6_RA,
+		  offsetof(struct task_struct, thread.s[6])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S7_RA,
+		  offsetof(struct task_struct, thread.s[7])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S8_RA,
+		  offsetof(struct task_struct, thread.s[8])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S9_RA,
+		  offsetof(struct task_struct, thread.s[9])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S10_RA,
+		  offsetof(struct task_struct, thread.s[10])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S11_RA,
+		  offsetof(struct task_struct, thread.s[11])
+		- offsetof(struct task_struct, thread.ra)
+	);
+
+	DEFINE(TASK_THREAD_F0_F0,
+		  offsetof(struct task_struct, thread.fstate.f[0])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F1_F0,
+		  offsetof(struct task_struct, thread.fstate.f[1])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F2_F0,
+		  offsetof(struct task_struct, thread.fstate.f[2])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F3_F0,
+		  offsetof(struct task_struct, thread.fstate.f[3])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F4_F0,
+		  offsetof(struct task_struct, thread.fstate.f[4])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F5_F0,
+		  offsetof(struct task_struct, thread.fstate.f[5])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F6_F0,
+		  offsetof(struct task_struct, thread.fstate.f[6])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F7_F0,
+		  offsetof(struct task_struct, thread.fstate.f[7])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F8_F0,
+		  offsetof(struct task_struct, thread.fstate.f[8])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F9_F0,
+		  offsetof(struct task_struct, thread.fstate.f[9])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F10_F0,
+		  offsetof(struct task_struct, thread.fstate.f[10])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F11_F0,
+		  offsetof(struct task_struct, thread.fstate.f[11])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F12_F0,
+		  offsetof(struct task_struct, thread.fstate.f[12])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F13_F0,
+		  offsetof(struct task_struct, thread.fstate.f[13])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F14_F0,
+		  offsetof(struct task_struct, thread.fstate.f[14])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F15_F0,
+		  offsetof(struct task_struct, thread.fstate.f[15])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F16_F0,
+		  offsetof(struct task_struct, thread.fstate.f[16])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F17_F0,
+		  offsetof(struct task_struct, thread.fstate.f[17])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F18_F0,
+		  offsetof(struct task_struct, thread.fstate.f[18])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F19_F0,
+		  offsetof(struct task_struct, thread.fstate.f[19])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F20_F0,
+		  offsetof(struct task_struct, thread.fstate.f[20])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F21_F0,
+		  offsetof(struct task_struct, thread.fstate.f[21])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F22_F0,
+		  offsetof(struct task_struct, thread.fstate.f[22])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F23_F0,
+		  offsetof(struct task_struct, thread.fstate.f[23])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F24_F0,
+		  offsetof(struct task_struct, thread.fstate.f[24])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F25_F0,
+		  offsetof(struct task_struct, thread.fstate.f[25])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F26_F0,
+		  offsetof(struct task_struct, thread.fstate.f[26])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F27_F0,
+		  offsetof(struct task_struct, thread.fstate.f[27])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F28_F0,
+		  offsetof(struct task_struct, thread.fstate.f[28])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F29_F0,
+		  offsetof(struct task_struct, thread.fstate.f[29])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F30_F0,
+		  offsetof(struct task_struct, thread.fstate.f[30])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F31_F0,
+		  offsetof(struct task_struct, thread.fstate.f[31])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_FCSR_F0,
+		  offsetof(struct task_struct, thread.fstate.fcsr)
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+
+	/* The assembler needs access to THREAD_SIZE as well. */
+	DEFINE(ASM_THREAD_SIZE, THREAD_SIZE);
+
+	/* We allocate a pt_regs on the stack when entering the kernel.  This
+	 * ensures the alignment is sane.
+	 */
+	DEFINE(PT_SIZE_ON_STACK, ALIGN(sizeof(struct pt_regs), STACK_ALIGN));
+}
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
new file mode 100644
index 000000000000..aa4593c2eaef
--- /dev/null
+++ b/arch/riscv/kernel/entry.S
@@ -0,0 +1,436 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+
+#include <asm/asm.h>
+#include <asm/csr.h>
+#include <asm/unistd.h>
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+
+	.text
+	.altmacro
+
+/* Prepares to enter a system call or exception by saving all registers to the
+ * stack.
+ */
+	.macro SAVE_ALL
+	LOCAL _restore_kernel_tpsp
+	LOCAL _save_context
+
+	/* If coming from userspace, preserve the user thread pointer and load
+	   the kernel thread pointer.  If we came from the kernel, sscratch
+	   will contain 0, and we should continue on the current TP. */
+	csrrw tp, sscratch, tp
+	bnez tp, _save_context
+
+_restore_kernel_tpsp:
+	csrr tp, sscratch
+	REG_S sp, TASK_TI_KERNEL_SP(tp)
+_save_context:
+	REG_S sp, TASK_TI_USER_SP(tp)
+	REG_L sp, TASK_TI_KERNEL_SP(tp)
+	addi sp, sp, -(PT_SIZE_ON_STACK)
+	REG_S x1,  PT_RA(sp)
+	REG_S x3,  PT_GP(sp)
+	REG_S x5,  PT_T0(sp)
+	REG_S x6,  PT_T1(sp)
+	REG_S x7,  PT_T2(sp)
+	REG_S x8,  PT_S0(sp)
+	REG_S x9,  PT_S1(sp)
+	REG_S x10, PT_A0(sp)
+	REG_S x11, PT_A1(sp)
+	REG_S x12, PT_A2(sp)
+	REG_S x13, PT_A3(sp)
+	REG_S x14, PT_A4(sp)
+	REG_S x15, PT_A5(sp)
+	REG_S x16, PT_A6(sp)
+	REG_S x17, PT_A7(sp)
+	REG_S x18, PT_S2(sp)
+	REG_S x19, PT_S3(sp)
+	REG_S x20, PT_S4(sp)
+	REG_S x21, PT_S5(sp)
+	REG_S x22, PT_S6(sp)
+	REG_S x23, PT_S7(sp)
+	REG_S x24, PT_S8(sp)
+	REG_S x25, PT_S9(sp)
+	REG_S x26, PT_S10(sp)
+	REG_S x27, PT_S11(sp)
+	REG_S x28, PT_T3(sp)
+	REG_S x29, PT_T4(sp)
+	REG_S x30, PT_T5(sp)
+	REG_S x31, PT_T6(sp)
+
+	/* Disable FPU to detect illegal usage of
+	   floating point in kernel space */
+	li t0, SR_FS
+
+	REG_L s0, TASK_TI_USER_SP(tp)
+	csrrc s1, sstatus, t0
+	csrr s2, sepc
+	csrr s3, sbadaddr
+	csrr s4, scause
+	csrr s5, sscratch
+	REG_S s0, PT_SP(sp)
+	REG_S s1, PT_SSTATUS(sp)
+	REG_S s2, PT_SEPC(sp)
+	REG_S s3, PT_SBADADDR(sp)
+	REG_S s4, PT_SCAUSE(sp)
+	REG_S s5, PT_TP(sp)
+	.endm
+
+/* Prepares to return from a system call or exception by restoring all
+ * registers from the stack.
+ */
+	.macro RESTORE_ALL
+	REG_L a0, PT_SSTATUS(sp)
+	REG_L a2, PT_SEPC(sp)
+	csrw sstatus, a0
+	csrw sepc, a2
+
+	REG_L x1,  PT_RA(sp)
+	REG_L x3,  PT_GP(sp)
+	REG_L x4,  PT_TP(sp)
+	REG_L x5,  PT_T0(sp)
+	REG_L x6,  PT_T1(sp)
+	REG_L x7,  PT_T2(sp)
+	REG_L x8,  PT_S0(sp)
+	REG_L x9,  PT_S1(sp)
+	REG_L x10, PT_A0(sp)
+	REG_L x11, PT_A1(sp)
+	REG_L x12, PT_A2(sp)
+	REG_L x13, PT_A3(sp)
+	REG_L x14, PT_A4(sp)
+	REG_L x15, PT_A5(sp)
+	REG_L x16, PT_A6(sp)
+	REG_L x17, PT_A7(sp)
+	REG_L x18, PT_S2(sp)
+	REG_L x19, PT_S3(sp)
+	REG_L x20, PT_S4(sp)
+	REG_L x21, PT_S5(sp)
+	REG_L x22, PT_S6(sp)
+	REG_L x23, PT_S7(sp)
+	REG_L x24, PT_S8(sp)
+	REG_L x25, PT_S9(sp)
+	REG_L x26, PT_S10(sp)
+	REG_L x27, PT_S11(sp)
+	REG_L x28, PT_T3(sp)
+	REG_L x29, PT_T4(sp)
+	REG_L x30, PT_T5(sp)
+	REG_L x31, PT_T6(sp)
+
+	REG_L x2,  PT_SP(sp)
+	.endm
+
+ENTRY(handle_exception)
+	SAVE_ALL
+
+	/* Set sscratch register to 0, so that if a recursive exception
+	   occurs, the exception vector knows it came from the kernel */
+	csrw sscratch, x0
+
+	la gp, __global_pointer$
+
+	la ra, ret_from_exception
+	/* MSB of cause differentiates between
+	   interrupts and exceptions */
+	bge s4, zero, 1f
+
+	/* Handle interrupts */
+	slli a0, s4, 1
+	srli a0, a0, 1
+	move a1, sp /* pt_regs */
+	tail do_IRQ
+1:
+	/* Handle syscalls */
+	li t0, EXC_SYSCALL
+	beq s4, t0, handle_syscall
+
+	/* Handle other exceptions */
+	slli t0, s4, RISCV_LGPTR
+	la t1, excp_vect_table
+	la t2, excp_vect_table_end
+	move a0, sp /* pt_regs */
+	add t0, t1, t0
+	/* Check if exception code lies within bounds */
+	bgeu t0, t2, 1f
+	REG_L t0, 0(t0)
+	jr t0
+1:
+	tail do_trap_unknown
+
+handle_syscall:
+	/* Advance SEPC to avoid executing the original
+	   scall instruction on sret */
+	addi s2, s2, 0x4
+	REG_S s2, PT_SEPC(sp)
+	/* System calls run with interrupts enabled */
+	csrs sstatus, SR_IE
+	/* Trace syscalls, but only if requested by the user. */
+	REG_L t0, TASK_TI_FLAGS(tp)
+	andi t0, t0, _TIF_SYSCALL_TRACE
+	bnez t0, handle_syscall_trace_enter
+check_syscall_nr:
+	/* Check to make sure we don't jump to a bogus syscall number. */
+	li t0, __NR_syscalls
+	la s0, sys_ni_syscall
+	/* Syscall number held in a7 */
+	bgeu a7, t0, 1f
+	la s0, sys_call_table
+	slli t0, a7, RISCV_LGPTR
+	add s0, s0, t0
+	REG_L s0, 0(s0)
+1:
+	jalr s0
+
+ret_from_syscall:
+	/* Set user a0 to kernel a0 */
+	REG_S a0, PT_A0(sp)
+	/* Trace syscalls, but only if requested by the user. */
+	REG_L t0, TASK_TI_FLAGS(tp)
+	andi t0, t0, _TIF_SYSCALL_TRACE
+	bnez t0, handle_syscall_trace_exit
+
+ret_from_exception:
+	REG_L s0, PT_SSTATUS(sp)
+	csrc sstatus, SR_IE
+	andi s0, s0, SR_PS
+	bnez s0, restore_all
+
+resume_userspace:
+	/* Interrupts must be disabled here so flags are checked atomically */
+	REG_L s0, TASK_TI_FLAGS(tp) /* current_thread_info->flags */
+	andi s1, s0, _TIF_WORK_MASK
+	bnez s1, work_pending
+
+	/* Save unwound kernel stack pointer in thread_info */
+	addi s0, sp, PT_SIZE_ON_STACK
+	REG_S s0, TASK_TI_KERNEL_SP(tp)
+
+	/* Save TP into sscratch, so we can find the kernel data structures
+	 * again. */
+	csrw sscratch, tp
+
+restore_all:
+	RESTORE_ALL
+	sret
+
+work_pending:
+	/* Enter slow path for supplementary processing */
+	la ra, ret_from_exception
+	andi s1, s0, _TIF_NEED_RESCHED
+	bnez s1, work_resched
+work_notifysig:
+	/* Handle pending signals and notify-resume requests */
+	csrs sstatus, SR_IE /* Enable interrupts for do_notify_resume() */
+	move a0, sp /* pt_regs */
+	move a1, s0 /* current_thread_info->flags */
+	tail do_notify_resume
+work_resched:
+	tail schedule
+
+/* Slow paths for ptrace. */
+handle_syscall_trace_enter:
+	move a0, sp
+	call do_syscall_trace_enter
+	REG_L a0, PT_A0(sp)
+	REG_L a1, PT_A1(sp)
+	REG_L a2, PT_A2(sp)
+	REG_L a3, PT_A3(sp)
+	REG_L a4, PT_A4(sp)
+	REG_L a5, PT_A5(sp)
+	REG_L a6, PT_A6(sp)
+	REG_L a7, PT_A7(sp)
+	j check_syscall_nr
+handle_syscall_trace_exit:
+	move a0, sp
+	call do_syscall_trace_exit
+	j ret_from_exception
+
+END(handle_exception)
+
+ENTRY(ret_from_fork)
+	la ra, ret_from_exception
+	tail schedule_tail
+ENDPROC(ret_from_fork)
+
+ENTRY(ret_from_kernel_thread)
+	call schedule_tail
+	/* Call fn(arg) */
+	la ra, ret_from_exception
+	move a0, s1
+	jr s0
+ENDPROC(ret_from_kernel_thread)
+
+
+/*
+ * Integer register context switch
+ * The callee-saved registers must be saved and restored.
+ *
+ *   a0: previous task_struct (must be preserved across the switch)
+ *   a1: next task_struct
+ */
+ENTRY(__switch_to)
+	/* Save context into prev->thread */
+	li    a2,  TASK_THREAD_RA
+	add   a0, a0, a2
+	add   a2, a1, a2
+	REG_S ra,  TASK_THREAD_RA_RA(a0)
+	REG_S sp,  TASK_THREAD_SP_RA(a0)
+	REG_S s0,  TASK_THREAD_S0_RA(a0)
+	REG_S s1,  TASK_THREAD_S1_RA(a0)
+	REG_S s2,  TASK_THREAD_S2_RA(a0)
+	REG_S s3,  TASK_THREAD_S3_RA(a0)
+	REG_S s4,  TASK_THREAD_S4_RA(a0)
+	REG_S s5,  TASK_THREAD_S5_RA(a0)
+	REG_S s6,  TASK_THREAD_S6_RA(a0)
+	REG_S s7,  TASK_THREAD_S7_RA(a0)
+	REG_S s8,  TASK_THREAD_S8_RA(a0)
+	REG_S s9,  TASK_THREAD_S9_RA(a0)
+	REG_S s10, TASK_THREAD_S10_RA(a0)
+	REG_S s11, TASK_THREAD_S11_RA(a0)
+	/* Restore context from next->thread */
+	REG_L ra,  TASK_THREAD_RA_RA(a2)
+	REG_L sp,  TASK_THREAD_SP_RA(a2)
+	REG_L s0,  TASK_THREAD_S0_RA(a2)
+	REG_L s1,  TASK_THREAD_S1_RA(a2)
+	REG_L s2,  TASK_THREAD_S2_RA(a2)
+	REG_L s3,  TASK_THREAD_S3_RA(a2)
+	REG_L s4,  TASK_THREAD_S4_RA(a2)
+	REG_L s5,  TASK_THREAD_S5_RA(a2)
+	REG_L s6,  TASK_THREAD_S6_RA(a2)
+	REG_L s7,  TASK_THREAD_S7_RA(a2)
+	REG_L s8,  TASK_THREAD_S8_RA(a2)
+	REG_L s9,  TASK_THREAD_S9_RA(a2)
+	REG_L s10, TASK_THREAD_S10_RA(a2)
+	REG_L s11, TASK_THREAD_S11_RA(a2)
+#if TASK_TI != 0
+#error "TASK_TI != 0: tp will contain a 'struct thread_info', not a 'struct task_struct' so get_current() won't work."
+	addi tp, a1, TASK_TI
+#else
+	move tp, a1
+#endif
+	ret
+ENDPROC(__switch_to)
+
+ENTRY(__fstate_save)
+	li  a2,  TASK_THREAD_F0
+	add a0, a0, a2
+	li t1, SR_FS
+	csrs sstatus, t1
+	frcsr t0
+	fsd f0,  TASK_THREAD_F0_F0(a0)
+	fsd f1,  TASK_THREAD_F1_F0(a0)
+	fsd f2,  TASK_THREAD_F2_F0(a0)
+	fsd f3,  TASK_THREAD_F3_F0(a0)
+	fsd f4,  TASK_THREAD_F4_F0(a0)
+	fsd f5,  TASK_THREAD_F5_F0(a0)
+	fsd f6,  TASK_THREAD_F6_F0(a0)
+	fsd f7,  TASK_THREAD_F7_F0(a0)
+	fsd f8,  TASK_THREAD_F8_F0(a0)
+	fsd f9,  TASK_THREAD_F9_F0(a0)
+	fsd f10, TASK_THREAD_F10_F0(a0)
+	fsd f11, TASK_THREAD_F11_F0(a0)
+	fsd f12, TASK_THREAD_F12_F0(a0)
+	fsd f13, TASK_THREAD_F13_F0(a0)
+	fsd f14, TASK_THREAD_F14_F0(a0)
+	fsd f15, TASK_THREAD_F15_F0(a0)
+	fsd f16, TASK_THREAD_F16_F0(a0)
+	fsd f17, TASK_THREAD_F17_F0(a0)
+	fsd f18, TASK_THREAD_F18_F0(a0)
+	fsd f19, TASK_THREAD_F19_F0(a0)
+	fsd f20, TASK_THREAD_F20_F0(a0)
+	fsd f21, TASK_THREAD_F21_F0(a0)
+	fsd f22, TASK_THREAD_F22_F0(a0)
+	fsd f23, TASK_THREAD_F23_F0(a0)
+	fsd f24, TASK_THREAD_F24_F0(a0)
+	fsd f25, TASK_THREAD_F25_F0(a0)
+	fsd f26, TASK_THREAD_F26_F0(a0)
+	fsd f27, TASK_THREAD_F27_F0(a0)
+	fsd f28, TASK_THREAD_F28_F0(a0)
+	fsd f29, TASK_THREAD_F29_F0(a0)
+	fsd f30, TASK_THREAD_F30_F0(a0)
+	fsd f31, TASK_THREAD_F31_F0(a0)
+	sw t0, TASK_THREAD_FCSR_F0(a0)
+	csrc sstatus, t1
+	ret
+ENDPROC(__fstate_save)
+
+ENTRY(__fstate_restore)
+	li  a2,  TASK_THREAD_F0
+	add a0, a0, a2
+	li t1, SR_FS
+	lw t0, TASK_THREAD_FCSR_F0(a0)
+	csrs sstatus, t1
+	fld f0,  TASK_THREAD_F0_F0(a0)
+	fld f1,  TASK_THREAD_F1_F0(a0)
+	fld f2,  TASK_THREAD_F2_F0(a0)
+	fld f3,  TASK_THREAD_F3_F0(a0)
+	fld f4,  TASK_THREAD_F4_F0(a0)
+	fld f5,  TASK_THREAD_F5_F0(a0)
+	fld f6,  TASK_THREAD_F6_F0(a0)
+	fld f7,  TASK_THREAD_F7_F0(a0)
+	fld f8,  TASK_THREAD_F8_F0(a0)
+	fld f9,  TASK_THREAD_F9_F0(a0)
+	fld f10, TASK_THREAD_F10_F0(a0)
+	fld f11, TASK_THREAD_F11_F0(a0)
+	fld f12, TASK_THREAD_F12_F0(a0)
+	fld f13, TASK_THREAD_F13_F0(a0)
+	fld f14, TASK_THREAD_F14_F0(a0)
+	fld f15, TASK_THREAD_F15_F0(a0)
+	fld f16, TASK_THREAD_F16_F0(a0)
+	fld f17, TASK_THREAD_F17_F0(a0)
+	fld f18, TASK_THREAD_F18_F0(a0)
+	fld f19, TASK_THREAD_F19_F0(a0)
+	fld f20, TASK_THREAD_F20_F0(a0)
+	fld f21, TASK_THREAD_F21_F0(a0)
+	fld f22, TASK_THREAD_F22_F0(a0)
+	fld f23, TASK_THREAD_F23_F0(a0)
+	fld f24, TASK_THREAD_F24_F0(a0)
+	fld f25, TASK_THREAD_F25_F0(a0)
+	fld f26, TASK_THREAD_F26_F0(a0)
+	fld f27, TASK_THREAD_F27_F0(a0)
+	fld f28, TASK_THREAD_F28_F0(a0)
+	fld f29, TASK_THREAD_F29_F0(a0)
+	fld f30, TASK_THREAD_F30_F0(a0)
+	fld f31, TASK_THREAD_F31_F0(a0)
+	fscsr t0
+	csrc sstatus, t1
+	ret
+ENDPROC(__fstate_restore)
+
+
+	.section ".rodata"
+	/* Exception vector table */
+ENTRY(excp_vect_table)
+	RISCV_PTR do_trap_insn_misaligned
+	RISCV_PTR do_trap_insn_fault
+	RISCV_PTR do_trap_insn_illegal
+	RISCV_PTR do_trap_break
+	RISCV_PTR do_trap_load_misaligned
+	RISCV_PTR do_trap_load_fault
+	RISCV_PTR do_trap_store_misaligned
+	RISCV_PTR do_trap_store_fault
+	RISCV_PTR do_trap_ecall_u /* system call, gets intercepted */
+	RISCV_PTR do_trap_ecall_s
+	RISCV_PTR do_trap_unknown
+	RISCV_PTR do_trap_ecall_m
+	RISCV_PTR do_page_fault   /* instruction page fault */
+	RISCV_PTR do_page_fault   /* load page fault */
+	RISCV_PTR do_trap_unknown
+	RISCV_PTR do_page_fault   /* store page fault */
+excp_vect_table_end:
+END(excp_vect_table)
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
new file mode 100644
index 000000000000..b13d3ea3bf79
--- /dev/null
+++ b/arch/riscv/kernel/process.c
@@ -0,0 +1,131 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/sched/task_stack.h>
+#include <linux/tick.h>
+#include <linux/ptrace.h>
+
+#include <asm/unistd.h>
+#include <asm/uaccess.h>
+#include <asm/processor.h>
+#include <asm/csr.h>
+#include <asm/string.h>
+#include <asm/switch_to.h>
+
+extern asmlinkage void ret_from_fork(void);
+extern asmlinkage void ret_from_kernel_thread(void);
+
+void arch_cpu_idle(void)
+{
+	wait_for_interrupt();
+	local_irq_enable();
+}
+
+void show_regs(struct pt_regs *regs)
+{
+	show_regs_print_info(KERN_DEFAULT);
+
+	printk(KERN_CONT "sepc: " REG_FMT " ra : " REG_FMT " sp : " REG_FMT "\n",
+		regs->sepc, regs->ra, regs->sp);
+	printk(KERN_CONT " gp : " REG_FMT " tp : " REG_FMT " t0 : " REG_FMT "\n",
+		regs->gp, regs->tp, regs->t0);
+	printk(KERN_CONT " t1 : " REG_FMT " t2 : " REG_FMT " s0 : " REG_FMT "\n",
+		regs->t1, regs->t2, regs->s0);
+	printk(KERN_CONT " s1 : " REG_FMT " a0 : " REG_FMT " a1 : " REG_FMT "\n",
+		regs->s1, regs->a0, regs->a1);
+	printk(KERN_CONT " a2 : " REG_FMT " a3 : " REG_FMT " a4 : " REG_FMT "\n",
+		regs->a2, regs->a3, regs->a4);
+	printk(KERN_CONT " a5 : " REG_FMT " a6 : " REG_FMT " a7 : " REG_FMT "\n",
+		regs->a5, regs->a6, regs->a7);
+	printk(KERN_CONT " s2 : " REG_FMT " s3 : " REG_FMT " s4 : " REG_FMT "\n",
+		regs->s2, regs->s3, regs->s4);
+	printk(KERN_CONT " s5 : " REG_FMT " s6 : " REG_FMT " s7 : " REG_FMT "\n",
+		regs->s5, regs->s6, regs->s7);
+	printk(KERN_CONT " s8 : " REG_FMT " s9 : " REG_FMT " s10: " REG_FMT "\n",
+		regs->s8, regs->s9, regs->s10);
+	printk(KERN_CONT " s11: " REG_FMT " t3 : " REG_FMT " t4 : " REG_FMT "\n",
+		regs->s11, regs->t3, regs->t4);
+	printk(KERN_CONT " t5 : " REG_FMT " t6 : " REG_FMT "\n",
+		regs->t5, regs->t6);
+
+	printk(KERN_CONT "sstatus: " REG_FMT " sbadaddr: " REG_FMT " scause: " REG_FMT "\n",
+		regs->sstatus, regs->sbadaddr, regs->scause);
+}
+
+void start_thread(struct pt_regs *regs, unsigned long pc,
+	unsigned long sp)
+{
+	regs->sstatus = SR_PIE /* User mode, irqs on */ | SR_FS_INITIAL;
+#ifndef CONFIG_RV_PUM
+	regs->sstatus |= SR_SUM;
+#endif
+	regs->sepc = pc;
+	regs->sp = sp;
+	set_fs(USER_DS);
+}
+
+void flush_thread(void)
+{
+	/* Reset FPU context
+	 *	frm: round to nearest, ties to even (IEEE default)
+	 *	fflags: accrued exceptions cleared
+	 */
+	memset(&current->thread.fstate, 0, sizeof(current->thread.fstate));
+}
+
+int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+{
+	fstate_save(src, task_pt_regs(src));
+	*dst = *src;
+	return 0;
+}
+
+int copy_thread(unsigned long clone_flags, unsigned long usp,
+	unsigned long arg, struct task_struct *p)
+{
+	struct pt_regs *childregs = task_pt_regs(p);
+
+	/* p->thread holds context to be restored by __switch_to() */
+	if (unlikely(p->flags & PF_KTHREAD)) {
+		/* Kernel thread */
+		const register unsigned long gp __asm__ ("gp");
+		memset(childregs, 0, sizeof(struct pt_regs));
+		childregs->gp = gp;
+		childregs->sstatus = SR_PS | SR_PIE; /* Supervisor, irqs on */
+
+		p->thread.ra = (unsigned long)ret_from_kernel_thread;
+		p->thread.s[0] = usp; /* fn */
+		p->thread.s[1] = arg;
+	} else {
+		*childregs = *(current_pt_regs());
+		if (usp) /* User fork */
+			childregs->sp = usp;
+		if (clone_flags & CLONE_SETTLS)
+			childregs->tp = childregs->a5;
+		childregs->a0 = 0; /* Return value of fork() */
+		p->thread.ra = (unsigned long)ret_from_fork;
+	}
+	p->thread.sp = (unsigned long)childregs; /* kernel sp */
+	return 0;
+}
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 5/9] RISC-V: Task implementation
@ 2017-06-28 18:55       ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This patch contains the implementation of tasks on RISC-V, most of which
is involved in task switching.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/asm-offsets.h |   1 +
 arch/riscv/include/asm/current.h     |  43 ++++
 arch/riscv/include/asm/kprobes.h     |  22 ++
 arch/riscv/include/asm/processor.h   | 102 ++++++++
 arch/riscv/include/asm/switch_to.h   |  69 ++++++
 arch/riscv/include/asm/thread_info.h |  96 ++++++++
 arch/riscv/kernel/asm-offsets.c      | 316 +++++++++++++++++++++++++
 arch/riscv/kernel/entry.S            | 436 +++++++++++++++++++++++++++++++++++
 arch/riscv/kernel/process.c          | 131 +++++++++++
 9 files changed, 1216 insertions(+)
 create mode 100644 arch/riscv/include/asm/asm-offsets.h
 create mode 100644 arch/riscv/include/asm/current.h
 create mode 100644 arch/riscv/include/asm/kprobes.h
 create mode 100644 arch/riscv/include/asm/processor.h
 create mode 100644 arch/riscv/include/asm/switch_to.h
 create mode 100644 arch/riscv/include/asm/thread_info.h
 create mode 100644 arch/riscv/kernel/asm-offsets.c
 create mode 100644 arch/riscv/kernel/entry.S
 create mode 100644 arch/riscv/kernel/process.c
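
As a side note for review (this sketch is not part of the patch and is
deliberately simplified): the whole tp-as-current scheme relies on
"struct thread_info" living at offset 0 of "struct task_struct", so the
tp register can stand in for current while __switch_to() only has to
save and restore the callee-saved registers and then retarget tp at the
next task.  The stand-alone C model below illustrates that contract;
the structures are cut down and a global variable stands in for the tp
register, so it compiles and runs anywhere:

#include <stdio.h>

struct thread_info {                    /* cut-down model of the patch's struct */
        unsigned long flags;
        int preempt_count;
};

struct thread_struct {                  /* callee-saved context, as in processor.h */
        unsigned long ra, sp;
        unsigned long s[12];
};

struct task_struct {
        struct thread_info thread_info; /* offset 0: this is what makes tp == current work */
        struct thread_struct thread;
        const char *comm;
};

/* stand-in for the tp register */
static struct task_struct *tp;

static struct task_struct *get_current(void)
{
        return tp;                      /* a single register read in the real code */
}

/* models __switch_to(): save prev's context, load next's, retarget tp */
static struct task_struct *__switch_to(struct task_struct *prev,
                                       struct task_struct *next)
{
        /* entry.S does the ra/sp/s0-s11 save and restore here */
        tp = next;
        return prev;
}

int main(void)
{
        struct task_struct a = { .comm = "prev" }, b = { .comm = "next" };

        tp = &a;
        printf("current = %s\n", get_current()->comm);
        struct task_struct *last = __switch_to(&a, &b);
        printf("current = %s (came from %s)\n", get_current()->comm, last->comm);
        return 0;
}

The "#if TASK_TI != 0" check in entry.S guards exactly this layout
assumption, which is why get_current() can be a single register read.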

diff --git a/arch/riscv/include/asm/asm-offsets.h b/arch/riscv/include/asm/asm-offsets.h
new file mode 100644
index 000000000000..d370ee36a182
--- /dev/null
+++ b/arch/riscv/include/asm/asm-offsets.h
@@ -0,0 +1 @@
+#include <generated/asm-offsets.h>
diff --git a/arch/riscv/include/asm/current.h b/arch/riscv/include/asm/current.h
new file mode 100644
index 000000000000..032a24ba5308
--- /dev/null
+++ b/arch/riscv/include/asm/current.h
@@ -0,0 +1,43 @@
+/*
+ * Based on arm/arm64/include/asm/current.h
+ *
+ * Copyright (C) 2016 ARM
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef __ASM_CURRENT_H
+#define __ASM_CURRENT_H
+
+#include <linux/compiler.h>
+
+#ifndef __ASSEMBLY__
+
+struct task_struct;
+
+/* This only works because "struct thread_info" is at offset 0 from "struct
+ * task_struct".  This constraint seems to be necessary on other architectures
+ * as well, but __switch_to enforces it.  We can't check TASK_TI here because
+ * <asm/asm-offsets.h> includes this, and I can't get the definition of "struct
+ * task_struct" here due to some header ordering problems.
+ */
+static __always_inline struct task_struct *get_current(void)
+{
+	register struct task_struct *tp __asm__("tp");
+	return tp;
+}
+
+#define current get_current()
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __ASM_CURRENT_H */
diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
new file mode 100644
index 000000000000..1190de7a0f74
--- /dev/null
+++ b/arch/riscv/include/asm/kprobes.h
@@ -0,0 +1,22 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef ASM_RISCV_KPROBES_H
+#define ASM_RISCV_KPROBES_H
+
+#ifdef CONFIG_KPROBES
+#error "RISC-V doesn't skpport CONFIG_KPROBES"
+#endif
+
+#endif
diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
new file mode 100644
index 000000000000..65aa014db9b4
--- /dev/null
+++ b/arch/riscv/include/asm/processor.h
@@ -0,0 +1,102 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PROCESSOR_H
+#define _ASM_RISCV_PROCESSOR_H
+
+#include <linux/const.h>
+
+#include <asm/ptrace.h>
+
+/*
+ * This decides where the kernel will search for a free chunk of vm
+ * space during mmap's.
+ */
+#define TASK_UNMAPPED_BASE	PAGE_ALIGN(TASK_SIZE >> 1)
+
+#ifdef __KERNEL__
+#define STACK_TOP		TASK_SIZE
+#define STACK_TOP_MAX		STACK_TOP
+#define STACK_ALIGN		16
+#endif /* __KERNEL__ */
+
+#ifndef __ASSEMBLY__
+
+struct task_struct;
+struct pt_regs;
+
+/*
+ * Default implementation of macro that returns current
+ * instruction pointer ("program counter").
+ */
+#define current_text_addr()	({ __label__ _l; _l: &&_l; })
+
+/* CPU-specific state of a task */
+struct thread_struct {
+	/* Callee-saved registers */
+	unsigned long ra;
+	unsigned long sp;	/* Kernel mode stack */
+	unsigned long s[12];	/* s[0]: frame pointer */
+	struct __riscv_d_ext_state fstate;
+};
+
+#define INIT_THREAD {					\
+	.sp = sizeof(init_stack) + (long)&init_stack,	\
+}
+
+/* Return saved (kernel) PC of a blocked thread. */
+#define thread_saved_pc(t)	((t)->thread.ra)
+#define thread_saved_sp(t)	((t)->thread.sp)
+#define thread_saved_fp(t)	((t)->thread.s[0])
+
+#define task_pt_regs(tsk)						\
+	((struct pt_regs *)(task_stack_page(tsk) + THREAD_SIZE		\
+			    - ALIGN(sizeof(struct pt_regs), STACK_ALIGN)))
+
+#define KSTK_EIP(tsk)		(task_pt_regs(tsk)->sepc)
+#define KSTK_ESP(tsk)		(task_pt_regs(tsk)->sp)
+
+
+/* Do necessary setup to start up a newly executed thread. */
+extern void start_thread(struct pt_regs *regs,
+			unsigned long pc, unsigned long sp);
+
+/* Free all resources held by a thread. */
+static inline void release_thread(struct task_struct *dead_task)
+{
+}
+
+extern unsigned long get_wchan(struct task_struct *p);
+
+
+static inline void cpu_relax(void)
+{
+#ifdef __riscv_muldiv
+	int dummy;
+	/* In lieu of a halt instruction, induce a long-latency stall. */
+	__asm__ __volatile__ ("div %0, %0, zero" : "=r" (dummy));
+#endif
+	barrier();
+}
+
+static inline void wait_for_interrupt(void)
+{
+	__asm__ __volatile__ ("wfi");
+}
+
+struct device_node;
+extern int riscv_of_processor_hart(struct device_node *node);
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PROCESSOR_H */
diff --git a/arch/riscv/include/asm/switch_to.h b/arch/riscv/include/asm/switch_to.h
new file mode 100644
index 000000000000..dd6b05bff75b
--- /dev/null
+++ b/arch/riscv/include/asm/switch_to.h
@@ -0,0 +1,69 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SWITCH_TO_H
+#define _ASM_RISCV_SWITCH_TO_H
+
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/csr.h>
+
+extern void __fstate_save(struct task_struct *save_to);
+extern void __fstate_restore(struct task_struct *restore_from);
+
+static inline void __fstate_clean(struct pt_regs *regs)
+{
+	regs->sstatus = (regs->sstatus & ~(SR_FS)) | SR_FS_CLEAN;
+}
+
+static inline void fstate_save(struct task_struct *task,
+			       struct pt_regs *regs)
+{
+	if ((regs->sstatus & SR_FS) == SR_FS_DIRTY) {
+		__fstate_save(task);
+		__fstate_clean(regs);
+	}
+}
+
+static inline void fstate_restore(struct task_struct *task,
+				  struct pt_regs *regs)
+{
+	if ((regs->sstatus & SR_FS) != SR_FS_OFF) {
+		__fstate_restore(task);
+		__fstate_clean(regs);
+	}
+}
+
+static inline void __switch_to_aux(struct task_struct *prev,
+				   struct task_struct *next)
+{
+	struct pt_regs *regs;
+
+	regs = task_pt_regs(prev);
+	if (unlikely(regs->sstatus & SR_SD))
+		fstate_save(prev, regs);
+	fstate_restore(next, task_pt_regs(next));
+}
+
+extern struct task_struct *__switch_to(struct task_struct *,
+				       struct task_struct *);
+
+#define switch_to(prev, next, last)			\
+do {							\
+	struct task_struct *__prev = (prev);		\
+	struct task_struct *__next = (next);		\
+	__switch_to_aux(__prev, __next);		\
+	((last) = __switch_to(__prev, __next));		\
+} while (0)
+
+#endif /* _ASM_RISCV_SWITCH_TO_H */
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
new file mode 100644
index 000000000000..f05b7d68c1ad
--- /dev/null
+++ b/arch/riscv/include/asm/thread_info.h
@@ -0,0 +1,96 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_THREAD_INFO_H
+#define _ASM_RISCV_THREAD_INFO_H
+
+#ifdef __KERNEL__
+
+#include <asm/page.h>
+#include <linux/const.h>
+
+/* thread information allocation */
+#define THREAD_SIZE_ORDER	(1)
+#define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)
+
+#ifndef __ASSEMBLY__
+
+#include <asm/processor.h>
+#include <asm/csr.h>
+
+typedef unsigned long mm_segment_t;
+
+/*
+ * low level task data that entry.S needs immediate access to
+ * - this struct should fit entirely inside of one cache line
+ * - if the members of this struct change, the assembly constants
+ *   in asm-offsets.c must be updated accordingly
+ * - thread_info is included in task_struct at an offset of 0.  This means that
+ *   tp points to both thread_info and task_struct.
+ */
+struct thread_info {
+	unsigned long		flags;		/* low level flags */
+	int                     preempt_count;  /* 0=>preemptible, <0=>BUG */
+	mm_segment_t		addr_limit;
+	/* These stack pointers are overwritten on every system call or
+	 * exception.  SP is also saved to the stack it can be recovered when
+	 * overwritten.
+	 */
+	long			kernel_sp;	/* Kernel stack pointer */
+	long			user_sp;	/* User stack pointer */
+};
+
+/*
+ * macros/functions for gaining access to the thread information structure
+ *
+ * preempt_count needs to be 1 initially, until the scheduler is functional.
+ */
+#define INIT_THREAD_INFO(tsk)			\
+{						\
+	.flags		= 0,			\
+	.preempt_count	= INIT_PREEMPT_COUNT,	\
+	.addr_limit	= KERNEL_DS,		\
+}
+
+#define init_stack		(init_thread_union.stack)
+
+#endif /* !__ASSEMBLY__ */
+
+/*
+ * thread information flags
+ * - these are process state flags that various assembly files may need to
+ *   access
+ * - pending work-to-be-done flags are in lowest half-word
+ * - other flags in upper half-word(s)
+ */
+#define TIF_SYSCALL_TRACE	0	/* syscall trace active */
+#define TIF_NOTIFY_RESUME	1	/* callback before returning to user */
+#define TIF_SIGPENDING		2	/* signal pending */
+#define TIF_NEED_RESCHED	3	/* rescheduling necessary */
+#define TIF_RESTORE_SIGMASK	4	/* restore signal mask in do_signal() */
+#define TIF_MEMDIE		5	/* is terminating due to OOM killer */
+#define TIF_SYSCALL_TRACEPOINT  6       /* syscall tracepoint instrumentation */
+
+#define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
+#define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
+#define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
+#define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
+
+#define _TIF_WORK_MASK \
+	(_TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NEED_RESCHED)
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_RISCV_THREAD_INFO_H */
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
new file mode 100644
index 000000000000..2ead5037528c
--- /dev/null
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -0,0 +1,316 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/kbuild.h>
+#include <linux/sched.h>
+#include <asm/thread_info.h>
+#include <asm/ptrace.h>
+
+void asm_offsets(void)
+{
+	OFFSET(TASK_THREAD_RA, task_struct, thread.ra);
+	OFFSET(TASK_THREAD_SP, task_struct, thread.sp);
+	OFFSET(TASK_THREAD_S0, task_struct, thread.s[0]);
+	OFFSET(TASK_THREAD_S1, task_struct, thread.s[1]);
+	OFFSET(TASK_THREAD_S2, task_struct, thread.s[2]);
+	OFFSET(TASK_THREAD_S3, task_struct, thread.s[3]);
+	OFFSET(TASK_THREAD_S4, task_struct, thread.s[4]);
+	OFFSET(TASK_THREAD_S5, task_struct, thread.s[5]);
+	OFFSET(TASK_THREAD_S6, task_struct, thread.s[6]);
+	OFFSET(TASK_THREAD_S7, task_struct, thread.s[7]);
+	OFFSET(TASK_THREAD_S8, task_struct, thread.s[8]);
+	OFFSET(TASK_THREAD_S9, task_struct, thread.s[9]);
+	OFFSET(TASK_THREAD_S10, task_struct, thread.s[10]);
+	OFFSET(TASK_THREAD_S11, task_struct, thread.s[11]);
+	OFFSET(TASK_THREAD_SP, task_struct, thread.sp);
+	OFFSET(TASK_STACK, task_struct, stack);
+	OFFSET(TASK_TI, task_struct, thread_info);
+	OFFSET(TASK_TI_FLAGS, task_struct, thread_info.flags);
+	OFFSET(TASK_TI_KERNEL_SP, task_struct, thread_info.kernel_sp);
+	OFFSET(TASK_TI_USER_SP, task_struct, thread_info.user_sp);
+
+	OFFSET(TASK_THREAD_F0,  task_struct, thread.fstate.f[0]);
+	OFFSET(TASK_THREAD_F1,  task_struct, thread.fstate.f[1]);
+	OFFSET(TASK_THREAD_F2,  task_struct, thread.fstate.f[2]);
+	OFFSET(TASK_THREAD_F3,  task_struct, thread.fstate.f[3]);
+	OFFSET(TASK_THREAD_F4,  task_struct, thread.fstate.f[4]);
+	OFFSET(TASK_THREAD_F5,  task_struct, thread.fstate.f[5]);
+	OFFSET(TASK_THREAD_F6,  task_struct, thread.fstate.f[6]);
+	OFFSET(TASK_THREAD_F7,  task_struct, thread.fstate.f[7]);
+	OFFSET(TASK_THREAD_F8,  task_struct, thread.fstate.f[8]);
+	OFFSET(TASK_THREAD_F9,  task_struct, thread.fstate.f[9]);
+	OFFSET(TASK_THREAD_F10, task_struct, thread.fstate.f[10]);
+	OFFSET(TASK_THREAD_F11, task_struct, thread.fstate.f[11]);
+	OFFSET(TASK_THREAD_F12, task_struct, thread.fstate.f[12]);
+	OFFSET(TASK_THREAD_F13, task_struct, thread.fstate.f[13]);
+	OFFSET(TASK_THREAD_F14, task_struct, thread.fstate.f[14]);
+	OFFSET(TASK_THREAD_F15, task_struct, thread.fstate.f[15]);
+	OFFSET(TASK_THREAD_F16, task_struct, thread.fstate.f[16]);
+	OFFSET(TASK_THREAD_F17, task_struct, thread.fstate.f[17]);
+	OFFSET(TASK_THREAD_F18, task_struct, thread.fstate.f[18]);
+	OFFSET(TASK_THREAD_F19, task_struct, thread.fstate.f[19]);
+	OFFSET(TASK_THREAD_F20, task_struct, thread.fstate.f[20]);
+	OFFSET(TASK_THREAD_F21, task_struct, thread.fstate.f[21]);
+	OFFSET(TASK_THREAD_F22, task_struct, thread.fstate.f[22]);
+	OFFSET(TASK_THREAD_F23, task_struct, thread.fstate.f[23]);
+	OFFSET(TASK_THREAD_F24, task_struct, thread.fstate.f[24]);
+	OFFSET(TASK_THREAD_F25, task_struct, thread.fstate.f[25]);
+	OFFSET(TASK_THREAD_F26, task_struct, thread.fstate.f[26]);
+	OFFSET(TASK_THREAD_F27, task_struct, thread.fstate.f[27]);
+	OFFSET(TASK_THREAD_F28, task_struct, thread.fstate.f[28]);
+	OFFSET(TASK_THREAD_F29, task_struct, thread.fstate.f[29]);
+	OFFSET(TASK_THREAD_F30, task_struct, thread.fstate.f[30]);
+	OFFSET(TASK_THREAD_F31, task_struct, thread.fstate.f[31]);
+	OFFSET(TASK_THREAD_FCSR, task_struct, thread.fstate.fcsr);
+
+	DEFINE(PT_SIZE, sizeof(struct pt_regs));
+	OFFSET(PT_SEPC, pt_regs, sepc);
+	OFFSET(PT_RA, pt_regs, ra);
+	OFFSET(PT_FP, pt_regs, s0);
+	OFFSET(PT_S0, pt_regs, s0);
+	OFFSET(PT_S1, pt_regs, s1);
+	OFFSET(PT_S2, pt_regs, s2);
+	OFFSET(PT_S3, pt_regs, s3);
+	OFFSET(PT_S4, pt_regs, s4);
+	OFFSET(PT_S5, pt_regs, s5);
+	OFFSET(PT_S6, pt_regs, s6);
+	OFFSET(PT_S7, pt_regs, s7);
+	OFFSET(PT_S8, pt_regs, s8);
+	OFFSET(PT_S9, pt_regs, s9);
+	OFFSET(PT_S10, pt_regs, s10);
+	OFFSET(PT_S11, pt_regs, s11);
+	OFFSET(PT_SP, pt_regs, sp);
+	OFFSET(PT_TP, pt_regs, tp);
+	OFFSET(PT_A0, pt_regs, a0);
+	OFFSET(PT_A1, pt_regs, a1);
+	OFFSET(PT_A2, pt_regs, a2);
+	OFFSET(PT_A3, pt_regs, a3);
+	OFFSET(PT_A4, pt_regs, a4);
+	OFFSET(PT_A5, pt_regs, a5);
+	OFFSET(PT_A6, pt_regs, a6);
+	OFFSET(PT_A7, pt_regs, a7);
+	OFFSET(PT_T0, pt_regs, t0);
+	OFFSET(PT_T1, pt_regs, t1);
+	OFFSET(PT_T2, pt_regs, t2);
+	OFFSET(PT_T3, pt_regs, t3);
+	OFFSET(PT_T4, pt_regs, t4);
+	OFFSET(PT_T5, pt_regs, t5);
+	OFFSET(PT_T6, pt_regs, t6);
+	OFFSET(PT_GP, pt_regs, gp);
+	OFFSET(PT_SSTATUS, pt_regs, sstatus);
+	OFFSET(PT_SBADADDR, pt_regs, sbadaddr);
+	OFFSET(PT_SCAUSE, pt_regs, scause);
+
+	/* THREAD_{F,X}* might be larger than an S-type offset can handle, but
+	 * these are used in performance-sensitive assembly so we can't resort
+	 * to loading the long immediate every time.
+	 */
+	DEFINE(TASK_THREAD_RA_RA,
+		  offsetof(struct task_struct, thread.ra)
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_SP_RA,
+		  offsetof(struct task_struct, thread.sp)
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S0_RA,
+		  offsetof(struct task_struct, thread.s[0])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S1_RA,
+		  offsetof(struct task_struct, thread.s[1])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S2_RA,
+		  offsetof(struct task_struct, thread.s[2])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S3_RA,
+		  offsetof(struct task_struct, thread.s[3])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S4_RA,
+		  offsetof(struct task_struct, thread.s[4])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S5_RA,
+		  offsetof(struct task_struct, thread.s[5])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S6_RA,
+		  offsetof(struct task_struct, thread.s[6])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S7_RA,
+		  offsetof(struct task_struct, thread.s[7])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S8_RA,
+		  offsetof(struct task_struct, thread.s[8])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S9_RA,
+		  offsetof(struct task_struct, thread.s[9])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S10_RA,
+		  offsetof(struct task_struct, thread.s[10])
+		- offsetof(struct task_struct, thread.ra)
+	);
+	DEFINE(TASK_THREAD_S11_RA,
+		  offsetof(struct task_struct, thread.s[11])
+		- offsetof(struct task_struct, thread.ra)
+	);
+
+	DEFINE(TASK_THREAD_F0_F0,
+		  offsetof(struct task_struct, thread.fstate.f[0])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F1_F0,
+		  offsetof(struct task_struct, thread.fstate.f[1])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F2_F0,
+		  offsetof(struct task_struct, thread.fstate.f[2])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F3_F0,
+		  offsetof(struct task_struct, thread.fstate.f[3])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F4_F0,
+		  offsetof(struct task_struct, thread.fstate.f[4])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F5_F0,
+		  offsetof(struct task_struct, thread.fstate.f[5])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F6_F0,
+		  offsetof(struct task_struct, thread.fstate.f[6])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F7_F0,
+		  offsetof(struct task_struct, thread.fstate.f[7])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F8_F0,
+		  offsetof(struct task_struct, thread.fstate.f[8])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F9_F0,
+		  offsetof(struct task_struct, thread.fstate.f[9])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F10_F0,
+		  offsetof(struct task_struct, thread.fstate.f[10])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F11_F0,
+		  offsetof(struct task_struct, thread.fstate.f[11])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F12_F0,
+		  offsetof(struct task_struct, thread.fstate.f[12])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F13_F0,
+		  offsetof(struct task_struct, thread.fstate.f[13])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F14_F0,
+		  offsetof(struct task_struct, thread.fstate.f[14])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F15_F0,
+		  offsetof(struct task_struct, thread.fstate.f[15])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F16_F0,
+		  offsetof(struct task_struct, thread.fstate.f[16])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F17_F0,
+		  offsetof(struct task_struct, thread.fstate.f[17])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F18_F0,
+		  offsetof(struct task_struct, thread.fstate.f[18])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F19_F0,
+		  offsetof(struct task_struct, thread.fstate.f[19])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F20_F0,
+		  offsetof(struct task_struct, thread.fstate.f[20])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F21_F0,
+		  offsetof(struct task_struct, thread.fstate.f[21])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F22_F0,
+		  offsetof(struct task_struct, thread.fstate.f[22])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F23_F0,
+		  offsetof(struct task_struct, thread.fstate.f[23])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F24_F0,
+		  offsetof(struct task_struct, thread.fstate.f[24])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F25_F0,
+		  offsetof(struct task_struct, thread.fstate.f[25])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F26_F0,
+		  offsetof(struct task_struct, thread.fstate.f[26])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F27_F0,
+		  offsetof(struct task_struct, thread.fstate.f[27])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F28_F0,
+		  offsetof(struct task_struct, thread.fstate.f[28])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F29_F0,
+		  offsetof(struct task_struct, thread.fstate.f[29])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F30_F0,
+		  offsetof(struct task_struct, thread.fstate.f[30])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_F31_F0,
+		  offsetof(struct task_struct, thread.fstate.f[31])
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+	DEFINE(TASK_THREAD_FCSR_F0,
+		  offsetof(struct task_struct, thread.fstate.fcsr)
+		- offsetof(struct task_struct, thread.fstate.f[0])
+	);
+
+	/* The assembler needs access to THREAD_SIZE as well. */
+	DEFINE(ASM_THREAD_SIZE, THREAD_SIZE);
+
+	/* We allocate a pt_regs on the stack when entering the kernel.  This
+	 * ensures the alignment is sane.
+	 */
+	DEFINE(PT_SIZE_ON_STACK, ALIGN(sizeof(struct pt_regs), STACK_ALIGN));
+}
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
new file mode 100644
index 000000000000..aa4593c2eaef
--- /dev/null
+++ b/arch/riscv/kernel/entry.S
@@ -0,0 +1,436 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+
+#include <asm/asm.h>
+#include <asm/csr.h>
+#include <asm/unistd.h>
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+
+	.text
+	.altmacro
+
+/* Prepares to enter a system call or exception by saving all registers to the
+ * stack.
+ */
+	.macro SAVE_ALL
+	LOCAL _restore_kernel_tpsp
+	LOCAL _save_context
+
+	/* If coming from userspace, preserve the user thread pointer and load
+	   the kernel thread pointer.  If we came from the kernel, sscratch
+	   will contain 0, and we should continue on the current TP. */
+	csrrw tp, sscratch, tp
+	bnez tp, _save_context
+
+_restore_kernel_tpsp:
+	csrr tp, sscratch
+	REG_S sp, TASK_TI_KERNEL_SP(tp)
+_save_context:
+	REG_S sp, TASK_TI_USER_SP(tp)
+	REG_L sp, TASK_TI_KERNEL_SP(tp)
+	addi sp, sp, -(PT_SIZE_ON_STACK)
+	REG_S x1,  PT_RA(sp)
+	REG_S x3,  PT_GP(sp)
+	REG_S x5,  PT_T0(sp)
+	REG_S x6,  PT_T1(sp)
+	REG_S x7,  PT_T2(sp)
+	REG_S x8,  PT_S0(sp)
+	REG_S x9,  PT_S1(sp)
+	REG_S x10, PT_A0(sp)
+	REG_S x11, PT_A1(sp)
+	REG_S x12, PT_A2(sp)
+	REG_S x13, PT_A3(sp)
+	REG_S x14, PT_A4(sp)
+	REG_S x15, PT_A5(sp)
+	REG_S x16, PT_A6(sp)
+	REG_S x17, PT_A7(sp)
+	REG_S x18, PT_S2(sp)
+	REG_S x19, PT_S3(sp)
+	REG_S x20, PT_S4(sp)
+	REG_S x21, PT_S5(sp)
+	REG_S x22, PT_S6(sp)
+	REG_S x23, PT_S7(sp)
+	REG_S x24, PT_S8(sp)
+	REG_S x25, PT_S9(sp)
+	REG_S x26, PT_S10(sp)
+	REG_S x27, PT_S11(sp)
+	REG_S x28, PT_T3(sp)
+	REG_S x29, PT_T4(sp)
+	REG_S x30, PT_T5(sp)
+	REG_S x31, PT_T6(sp)
+
+	/* Disable FPU to detect illegal usage of
+	   floating point in kernel space */
+	li t0, SR_FS
+
+	REG_L s0, TASK_TI_USER_SP(tp)
+	csrrc s1, sstatus, t0
+	csrr s2, sepc
+	csrr s3, sbadaddr
+	csrr s4, scause
+	csrr s5, sscratch
+	REG_S s0, PT_SP(sp)
+	REG_S s1, PT_SSTATUS(sp)
+	REG_S s2, PT_SEPC(sp)
+	REG_S s3, PT_SBADADDR(sp)
+	REG_S s4, PT_SCAUSE(sp)
+	REG_S s5, PT_TP(sp)
+	.endm
+
+/* Prepares to return from a system call or exception by restoring all
+ * registers from the stack.
+ */
+	.macro RESTORE_ALL
+	REG_L a0, PT_SSTATUS(sp)
+	REG_L a2, PT_SEPC(sp)
+	csrw sstatus, a0
+	csrw sepc, a2
+
+	REG_L x1,  PT_RA(sp)
+	REG_L x3,  PT_GP(sp)
+	REG_L x4,  PT_TP(sp)
+	REG_L x5,  PT_T0(sp)
+	REG_L x6,  PT_T1(sp)
+	REG_L x7,  PT_T2(sp)
+	REG_L x8,  PT_S0(sp)
+	REG_L x9,  PT_S1(sp)
+	REG_L x10, PT_A0(sp)
+	REG_L x11, PT_A1(sp)
+	REG_L x12, PT_A2(sp)
+	REG_L x13, PT_A3(sp)
+	REG_L x14, PT_A4(sp)
+	REG_L x15, PT_A5(sp)
+	REG_L x16, PT_A6(sp)
+	REG_L x17, PT_A7(sp)
+	REG_L x18, PT_S2(sp)
+	REG_L x19, PT_S3(sp)
+	REG_L x20, PT_S4(sp)
+	REG_L x21, PT_S5(sp)
+	REG_L x22, PT_S6(sp)
+	REG_L x23, PT_S7(sp)
+	REG_L x24, PT_S8(sp)
+	REG_L x25, PT_S9(sp)
+	REG_L x26, PT_S10(sp)
+	REG_L x27, PT_S11(sp)
+	REG_L x28, PT_T3(sp)
+	REG_L x29, PT_T4(sp)
+	REG_L x30, PT_T5(sp)
+	REG_L x31, PT_T6(sp)
+
+	REG_L x2,  PT_SP(sp)
+	.endm
+
+ENTRY(handle_exception)
+	SAVE_ALL
+
+	/* Set sscratch register to 0, so that if a recursive exception
+	   occurs, the exception vector knows it came from the kernel */
+	csrw sscratch, x0
+
+	la gp, __global_pointer$
+
+	la ra, ret_from_exception
+	/* MSB of cause differentiates between
+	   interrupts and exceptions */
+	bge s4, zero, 1f
+
+	/* Handle interrupts */
+	slli a0, s4, 1
+	srli a0, a0, 1
+	move a1, sp /* pt_regs */
+	tail do_IRQ
+1:
+	/* Handle syscalls */
+	li t0, EXC_SYSCALL
+	beq s4, t0, handle_syscall
+
+	/* Handle other exceptions */
+	slli t0, s4, RISCV_LGPTR
+	la t1, excp_vect_table
+	la t2, excp_vect_table_end
+	move a0, sp /* pt_regs */
+	add t0, t1, t0
+	/* Check if exception code lies within bounds */
+	bgeu t0, t2, 1f
+	REG_L t0, 0(t0)
+	jr t0
+1:
+	tail do_trap_unknown
+
+handle_syscall:
+	/* Advance SEPC to avoid executing the original
+	   scall instruction on sret */
+	addi s2, s2, 0x4
+	REG_S s2, PT_SEPC(sp)
+	/* System calls run with interrupts enabled */
+	csrs sstatus, SR_IE
+	/* Trace syscalls, but only if requested by the user. */
+	REG_L t0, TASK_TI_FLAGS(tp)
+	andi t0, t0, _TIF_SYSCALL_TRACE
+	bnez t0, handle_syscall_trace_enter
+check_syscall_nr:
+	/* Check to make sure we don't jump to a bogus syscall number. */
+	li t0, __NR_syscalls
+	la s0, sys_ni_syscall
+	/* Syscall number held in a7 */
+	bgeu a7, t0, 1f
+	la s0, sys_call_table
+	slli t0, a7, RISCV_LGPTR
+	add s0, s0, t0
+	REG_L s0, 0(s0)
+1:
+	jalr s0
+
+ret_from_syscall:
+	/* Set user a0 to kernel a0 */
+	REG_S a0, PT_A0(sp)
+	/* Trace syscalls, but only if requested by the user. */
+	REG_L t0, TASK_TI_FLAGS(tp)
+	andi t0, t0, _TIF_SYSCALL_TRACE
+	bnez t0, handle_syscall_trace_exit
+
+ret_from_exception:
+	REG_L s0, PT_SSTATUS(sp)
+	csrc sstatus, SR_IE
+	andi s0, s0, SR_PS
+	bnez s0, restore_all
+
+resume_userspace:
+	/* Interrupts must be disabled here so flags are checked atomically */
+	REG_L s0, TASK_TI_FLAGS(tp) /* current_thread_info->flags */
+	andi s1, s0, _TIF_WORK_MASK
+	bnez s1, work_pending
+
+	/* Save unwound kernel stack pointer in thread_info */
+	addi s0, sp, PT_SIZE_ON_STACK
+	REG_S s0, TASK_TI_KERNEL_SP(tp)
+
+	/* Save TP into sscratch, so we can find the kernel data structures
+	 * again. */
+	csrw sscratch, tp
+
+restore_all:
+	RESTORE_ALL
+	sret
+
+work_pending:
+	/* Enter slow path for supplementary processing */
+	la ra, ret_from_exception
+	andi s1, s0, _TIF_NEED_RESCHED
+	bnez s1, work_resched
+work_notifysig:
+	/* Handle pending signals and notify-resume requests */
+	csrs sstatus, SR_IE /* Enable interrupts for do_notify_resume() */
+	move a0, sp /* pt_regs */
+	move a1, s0 /* current_thread_info->flags */
+	tail do_notify_resume
+work_resched:
+	tail schedule
+
+/* Slow paths for ptrace. */
+handle_syscall_trace_enter:
+	move a0, sp
+	call do_syscall_trace_enter
+	REG_L a0, PT_A0(sp)
+	REG_L a1, PT_A1(sp)
+	REG_L a2, PT_A2(sp)
+	REG_L a3, PT_A3(sp)
+	REG_L a4, PT_A4(sp)
+	REG_L a5, PT_A5(sp)
+	REG_L a6, PT_A6(sp)
+	REG_L a7, PT_A7(sp)
+	j check_syscall_nr
+handle_syscall_trace_exit:
+	move a0, sp
+	call do_syscall_trace_exit
+	j ret_from_exception
+
+END(handle_exception)
+
+ENTRY(ret_from_fork)
+	la ra, ret_from_exception
+	tail schedule_tail
+ENDPROC(ret_from_fork)
+
+ENTRY(ret_from_kernel_thread)
+	call schedule_tail
+	/* Call fn(arg) */
+	la ra, ret_from_exception
+	move a0, s1
+	jr s0
+ENDPROC(ret_from_kernel_thread)
+
+
+/*
+ * Integer register context switch
+ * The callee-saved registers must be saved and restored.
+ *
+ *   a0: previous task_struct (must be preserved across the switch)
+ *   a1: next task_struct
+ */
+ENTRY(__switch_to)
+	/* Save context into prev->thread */
+	li    a2,  TASK_THREAD_RA
+	add   a0, a0, a2
+	add   a2, a1, a2
+	REG_S ra,  TASK_THREAD_RA_RA(a0)
+	REG_S sp,  TASK_THREAD_SP_RA(a0)
+	REG_S s0,  TASK_THREAD_S0_RA(a0)
+	REG_S s1,  TASK_THREAD_S1_RA(a0)
+	REG_S s2,  TASK_THREAD_S2_RA(a0)
+	REG_S s3,  TASK_THREAD_S3_RA(a0)
+	REG_S s4,  TASK_THREAD_S4_RA(a0)
+	REG_S s5,  TASK_THREAD_S5_RA(a0)
+	REG_S s6,  TASK_THREAD_S6_RA(a0)
+	REG_S s7,  TASK_THREAD_S7_RA(a0)
+	REG_S s8,  TASK_THREAD_S8_RA(a0)
+	REG_S s9,  TASK_THREAD_S9_RA(a0)
+	REG_S s10, TASK_THREAD_S10_RA(a0)
+	REG_S s11, TASK_THREAD_S11_RA(a0)
+	/* Restore context from next->thread */
+	REG_L ra,  TASK_THREAD_RA_RA(a2)
+	REG_L sp,  TASK_THREAD_SP_RA(a2)
+	REG_L s0,  TASK_THREAD_S0_RA(a2)
+	REG_L s1,  TASK_THREAD_S1_RA(a2)
+	REG_L s2,  TASK_THREAD_S2_RA(a2)
+	REG_L s3,  TASK_THREAD_S3_RA(a2)
+	REG_L s4,  TASK_THREAD_S4_RA(a2)
+	REG_L s5,  TASK_THREAD_S5_RA(a2)
+	REG_L s6,  TASK_THREAD_S6_RA(a2)
+	REG_L s7,  TASK_THREAD_S7_RA(a2)
+	REG_L s8,  TASK_THREAD_S8_RA(a2)
+	REG_L s9,  TASK_THREAD_S9_RA(a2)
+	REG_L s10, TASK_THREAD_S10_RA(a2)
+	REG_L s11, TASK_THREAD_S11_RA(a2)
+#if TASK_TI != 0
+#error "TASK_TI != 0: tp will contain a 'struct thread_info', not a 'struct task_struct' so get_current() won't work."
+	addi tp, a1, TASK_TI
+#else
+	move tp, a1
+#endif
+	ret
+ENDPROC(__switch_to)
+
+ENTRY(__fstate_save)
+	li  a2,  TASK_THREAD_F0
+	add a0, a0, a2
+	li t1, SR_FS
+	csrs sstatus, t1
+	frcsr t0
+	fsd f0,  TASK_THREAD_F0_F0(a0)
+	fsd f1,  TASK_THREAD_F1_F0(a0)
+	fsd f2,  TASK_THREAD_F2_F0(a0)
+	fsd f3,  TASK_THREAD_F3_F0(a0)
+	fsd f4,  TASK_THREAD_F4_F0(a0)
+	fsd f5,  TASK_THREAD_F5_F0(a0)
+	fsd f6,  TASK_THREAD_F6_F0(a0)
+	fsd f7,  TASK_THREAD_F7_F0(a0)
+	fsd f8,  TASK_THREAD_F8_F0(a0)
+	fsd f9,  TASK_THREAD_F9_F0(a0)
+	fsd f10, TASK_THREAD_F10_F0(a0)
+	fsd f11, TASK_THREAD_F11_F0(a0)
+	fsd f12, TASK_THREAD_F12_F0(a0)
+	fsd f13, TASK_THREAD_F13_F0(a0)
+	fsd f14, TASK_THREAD_F14_F0(a0)
+	fsd f15, TASK_THREAD_F15_F0(a0)
+	fsd f16, TASK_THREAD_F16_F0(a0)
+	fsd f17, TASK_THREAD_F17_F0(a0)
+	fsd f18, TASK_THREAD_F18_F0(a0)
+	fsd f19, TASK_THREAD_F19_F0(a0)
+	fsd f20, TASK_THREAD_F20_F0(a0)
+	fsd f21, TASK_THREAD_F21_F0(a0)
+	fsd f22, TASK_THREAD_F22_F0(a0)
+	fsd f23, TASK_THREAD_F23_F0(a0)
+	fsd f24, TASK_THREAD_F24_F0(a0)
+	fsd f25, TASK_THREAD_F25_F0(a0)
+	fsd f26, TASK_THREAD_F26_F0(a0)
+	fsd f27, TASK_THREAD_F27_F0(a0)
+	fsd f28, TASK_THREAD_F28_F0(a0)
+	fsd f29, TASK_THREAD_F29_F0(a0)
+	fsd f30, TASK_THREAD_F30_F0(a0)
+	fsd f31, TASK_THREAD_F31_F0(a0)
+	sw t0, TASK_THREAD_FCSR_F0(a0)
+	csrc sstatus, t1
+	ret
+ENDPROC(__fstate_save)
+
+ENTRY(__fstate_restore)
+	li  a2,  TASK_THREAD_F0
+	add a0, a0, a2
+	li t1, SR_FS
+	lw t0, TASK_THREAD_FCSR_F0(a0)
+	csrs sstatus, t1
+	fld f0,  TASK_THREAD_F0_F0(a0)
+	fld f1,  TASK_THREAD_F1_F0(a0)
+	fld f2,  TASK_THREAD_F2_F0(a0)
+	fld f3,  TASK_THREAD_F3_F0(a0)
+	fld f4,  TASK_THREAD_F4_F0(a0)
+	fld f5,  TASK_THREAD_F5_F0(a0)
+	fld f6,  TASK_THREAD_F6_F0(a0)
+	fld f7,  TASK_THREAD_F7_F0(a0)
+	fld f8,  TASK_THREAD_F8_F0(a0)
+	fld f9,  TASK_THREAD_F9_F0(a0)
+	fld f10, TASK_THREAD_F10_F0(a0)
+	fld f11, TASK_THREAD_F11_F0(a0)
+	fld f12, TASK_THREAD_F12_F0(a0)
+	fld f13, TASK_THREAD_F13_F0(a0)
+	fld f14, TASK_THREAD_F14_F0(a0)
+	fld f15, TASK_THREAD_F15_F0(a0)
+	fld f16, TASK_THREAD_F16_F0(a0)
+	fld f17, TASK_THREAD_F17_F0(a0)
+	fld f18, TASK_THREAD_F18_F0(a0)
+	fld f19, TASK_THREAD_F19_F0(a0)
+	fld f20, TASK_THREAD_F20_F0(a0)
+	fld f21, TASK_THREAD_F21_F0(a0)
+	fld f22, TASK_THREAD_F22_F0(a0)
+	fld f23, TASK_THREAD_F23_F0(a0)
+	fld f24, TASK_THREAD_F24_F0(a0)
+	fld f25, TASK_THREAD_F25_F0(a0)
+	fld f26, TASK_THREAD_F26_F0(a0)
+	fld f27, TASK_THREAD_F27_F0(a0)
+	fld f28, TASK_THREAD_F28_F0(a0)
+	fld f29, TASK_THREAD_F29_F0(a0)
+	fld f30, TASK_THREAD_F30_F0(a0)
+	fld f31, TASK_THREAD_F31_F0(a0)
+	fscsr t0
+	csrc sstatus, t1
+	ret
+ENDPROC(__fstate_restore)
+
+
+	.section ".rodata"
+	/* Exception vector table */
+ENTRY(excp_vect_table)
+	RISCV_PTR do_trap_insn_misaligned
+	RISCV_PTR do_trap_insn_fault
+	RISCV_PTR do_trap_insn_illegal
+	RISCV_PTR do_trap_break
+	RISCV_PTR do_trap_load_misaligned
+	RISCV_PTR do_trap_load_fault
+	RISCV_PTR do_trap_store_misaligned
+	RISCV_PTR do_trap_store_fault
+	RISCV_PTR do_trap_ecall_u /* system call, gets intercepted */
+	RISCV_PTR do_trap_ecall_s
+	RISCV_PTR do_trap_unknown
+	RISCV_PTR do_trap_ecall_m
+	RISCV_PTR do_page_fault   /* instruction page fault */
+	RISCV_PTR do_page_fault   /* load page fault */
+	RISCV_PTR do_trap_unknown
+	RISCV_PTR do_page_fault   /* store page fault */
+excp_vect_table_end:
+END(excp_vect_table)
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
new file mode 100644
index 000000000000..b13d3ea3bf79
--- /dev/null
+++ b/arch/riscv/kernel/process.c
@@ -0,0 +1,131 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/sched/task_stack.h>
+#include <linux/tick.h>
+#include <linux/ptrace.h>
+
+#include <asm/unistd.h>
+#include <asm/uaccess.h>
+#include <asm/processor.h>
+#include <asm/csr.h>
+#include <asm/string.h>
+#include <asm/switch_to.h>
+
+extern asmlinkage void ret_from_fork(void);
+extern asmlinkage void ret_from_kernel_thread(void);
+
+void arch_cpu_idle(void)
+{
+	wait_for_interrupt();
+	local_irq_enable();
+}
+
+void show_regs(struct pt_regs *regs)
+{
+	show_regs_print_info(KERN_DEFAULT);
+
+	printk(KERN_CONT "sepc: " REG_FMT " ra : " REG_FMT " sp : " REG_FMT "\n",
+		regs->sepc, regs->ra, regs->sp);
+	printk(KERN_CONT " gp : " REG_FMT " tp : " REG_FMT " t0 : " REG_FMT "\n",
+		regs->gp, regs->tp, regs->t0);
+	printk(KERN_CONT " t1 : " REG_FMT " t2 : " REG_FMT " s0 : " REG_FMT "\n",
+		regs->t1, regs->t2, regs->s0);
+	printk(KERN_CONT " s1 : " REG_FMT " a0 : " REG_FMT " a1 : " REG_FMT "\n",
+		regs->s1, regs->a0, regs->a1);
+	printk(KERN_CONT " a2 : " REG_FMT " a3 : " REG_FMT " a4 : " REG_FMT "\n",
+		regs->a2, regs->a3, regs->a4);
+	printk(KERN_CONT " a5 : " REG_FMT " a6 : " REG_FMT " a7 : " REG_FMT "\n",
+		regs->a5, regs->a6, regs->a7);
+	printk(KERN_CONT " s2 : " REG_FMT " s3 : " REG_FMT " s4 : " REG_FMT "\n",
+		regs->s2, regs->s3, regs->s4);
+	printk(KERN_CONT " s5 : " REG_FMT " s6 : " REG_FMT " s7 : " REG_FMT "\n",
+		regs->s5, regs->s6, regs->s7);
+	printk(KERN_CONT " s8 : " REG_FMT " s9 : " REG_FMT " s10: " REG_FMT "\n",
+		regs->s8, regs->s9, regs->s10);
+	printk(KERN_CONT " s11: " REG_FMT " t3 : " REG_FMT " t4 : " REG_FMT "\n",
+		regs->s11, regs->t3, regs->t4);
+	printk(KERN_CONT " t5 : " REG_FMT " t6 : " REG_FMT "\n",
+		regs->t5, regs->t6);
+
+	printk(KERN_CONT "sstatus: " REG_FMT " sbadaddr: " REG_FMT " scause: " REG_FMT "\n",
+		regs->sstatus, regs->sbadaddr, regs->scause);
+}
+
+void start_thread(struct pt_regs *regs, unsigned long pc,
+	unsigned long sp)
+{
+	regs->sstatus = SR_PIE /* User mode, irqs on */ | SR_FS_INITIAL;
+#ifndef CONFIG_RV_PUM
+	regs->sstatus |= SR_SUM;
+#endif
+	regs->sepc = pc;
+	regs->sp = sp;
+	set_fs(USER_DS);
+}
+
+void flush_thread(void)
+{
+	/* Reset FPU context
+	 *	frm: round to nearest, ties to even (IEEE default)
+	 *	fflags: accrued exceptions cleared
+	 */
+	memset(&current->thread.fstate, 0, sizeof(current->thread.fstate));
+}
+
+int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+{
+	fstate_save(src, task_pt_regs(src));
+	*dst = *src;
+	return 0;
+}
+
+int copy_thread(unsigned long clone_flags, unsigned long usp,
+	unsigned long arg, struct task_struct *p)
+{
+	struct pt_regs *childregs = task_pt_regs(p);
+
+	/* p->thread holds context to be restored by __switch_to() */
+	if (unlikely(p->flags & PF_KTHREAD)) {
+		/* Kernel thread */
+		const register unsigned long gp __asm__ ("gp");
+		memset(childregs, 0, sizeof(struct pt_regs));
+		childregs->gp = gp;
+		childregs->sstatus = SR_PS | SR_PIE; /* Supervisor, irqs on */
+
+		p->thread.ra = (unsigned long)ret_from_kernel_thread;
+		p->thread.s[0] = usp; /* fn */
+		p->thread.s[1] = arg;
+	} else {
+		*childregs = *(current_pt_regs());
+		if (usp) /* User fork */
+			childregs->sp = usp;
+		if (clone_flags & CLONE_SETTLS)
+			childregs->tp = childregs->a5;
+		childregs->a0 = 0; /* Return value of fork() */
+		p->thread.ra = (unsigned long)ret_from_fork;
+	}
+	p->thread.sp = (unsigned long)childregs; /* kernel sp */
+	return 0;
+}
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 6/9] RISC-V: Device, timer, IRQs, and the SBI
  2017-06-28 18:55     ` Palmer Dabbelt
@ 2017-06-28 18:55       ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This patch contains code that interfaces with devices that are mandated
by the RISC-V supervisor specification and that don't have explicit
drivers anywhere else in the tree.  This includes the statically defined
interrupts, the CSR-mapped timer, and virtualized SBI devices.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/delay.h       |  28 +++++++++
 arch/riscv/include/asm/device.h      |  27 +++++++++
 arch/riscv/include/asm/dma-mapping.h |  41 ++++++++++++++
 arch/riscv/include/asm/irq.h         |  31 ++++++++++
 arch/riscv/include/asm/irqflags.h    |  63 +++++++++++++++++++++
 arch/riscv/include/asm/pci.h         |  50 ++++++++++++++++
 arch/riscv/include/asm/sbi.h         | 100 ++++++++++++++++++++++++++++++++
 arch/riscv/include/asm/timex.h       |  63 +++++++++++++++++++++
 arch/riscv/lib/delay.c               | 107 +++++++++++++++++++++++++++++++++++
 arch/riscv/mm/ioremap.c              |  92 ++++++++++++++++++++++++++++++
 10 files changed, 602 insertions(+)
 create mode 100644 arch/riscv/include/asm/delay.h
 create mode 100644 arch/riscv/include/asm/device.h
 create mode 100644 arch/riscv/include/asm/dma-mapping.h
 create mode 100644 arch/riscv/include/asm/irq.h
 create mode 100644 arch/riscv/include/asm/irqflags.h
 create mode 100644 arch/riscv/include/asm/pci.h
 create mode 100644 arch/riscv/include/asm/sbi.h
 create mode 100644 arch/riscv/include/asm/timex.h
 create mode 100644 arch/riscv/lib/delay.c
 create mode 100644 arch/riscv/mm/ioremap.c
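
One remark on the SBI wrappers added below (illustration only, nothing
here is part of the patch): the calling convention used here puts the
call number in a7 and up to three arguments in a0-a2, with the result
coming back in a0, which is what the SBI_CALL() macro in <asm/sbi.h>
sets up before issuing the ecall.  The stand-alone sketch below makes
that register assignment visible by replacing the ecall with a printf;
mock_sbi_call() is invented for this sketch, while the call numbers
match the header:

#include <stdio.h>
#include <stdint.h>

#define SBI_SET_TIMER           0
#define SBI_CONSOLE_PUTCHAR     1

/* mock of SBI_CALL(): a7 = call number, a0..a2 = arguments, a0 = result */
static uintptr_t mock_sbi_call(uintptr_t which, uintptr_t arg0,
                               uintptr_t arg1, uintptr_t arg2)
{
        printf("ecall: a7=%lu a0=%#lx a1=%#lx a2=%#lx\n",
               (unsigned long)which, (unsigned long)arg0,
               (unsigned long)arg1, (unsigned long)arg2);
        return 0;       /* the firmware's return value would come back in a0 */
}

int main(void)
{
        uint64_t deadline = 0x123456789abcdefULL;

        /* sbi_console_putchar('A'): one character per ecall */
        mock_sbi_call(SBI_CONSOLE_PUTCHAR, 'A', 0, 0);

        /* sbi_set_timer(): on RV32 the 64-bit deadline is split across
         * a0 (low half) and a1 (high half); on RV64 it fits in a0 */
        mock_sbi_call(SBI_SET_TIMER, (uint32_t)deadline, deadline >> 32, 0);

        return 0;
}

That RV32 split is why sbi_set_timer() has the __riscv_xlen == 32
special case, and it is the only wrapper that needs one.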

diff --git a/arch/riscv/include/asm/delay.h b/arch/riscv/include/asm/delay.h
new file mode 100644
index 000000000000..cbb0c9eb96cb
--- /dev/null
+++ b/arch/riscv/include/asm/delay.h
@@ -0,0 +1,28 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2016 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_DELAY_H
+#define _ASM_RISCV_DELAY_H
+
+extern unsigned long riscv_timebase;
+
+#define udelay udelay
+extern void udelay(unsigned long usecs);
+
+#define ndelay ndelay
+extern void ndelay(unsigned long nsecs);
+
+extern void __delay(unsigned long cycles);
+
+#endif /* _ASM_RISCV_DELAY_H */
diff --git a/arch/riscv/include/asm/device.h b/arch/riscv/include/asm/device.h
new file mode 100644
index 000000000000..28975e528d2f
--- /dev/null
+++ b/arch/riscv/include/asm/device.h
@@ -0,0 +1,27 @@
+/*
+ * Copyright (C) 2016 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef _ASM_RISCV_DEVICE_H
+#define _ASM_RISCV_DEVICE_H
+
+#include <linux/sysfs.h>
+
+struct dev_archdata {
+	struct dma_map_ops *dma_ops;
+};
+
+struct pdev_archdata {
+};
+
+#endif /* _ASM_RISCV_DEVICE_H */
diff --git a/arch/riscv/include/asm/dma-mapping.h b/arch/riscv/include/asm/dma-mapping.h
new file mode 100644
index 000000000000..9485a58e839e
--- /dev/null
+++ b/arch/riscv/include/asm/dma-mapping.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright (C) 2003-2004 Hewlett-Packard Co
+ *	David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2016 SiFive, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_RISCV_DMA_MAPPING_H
+#define __ASM_RISCV_DMA_MAPPING_H
+
+#ifdef __KERNEL__
+
+/* Use ops->dma_mapping_error (if it exists) or assume success */
+// #undef DMA_ERROR_CODE
+
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
+{
+	return &dma_noop_ops;
+}
+
+static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
+{
+	if (!dev->dma_mask)
+		return false;
+
+	return addr + size - 1 <= *dev->dma_mask;
+}
+
+#endif	/* __KERNEL__ */
+#endif	/* __ASM_RISCV_DMA_MAPPING_H */
diff --git a/arch/riscv/include/asm/irq.h b/arch/riscv/include/asm/irq.h
new file mode 100644
index 000000000000..e64c61e89f76
--- /dev/null
+++ b/arch/riscv/include/asm/irq.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_IRQ_H
+#define _ASM_RISCV_IRQ_H
+
+#define NR_IRQS         0
+
+#define INTERRUPT_CAUSE_SOFTWARE    1
+#define INTERRUPT_CAUSE_TIMER       5
+#define INTERRUPT_CAUSE_EXTERNAL    9
+
+void riscv_timer_interrupt(void);
+
+#include <asm-generic/irq.h>
+
+/* The value of csr sie before init_traps runs (core is up) */
+DECLARE_PER_CPU(atomic_long_t, riscv_early_sie);
+
+#endif /* _ASM_RISCV_IRQ_H */
diff --git a/arch/riscv/include/asm/irqflags.h b/arch/riscv/include/asm/irqflags.h
new file mode 100644
index 000000000000..6fdc860d7f84
--- /dev/null
+++ b/arch/riscv/include/asm/irqflags.h
@@ -0,0 +1,63 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef _ASM_RISCV_IRQFLAGS_H
+#define _ASM_RISCV_IRQFLAGS_H
+
+#include <asm/processor.h>
+#include <asm/csr.h>
+
+/* read interrupt enabled status */
+static inline unsigned long arch_local_save_flags(void)
+{
+	return csr_read(sstatus);
+}
+
+/* unconditionally enable interrupts */
+static inline void arch_local_irq_enable(void)
+{
+	csr_set(sstatus, SR_IE);
+}
+
+/* unconditionally disable interrupts */
+static inline void arch_local_irq_disable(void)
+{
+	csr_clear(sstatus, SR_IE);
+}
+
+/* get status and disable interrupts */
+static inline unsigned long arch_local_irq_save(void)
+{
+	return csr_read_clear(sstatus, SR_IE);
+}
+
+/* test flags */
+static inline int arch_irqs_disabled_flags(unsigned long flags)
+{
+	return !(flags & SR_IE);
+}
+
+/* test hardware interrupt enable bit */
+static inline int arch_irqs_disabled(void)
+{
+	return arch_irqs_disabled_flags(arch_local_save_flags());
+}
+
+/* set interrupt enabled status */
+static inline void arch_local_irq_restore(unsigned long flags)
+{
+	csr_set(sstatus, flags & SR_IE);
+}
+
+#endif /* _ASM_RISCV_IRQFLAGS_H */
diff --git a/arch/riscv/include/asm/pci.h b/arch/riscv/include/asm/pci.h
new file mode 100644
index 000000000000..c829a4ce8d25
--- /dev/null
+++ b/arch/riscv/include/asm/pci.h
@@ -0,0 +1,50 @@
+/*
+ * Copyright (C) 2016 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef __ASM_RISCV_PCI_H
+#define __ASM_RISCV_PCI_H
+#ifdef __KERNEL__
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/io.h>
+
+#define PCIBIOS_MIN_IO		0
+#define PCIBIOS_MIN_MEM		0
+
+/* RISC-V shim does not initialize PCI bus */
+#define pcibios_assign_all_busses() 1
+
+/* RISC-V TileLink and PCIe share the same address space */
+#define PCI_DMA_BUS_IS_PHYS 1
+
+extern int isa_dma_bridge_buggy;
+
+#ifdef CONFIG_PCI
+static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
+{
+	/* no legacy IRQ on risc-v */
+	return -ENODEV;
+}
+
+static inline int pci_proc_domain(struct pci_bus *bus)
+{
+	/* always show the domain in /proc */
+	return 1;
+}
+#endif  /* CONFIG_PCI */
+
+#endif  /* __KERNEL__ */
+#endif  /* __ASM_RISCV_PCI_H */
diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
new file mode 100644
index 000000000000..b6bb10b92fe2
--- /dev/null
+++ b/arch/riscv/include/asm/sbi.h
@@ -0,0 +1,100 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SBI_H
+#define _ASM_RISCV_SBI_H
+
+#include <linux/types.h>
+
+#define SBI_SET_TIMER 0
+#define SBI_CONSOLE_PUTCHAR 1
+#define SBI_CONSOLE_GETCHAR 2
+#define SBI_CLEAR_IPI 3
+#define SBI_SEND_IPI 4
+#define SBI_REMOTE_FENCE_I 5
+#define SBI_REMOTE_SFENCE_VMA 6
+#define SBI_REMOTE_SFENCE_VMA_ASID 7
+#define SBI_SHUTDOWN 8
+
+#define SBI_CALL(which, arg0, arg1, arg2) ({			\
+	register uintptr_t a0 asm ("a0") = (uintptr_t)(arg0);	\
+	register uintptr_t a1 asm ("a1") = (uintptr_t)(arg1);	\
+	register uintptr_t a2 asm ("a2") = (uintptr_t)(arg2);	\
+	register uintptr_t a7 asm ("a7") = (uintptr_t)(which);	\
+	asm volatile ("ecall"					\
+		      : "+r" (a0)				\
+		      : "r" (a1), "r" (a2), "r" (a7)		\
+		      : "memory");				\
+	a0;							\
+})
+
+/* Lazy implementations until SBI is finalized */
+#define SBI_CALL_0(which) SBI_CALL(which, 0, 0, 0)
+#define SBI_CALL_1(which, arg0) SBI_CALL(which, arg0, 0, 0)
+#define SBI_CALL_2(which, arg0, arg1) SBI_CALL(which, arg0, arg1, 0)
+
+static inline void sbi_console_putchar(int ch)
+{
+	SBI_CALL_1(SBI_CONSOLE_PUTCHAR, ch);
+}
+
+static inline int sbi_console_getchar(void)
+{
+	return SBI_CALL_0(SBI_CONSOLE_GETCHAR);
+}
+
+static inline void sbi_set_timer(uint64_t stime_value)
+{
+#if __riscv_xlen == 32
+	SBI_CALL_2(SBI_SET_TIMER, stime_value, stime_value >> 32);
+#else
+	SBI_CALL_1(SBI_SET_TIMER, stime_value);
+#endif
+}
+
+static inline void sbi_shutdown(void)
+{
+	SBI_CALL_0(SBI_SHUTDOWN);
+}
+
+static inline void sbi_clear_ipi(void)
+{
+	SBI_CALL_0(SBI_CLEAR_IPI);
+}
+
+static inline void sbi_send_ipi(const unsigned long *hart_mask)
+{
+	SBI_CALL_1(SBI_SEND_IPI, hart_mask);
+}
+
+static inline void sbi_remote_fence_i(const unsigned long *hart_mask)
+{
+	SBI_CALL_1(SBI_REMOTE_FENCE_I, hart_mask);
+}
+
+static inline void sbi_remote_sfence_vma(const unsigned long *hart_mask,
+					 unsigned long start,
+					 unsigned long size)
+{
+	SBI_CALL_1(SBI_REMOTE_SFENCE_VMA, hart_mask);
+}
+
+static inline void sbi_remote_sfence_vma_asid(const unsigned long *hart_mask,
+					      unsigned long start,
+					      unsigned long size,
+					      unsigned long asid)
+{
+	SBI_CALL_1(SBI_REMOTE_SFENCE_VMA_ASID, hart_mask);
+}
+
+#endif
diff --git a/arch/riscv/include/asm/timex.h b/arch/riscv/include/asm/timex.h
new file mode 100644
index 000000000000..199f71108b99
--- /dev/null
+++ b/arch/riscv/include/asm/timex.h
@@ -0,0 +1,63 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_TIMEX_H
+#define _ASM_RISCV_TIMEX_H
+
+#include <asm/param.h>
+
+#ifdef __KERNEL__
+
+typedef unsigned long cycles_t;
+
+static inline cycles_t get_cycles(void)
+{
+	cycles_t n;
+
+	__asm__ __volatile__ (
+		"rdtime %0"
+		: "=r" (n));
+	return n;
+}
+
+#ifdef CONFIG_64BIT
+static inline uint64_t get_cycles64(void)
+{
+	return get_cycles();
+}
+#else
+static inline uint64_t get_cycles64(void)
+{
+	u32 lo, hi, tmp;
+	__asm__ __volatile__ (
+		"1:\n"
+		"rdtimeh %0\n"
+		"rdtime %1\n"
+		"rdtimeh %2\n"
+		"bne %0, %2, 1b"
+		: "=&r" (hi), "=&r" (lo), "=&r" (tmp));
+	return ((u64)hi << 32) | lo;
+}
+#endif
+
+#define ARCH_HAS_READ_CURRENT_TIMER
+
+static inline int read_current_timer(unsigned long *timer_val)
+{
+	*timer_val = get_cycles();
+	return 0;
+}
+
+#endif
+
+#endif /* _ASM_RISCV_TIMEX_H */
diff --git a/arch/riscv/lib/delay.c b/arch/riscv/lib/delay.c
new file mode 100644
index 000000000000..55c7d9c9986a
--- /dev/null
+++ b/arch/riscv/lib/delay.c
@@ -0,0 +1,107 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/delay.h>
+#include <linux/param.h>
+#include <linux/timex.h>
+#include <linux/export.h>
+
+/*
+ * This is copied from arch/arm/include/asm/delay.h
+ *
+ * Loop (or tick) based delay:
+ *
+ * loops = loops_per_jiffy * jiffies_per_sec * delay_us / us_per_sec
+ *
+ * where:
+ *
+ * jiffies_per_sec = HZ
+ * us_per_sec = 1000000
+ *
+ * Therefore the constant part is HZ / 1000000 which is a small
+ * fractional number. To make this usable with integer math, we
+ * scale up this constant by 2^31, perform the actual multiplication,
+ * and scale the result back down by 2^31 with a simple shift:
+ *
+ * loops = (loops_per_jiffy * delay_us * UDELAY_MULT) >> 31
+ *
+ * where:
+ *
+ * UDELAY_MULT = 2^31 * HZ / 1000000
+ *             = (2^31 / 1000000) * HZ
+ *             = 2147.483648 * HZ
+ *             = 2147 * HZ + 483648 * HZ / 1000000
+ *
+ * 31 is the biggest scale shift value that won't overflow 32 bits for
+ * delay_us * UDELAY_MULT assuming HZ <= 1000 and delay_us <= 2000.
+ */
+#define MAX_UDELAY_US	2000
+#define MAX_UDELAY_HZ	1000
+#define UDELAY_MULT	(2147UL * HZ + 483648UL * HZ / 1000000UL)
+#define UDELAY_SHIFT	31
+
+#if HZ > MAX_UDELAY_HZ
+#error "HZ > MAX_UDELAY_HZ"
+#endif
+
+/* RISC-V supports both UDELAY and NDELAY.  This is largely the same as above,
+ * but with different constants.  I added 10 bits to the shift to get this, but
+ * the result is that I need a 64-bit multiply, which is slow on 32-bit
+ * platforms.
+ *
+ * NDELAY_MULT = 2^41 * HZ / 1000000000
+ *             = (2^41 / 1000000000) * HZ
+ *             = 2199.02325555 * HZ
+ *             = 2199 * HZ + 23255550 * HZ / 1000000000
+ *
+ * The maximum here is to avoid 64-bit overflow, but it isn't checked as it
+ * won't happen.
+ */
+#define MAX_NDELAY_NS   (1ULL << 42)
+#define MAX_NDELAY_HZ	MAX_UDELAY_HZ
+#define NDELAY_MULT	((unsigned long long)(2199ULL * HZ + 23255550ULL * HZ / 1000000000ULL))
+#define NDELAY_SHIFT	41
+
+#if HZ > MAX_NDELAY_HZ
+#error "HZ > MAX_NDELAY_HZ"
+#endif
+
+void __delay(unsigned long cycles)
+{
+	u64 t0 = get_cycles();
+
+	while ((unsigned long)(get_cycles() - t0) < cycles)
+		cpu_relax();
+}
+
+void udelay(unsigned long usecs)
+{
+	unsigned long ucycles = usecs * lpj_fine * UDELAY_MULT;
+
+	if (unlikely(usecs > MAX_UDELAY_US)) {
+		__delay((u64)usecs * riscv_timebase / 1000000ULL);
+		return;
+	}
+
+	__delay(ucycles >> UDELAY_SHIFT);
+}
+EXPORT_SYMBOL(udelay);
+
+void ndelay(unsigned long nsecs)
+{
+	/* This doesn't bother checking for overflow, as it won't happen:
+	 * that would require over an hour of delay. */
+	unsigned long long ncycles = nsecs * lpj_fine * NDELAY_MULT;
+	__delay(ncycles >> NDELAY_SHIFT);
+}
+EXPORT_SYMBOL(ndelay);
diff --git a/arch/riscv/mm/ioremap.c b/arch/riscv/mm/ioremap.c
new file mode 100644
index 000000000000..e99194a4077e
--- /dev/null
+++ b/arch/riscv/mm/ioremap.c
@@ -0,0 +1,92 @@
+/*
+ * (C) Copyright 1995 1996 Linus Torvalds
+ * (C) Copyright 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/export.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <linux/io.h>
+
+#include <asm/pgtable.h>
+
+/*
+ * Remap an arbitrary physical address space into the kernel virtual
+ * address space. Needed when the kernel wants to access high addresses
+ * directly.
+ *
+ * NOTE! We need to allow non-page-aligned mappings too: we will obviously
+ * have to convert them into an offset in a page-aligned mapping, but the
+ * caller shouldn't need to know that small detail.
+ */
+static void __iomem *__ioremap_caller(phys_addr_t addr, size_t size,
+	pgprot_t prot, void *caller)
+{
+	phys_addr_t last_addr;
+	unsigned long offset, vaddr;
+	struct vm_struct *area;
+
+	/* Disallow wrap-around or zero size */
+	last_addr = addr + size - 1;
+	if (!size || last_addr < addr)
+		return NULL;
+
+	/* Page-align mappings */
+	offset = addr & (~PAGE_MASK);
+	addr &= PAGE_MASK;
+	size = PAGE_ALIGN(size + offset);
+
+	area = get_vm_area_caller(size, VM_IOREMAP, caller);
+	if (!area)
+		return NULL;
+	vaddr = (unsigned long)area->addr;
+
+	if (ioremap_page_range(vaddr, vaddr + size, addr, prot)) {
+		free_vm_area(area);
+		return NULL;
+	}
+
+	return (void __iomem *)(vaddr + offset);
+}
+
+/*
+ * ioremap     -   map bus memory into CPU space
+ * @offset:    bus address of the memory
+ * @size:      size of the resource to map
+ *
+ * ioremap performs a platform specific sequence of operations to
+ * make bus memory CPU accessible via the readb/readw/readl/writeb/
+ * writew/writel functions and the other mmio helpers. The returned
+ * address is not guaranteed to be usable directly as a virtual
+ * address.
+ *
+ * Must be freed with iounmap.
+ */
+void __iomem *ioremap(phys_addr_t offset, unsigned long size)
+{
+	return __ioremap_caller(offset, size, PAGE_KERNEL,
+		__builtin_return_address(0));
+}
+EXPORT_SYMBOL(ioremap);
+
+
+/**
+ * iounmap - Free an I/O remapping
+ * @addr: virtual address from ioremap_*
+ *
+ * Caller must ensure there is only one unmapping for the same pointer.
+ */
+void iounmap(void __iomem *addr)
+{
+	vunmap((void *)((unsigned long)addr & PAGE_MASK));
+}
+EXPORT_SYMBOL(iounmap);
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 6/9] RISC-V: Device, timer, IRQs, and the SBI
@ 2017-06-28 18:55       ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This patch contains code that interfaces with devices that are mandated
by the RISC-V supervisor specification and that don't have explicit
drivers anywhere else in the tree.  This includes the statically defined
interrupts, the CSR-mapped timer, and virtualized SBI devices.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/delay.h       |  28 +++++++++
 arch/riscv/include/asm/device.h      |  27 +++++++++
 arch/riscv/include/asm/dma-mapping.h |  41 ++++++++++++++
 arch/riscv/include/asm/irq.h         |  31 ++++++++++
 arch/riscv/include/asm/irqflags.h    |  63 +++++++++++++++++++++
 arch/riscv/include/asm/pci.h         |  50 ++++++++++++++++
 arch/riscv/include/asm/sbi.h         | 100 ++++++++++++++++++++++++++++++++
 arch/riscv/include/asm/timex.h       |  63 +++++++++++++++++++++
 arch/riscv/lib/delay.c               | 107 +++++++++++++++++++++++++++++++++++
 arch/riscv/mm/ioremap.c              |  92 ++++++++++++++++++++++++++++++
 10 files changed, 602 insertions(+)
 create mode 100644 arch/riscv/include/asm/delay.h
 create mode 100644 arch/riscv/include/asm/device.h
 create mode 100644 arch/riscv/include/asm/dma-mapping.h
 create mode 100644 arch/riscv/include/asm/irq.h
 create mode 100644 arch/riscv/include/asm/irqflags.h
 create mode 100644 arch/riscv/include/asm/pci.h
 create mode 100644 arch/riscv/include/asm/sbi.h
 create mode 100644 arch/riscv/include/asm/timex.h
 create mode 100644 arch/riscv/lib/delay.c
 create mode 100644 arch/riscv/mm/ioremap.c
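
As a quick note for reviewers (illustration only, not part of the patch):
each wrapper in <asm/sbi.h> below expands to an "ecall" with the call number
in a7, up to three arguments in a0-a2, and the return value in a0, so SBI
calls read like ordinary C function calls.  A minimal sketch using only
helpers added by this patch; sbi_puts() and sbi_poweroff_example() are
made-up names:

#include <asm/sbi.h>

/* Illustrative sketch only: sbi_puts() is a made-up helper, not part of
 * this patch.  It prints a string one character at a time through the
 * SBI console. */
static void sbi_puts(const char *s)
{
	while (*s)
		sbi_console_putchar(*s++);
}

/* Made-up example: log a message, then ask the SBI to shut down.
 * sbi_shutdown() does not return. */
static void sbi_poweroff_example(void)
{
	sbi_puts("shutdown requested via the SBI\n");
	sbi_shutdown();
}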

diff --git a/arch/riscv/include/asm/delay.h b/arch/riscv/include/asm/delay.h
new file mode 100644
index 000000000000..cbb0c9eb96cb
--- /dev/null
+++ b/arch/riscv/include/asm/delay.h
@@ -0,0 +1,28 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2016 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_DELAY_H
+#define _ASM_RISCV_DELAY_H
+
+extern unsigned long riscv_timebase;
+
+#define udelay udelay
+extern void udelay(unsigned long usecs);
+
+#define ndelay ndelay
+extern void ndelay(unsigned long nsecs);
+
+extern void __delay(unsigned long cycles);
+
+#endif /* _ASM_RISCV_DELAY_H */
diff --git a/arch/riscv/include/asm/device.h b/arch/riscv/include/asm/device.h
new file mode 100644
index 000000000000..28975e528d2f
--- /dev/null
+++ b/arch/riscv/include/asm/device.h
@@ -0,0 +1,27 @@
+/*
+ * Copyright (C) 2016 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef _ASM_RISCV_DEVICE_H
+#define _ASM_RISCV_DEVICE_H
+
+#include <linux/sysfs.h>
+
+struct dev_archdata {
+	struct dma_map_ops *dma_ops;
+};
+
+struct pdev_archdata {
+};
+
+#endif /* _ASM_RISCV_DEVICE_H */
diff --git a/arch/riscv/include/asm/dma-mapping.h b/arch/riscv/include/asm/dma-mapping.h
new file mode 100644
index 000000000000..9485a58e839e
--- /dev/null
+++ b/arch/riscv/include/asm/dma-mapping.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright (C) 2003-2004 Hewlett-Packard Co
+ *	David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2016 SiFive, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_RISCV_DMA_MAPPING_H
+#define __ASM_RISCV_DMA_MAPPING_H
+
+#ifdef __KERNEL__
+
+/* Use ops->dma_mapping_error (if it exists) or assume success */
+// #undef DMA_ERROR_CODE
+
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
+{
+	return &dma_noop_ops;
+}
+
+static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
+{
+	if (!dev->dma_mask)
+		return false;
+
+	return addr + size - 1 <= *dev->dma_mask;
+}
+
+#endif	/* __KERNEL__ */
+#endif	/* __ASM_RISCV_DMA_MAPPING_H */
diff --git a/arch/riscv/include/asm/irq.h b/arch/riscv/include/asm/irq.h
new file mode 100644
index 000000000000..e64c61e89f76
--- /dev/null
+++ b/arch/riscv/include/asm/irq.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_IRQ_H
+#define _ASM_RISCV_IRQ_H
+
+#define NR_IRQS         0
+
+#define INTERRUPT_CAUSE_SOFTWARE    1
+#define INTERRUPT_CAUSE_TIMER       5
+#define INTERRUPT_CAUSE_EXTERNAL    9
+
+void riscv_timer_interrupt(void);
+
+#include <asm-generic/irq.h>
+
+/* The value of csr sie before init_traps runs (core is up) */
+DECLARE_PER_CPU(atomic_long_t, riscv_early_sie);
+
+#endif /* _ASM_RISCV_IRQ_H */
diff --git a/arch/riscv/include/asm/irqflags.h b/arch/riscv/include/asm/irqflags.h
new file mode 100644
index 000000000000..6fdc860d7f84
--- /dev/null
+++ b/arch/riscv/include/asm/irqflags.h
@@ -0,0 +1,63 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef _ASM_RISCV_IRQFLAGS_H
+#define _ASM_RISCV_IRQFLAGS_H
+
+#include <asm/processor.h>
+#include <asm/csr.h>
+
+/* read interrupt enabled status */
+static inline unsigned long arch_local_save_flags(void)
+{
+	return csr_read(sstatus);
+}
+
+/* unconditionally enable interrupts */
+static inline void arch_local_irq_enable(void)
+{
+	csr_set(sstatus, SR_IE);
+}
+
+/* unconditionally disable interrupts */
+static inline void arch_local_irq_disable(void)
+{
+	csr_clear(sstatus, SR_IE);
+}
+
+/* get status and disable interrupts */
+static inline unsigned long arch_local_irq_save(void)
+{
+	return csr_read_clear(sstatus, SR_IE);
+}
+
+/* test flags */
+static inline int arch_irqs_disabled_flags(unsigned long flags)
+{
+	return !(flags & SR_IE);
+}
+
+/* test hardware interrupt enable bit */
+static inline int arch_irqs_disabled(void)
+{
+	return arch_irqs_disabled_flags(arch_local_save_flags());
+}
+
+/* set interrupt enabled status */
+static inline void arch_local_irq_restore(unsigned long flags)
+{
+	csr_set(sstatus, flags & SR_IE);
+}
+
+#endif /* _ASM_RISCV_IRQFLAGS_H */
diff --git a/arch/riscv/include/asm/pci.h b/arch/riscv/include/asm/pci.h
new file mode 100644
index 000000000000..c829a4ce8d25
--- /dev/null
+++ b/arch/riscv/include/asm/pci.h
@@ -0,0 +1,50 @@
+/*
+ * Copyright (C) 2016 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef __ASM_RISCV_PCI_H
+#define __ASM_RISCV_PCI_H
+#ifdef __KERNEL__
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/io.h>
+
+#define PCIBIOS_MIN_IO		0
+#define PCIBIOS_MIN_MEM		0
+
+/* RISC-V shim does not initialize PCI bus */
+#define pcibios_assign_all_busses() 1
+
+/* RISC-V TileLink and PCIe share the same address space */
+#define PCI_DMA_BUS_IS_PHYS 1
+
+extern int isa_dma_bridge_buggy;
+
+#ifdef CONFIG_PCI
+static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
+{
+	/* no legacy IRQ on risc-v */
+	return -ENODEV;
+}
+
+static inline int pci_proc_domain(struct pci_bus *bus)
+{
+	/* always show the domain in /proc */
+	return 1;
+}
+#endif  /* CONFIG_PCI */
+
+#endif  /* __KERNEL__ */
+#endif  /* __ASM_RISCV_PCI_H */
diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
new file mode 100644
index 000000000000..b6bb10b92fe2
--- /dev/null
+++ b/arch/riscv/include/asm/sbi.h
@@ -0,0 +1,100 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SBI_H
+#define _ASM_RISCV_SBI_H
+
+#include <linux/types.h>
+
+#define SBI_SET_TIMER 0
+#define SBI_CONSOLE_PUTCHAR 1
+#define SBI_CONSOLE_GETCHAR 2
+#define SBI_CLEAR_IPI 3
+#define SBI_SEND_IPI 4
+#define SBI_REMOTE_FENCE_I 5
+#define SBI_REMOTE_SFENCE_VMA 6
+#define SBI_REMOTE_SFENCE_VMA_ASID 7
+#define SBI_SHUTDOWN 8
+
+#define SBI_CALL(which, arg0, arg1, arg2) ({			\
+	register uintptr_t a0 asm ("a0") = (uintptr_t)(arg0);	\
+	register uintptr_t a1 asm ("a1") = (uintptr_t)(arg1);	\
+	register uintptr_t a2 asm ("a2") = (uintptr_t)(arg2);	\
+	register uintptr_t a7 asm ("a7") = (uintptr_t)(which);	\
+	asm volatile ("ecall"					\
+		      : "+r" (a0)				\
+		      : "r" (a1), "r" (a2), "r" (a7)		\
+		      : "memory");				\
+	a0;							\
+})
+
+/* Lazy implementations until SBI is finalized */
+#define SBI_CALL_0(which) SBI_CALL(which, 0, 0, 0)
+#define SBI_CALL_1(which, arg0) SBI_CALL(which, arg0, 0, 0)
+#define SBI_CALL_2(which, arg0, arg1) SBI_CALL(which, arg0, arg1, 0)
+
+static inline void sbi_console_putchar(int ch)
+{
+	SBI_CALL_1(SBI_CONSOLE_PUTCHAR, ch);
+}
+
+static inline int sbi_console_getchar(void)
+{
+	return SBI_CALL_0(SBI_CONSOLE_GETCHAR);
+}
+
+static inline void sbi_set_timer(uint64_t stime_value)
+{
+#if __riscv_xlen == 32
+	SBI_CALL_2(SBI_SET_TIMER, stime_value, stime_value >> 32);
+#else
+	SBI_CALL_1(SBI_SET_TIMER, stime_value);
+#endif
+}
+
+static inline void sbi_shutdown(void)
+{
+	SBI_CALL_0(SBI_SHUTDOWN);
+}
+
+static inline void sbi_clear_ipi(void)
+{
+	SBI_CALL_0(SBI_CLEAR_IPI);
+}
+
+static inline void sbi_send_ipi(const unsigned long *hart_mask)
+{
+	SBI_CALL_1(SBI_SEND_IPI, hart_mask);
+}
+
+static inline void sbi_remote_fence_i(const unsigned long *hart_mask)
+{
+	SBI_CALL_1(SBI_REMOTE_FENCE_I, hart_mask);
+}
+
+static inline void sbi_remote_sfence_vma(const unsigned long *hart_mask,
+					 unsigned long start,
+					 unsigned long size)
+{
+	SBI_CALL_1(SBI_REMOTE_SFENCE_VMA, hart_mask);
+}
+
+static inline void sbi_remote_sfence_vma_asid(const unsigned long *hart_mask,
+					      unsigned long start,
+					      unsigned long size,
+					      unsigned long asid)
+{
+	SBI_CALL_1(SBI_REMOTE_SFENCE_VMA_ASID, hart_mask);
+}
+
+#endif
diff --git a/arch/riscv/include/asm/timex.h b/arch/riscv/include/asm/timex.h
new file mode 100644
index 000000000000..199f71108b99
--- /dev/null
+++ b/arch/riscv/include/asm/timex.h
@@ -0,0 +1,63 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_TIMEX_H
+#define _ASM_RISCV_TIMEX_H
+
+#include <asm/param.h>
+
+#ifdef __KERNEL__
+
+typedef unsigned long cycles_t;
+
+static inline cycles_t get_cycles(void)
+{
+	cycles_t n;
+
+	__asm__ __volatile__ (
+		"rdtime %0"
+		: "=r" (n));
+	return n;
+}
+
+#ifdef CONFIG_64BIT
+static inline uint64_t get_cycles64(void)
+{
+	return get_cycles();
+}
+#else
+static inline uint64_t get_cycles64(void)
+{
+	u32 lo, hi, tmp;
+	__asm__ __volatile__ (
+		"1:\n"
+		"rdtimeh %0\n"
+		"rdtime %1\n"
+		"rdtimeh %2\n"
+		"bne %0, %2, 1b"
+		: "=&r" (hi), "=&r" (lo), "=&r" (tmp));
+	return ((u64)hi << 32) | lo;
+}
+#endif
+
+#define ARCH_HAS_READ_CURRENT_TIMER
+
+static inline int read_current_timer(unsigned long *timer_val)
+{
+	*timer_val = get_cycles();
+	return 0;
+}
+
+#endif
+
+#endif /* _ASM_RISCV_TIMEX_H */
diff --git a/arch/riscv/lib/delay.c b/arch/riscv/lib/delay.c
new file mode 100644
index 000000000000..55c7d9c9986a
--- /dev/null
+++ b/arch/riscv/lib/delay.c
@@ -0,0 +1,107 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/delay.h>
+#include <linux/param.h>
+#include <linux/timex.h>
+#include <linux/export.h>
+
+/*
+ * This is copied from arch/arm/include/asm/delay.h
+ *
+ * Loop (or tick) based delay:
+ *
+ * loops = loops_per_jiffy * jiffies_per_sec * delay_us / us_per_sec
+ *
+ * where:
+ *
+ * jiffies_per_sec = HZ
+ * us_per_sec = 1000000
+ *
+ * Therefore the constant part is HZ / 1000000 which is a small
+ * fractional number. To make this usable with integer math, we
+ * scale up this constant by 2^31, perform the actual multiplication,
+ * and scale the result back down by 2^31 with a simple shift:
+ *
+ * loops = (loops_per_jiffy * delay_us * UDELAY_MULT) >> 31
+ *
+ * where:
+ *
+ * UDELAY_MULT = 2^31 * HZ / 1000000
+ *             = (2^31 / 1000000) * HZ
+ *             = 2147.483648 * HZ
+ *             = 2147 * HZ + 483648 * HZ / 1000000
+ *
+ * 31 is the biggest scale shift value that won't overflow 32 bits for
+ * delay_us * UDELAY_MULT assuming HZ <= 1000 and delay_us <= 2000.
+ */
+#define MAX_UDELAY_US	2000
+#define MAX_UDELAY_HZ	1000
+#define UDELAY_MULT	(2147UL * HZ + 483648UL * HZ / 1000000UL)
+#define UDELAY_SHIFT	31
+
+#if HZ > MAX_UDELAY_HZ
+#error "HZ > MAX_UDELAY_HZ"
+#endif
+
+/* RISC-V supports both UDELAY and NDELAY.  This is largely the same as above,
+ * but with different constants.  I added 10 bits to the shift to get this, but
+ * the result is that I need a 64-bit multiply, which is slow on 32-bit
+ * platforms.
+ *
+ * NDELAY_MULT = 2^41 * HZ / 1000000000
+ *             = (2^41 / 1000000000) * HZ
+ *             = 2199.02325555 * HZ
+ *             = 2199 * HZ + 23255550 * HZ / 1000000000
+ *
+ * The maximum here is to avoid 64-bit overflow, but it isn't checked as it
+ * won't happen.
+ */
+#define MAX_NDELAY_NS   (1ULL << 42)
+#define MAX_NDELAY_HZ	MAX_UDELAY_HZ
+#define NDELAY_MULT	((unsigned long long)(2199ULL * HZ + 23255550ULL * HZ / 1000000000ULL))
+#define NDELAY_SHIFT	41
+
+#if HZ > MAX_NDELAY_HZ
+#error "HZ > MAX_NDELAY_HZ"
+#endif
+
+void __delay(unsigned long cycles)
+{
+	u64 t0 = get_cycles();
+
+	while ((unsigned long)(get_cycles() - t0) < cycles)
+		cpu_relax();
+}
+
+void udelay(unsigned long usecs)
+{
+	unsigned long ucycles = usecs * lpj_fine * UDELAY_MULT;
+
+	if (unlikely(usecs > MAX_UDELAY_US)) {
+		__delay((u64)usecs * riscv_timebase / 1000000ULL);
+		return;
+	}
+
+	__delay(ucycles >> UDELAY_SHIFT);
+}
+EXPORT_SYMBOL(udelay);
+
+void ndelay(unsigned long nsecs)
+{
+	/* This doesn't bother checking for overflow, as it won't happen:
+	 * that would require over an hour of delay. */
+	unsigned long long ncycles = nsecs * lpj_fine * NDELAY_MULT;
+	__delay(ncycles >> NDELAY_SHIFT);
+}
+EXPORT_SYMBOL(ndelay);
diff --git a/arch/riscv/mm/ioremap.c b/arch/riscv/mm/ioremap.c
new file mode 100644
index 000000000000..e99194a4077e
--- /dev/null
+++ b/arch/riscv/mm/ioremap.c
@@ -0,0 +1,92 @@
+/*
+ * (C) Copyright 1995 1996 Linus Torvalds
+ * (C) Copyright 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/export.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <linux/io.h>
+
+#include <asm/pgtable.h>
+
+/*
+ * Remap an arbitrary physical address space into the kernel virtual
+ * address space. Needed when the kernel wants to access high addresses
+ * directly.
+ *
+ * NOTE! We need to allow non-page-aligned mappings too: we will obviously
+ * have to convert them into an offset in a page-aligned mapping, but the
+ * caller shouldn't need to know that small detail.
+ */
+static void __iomem *__ioremap_caller(phys_addr_t addr, size_t size,
+	pgprot_t prot, void *caller)
+{
+	phys_addr_t last_addr;
+	unsigned long offset, vaddr;
+	struct vm_struct *area;
+
+	/* Disallow wrap-around or zero size */
+	last_addr = addr + size - 1;
+	if (!size || last_addr < addr)
+		return NULL;
+
+	/* Page-align mappings */
+	offset = addr & (~PAGE_MASK);
+	addr &= PAGE_MASK;
+	size = PAGE_ALIGN(size + offset);
+
+	area = get_vm_area_caller(size, VM_IOREMAP, caller);
+	if (!area)
+		return NULL;
+	vaddr = (unsigned long)area->addr;
+
+	if (ioremap_page_range(vaddr, vaddr + size, addr, prot)) {
+		free_vm_area(area);
+		return NULL;
+	}
+
+	return (void __iomem *)(vaddr + offset);
+}
+
+/*
+ * ioremap     -   map bus memory into CPU space
+ * @offset:    bus address of the memory
+ * @size:      size of the resource to map
+ *
+ * ioremap performs a platform specific sequence of operations to
+ * make bus memory CPU accessible via the readb/readw/readl/writeb/
+ * writew/writel functions and the other mmio helpers. The returned
+ * address is not guaranteed to be usable directly as a virtual
+ * address.
+ *
+ * Must be freed with iounmap.
+ */
+void __iomem *ioremap(phys_addr_t offset, unsigned long size)
+{
+	return __ioremap_caller(offset, size, PAGE_KERNEL,
+		__builtin_return_address(0));
+}
+EXPORT_SYMBOL(ioremap);
+
+
+/**
+ * iounmap - Free an I/O remapping
+ * @addr: virtual address from ioremap_*
+ *
+ * Caller must ensure there is only one unmapping for the same pointer.
+ */
+void iounmap(void __iomem *addr)
+{
+	vunmap((void *)((unsigned long)addr & PAGE_MASK));
+}
+EXPORT_SYMBOL(iounmap);
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 7/9] RISC-V: Paging and MMU
  2017-06-28 18:55     ` Palmer Dabbelt
@ 2017-06-28 18:55       ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This patch contains code to manage the RISC-V MMU, including definitions
of the page tables and the page walking code.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/mmu_context.h  |  69 ++++++
 arch/riscv/include/asm/page.h         | 138 +++++++++++
 arch/riscv/include/asm/pgalloc.h      | 124 ++++++++++
 arch/riscv/include/asm/pgtable-32.h   |  25 ++
 arch/riscv/include/asm/pgtable-64.h   |  84 +++++++
 arch/riscv/include/asm/pgtable-bits.h |  48 ++++
 arch/riscv/include/asm/pgtable.h      | 427 ++++++++++++++++++++++++++++++++++
 arch/riscv/mm/fault.c                 | 280 ++++++++++++++++++++++
 8 files changed, 1195 insertions(+)
 create mode 100644 arch/riscv/include/asm/mmu_context.h
 create mode 100644 arch/riscv/include/asm/page.h
 create mode 100644 arch/riscv/include/asm/pgalloc.h
 create mode 100644 arch/riscv/include/asm/pgtable-32.h
 create mode 100644 arch/riscv/include/asm/pgtable-64.h
 create mode 100644 arch/riscv/include/asm/pgtable-bits.h
 create mode 100644 arch/riscv/include/asm/pgtable.h
 create mode 100644 arch/riscv/mm/fault.c
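
As a quick note for reviewers (illustration only, not part of the patch):
the pte accessors in <asm/pgtable.h> below wrap the PTE layout from
<asm/pgtable-bits.h>, with the PFN starting at bit 10 (_PAGE_PFN_SHIFT) and
the V/R/W/X/U/G/A/D flags in the low bits.  A minimal sketch using only
helpers added by this patch; pte_layout_demo() is a made-up name:

#include <linux/mm.h>

/* Illustrative sketch only: pte_layout_demo() is a made-up function, not
 * part of this patch.  It builds a kernel PTE for a page and checks that
 * the accessors see the expected bits. */
static void pte_layout_demo(struct page *page)
{
	/* V, R, W, A and D set (PAGE_KERNEL); no X, no U. */
	pte_t pte = mk_pte(page, PAGE_KERNEL);

	WARN_ON(!pte_present(pte));
	WARN_ON(!pte_write(pte));
	WARN_ON(pte_pfn(pte) != page_to_pfn(page));

	/* Dropping write permission only clears _PAGE_WRITE; the PFN
	 * bits are untouched. */
	pte = pte_wrprotect(pte);
	WARN_ON(pte_write(pte));
	WARN_ON(pte_pfn(pte) != page_to_pfn(page));
}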

diff --git a/arch/riscv/include/asm/mmu_context.h b/arch/riscv/include/asm/mmu_context.h
new file mode 100644
index 000000000000..de1fc1631fc4
--- /dev/null
+++ b/arch/riscv/include/asm/mmu_context.h
@@ -0,0 +1,69 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_MMU_CONTEXT_H
+#define _ASM_RISCV_MMU_CONTEXT_H
+
+#include <asm-generic/mm_hooks.h>
+
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <asm/tlbflush.h>
+
+static inline void enter_lazy_tlb(struct mm_struct *mm,
+	struct task_struct *task)
+{
+}
+
+/* Initialize context-related info for a new mm_struct */
+static inline int init_new_context(struct task_struct *task,
+	struct mm_struct *mm)
+{
+	return 0;
+}
+
+static inline void destroy_context(struct mm_struct *mm)
+{
+}
+
+static inline pgd_t *current_pgdir(void)
+{
+	return pfn_to_virt(csr_read(sptbr) & SPTBR_PPN);
+}
+
+static inline void set_pgdir(pgd_t *pgd)
+{
+	csr_write(sptbr, virt_to_pfn(pgd) | SPTBR_MODE);
+}
+
+static inline void switch_mm(struct mm_struct *prev,
+	struct mm_struct *next, struct task_struct *task)
+{
+	if (likely(prev != next)) {
+		set_pgdir(next->pgd);
+		local_flush_tlb_all();
+	}
+}
+
+static inline void activate_mm(struct mm_struct *prev,
+			       struct mm_struct *next)
+{
+	switch_mm(prev, next, NULL);
+}
+
+static inline void deactivate_mm(struct task_struct *task,
+	struct mm_struct *mm)
+{
+}
+
+#endif /* _ASM_RISCV_MMU_CONTEXT_H */
diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
new file mode 100644
index 000000000000..e1491c20d6fd
--- /dev/null
+++ b/arch/riscv/include/asm/page.h
@@ -0,0 +1,138 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ * Copyright (C) 2017 XiaojingZhu <zhuxiaoj@ict.ac.cn>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PAGE_H
+#define _ASM_RISCV_PAGE_H
+
+#include <linux/pfn.h>
+#include <linux/const.h>
+
+#define PAGE_SHIFT	(12)
+#define PAGE_SIZE	(_AC(1, UL) << PAGE_SHIFT)
+#define PAGE_MASK	(~(PAGE_SIZE - 1))
+
+#ifdef __KERNEL__
+
+/*
+ * PAGE_OFFSET -- the first address of the first page of memory.
+ * When not using MMU this corresponds to the first free page in
+ * physical memory (aligned on a page boundary).
+ */
+#ifdef CONFIG_64BIT
+#define PAGE_OFFSET		_AC(0xffffffff80000000, UL)
+#else
+#define PAGE_OFFSET		_AC(0xc0000000, UL)
+#endif
+
+#define KERN_VIRT_SIZE (-PAGE_OFFSET)
+
+#ifndef __ASSEMBLY__
+
+#define PAGE_UP(addr)	(((addr)+((PAGE_SIZE)-1))&(~((PAGE_SIZE)-1)))
+#define PAGE_DOWN(addr)	((addr)&(~((PAGE_SIZE)-1)))
+
+/* align addr on a size boundary - adjust address up/down if needed */
+#define _ALIGN_UP(addr, size)	(((addr)+((size)-1))&(~((size)-1)))
+#define _ALIGN_DOWN(addr, size)	((addr)&(~((size)-1)))
+
+/* align addr on a size boundary - adjust address up if needed */
+#define _ALIGN(addr, size)	_ALIGN_UP(addr, size)
+
+#define clear_page(pgaddr)			memset((pgaddr), 0, PAGE_SIZE)
+#define copy_page(to, from)			memcpy((to), (from), PAGE_SIZE)
+
+#define clear_user_page(pgaddr, vaddr, page)	memset((pgaddr), 0, PAGE_SIZE)
+#define copy_user_page(vto, vfrom, vaddr, topg) \
+			memcpy((vto), (vfrom), PAGE_SIZE)
+
+/*
+ * Use struct definitions to apply C type checking
+ */
+
+/* Page Global Directory entry */
+typedef struct {
+	unsigned long pgd;
+} pgd_t;
+
+/* Page Table entry */
+typedef struct {
+	unsigned long pte;
+} pte_t;
+
+typedef struct {
+	unsigned long pgprot;
+} pgprot_t;
+
+typedef struct page *pgtable_t;
+
+#define pte_val(x)	((x).pte)
+#define pgd_val(x)	((x).pgd)
+#define pgprot_val(x)	((x).pgprot)
+
+#define __pte(x)	((pte_t) { (x) })
+#define __pgd(x)	((pgd_t) { (x) })
+#define __pgprot(x)	((pgprot_t) { (x) })
+
+#ifdef CONFIG_64BIT
+#define PTE_FMT "%016lx"
+#else
+#define PTE_FMT "%08lx"
+#endif
+
+extern unsigned long va_pa_offset;
+extern unsigned long pfn_base;
+
+extern unsigned long max_low_pfn;
+extern unsigned long min_low_pfn;
+
+#define __pa(x)		((unsigned long)(x) - va_pa_offset)
+#define __va(x)		((void *)((unsigned long) (x) + va_pa_offset))
+
+#define phys_to_pfn(phys)	(PFN_DOWN(phys))
+#define pfn_to_phys(pfn)	(PFN_PHYS(pfn))
+
+#define virt_to_pfn(vaddr)	(phys_to_pfn(__pa(vaddr)))
+#define pfn_to_virt(pfn)	(__va(pfn_to_phys(pfn)))
+
+#define virt_to_page(vaddr)	(pfn_to_page(virt_to_pfn(vaddr)))
+#define page_to_virt(page)	(pfn_to_virt(page_to_pfn(page)))
+
+#define page_to_phys(page)	(pfn_to_phys(page_to_pfn(page)))
+#define page_to_bus(page)	(page_to_phys(page))
+#define phys_to_page(paddr)	(pfn_to_page(phys_to_pfn(paddr)))
+
+#define pfn_valid(pfn) \
+	(((pfn) >= pfn_base) && (((pfn)-pfn_base) < max_mapnr))
+
+#define ARCH_PFN_OFFSET		(pfn_base)
+
+#endif /* __ASSEMBLY__ */
+
+#define virt_addr_valid(vaddr)	(pfn_valid(virt_to_pfn(vaddr)))
+
+#endif /* __KERNEL__ */
+
+#define VM_DATA_DEFAULT_FLAGS	(VM_READ | VM_WRITE | \
+				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
+
+#include <asm-generic/memory_model.h>
+#include <asm-generic/getorder.h>
+
+/* vDSO support */
+/* We do define AT_SYSINFO_EHDR but don't use the gate mechanism */
+#define __HAVE_ARCH_GATE_AREA
+
+#endif /* _ASM_RISCV_PAGE_H */
diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
new file mode 100644
index 000000000000..b40074bcb164
--- /dev/null
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -0,0 +1,124 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGALLOC_H
+#define _ASM_RISCV_PGALLOC_H
+
+#include <linux/mm.h>
+#include <asm/tlb.h>
+
+static inline void pmd_populate_kernel(struct mm_struct *mm,
+	pmd_t *pmd, pte_t *pte)
+{
+	unsigned long pfn = virt_to_pfn(pte);
+
+	set_pmd(pmd, __pmd((pfn << _PAGE_PFN_SHIFT) | _PAGE_TABLE));
+}
+
+static inline void pmd_populate(struct mm_struct *mm,
+	pmd_t *pmd, pgtable_t pte)
+{
+	unsigned long pfn = virt_to_pfn(page_address(pte));
+
+	set_pmd(pmd, __pmd((pfn << _PAGE_PFN_SHIFT) | _PAGE_TABLE));
+}
+
+#ifndef __PAGETABLE_PMD_FOLDED
+static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
+{
+	unsigned long pfn = virt_to_pfn(pmd);
+
+	set_pud(pud, __pud((pfn << _PAGE_PFN_SHIFT) | _PAGE_TABLE));
+}
+#endif /* __PAGETABLE_PMD_FOLDED */
+
+#define pmd_pgtable(pmd)	pmd_page(pmd)
+
+static inline pgd_t *pgd_alloc(struct mm_struct *mm)
+{
+	pgd_t *pgd;
+
+	pgd = (pgd_t *)__get_free_page(GFP_KERNEL);
+	if (likely(pgd != NULL)) {
+		memset(pgd, 0, USER_PTRS_PER_PGD * sizeof(pgd_t));
+		/* Copy kernel mappings */
+		memcpy(pgd + USER_PTRS_PER_PGD,
+			init_mm.pgd + USER_PTRS_PER_PGD,
+			(PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
+	}
+	return pgd;
+}
+
+static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
+{
+	free_page((unsigned long)pgd);
+}
+
+#ifndef __PAGETABLE_PMD_FOLDED
+
+static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
+{
+	return (pmd_t *)__get_free_page(
+		GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+}
+
+static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
+{
+	free_page((unsigned long)pmd);
+}
+
+#define __pmd_free_tlb(tlb, pmd, addr)  pmd_free((tlb)->mm, pmd)
+
+#endif /* __PAGETABLE_PMD_FOLDED */
+
+static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
+	unsigned long address)
+{
+	return (pte_t *)__get_free_page(
+		GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+}
+
+static inline struct page *pte_alloc_one(struct mm_struct *mm,
+	unsigned long address)
+{
+	struct page *pte;
+
+	pte = alloc_page(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+	if (likely(pte != NULL))
+		pgtable_page_ctor(pte);
+	return pte;
+}
+
+static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
+{
+	free_page((unsigned long)pte);
+}
+
+static inline void pte_free(struct mm_struct *mm, pgtable_t pte)
+{
+	pgtable_page_dtor(pte);
+	__free_page(pte);
+}
+
+#define __pte_free_tlb(tlb, pte, buf)   \
+do {                                    \
+	pgtable_page_dtor(pte);         \
+	tlb_remove_page((tlb), pte);    \
+} while (0)
+
+static inline void check_pgt_cache(void)
+{
+}
+
+#endif /* _ASM_RISCV_PGALLOC_H */
diff --git a/arch/riscv/include/asm/pgtable-32.h b/arch/riscv/include/asm/pgtable-32.h
new file mode 100644
index 000000000000..d61974b74182
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable-32.h
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_32_H
+#define _ASM_RISCV_PGTABLE_32_H
+
+#include <asm-generic/pgtable-nopmd.h>
+#include <linux/const.h>
+
+/* Size of region mapped by a page global directory */
+#define PGDIR_SHIFT     22
+#define PGDIR_SIZE      (_AC(1, UL) << PGDIR_SHIFT)
+#define PGDIR_MASK      (~(PGDIR_SIZE - 1))
+
+#endif /* _ASM_RISCV_PGTABLE_32_H */
diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
new file mode 100644
index 000000000000..7aa0ea9bd8bb
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -0,0 +1,84 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_64_H
+#define _ASM_RISCV_PGTABLE_64_H
+
+#include <linux/const.h>
+
+#define PGDIR_SHIFT     30
+/* Size of region mapped by a page global directory */
+#define PGDIR_SIZE      (_AC(1, UL) << PGDIR_SHIFT)
+#define PGDIR_MASK      (~(PGDIR_SIZE - 1))
+
+#define PMD_SHIFT       21
+/* Size of region mapped by a page middle directory */
+#define PMD_SIZE        (_AC(1, UL) << PMD_SHIFT)
+#define PMD_MASK        (~(PMD_SIZE - 1))
+
+/* Page Middle Directory entry */
+typedef struct {
+	unsigned long pmd;
+} pmd_t;
+
+#define pmd_val(x)      ((x).pmd)
+#define __pmd(x)        ((pmd_t) { (x) })
+
+#define PTRS_PER_PMD    (PAGE_SIZE / sizeof(pmd_t))
+
+static inline int pud_present(pud_t pud)
+{
+	return (pud_val(pud) & _PAGE_PRESENT);
+}
+
+static inline int pud_none(pud_t pud)
+{
+	return (pud_val(pud) == 0);
+}
+
+static inline int pud_bad(pud_t pud)
+{
+	return !pud_present(pud);
+}
+
+static inline void set_pud(pud_t *pudp, pud_t pud)
+{
+	*pudp = pud;
+}
+
+static inline void pud_clear(pud_t *pudp)
+{
+	set_pud(pudp, __pud(0));
+}
+
+static inline unsigned long pud_page_vaddr(pud_t pud)
+{
+	return (unsigned long)pfn_to_virt(pud_val(pud) >> _PAGE_PFN_SHIFT);
+}
+
+#define pmd_index(addr) (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
+
+static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
+{
+	return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(addr);
+}
+
+static inline pmd_t pfn_pmd(unsigned long pfn, pgprot_t prot)
+{
+	return __pmd((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
+}
+
+#define pmd_ERROR(e) \
+	pr_err("%s:%d: bad pmd %016lx.\n", __FILE__, __LINE__, pmd_val(e))
+
+#endif /* _ASM_RISCV_PGTABLE_64_H */
diff --git a/arch/riscv/include/asm/pgtable-bits.h b/arch/riscv/include/asm/pgtable-bits.h
new file mode 100644
index 000000000000..997ddbb1d370
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable-bits.h
@@ -0,0 +1,48 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_BITS_H
+#define _ASM_RISCV_PGTABLE_BITS_H
+
+/*
+ * PTE format:
+ * | XLEN-1  10 | 9             8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
+ *       PFN      reserved for SW   D   A   G   U   X   W   R   V
+ */
+
+#define _PAGE_ACCESSED_OFFSET 6
+
+#define _PAGE_PRESENT   (1 << 0)
+#define _PAGE_READ      (1 << 1)    /* Readable */
+#define _PAGE_WRITE     (1 << 2)    /* Writable */
+#define _PAGE_EXEC      (1 << 3)    /* Executable */
+#define _PAGE_USER      (1 << 4)    /* User */
+#define _PAGE_GLOBAL    (1 << 5)    /* Global */
+#define _PAGE_ACCESSED  (1 << 6)    /* Set by hardware on any access */
+#define _PAGE_DIRTY     (1 << 7)    /* Set by hardware on any write */
+#define _PAGE_SOFT      (1 << 8)    /* Reserved for software */
+
+#define _PAGE_SPECIAL   _PAGE_SOFT
+#define _PAGE_TABLE     _PAGE_PRESENT
+
+#define _PAGE_PFN_SHIFT 10
+
+/* Set of bits to preserve across pte_modify() */
+#define _PAGE_CHG_MASK  (~(unsigned long)(_PAGE_PRESENT | _PAGE_READ |	\
+					  _PAGE_WRITE | _PAGE_EXEC |	\
+					  _PAGE_USER | _PAGE_GLOBAL))
+
+/* Advertise support for _PAGE_SPECIAL */
+#define __HAVE_ARCH_PTE_SPECIAL
+
+#endif /* _ASM_RISCV_PGTABLE_BITS_H */
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
new file mode 100644
index 000000000000..8bb44014f5c3
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable.h
@@ -0,0 +1,427 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_H
+#define _ASM_RISCV_PGTABLE_H
+
+#include <linux/mmzone.h>
+
+#include <asm/pgtable-bits.h>
+
+#ifndef __ASSEMBLY__
+
+#ifdef CONFIG_MMU
+
+/* Page Upper Directory not used in RISC-V */
+#include <asm-generic/pgtable-nopud.h>
+#include <asm/page.h>
+#include <asm/tlbflush.h>
+#include <linux/mm_types.h>
+
+#ifdef CONFIG_64BIT
+#include <asm/pgtable-64.h>
+#else
+#include <asm/pgtable-32.h>
+#endif /* CONFIG_64BIT */
+
+/* Number of entries in the page global directory */
+#define PTRS_PER_PGD    (PAGE_SIZE / sizeof(pgd_t))
+/* Number of entries in the page table */
+#define PTRS_PER_PTE    (PAGE_SIZE / sizeof(pte_t))
+
+/* Number of PGD entries that a user-mode program can use */
+#define USER_PTRS_PER_PGD   (TASK_SIZE / PGDIR_SIZE)
+#define FIRST_USER_ADDRESS  0
+
+/* Page protection bits */
+#define _PAGE_BASE	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_USER)
+
+#define PAGE_NONE		__pgprot(0)
+#define PAGE_READ		__pgprot(_PAGE_BASE | _PAGE_READ)
+#define PAGE_WRITE		__pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_WRITE)
+#define PAGE_EXEC		__pgprot(_PAGE_BASE | _PAGE_EXEC)
+#define PAGE_READ_EXEC		__pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_EXEC)
+#define PAGE_WRITE_EXEC		__pgprot(_PAGE_BASE | _PAGE_READ |	\
+					 _PAGE_EXEC | _PAGE_WRITE)
+
+#define PAGE_COPY		PAGE_READ
+#define PAGE_COPY_EXEC		PAGE_EXEC
+#define PAGE_COPY_READ_EXEC	PAGE_READ_EXEC
+#define PAGE_SHARED		PAGE_WRITE
+#define PAGE_SHARED_EXEC	PAGE_WRITE_EXEC
+
+#define _PAGE_KERNEL		(_PAGE_READ \
+				| _PAGE_WRITE \
+				| _PAGE_PRESENT \
+				| _PAGE_ACCESSED \
+				| _PAGE_DIRTY)
+
+#define PAGE_KERNEL		__pgprot(_PAGE_KERNEL)
+#define PAGE_KERNEL_EXEC	__pgprot(_PAGE_KERNEL | _PAGE_EXEC)
+
+extern pgd_t swapper_pg_dir[];
+
+/* MAP_PRIVATE permissions: xwr (copy-on-write) */
+#define __P000	PAGE_NONE
+#define __P001	PAGE_READ
+#define __P010	PAGE_COPY
+#define __P011	PAGE_COPY
+#define __P100	PAGE_EXEC
+#define __P101	PAGE_READ_EXEC
+#define __P110	PAGE_COPY_EXEC
+#define __P111	PAGE_COPY_READ_EXEC
+
+/* MAP_SHARED permissions: xwr */
+#define __S000	PAGE_NONE
+#define __S001	PAGE_READ
+#define __S010	PAGE_SHARED
+#define __S011	PAGE_SHARED
+#define __S100	PAGE_EXEC
+#define __S101	PAGE_READ_EXEC
+#define __S110	PAGE_SHARED_EXEC
+#define __S111	PAGE_SHARED_EXEC
+
+/*
+ * ZERO_PAGE is a global shared page that is always zero,
+ * used for zero-mapped memory areas, etc.
+ */
+extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
+#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
+
+static inline int pmd_present(pmd_t pmd)
+{
+	return (pmd_val(pmd) & _PAGE_PRESENT);
+}
+
+static inline int pmd_none(pmd_t pmd)
+{
+	return (pmd_val(pmd) == 0);
+}
+
+static inline int pmd_bad(pmd_t pmd)
+{
+	return !pmd_present(pmd);
+}
+
+static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
+{
+	*pmdp = pmd;
+}
+
+static inline void pmd_clear(pmd_t *pmdp)
+{
+	set_pmd(pmdp, __pmd(0));
+}
+
+
+static inline pgd_t pfn_pgd(unsigned long pfn, pgprot_t prot)
+{
+	return __pgd((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
+}
+
+#define pgd_index(addr) (((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
+
+/* Locate an entry in the page global directory */
+static inline pgd_t *pgd_offset(const struct mm_struct *mm, unsigned long addr)
+{
+	return mm->pgd + pgd_index(addr);
+}
+/* Locate an entry in the kernel page global directory */
+#define pgd_offset_k(addr)      pgd_offset(&init_mm, (addr))
+
+static inline struct page *pmd_page(pmd_t pmd)
+{
+	return pfn_to_page(pmd_val(pmd) >> _PAGE_PFN_SHIFT);
+}
+
+static inline unsigned long pmd_page_vaddr(pmd_t pmd)
+{
+	return (unsigned long)pfn_to_virt(pmd_val(pmd) >> _PAGE_PFN_SHIFT);
+}
+
+/* Yields the page frame number (PFN) of a page table entry */
+static inline unsigned long pte_pfn(pte_t pte)
+{
+	return (pte_val(pte) >> _PAGE_PFN_SHIFT);
+}
+
+#define pte_page(x)     pfn_to_page(pte_pfn(x))
+
+/* Constructs a page table entry */
+static inline pte_t pfn_pte(unsigned long pfn, pgprot_t prot)
+{
+	return __pte((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
+}
+
+static inline pte_t mk_pte(struct page *page, pgprot_t prot)
+{
+	return pfn_pte(page_to_pfn(page), prot);
+}
+
+#define pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+
+static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long addr)
+{
+	return (pte_t *)pmd_page_vaddr(*pmd) + pte_index(addr);
+}
+
+#define pte_offset_map(dir, addr)	pte_offset_kernel((dir), (addr))
+#define pte_unmap(pte)			((void)(pte))
+
+/*
+ * Certain architectures need to do special things when PTEs within
+ * a page table are directly modified.  Thus, the following hook is
+ * made available.
+ */
+static inline void set_pte(pte_t *ptep, pte_t pteval)
+{
+	*ptep = pteval;
+}
+
+static inline void set_pte_at(struct mm_struct *mm,
+	unsigned long addr, pte_t *ptep, pte_t pteval)
+{
+	set_pte(ptep, pteval);
+}
+
+static inline void pte_clear(struct mm_struct *mm,
+	unsigned long addr, pte_t *ptep)
+{
+	set_pte_at(mm, addr, ptep, __pte(0));
+}
+
+static inline int pte_present(pte_t pte)
+{
+	return (pte_val(pte) & _PAGE_PRESENT);
+}
+
+static inline int pte_none(pte_t pte)
+{
+	return (pte_val(pte) == 0);
+}
+
+/* static inline int pte_read(pte_t pte) */
+
+static inline int pte_write(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_WRITE;
+}
+
+static inline int pte_huge(pte_t pte)
+{
+	return pte_present(pte)
+		&& (pte_val(pte) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC));
+}
+
+/* static inline int pte_exec(pte_t pte) */
+
+static inline int pte_dirty(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_DIRTY;
+}
+
+static inline int pte_young(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_ACCESSED;
+}
+
+static inline int pte_special(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_SPECIAL;
+}
+
+/* static inline pte_t pte_rdprotect(pte_t pte) */
+
+static inline pte_t pte_wrprotect(pte_t pte)
+{
+	return __pte(pte_val(pte) & ~(_PAGE_WRITE));
+}
+
+/* static inline pte_t pte_mkread(pte_t pte) */
+
+static inline pte_t pte_mkwrite(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_WRITE);
+}
+
+/* static inline pte_t pte_mkexec(pte_t pte) */
+
+static inline pte_t pte_mkdirty(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_DIRTY);
+}
+
+static inline pte_t pte_mkclean(pte_t pte)
+{
+	return __pte(pte_val(pte) & ~(_PAGE_DIRTY));
+}
+
+static inline pte_t pte_mkyoung(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_ACCESSED);
+}
+
+static inline pte_t pte_mkold(pte_t pte)
+{
+	return __pte(pte_val(pte) & ~(_PAGE_ACCESSED));
+}
+
+static inline pte_t pte_mkspecial(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_SPECIAL);
+}
+
+/* Modify page protection bits */
+static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+{
+	return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
+}
+
+#define pgd_ERROR(e) \
+	pr_err("%s:%d: bad pgd " PTE_FMT ".\n", __FILE__, __LINE__, pgd_val(e))
+
+
+/* Commit new configuration to MMU hardware */
+static inline void update_mmu_cache(struct vm_area_struct *vma,
+	unsigned long address, pte_t *ptep)
+{
+	/* The kernel assumes that TLBs don't cache invalid entries, but
+	 * in RISC-V, SFENCE.VMA specifies an ordering constraint, not a
+	 * cache flush; it is necessary even after writing invalid entries.
+	 * Relying on flush_tlb_fix_spurious_fault would suffice, but
+	 * the extra traps reduce performance.  So, eagerly SFENCE.VMA.
+	 */
+	local_flush_tlb_page(address);
+}
+
+#define __HAVE_ARCH_PTE_SAME
+static inline int pte_same(pte_t pte_a, pte_t pte_b)
+{
+	return pte_val(pte_a) == pte_val(pte_b);
+}
+
+#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
+static inline int ptep_set_access_flags(struct vm_area_struct *vma,
+					unsigned long address, pte_t *ptep,
+					pte_t entry, int dirty)
+{
+	if (!pte_same(*ptep, entry))
+		set_pte_at(vma->vm_mm, address, ptep, entry);
+	/* update_mmu_cache will unconditionally execute, handling both
+	 * the case that the PTE changed and the spurious fault case.
+	 */
+	return true;
+}
+
+#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
+static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
+				       unsigned long address, pte_t *ptep)
+{
+	return __pte(atomic_long_xchg((atomic_long_t *)ptep, 0));
+}
+
+#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
+static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
+					    unsigned long address,
+					    pte_t *ptep)
+{
+	if (!pte_young(*ptep))
+		return 0;
+	return test_and_clear_bit(_PAGE_ACCESSED_OFFSET, &pte_val(*ptep));
+}
+
+#define __HAVE_ARCH_PTEP_SET_WRPROTECT
+static inline void ptep_set_wrprotect(struct mm_struct *mm,
+				      unsigned long address, pte_t *ptep)
+{
+	atomic_long_and(~(unsigned long)_PAGE_WRITE, (atomic_long_t *)ptep);
+}
+
+#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
+static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
+					 unsigned long address, pte_t *ptep)
+{
+	/*
+	 * This comment is borrowed from x86, but applies equally to RISC-V:
+	 *
+	 * Clearing the accessed bit without a TLB flush
+	 * doesn't cause data corruption. [ It could cause incorrect
+	 * page aging and the (mistaken) reclaim of hot pages, but the
+	 * chance of that should be relatively low. ]
+	 *
+	 * So as a performance optimization don't flush the TLB when
+	 * clearing the accessed bit, it will eventually be flushed by
+	 * a context switch or a VM operation anyway. [ In the rare
+	 * event of it not getting flushed for a long time the delay
+	 * shouldn't really matter because there's no real memory
+	 * pressure for swapout to react to. ]
+	 */
+	return ptep_test_and_clear_young(vma, address, ptep);
+}
+
+/*
+ * Encode and decode a swap entry
+ *
+ * Format of swap PTE:
+ *	bit            0:	_PAGE_PRESENT (zero)
+ *	bit            1:	reserved for future use (zero)
+ *	bits      2 to 6:	swap type
+ *	bits 7 to XLEN-1:	swap offset
+ */
+#define __SWP_TYPE_SHIFT	2
+#define __SWP_TYPE_BITS		5
+#define __SWP_TYPE_MASK		((1UL << __SWP_TYPE_BITS) - 1)
+#define __SWP_OFFSET_SHIFT	(__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)
+
+#define MAX_SWAPFILES_CHECK()	\
+	BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > __SWP_TYPE_BITS)
+
+#define __swp_type(x)	(((x).val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK)
+#define __swp_offset(x)	((x).val >> __SWP_OFFSET_SHIFT)
+#define __swp_entry(type, offset) ((swp_entry_t) \
+	{ ((type) << __SWP_TYPE_SHIFT) | ((offset) << __SWP_OFFSET_SHIFT) })
+
+#define __pte_to_swp_entry(pte)	((swp_entry_t) { pte_val(pte) })
+#define __swp_entry_to_pte(x)	((pte_t) { (x).val })
+
+#ifdef CONFIG_FLATMEM
+#define kern_addr_valid(addr)   (1) /* FIXME */
+#endif
+
+extern void paging_init(void);
+
+static inline void pgtable_cache_init(void)
+{
+	/* No page table caches to initialize */
+}
+
+#endif /* CONFIG_MMU */
+
+#define VMALLOC_SIZE     (KERN_VIRT_SIZE >> 1)
+#define VMALLOC_END      (PAGE_OFFSET - 1)
+#define VMALLOC_START    (PAGE_OFFSET - VMALLOC_SIZE)
+
+/* Task size is 0x4000000000 (256 GiB) for RV64, or VMALLOC_START (0xa0000000)
+ * for RV32.  Note that PGDIR_SIZE must evenly divide TASK_SIZE.
+ */
+#ifdef CONFIG_64BIT
+#define TASK_SIZE (PGDIR_SIZE * PTRS_PER_PGD / 2)
+#else
+#define TASK_SIZE VMALLOC_START
+#endif
+
+#include <asm-generic/pgtable.h>
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PGTABLE_H */
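
As a rough illustration of the swap-entry packing defined above (not part of
the patch; swp_entry_t is reduced to a bare unsigned long for the example):

	/* Illustration of the swap PTE layout: type in bits 6:2, offset above. */
	#include <stdio.h>

	#define SWP_TYPE_SHIFT   2
	#define SWP_TYPE_BITS    5
	#define SWP_TYPE_MASK    ((1UL << SWP_TYPE_BITS) - 1)
	#define SWP_OFFSET_SHIFT (SWP_TYPE_BITS + SWP_TYPE_SHIFT)

	int main(void)
	{
		unsigned long type = 3, offset = 0x1234;
		unsigned long val = (type << SWP_TYPE_SHIFT) |
				    (offset << SWP_OFFSET_SHIFT);

		/* val == 0x91a0c; bits 1:0 stay clear, so the PTE is not present */
		printf("entry=%#lx type=%lu offset=%#lx\n", val,
		       (val >> SWP_TYPE_SHIFT) & SWP_TYPE_MASK,
		       val >> SWP_OFFSET_SHIFT);
		return 0;
	}
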
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
new file mode 100644
index 000000000000..b2a431c7f233
--- /dev/null
+++ b/arch/riscv/mm/fault.c
@@ -0,0 +1,280 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+
+#include <linux/mm.h>
+#include <linux/kernel.h>
+#include <linux/interrupt.h>
+#include <linux/perf_event.h>
+#include <linux/signal.h>
+#include <linux/uaccess.h>
+
+#include <asm/pgalloc.h>
+#include <asm/ptrace.h>
+#include <asm/uaccess.h>
+
+/*
+ * This routine handles page faults.  It determines the address and the
+ * problem, and then passes it off to one of the appropriate routines.
+ */
+asmlinkage void do_page_fault(struct pt_regs *regs)
+{
+	struct task_struct *tsk;
+	struct vm_area_struct *vma;
+	struct mm_struct *mm;
+	unsigned long addr, cause;
+	unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
+	int fault, code = SEGV_MAPERR;
+
+	cause = regs->scause;
+	addr = regs->sbadaddr;
+
+	tsk = current;
+	mm = tsk->mm;
+
+	/*
+	 * Fault-in kernel-space virtual memory on-demand.
+	 * The 'reference' page table is init_mm.pgd.
+	 *
+	 * NOTE! We MUST NOT take any locks for this case. We may
+	 * be in an interrupt or a critical region, and should
+	 * only copy the information from the master page table,
+	 * nothing more.
+	 */
+	if (unlikely((addr >= VMALLOC_START) && (addr <= VMALLOC_END)))
+		goto vmalloc_fault;
+
+	/* Enable interrupts if they were enabled in the parent context. */
+	if (likely(regs->sstatus & SR_PIE))
+		local_irq_enable();
+
+	/*
+	 * If we're in an interrupt, have no user context, or are running
+	 * in an atomic region, then we must not take the fault.
+	 */
+	if (unlikely(faulthandler_disabled() || !mm))
+		goto no_context;
+
+	if (user_mode(regs))
+		flags |= FAULT_FLAG_USER;
+
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
+
+retry:
+	down_read(&mm->mmap_sem);
+	vma = find_vma(mm, addr);
+	if (unlikely(!vma))
+		goto bad_area;
+	if (likely(vma->vm_start <= addr))
+		goto good_area;
+	if (unlikely(!(vma->vm_flags & VM_GROWSDOWN)))
+		goto bad_area;
+	if (unlikely(expand_stack(vma, addr)))
+		goto bad_area;
+
+	/*
+	 * Ok, we have a good vm_area for this memory access, so
+	 * we can handle it.
+	 */
+good_area:
+	code = SEGV_ACCERR;
+
+	switch (cause) {
+	case EXC_INST_PAGE_FAULT:
+		if (!(vma->vm_flags & VM_EXEC))
+			goto bad_area;
+		break;
+	case EXC_LOAD_PAGE_FAULT:
+		if (!(vma->vm_flags & VM_READ))
+			goto bad_area;
+		break;
+	case EXC_STORE_PAGE_FAULT:
+		if (!(vma->vm_flags & VM_WRITE))
+			goto bad_area;
+		flags |= FAULT_FLAG_WRITE;
+		break;
+	default:
+		panic("%s: unhandled cause %lu", __func__, cause);
+	}
+
+	/*
+	 * If for any reason at all we could not handle the fault,
+	 * make sure we exit gracefully rather than endlessly redo
+	 * the fault.
+	 */
+	fault = handle_mm_fault(vma, addr, flags);
+
+	/*
+	 * If we need to retry but a fatal signal is pending, handle the
+	 * signal first. We do not need to release the mmap_sem because it
+	 * would already be released in __lock_page_or_retry in mm/filemap.c.
+	 */
+	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(tsk))
+		return;
+
+	if (unlikely(fault & VM_FAULT_ERROR)) {
+		if (fault & VM_FAULT_OOM)
+			goto out_of_memory;
+		else if (fault & VM_FAULT_SIGBUS)
+			goto do_sigbus;
+		BUG();
+	}
+
+	/*
+	 * Major/minor page fault accounting is only done on the
+	 * initial attempt. If we go through a retry, it is extremely
+	 * likely that the page will be found in page cache at that point.
+	 */
+	if (flags & FAULT_FLAG_ALLOW_RETRY) {
+		if (fault & VM_FAULT_MAJOR) {
+			tsk->maj_flt++;
+			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ,
+				      1, regs, addr);
+		} else {
+			tsk->min_flt++;
+			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN,
+				      1, regs, addr);
+		}
+		if (fault & VM_FAULT_RETRY) {
+			/*
+			 * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
+			 * of starvation.
+			 */
+			flags &= ~(FAULT_FLAG_ALLOW_RETRY);
+			flags |= FAULT_FLAG_TRIED;
+
+			/*
+			 * No need to up_read(&mm->mmap_sem) as we would
+			 * have already released it in __lock_page_or_retry
+			 * in mm/filemap.c.
+			 */
+			goto retry;
+		}
+	}
+
+	up_read(&mm->mmap_sem);
+	return;
+
+	/*
+	 * Something tried to access memory that isn't in our memory map.
+	 * Fix it, but check if it's kernel or user first.
+	 */
+bad_area:
+	up_read(&mm->mmap_sem);
+	/* User mode accesses just cause a SIGSEGV */
+	if (user_mode(regs)) {
+		do_trap(regs, SIGSEGV, code, addr, tsk);
+		return;
+	}
+
+no_context:
+	/* Are we prepared to handle this kernel fault? */
+	if (fixup_exception(regs))
+		return;
+
+	/*
+	 * Oops. The kernel tried to access some bad page. We'll have to
+	 * terminate things with extreme prejudice.
+	 */
+	bust_spinlocks(1);
+	pr_alert("Unable to handle kernel %s at virtual address " REG_FMT "\n",
+		(addr < PAGE_SIZE) ? "NULL pointer dereference" :
+		"paging request", addr);
+	die(regs, "Oops");
+	do_exit(SIGKILL);
+
+	/*
+	 * We ran out of memory, call the OOM killer, and return to userspace
+	 * (which will retry the fault, or kill us if we got oom-killed).
+	 */
+out_of_memory:
+	up_read(&mm->mmap_sem);
+	if (!user_mode(regs))
+		goto no_context;
+	pagefault_out_of_memory();
+	return;
+
+do_sigbus:
+	up_read(&mm->mmap_sem);
+	/* Kernel mode? Handle exceptions or die */
+	if (!user_mode(regs))
+		goto no_context;
+	do_trap(regs, SIGBUS, BUS_ADRERR, addr, tsk);
+	return;
+
+vmalloc_fault:
+	{
+		pgd_t *pgd, *pgd_k;
+		pud_t *pud, *pud_k;
+		p4d_t *p4d, *p4d_k;
+		pmd_t *pmd, *pmd_k;
+		pte_t *pte_k;
+		int index;
+
+		if (user_mode(regs))
+			goto bad_area;
+
+		/*
+		 * Synchronize this task's top level page-table
+		 * with the 'reference' page table.
+		 *
+		 * Do _not_ use "tsk->active_mm->pgd" here.
+		 * We might be inside an interrupt in the middle
+		 * of a task switch.
+		 */
+		index = pgd_index(addr);
+		pgd = (pgd_t *)pfn_to_virt(csr_read(sptbr)) + index;
+		pgd_k = init_mm.pgd + index;
+
+		if (!pgd_present(*pgd_k))
+			goto no_context;
+		set_pgd(pgd, *pgd_k);
+
+		p4d = p4d_offset(pgd, addr);
+		p4d_k = p4d_offset(pgd_k, addr);
+		if (!p4d_present(*p4d_k))
+			goto no_context;
+
+		pud = pud_offset(p4d, addr);
+		pud_k = pud_offset(p4d_k, addr);
+		if (!pud_present(*pud_k))
+			goto no_context;
+
+		/* Since the vmalloc area is global, it is unnecessary
+		 * to copy individual PTEs
+		 */
+		pmd = pmd_offset(pud, addr);
+		pmd_k = pmd_offset(pud_k, addr);
+		if (!pmd_present(*pmd_k))
+			goto no_context;
+		set_pmd(pmd, *pmd_k);
+
+		/* Make sure the actual PTE exists as well to
+		 * catch kernel vmalloc-area accesses to non-mapped
+		 * addresses. If we don't do this, this will just
+		 * silently loop forever.
+		 */
+		pte_k = pte_offset_kernel(pmd_k, addr);
+		if (!pte_present(*pte_k))
+			goto no_context;
+		return;
+	}
+}
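
As a side note, the vmalloc_fault walk above indexes each level with plain
shift-and-mask arithmetic; a stand-alone sketch of the three-level Sv39 case
(PAGE_SHIFT 12, PMD_SHIFT 21, PGDIR_SHIFT 30, 512 entries per level, with the
p4d and pud levels folded, matching the headers in this patch) is:

	/* Stand-alone illustration of the index math used by the walk above. */
	#include <stdio.h>

	#define PAGE_SHIFT  12
	#define PMD_SHIFT   21
	#define PGDIR_SHIFT 30
	#define PTRS        512UL	/* entries per level: 4 KiB / 8 bytes */

	int main(void)
	{
		/* arbitrary address inside the RV64 vmalloc area */
		unsigned long addr = 0xffffffff40042000UL;

		/* prints: pgd_index=509 pmd_index=0 pte_index=66 */
		printf("pgd_index=%lu pmd_index=%lu pte_index=%lu\n",
		       (addr >> PGDIR_SHIFT) & (PTRS - 1),
		       (addr >> PMD_SHIFT) & (PTRS - 1),
		       (addr >> PAGE_SHIFT) & (PTRS - 1));
		return 0;
	}
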
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 7/9] RISC-V: Paging and MMU
@ 2017-06-28 18:55       ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This patch contains code to manage the RISC-V MMU, including definitions
of the page tables and the page walking code.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/mmu_context.h  |  69 ++++++
 arch/riscv/include/asm/page.h         | 138 +++++++++++
 arch/riscv/include/asm/pgalloc.h      | 124 ++++++++++
 arch/riscv/include/asm/pgtable-32.h   |  25 ++
 arch/riscv/include/asm/pgtable-64.h   |  84 +++++++
 arch/riscv/include/asm/pgtable-bits.h |  48 ++++
 arch/riscv/include/asm/pgtable.h      | 427 ++++++++++++++++++++++++++++++++++
 arch/riscv/mm/fault.c                 | 280 ++++++++++++++++++++++
 8 files changed, 1195 insertions(+)
 create mode 100644 arch/riscv/include/asm/mmu_context.h
 create mode 100644 arch/riscv/include/asm/page.h
 create mode 100644 arch/riscv/include/asm/pgalloc.h
 create mode 100644 arch/riscv/include/asm/pgtable-32.h
 create mode 100644 arch/riscv/include/asm/pgtable-64.h
 create mode 100644 arch/riscv/include/asm/pgtable-bits.h
 create mode 100644 arch/riscv/include/asm/pgtable.h
 create mode 100644 arch/riscv/mm/fault.c

diff --git a/arch/riscv/include/asm/mmu_context.h b/arch/riscv/include/asm/mmu_context.h
new file mode 100644
index 000000000000..de1fc1631fc4
--- /dev/null
+++ b/arch/riscv/include/asm/mmu_context.h
@@ -0,0 +1,69 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_MMU_CONTEXT_H
+#define _ASM_RISCV_MMU_CONTEXT_H
+
+#include <asm-generic/mm_hooks.h>
+
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <asm/tlbflush.h>
+
+static inline void enter_lazy_tlb(struct mm_struct *mm,
+	struct task_struct *task)
+{
+}
+
+/* Initialize context-related info for a new mm_struct */
+static inline int init_new_context(struct task_struct *task,
+	struct mm_struct *mm)
+{
+	return 0;
+}
+
+static inline void destroy_context(struct mm_struct *mm)
+{
+}
+
+static inline pgd_t *current_pgdir(void)
+{
+	return pfn_to_virt(csr_read(sptbr) & SPTBR_PPN);
+}
+
+static inline void set_pgdir(pgd_t *pgd)
+{
+	csr_write(sptbr, virt_to_pfn(pgd) | SPTBR_MODE);
+}
+
+static inline void switch_mm(struct mm_struct *prev,
+	struct mm_struct *next, struct task_struct *task)
+{
+	if (likely(prev != next)) {
+		set_pgdir(next->pgd);
+		local_flush_tlb_all();
+	}
+}
+
+static inline void activate_mm(struct mm_struct *prev,
+			       struct mm_struct *next)
+{
+	switch_mm(prev, next, NULL);
+}
+
+static inline void deactivate_mm(struct task_struct *task,
+	struct mm_struct *mm)
+{
+}
+
+#endif /* _ASM_RISCV_MMU_CONTEXT_H */
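
As a sketch of what set_pgdir() above ends up writing (SPTBR_MODE and
SPTBR_PPN come from asm/csr.h elsewhere in this series; the Sv39 encoding of
8 in bits 63:60 and the 0x80200000 pgd address below are assumptions for
illustration only):

	/* Illustration only: the sptbr value for a hypothetical RV64 pgd. */
	#include <stdio.h>

	int main(void)
	{
		unsigned long pgd_pa = 0x80200000UL;	/* made-up pgd address */
		unsigned long sptbr = (8UL << 60) | (pgd_pa >> 12);

		/* sptbr == 0x8000000000080200; switch_mm() then flushes the TLB */
		printf("sptbr = %#lx\n", sptbr);
		return 0;
	}
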
diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
new file mode 100644
index 000000000000..e1491c20d6fd
--- /dev/null
+++ b/arch/riscv/include/asm/page.h
@@ -0,0 +1,138 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ * Copyright (C) 2017 XiaojingZhu <zhuxiaoj@ict.ac.cn>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PAGE_H
+#define _ASM_RISCV_PAGE_H
+
+#include <linux/pfn.h>
+#include <linux/const.h>
+
+#define PAGE_SHIFT	(12)
+#define PAGE_SIZE	(_AC(1, UL) << PAGE_SHIFT)
+#define PAGE_MASK	(~(PAGE_SIZE - 1))
+
+#ifdef __KERNEL__
+
+/*
+ * PAGE_OFFSET -- the first address of the first page of memory.
+ * When not using MMU this corresponds to the first free page in
+ * physical memory (aligned on a page boundary).
+ */
+#ifdef CONFIG_64BIT
+#define PAGE_OFFSET		_AC(0xffffffff80000000, UL)
+#else
+#define PAGE_OFFSET		_AC(0xc0000000, UL)
+#endif
+
+#define KERN_VIRT_SIZE (-PAGE_OFFSET)
+
+#ifndef __ASSEMBLY__
+
+#define PAGE_UP(addr)	(((addr)+((PAGE_SIZE)-1))&(~((PAGE_SIZE)-1)))
+#define PAGE_DOWN(addr)	((addr)&(~((PAGE_SIZE)-1)))
+
+/* align addr on a size boundary - adjust address up/down if needed */
+#define _ALIGN_UP(addr, size)	(((addr)+((size)-1))&(~((size)-1)))
+#define _ALIGN_DOWN(addr, size)	((addr)&(~((size)-1)))
+
+/* align addr on a size boundary - adjust address up if needed */
+#define _ALIGN(addr, size)	_ALIGN_UP(addr, size)
+
+#define clear_page(pgaddr)			memset((pgaddr), 0, PAGE_SIZE)
+#define copy_page(to, from)			memcpy((to), (from), PAGE_SIZE)
+
+#define clear_user_page(pgaddr, vaddr, page)	memset((pgaddr), 0, PAGE_SIZE)
+#define copy_user_page(vto, vfrom, vaddr, topg) \
+			memcpy((vto), (vfrom), PAGE_SIZE)
+
+/*
+ * Use struct definitions to apply C type checking
+ */
+
+/* Page Global Directory entry */
+typedef struct {
+	unsigned long pgd;
+} pgd_t;
+
+/* Page Table entry */
+typedef struct {
+	unsigned long pte;
+} pte_t;
+
+typedef struct {
+	unsigned long pgprot;
+} pgprot_t;
+
+typedef struct page *pgtable_t;
+
+#define pte_val(x)	((x).pte)
+#define pgd_val(x)	((x).pgd)
+#define pgprot_val(x)	((x).pgprot)
+
+#define __pte(x)	((pte_t) { (x) })
+#define __pgd(x)	((pgd_t) { (x) })
+#define __pgprot(x)	((pgprot_t) { (x) })
+
+#ifdef CONFIG_64BIT
+#define PTE_FMT "%016lx"
+#else
+#define PTE_FMT "%08lx"
+#endif
+
+extern unsigned long va_pa_offset;
+extern unsigned long pfn_base;
+
+extern unsigned long max_low_pfn;
+extern unsigned long min_low_pfn;
+
+#define __pa(x)		((unsigned long)(x) - va_pa_offset)
+#define __va(x)		((void *)((unsigned long) (x) + va_pa_offset))
+
+#define phys_to_pfn(phys)	(PFN_DOWN(phys))
+#define pfn_to_phys(pfn)	(PFN_PHYS(pfn))
+
+#define virt_to_pfn(vaddr)	(phys_to_pfn(__pa(vaddr)))
+#define pfn_to_virt(pfn)	(__va(pfn_to_phys(pfn)))
+
+#define virt_to_page(vaddr)	(pfn_to_page(virt_to_pfn(vaddr)))
+#define page_to_virt(page)	(pfn_to_virt(page_to_pfn(page)))
+
+#define page_to_phys(page)	(pfn_to_phys(page_to_pfn(page)))
+#define page_to_bus(page)	(page_to_phys(page))
+#define phys_to_page(paddr)	(pfn_to_page(phys_to_pfn(paddr)))
+
+#define pfn_valid(pfn) \
+	(((pfn) >= pfn_base) && (((pfn)-pfn_base) < max_mapnr))
+
+#define ARCH_PFN_OFFSET		(pfn_base)
+
+#endif /* __ASSEMBLY__ */
+
+#define virt_addr_valid(vaddr)	(pfn_valid(virt_to_pfn(vaddr)))
+
+#endif /* __KERNEL__ */
+
+#define VM_DATA_DEFAULT_FLAGS	(VM_READ | VM_WRITE | \
+				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
+
+#include <asm-generic/memory_model.h>
+#include <asm-generic/getorder.h>
+
+/* vDSO support */
+/* We do define AT_SYSINFO_EHDR but don't use the gate mechanism */
+#define __HAVE_ARCH_GATE_AREA
+
+#endif /* _ASM_RISCV_PAGE_H */
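
The __pa()/__va() pair above is a plain linear offset.  As a rough worked
example (the 0x80000000 DRAM base is an assumption for illustration only;
va_pa_offset itself is assigned in setup code outside this file):

	/* Illustration of the linear map; the DRAM base is a made-up value. */
	#include <stdio.h>

	#define PAGE_OFFSET 0xffffffff80000000UL	/* RV64 value from above */

	int main(void)
	{
		unsigned long dram_base = 0x80000000UL;	/* assumed platform base */
		unsigned long va_pa_offset = PAGE_OFFSET - dram_base;
		unsigned long pa = 0x80200000UL;	/* some physical address */

		printf("__va(%#lx) = %#lx\n", pa, pa + va_pa_offset);
		printf("virt_to_pfn = %#lx\n", pa >> 12);
		return 0;
	}
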
diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
new file mode 100644
index 000000000000..b40074bcb164
--- /dev/null
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -0,0 +1,124 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGALLOC_H
+#define _ASM_RISCV_PGALLOC_H
+
+#include <linux/mm.h>
+#include <asm/tlb.h>
+
+static inline void pmd_populate_kernel(struct mm_struct *mm,
+	pmd_t *pmd, pte_t *pte)
+{
+	unsigned long pfn = virt_to_pfn(pte);
+
+	set_pmd(pmd, __pmd((pfn << _PAGE_PFN_SHIFT) | _PAGE_TABLE));
+}
+
+static inline void pmd_populate(struct mm_struct *mm,
+	pmd_t *pmd, pgtable_t pte)
+{
+	unsigned long pfn = virt_to_pfn(page_address(pte));
+
+	set_pmd(pmd, __pmd((pfn << _PAGE_PFN_SHIFT) | _PAGE_TABLE));
+}
+
+#ifndef __PAGETABLE_PMD_FOLDED
+static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
+{
+	unsigned long pfn = virt_to_pfn(pmd);
+
+	set_pud(pud, __pud((pfn << _PAGE_PFN_SHIFT) | _PAGE_TABLE));
+}
+#endif /* __PAGETABLE_PMD_FOLDED */
+
+#define pmd_pgtable(pmd)	pmd_page(pmd)
+
+static inline pgd_t *pgd_alloc(struct mm_struct *mm)
+{
+	pgd_t *pgd;
+
+	pgd = (pgd_t *)__get_free_page(GFP_KERNEL);
+	if (likely(pgd != NULL)) {
+		memset(pgd, 0, USER_PTRS_PER_PGD * sizeof(pgd_t));
+		/* Copy kernel mappings */
+		memcpy(pgd + USER_PTRS_PER_PGD,
+			init_mm.pgd + USER_PTRS_PER_PGD,
+			(PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
+	}
+	return pgd;
+}
+
+static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
+{
+	free_page((unsigned long)pgd);
+}
+
+#ifndef __PAGETABLE_PMD_FOLDED
+
+static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
+{
+	return (pmd_t *)__get_free_page(
+		GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+}
+
+static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
+{
+	free_page((unsigned long)pmd);
+}
+
+#define __pmd_free_tlb(tlb, pmd, addr)  pmd_free((tlb)->mm, pmd)
+
+#endif /* __PAGETABLE_PMD_FOLDED */
+
+static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
+	unsigned long address)
+{
+	return (pte_t *)__get_free_page(
+		GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+}
+
+static inline struct page *pte_alloc_one(struct mm_struct *mm,
+	unsigned long address)
+{
+	struct page *pte;
+
+	pte = alloc_page(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+	if (likely(pte != NULL))
+		pgtable_page_ctor(pte);
+	return pte;
+}
+
+static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
+{
+	free_page((unsigned long)pte);
+}
+
+static inline void pte_free(struct mm_struct *mm, pgtable_t pte)
+{
+	pgtable_page_dtor(pte);
+	__free_page(pte);
+}
+
+#define __pte_free_tlb(tlb, pte, buf)   \
+do {                                    \
+	pgtable_page_dtor(pte);         \
+	tlb_remove_page((tlb), pte);    \
+} while (0)
+
+static inline void check_pgt_cache(void)
+{
+}
+
+#endif /* _ASM_RISCV_PGALLOC_H */
diff --git a/arch/riscv/include/asm/pgtable-32.h b/arch/riscv/include/asm/pgtable-32.h
new file mode 100644
index 000000000000..d61974b74182
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable-32.h
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_32_H
+#define _ASM_RISCV_PGTABLE_32_H
+
+#include <asm-generic/pgtable-nopmd.h>
+#include <linux/const.h>
+
+/* Size of region mapped by a page global directory */
+#define PGDIR_SHIFT     22
+#define PGDIR_SIZE      (_AC(1, UL) << PGDIR_SHIFT)
+#define PGDIR_MASK      (~(PGDIR_SIZE - 1))
+
+#endif /* _ASM_RISCV_PGTABLE_32_H */
diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
new file mode 100644
index 000000000000..7aa0ea9bd8bb
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -0,0 +1,84 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_64_H
+#define _ASM_RISCV_PGTABLE_64_H
+
+#include <linux/const.h>
+
+#define PGDIR_SHIFT     30
+/* Size of region mapped by a page global directory */
+#define PGDIR_SIZE      (_AC(1, UL) << PGDIR_SHIFT)
+#define PGDIR_MASK      (~(PGDIR_SIZE - 1))
+
+#define PMD_SHIFT       21
+/* Size of region mapped by a page middle directory */
+#define PMD_SIZE        (_AC(1, UL) << PMD_SHIFT)
+#define PMD_MASK        (~(PMD_SIZE - 1))
+
+/* Page Middle Directory entry */
+typedef struct {
+	unsigned long pmd;
+} pmd_t;
+
+#define pmd_val(x)      ((x).pmd)
+#define __pmd(x)        ((pmd_t) { (x) })
+
+#define PTRS_PER_PMD    (PAGE_SIZE / sizeof(pmd_t))
+
+static inline int pud_present(pud_t pud)
+{
+	return (pud_val(pud) & _PAGE_PRESENT);
+}
+
+static inline int pud_none(pud_t pud)
+{
+	return (pud_val(pud) == 0);
+}
+
+static inline int pud_bad(pud_t pud)
+{
+	return !pud_present(pud);
+}
+
+static inline void set_pud(pud_t *pudp, pud_t pud)
+{
+	*pudp = pud;
+}
+
+static inline void pud_clear(pud_t *pudp)
+{
+	set_pud(pudp, __pud(0));
+}
+
+static inline unsigned long pud_page_vaddr(pud_t pud)
+{
+	return (unsigned long)pfn_to_virt(pud_val(pud) >> _PAGE_PFN_SHIFT);
+}
+
+#define pmd_index(addr) (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
+
+static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
+{
+	return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(addr);
+}
+
+static inline pmd_t pfn_pmd(unsigned long pfn, pgprot_t prot)
+{
+	return __pmd((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
+}
+
+#define pmd_ERROR(e) \
+	pr_err("%s:%d: bad pmd %016lx.\n", __FILE__, __LINE__, pmd_val(e))
+
+#endif /* _ASM_RISCV_PGTABLE_64_H */
diff --git a/arch/riscv/include/asm/pgtable-bits.h b/arch/riscv/include/asm/pgtable-bits.h
new file mode 100644
index 000000000000..997ddbb1d370
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable-bits.h
@@ -0,0 +1,48 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_BITS_H
+#define _ASM_RISCV_PGTABLE_BITS_H
+
+/*
+ * PTE format:
+ * | XLEN-1  10 | 9             8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
+ *       PFN      reserved for SW   D   A   G   U   X   W   R   V
+ */
+
+#define _PAGE_ACCESSED_OFFSET 6
+
+#define _PAGE_PRESENT   (1 << 0)
+#define _PAGE_READ      (1 << 1)    /* Readable */
+#define _PAGE_WRITE     (1 << 2)    /* Writable */
+#define _PAGE_EXEC      (1 << 3)    /* Executable */
+#define _PAGE_USER      (1 << 4)    /* User */
+#define _PAGE_GLOBAL    (1 << 5)    /* Global */
+#define _PAGE_ACCESSED  (1 << 6)    /* Set by hardware on any access */
+#define _PAGE_DIRTY     (1 << 7)    /* Set by hardware on any write */
+#define _PAGE_SOFT      (1 << 8)    /* Reserved for software */
+
+#define _PAGE_SPECIAL   _PAGE_SOFT
+#define _PAGE_TABLE     _PAGE_PRESENT
+
+#define _PAGE_PFN_SHIFT 10
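+
+/*
+ * Example (illustrative): a leaf PTE mapping PFN 0x80200 with kernel
+ * read/write permissions would be
+ *   (0x80200 << _PAGE_PFN_SHIFT) | _PAGE_PRESENT | _PAGE_READ |
+ *   _PAGE_WRITE | _PAGE_ACCESSED | _PAGE_DIRTY == 0x200800c7
+ */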
+
+/* Set of bits to preserve across pte_modify() */
+#define _PAGE_CHG_MASK  (~(unsigned long)(_PAGE_PRESENT | _PAGE_READ |	\
+					  _PAGE_WRITE | _PAGE_EXEC |	\
+					  _PAGE_USER | _PAGE_GLOBAL))
+
+/* Advertise support for _PAGE_SPECIAL */
+#define __HAVE_ARCH_PTE_SPECIAL
+
+#endif /* _ASM_RISCV_PGTABLE_BITS_H */
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
new file mode 100644
index 000000000000..8bb44014f5c3
--- /dev/null
+++ b/arch/riscv/include/asm/pgtable.h
@@ -0,0 +1,427 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PGTABLE_H
+#define _ASM_RISCV_PGTABLE_H
+
+#include <linux/mmzone.h>
+
+#include <asm/pgtable-bits.h>
+
+#ifndef __ASSEMBLY__
+
+#ifdef CONFIG_MMU
+
+/* Page Upper Directory not used in RISC-V */
+#include <asm-generic/pgtable-nopud.h>
+#include <asm/page.h>
+#include <asm/tlbflush.h>
+#include <linux/mm_types.h>
+
+#ifdef CONFIG_64BIT
+#include <asm/pgtable-64.h>
+#else
+#include <asm/pgtable-32.h>
+#endif /* CONFIG_64BIT */
+
+/* Number of entries in the page global directory */
+#define PTRS_PER_PGD    (PAGE_SIZE / sizeof(pgd_t))
+/* Number of entries in the page table */
+#define PTRS_PER_PTE    (PAGE_SIZE / sizeof(pte_t))
+
+/* Number of PGD entries that a user-mode program can use */
+#define USER_PTRS_PER_PGD   (TASK_SIZE / PGDIR_SIZE)
+#define FIRST_USER_ADDRESS  0
+
+/* Page protection bits */
+#define _PAGE_BASE	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_USER)
+
+#define PAGE_NONE		__pgprot(0)
+#define PAGE_READ		__pgprot(_PAGE_BASE | _PAGE_READ)
+#define PAGE_WRITE		__pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_WRITE)
+#define PAGE_EXEC		__pgprot(_PAGE_BASE | _PAGE_EXEC)
+#define PAGE_READ_EXEC		__pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_EXEC)
+#define PAGE_WRITE_EXEC		__pgprot(_PAGE_BASE | _PAGE_READ |	\
+					 _PAGE_EXEC | _PAGE_WRITE)
+
+#define PAGE_COPY		PAGE_READ
+#define PAGE_COPY_EXEC		PAGE_EXEC
+#define PAGE_COPY_READ_EXEC	PAGE_READ_EXEC
+#define PAGE_SHARED		PAGE_WRITE
+#define PAGE_SHARED_EXEC	PAGE_WRITE_EXEC
+
+#define _PAGE_KERNEL		(_PAGE_READ \
+				| _PAGE_WRITE \
+				| _PAGE_PRESENT \
+				| _PAGE_ACCESSED \
+				| _PAGE_DIRTY)
+
+#define PAGE_KERNEL		__pgprot(_PAGE_KERNEL)
+#define PAGE_KERNEL_EXEC	__pgprot(_PAGE_KERNEL | _PAGE_EXEC)
+
+extern pgd_t swapper_pg_dir[];
+
+/* MAP_PRIVATE permissions: xwr (copy-on-write) */
+#define __P000	PAGE_NONE
+#define __P001	PAGE_READ
+#define __P010	PAGE_COPY
+#define __P011	PAGE_COPY
+#define __P100	PAGE_EXEC
+#define __P101	PAGE_READ_EXEC
+#define __P110	PAGE_COPY_EXEC
+#define __P111	PAGE_COPY_READ_EXEC
+
+/* MAP_SHARED permissions: xwr */
+#define __S000	PAGE_NONE
+#define __S001	PAGE_READ
+#define __S010	PAGE_SHARED
+#define __S011	PAGE_SHARED
+#define __S100	PAGE_EXEC
+#define __S101	PAGE_READ_EXEC
+#define __S110	PAGE_SHARED_EXEC
+#define __S111	PAGE_SHARED_EXEC
+
+/*
+ * ZERO_PAGE is a global shared page that is always zero,
+ * used for zero-mapped memory areas, etc.
+ */
+extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
+#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
+
+static inline int pmd_present(pmd_t pmd)
+{
+	return (pmd_val(pmd) & _PAGE_PRESENT);
+}
+
+static inline int pmd_none(pmd_t pmd)
+{
+	return (pmd_val(pmd) == 0);
+}
+
+static inline int pmd_bad(pmd_t pmd)
+{
+	return !pmd_present(pmd);
+}
+
+static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
+{
+	*pmdp = pmd;
+}
+
+static inline void pmd_clear(pmd_t *pmdp)
+{
+	set_pmd(pmdp, __pmd(0));
+}
+
+static inline pgd_t pfn_pgd(unsigned long pfn, pgprot_t prot)
+{
+	return __pgd((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
+}
+
+#define pgd_index(addr) (((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
+
+/* Locate an entry in the page global directory */
+static inline pgd_t *pgd_offset(const struct mm_struct *mm, unsigned long addr)
+{
+	return mm->pgd + pgd_index(addr);
+}
+/* Locate an entry in the kernel page global directory */
+#define pgd_offset_k(addr)      pgd_offset(&init_mm, (addr))
+
+static inline struct page *pmd_page(pmd_t pmd)
+{
+	return pfn_to_page(pmd_val(pmd) >> _PAGE_PFN_SHIFT);
+}
+
+static inline unsigned long pmd_page_vaddr(pmd_t pmd)
+{
+	return (unsigned long)pfn_to_virt(pmd_val(pmd) >> _PAGE_PFN_SHIFT);
+}
+
+/* Yields the page frame number (PFN) of a page table entry */
+static inline unsigned long pte_pfn(pte_t pte)
+{
+	return (pte_val(pte) >> _PAGE_PFN_SHIFT);
+}
+
+#define pte_page(x)     pfn_to_page(pte_pfn(x))
+
+/* Constructs a page table entry */
+static inline pte_t pfn_pte(unsigned long pfn, pgprot_t prot)
+{
+	return __pte((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
+}
+
+static inline pte_t mk_pte(struct page *page, pgprot_t prot)
+{
+	return pfn_pte(page_to_pfn(page), prot);
+}
+
+#define pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+
+static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long addr)
+{
+	return (pte_t *)pmd_page_vaddr(*pmd) + pte_index(addr);
+}
+
+#define pte_offset_map(dir, addr)	pte_offset_kernel((dir), (addr))
+#define pte_unmap(pte)			((void)(pte))
+
+/*
+ * Certain architectures need to do special things when PTEs within
+ * a page table are directly modified.  Thus, the following hook is
+ * made available.
+ */
+static inline void set_pte(pte_t *ptep, pte_t pteval)
+{
+	*ptep = pteval;
+}
+
+static inline void set_pte_at(struct mm_struct *mm,
+	unsigned long addr, pte_t *ptep, pte_t pteval)
+{
+	set_pte(ptep, pteval);
+}
+
+static inline void pte_clear(struct mm_struct *mm,
+	unsigned long addr, pte_t *ptep)
+{
+	set_pte_at(mm, addr, ptep, __pte(0));
+}
+
+static inline int pte_present(pte_t pte)
+{
+	return (pte_val(pte) & _PAGE_PRESENT);
+}
+
+static inline int pte_none(pte_t pte)
+{
+	return (pte_val(pte) == 0);
+}
+
+/* static inline int pte_read(pte_t pte) */
+
+static inline int pte_write(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_WRITE;
+}
+
+static inline int pte_huge(pte_t pte)
+{
+	return pte_present(pte)
+		&& (pte_val(pte) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC));
+}
+
+/* static inline int pte_exec(pte_t pte) */
+
+static inline int pte_dirty(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_DIRTY;
+}
+
+static inline int pte_young(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_ACCESSED;
+}
+
+static inline int pte_special(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_SPECIAL;
+}
+
+/* static inline pte_t pte_rdprotect(pte_t pte) */
+
+static inline pte_t pte_wrprotect(pte_t pte)
+{
+	return __pte(pte_val(pte) & ~(_PAGE_WRITE));
+}
+
+/* static inline pte_t pte_mkread(pte_t pte) */
+
+static inline pte_t pte_mkwrite(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_WRITE);
+}
+
+/* static inline pte_t pte_mkexec(pte_t pte) */
+
+static inline pte_t pte_mkdirty(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_DIRTY);
+}
+
+static inline pte_t pte_mkclean(pte_t pte)
+{
+	return __pte(pte_val(pte) & ~(_PAGE_DIRTY));
+}
+
+static inline pte_t pte_mkyoung(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_ACCESSED);
+}
+
+static inline pte_t pte_mkold(pte_t pte)
+{
+	return __pte(pte_val(pte) & ~(_PAGE_ACCESSED));
+}
+
+static inline pte_t pte_mkspecial(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_SPECIAL);
+}
+
+/* Modify page protection bits */
+static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+{
+	return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
+}
+
+#define pgd_ERROR(e) \
+	pr_err("%s:%d: bad pgd " PTE_FMT ".\n", __FILE__, __LINE__, pgd_val(e))
+
+/* Commit new configuration to MMU hardware */
+static inline void update_mmu_cache(struct vm_area_struct *vma,
+	unsigned long address, pte_t *ptep)
+{
+	/* The kernel assumes that TLBs don't cache invalid entries, but
+	 * in RISC-V, SFENCE.VMA specifies an ordering constraint, not a
+	 * cache flush; it is necessary even after writing invalid entries.
+	 * Relying on flush_tlb_fix_spurious_fault would suffice, but
+	 * the extra traps reduce performance.  So, eagerly SFENCE.VMA.
+	 */
+	local_flush_tlb_page(address);
+}
+
+#define __HAVE_ARCH_PTE_SAME
+static inline int pte_same(pte_t pte_a, pte_t pte_b)
+{
+	return pte_val(pte_a) == pte_val(pte_b);
+}
+
+#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
+static inline int ptep_set_access_flags(struct vm_area_struct *vma,
+					unsigned long address, pte_t *ptep,
+					pte_t entry, int dirty)
+{
+	if (!pte_same(*ptep, entry))
+		set_pte_at(vma->vm_mm, address, ptep, entry);
+	/* update_mmu_cache will unconditionally execute, handling both
+	 * the case that the PTE changed and the spurious fault case.
+	 */
+	return true;
+}
+
+#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
+static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
+				       unsigned long address, pte_t *ptep)
+{
+	return __pte(atomic_long_xchg((atomic_long_t *)ptep, 0));
+}
+
+#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
+static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
+					    unsigned long address,
+					    pte_t *ptep)
+{
+	if (!pte_young(*ptep))
+		return 0;
+	return test_and_clear_bit(_PAGE_ACCESSED_OFFSET, &pte_val(*ptep));
+}
+
+#define __HAVE_ARCH_PTEP_SET_WRPROTECT
+static inline void ptep_set_wrprotect(struct mm_struct *mm,
+				      unsigned long address, pte_t *ptep)
+{
+	atomic_long_and(~(unsigned long)_PAGE_WRITE, (atomic_long_t *)ptep);
+}
+
+#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
+static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
+					 unsigned long address, pte_t *ptep)
+{
+	/*
+	 * This comment is borrowed from x86, but applies equally to RISC-V:
+	 *
+	 * Clearing the accessed bit without a TLB flush
+	 * doesn't cause data corruption. [ It could cause incorrect
+	 * page aging and the (mistaken) reclaim of hot pages, but the
+	 * chance of that should be relatively low. ]
+	 *
+	 * So as a performance optimization don't flush the TLB when
+	 * clearing the accessed bit, it will eventually be flushed by
+	 * a context switch or a VM operation anyway. [ In the rare
+	 * event of it not getting flushed for a long time the delay
+	 * shouldn't really matter because there's no real memory
+	 * pressure for swapout to react to. ]
+	 */
+	return ptep_test_and_clear_young(vma, address, ptep);
+}
+
+/*
+ * Encode and decode a swap entry
+ *
+ * Format of swap PTE:
+ *	bit            0:	_PAGE_PRESENT (zero)
+ *	bit            1:	reserved for future use (zero)
+ *	bits      2 to 6:	swap type
+ *	bits 7 to XLEN-1:	swap offset
+ */
+#define __SWP_TYPE_SHIFT	2
+#define __SWP_TYPE_BITS		5
+#define __SWP_TYPE_MASK		((1UL << __SWP_TYPE_BITS) - 1)
+#define __SWP_OFFSET_SHIFT	(__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)
+
+#define MAX_SWAPFILES_CHECK()	\
+	BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > __SWP_TYPE_BITS)
+
+#define __swp_type(x)	(((x).val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK)
+#define __swp_offset(x)	((x).val >> __SWP_OFFSET_SHIFT)
+#define __swp_entry(type, offset) ((swp_entry_t) \
+	{ ((type) << __SWP_TYPE_SHIFT) | ((offset) << __SWP_OFFSET_SHIFT) })
+
+#define __pte_to_swp_entry(pte)	((swp_entry_t) { pte_val(pte) })
+#define __swp_entry_to_pte(x)	((pte_t) { (x).val })
+
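+/*
+ * Example (illustrative): __swp_entry(3, 0x1234) encodes to
+ * (3 << 2) | (0x1234 << 7) == 0x91a0c.  _PAGE_PRESENT (bit 0) is clear,
+ * so the MMU treats the PTE as invalid, while __swp_type() and
+ * __swp_offset() recover 3 and 0x1234 respectively.
+ */
+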
+#ifdef CONFIG_FLATMEM
+#define kern_addr_valid(addr)   (1) /* FIXME */
+#endif
+
+extern void paging_init(void);
+
+static inline void pgtable_cache_init(void)
+{
+	/* No page table caches to initialize */
+}
+
+#endif /* CONFIG_MMU */
+
+#define VMALLOC_SIZE     (KERN_VIRT_SIZE >> 1)
+#define VMALLOC_END      (PAGE_OFFSET - 1)
+#define VMALLOC_START    (PAGE_OFFSET - VMALLOC_SIZE)
+
+/* Task size is 0x4000000000 (PGDIR_SIZE * PTRS_PER_PGD / 2) for RV64 or
+ * 0xb800000 for RV32.
+ * Note that PGDIR_SIZE must evenly divide TASK_SIZE.
+ */
+#ifdef CONFIG_64BIT
+#define TASK_SIZE (PGDIR_SIZE * PTRS_PER_PGD / 2)
+#else
+#define TASK_SIZE VMALLOC_START
+#endif
+
+#include <asm-generic/pgtable.h>
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PGTABLE_H */
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
new file mode 100644
index 000000000000..b2a431c7f233
--- /dev/null
+++ b/arch/riscv/mm/fault.c
@@ -0,0 +1,280 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+#include <linux/mm.h>
+#include <linux/kernel.h>
+#include <linux/interrupt.h>
+#include <linux/perf_event.h>
+#include <linux/signal.h>
+#include <linux/uaccess.h>
+
+#include <asm/pgalloc.h>
+#include <asm/ptrace.h>
+
+/*
+ * This routine handles page faults.  It determines the address and the
+ * problem, and then passes it off to one of the appropriate routines.
+ */
+asmlinkage void do_page_fault(struct pt_regs *regs)
+{
+	struct task_struct *tsk;
+	struct vm_area_struct *vma;
+	struct mm_struct *mm;
+	unsigned long addr, cause;
+	unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
+	int fault, code = SEGV_MAPERR;
+
+	cause = regs->scause;
+	addr = regs->sbadaddr;
+
+	tsk = current;
+	mm = tsk->mm;
+
+	/*
+	 * Fault-in kernel-space virtual memory on-demand.
+	 * The 'reference' page table is init_mm.pgd.
+	 *
+	 * NOTE! We MUST NOT take any locks for this case. We may
+	 * be in an interrupt or a critical region, and should
+	 * only copy the information from the master page table,
+	 * nothing more.
+	 */
+	if (unlikely((addr >= VMALLOC_START) && (addr <= VMALLOC_END)))
+		goto vmalloc_fault;
+
+	/* Enable interrupts if they were enabled in the parent context. */
+	if (likely(regs->sstatus & SR_PIE))
+		local_irq_enable();
+
+	/*
+	 * If we're in an interrupt, have no user context, or are running
+	 * in an atomic region, then we must not take the fault.
+	 */
+	if (unlikely(faulthandler_disabled() || !mm))
+		goto no_context;
+
+	if (user_mode(regs))
+		flags |= FAULT_FLAG_USER;
+
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
+
+retry:
+	down_read(&mm->mmap_sem);
+	vma = find_vma(mm, addr);
+	if (unlikely(!vma))
+		goto bad_area;
+	if (likely(vma->vm_start <= addr))
+		goto good_area;
+	if (unlikely(!(vma->vm_flags & VM_GROWSDOWN)))
+		goto bad_area;
+	if (unlikely(expand_stack(vma, addr)))
+		goto bad_area;
+
+	/*
+	 * Ok, we have a good vm_area for this memory access, so
+	 * we can handle it.
+	 */
+good_area:
+	code = SEGV_ACCERR;
+
+	switch (cause) {
+	case EXC_INST_PAGE_FAULT:
+		if (!(vma->vm_flags & VM_EXEC))
+			goto bad_area;
+		break;
+	case EXC_LOAD_PAGE_FAULT:
+		if (!(vma->vm_flags & VM_READ))
+			goto bad_area;
+		break;
+	case EXC_STORE_PAGE_FAULT:
+		if (!(vma->vm_flags & VM_WRITE))
+			goto bad_area;
+		flags |= FAULT_FLAG_WRITE;
+		break;
+	default:
+		panic("%s: unhandled cause %lu", __func__, cause);
+	}
+
+	/*
+	 * If for any reason at all we could not handle the fault,
+	 * make sure we exit gracefully rather than endlessly redo
+	 * the fault.
+	 */
+	fault = handle_mm_fault(vma, addr, flags);
+
+	/*
+	 * If we need to retry but a fatal signal is pending, handle the
+	 * signal first. We do not need to release the mmap_sem because it
+	 * would already be released in __lock_page_or_retry in mm/filemap.c.
+	 */
+	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(tsk))
+		return;
+
+	if (unlikely(fault & VM_FAULT_ERROR)) {
+		if (fault & VM_FAULT_OOM)
+			goto out_of_memory;
+		else if (fault & VM_FAULT_SIGBUS)
+			goto do_sigbus;
+		BUG();
+	}
+
+	/*
+	 * Major/minor page fault accounting is only done on the
+	 * initial attempt. If we go through a retry, it is extremely
+	 * likely that the page will be found in page cache at that point.
+	 */
+	if (flags & FAULT_FLAG_ALLOW_RETRY) {
+		if (fault & VM_FAULT_MAJOR) {
+			tsk->maj_flt++;
+			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ,
+				      1, regs, addr);
+		} else {
+			tsk->min_flt++;
+			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN,
+				      1, regs, addr);
+		}
+		if (fault & VM_FAULT_RETRY) {
+			/*
+			 * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
+			 * of starvation.
+			 */
+			flags &= ~(FAULT_FLAG_ALLOW_RETRY);
+			flags |= FAULT_FLAG_TRIED;
+
+			/*
+			 * No need to up_read(&mm->mmap_sem) as we would
+			 * have already released it in __lock_page_or_retry
+			 * in mm/filemap.c.
+			 */
+			goto retry;
+		}
+	}
+
+	up_read(&mm->mmap_sem);
+	return;
+
+	/*
+	 * Something tried to access memory that isn't in our memory map.
+	 * Fix it, but check if it's kernel or user first.
+	 */
+bad_area:
+	up_read(&mm->mmap_sem);
+	/* User mode accesses just cause a SIGSEGV */
+	if (user_mode(regs)) {
+		do_trap(regs, SIGSEGV, code, addr, tsk);
+		return;
+	}
+
+no_context:
+	/* Are we prepared to handle this kernel fault? */
+	if (fixup_exception(regs))
+		return;
+
+	/*
+	 * Oops. The kernel tried to access some bad page. We'll have to
+	 * terminate things with extreme prejudice.
+	 */
+	bust_spinlocks(1);
+	pr_alert("Unable to handle kernel %s at virtual address " REG_FMT "\n",
+		(addr < PAGE_SIZE) ? "NULL pointer dereference" :
+		"paging request", addr);
+	die(regs, "Oops");
+	do_exit(SIGKILL);
+
+	/*
+	 * We ran out of memory, call the OOM killer, and return the userspace
+	 * (which will retry the fault, or kill us if we got oom-killed).
+	 */
+out_of_memory:
+	up_read(&mm->mmap_sem);
+	if (!user_mode(regs))
+		goto no_context;
+	pagefault_out_of_memory();
+	return;
+
+do_sigbus:
+	up_read(&mm->mmap_sem);
+	/* Kernel mode? Handle exceptions or die */
+	if (!user_mode(regs))
+		goto no_context;
+	do_trap(regs, SIGBUS, BUS_ADRERR, addr, tsk);
+	return;
+
+vmalloc_fault:
+	{
+		pgd_t *pgd, *pgd_k;
+		pud_t *pud, *pud_k;
+		p4d_t *p4d, *p4d_k;
+		pmd_t *pmd, *pmd_k;
+		pte_t *pte_k;
+		int index;
+
+		/*
+		 * User-mode faults in the vmalloc range are plain bad
+		 * accesses; note that mmap_sem is not held on this path,
+		 * so we must not go through bad_area.
+		 */
+		if (user_mode(regs)) {
+			do_trap(regs, SIGSEGV, code, addr, tsk);
+			return;
+		}
+
+		/*
+		 * Synchronize this task's top level page-table
+		 * with the 'reference' page table.
+		 *
+		 * Do _not_ use "tsk->active_mm->pgd" here.
+		 * We might be inside an interrupt in the middle
+		 * of a task switch.
+		 */
+		index = pgd_index(addr);
+		pgd = (pgd_t *)pfn_to_virt(csr_read(sptbr)) + index;
+		pgd_k = init_mm.pgd + index;
+
+		if (!pgd_present(*pgd_k))
+			goto no_context;
+		set_pgd(pgd, *pgd_k);
+
+		p4d = p4d_offset(pgd, addr);
+		p4d_k = p4d_offset(pgd_k, addr);
+		if (!p4d_present(*p4d_k))
+			goto no_context;
+
+		pud = pud_offset(p4d, addr);
+		pud_k = pud_offset(p4d_k, addr);
+		if (!pud_present(*pud_k))
+			goto no_context;
+
+		/* Since the vmalloc area is global, it is unnecessary
+		 * to copy individual PTEs
+		 */
+		pmd = pmd_offset(pud, addr);
+		pmd_k = pmd_offset(pud_k, addr);
+		if (!pmd_present(*pmd_k))
+			goto no_context;
+		set_pmd(pmd, *pmd_k);
+
+		/* Make sure the actual PTE exists as well to
+		 * catch kernel vmalloc-area accesses to non-mapped
+		 * addresses. If we don't do this, this will just
+		 * silently loop forever.
+		 */
+		pte_k = pte_offset_kernel(pmd_k, addr);
+		if (!pte_present(*pte_k))
+			goto no_context;
+		return;
+	}
+}
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 8/9] RISC-V: User-facing API
  2017-06-28 18:55     ` Palmer Dabbelt
@ 2017-06-28 18:55       ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This patch contains the code that is visible to userspace: system
calls, the vDSO, module loading and signal handling.  It also contains
some generic code that is ABI-visible.
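
As an illustration of the register layout this patch exposes, the
following userspace sketch (not part of the patch, and assuming the
regset code in ptrace.c exports the GPRs as NT_PRSTATUS) reads a
stopped tracee's pc and sp through struct user_regs_struct from
uapi/asm/ptrace.h:

  #include <stdio.h>
  #include <sys/ptrace.h>
  #include <sys/types.h>
  #include <sys/uio.h>
  #include <elf.h>		/* NT_PRSTATUS */
  #include <asm/ptrace.h>	/* struct user_regs_struct */

  static void dump_pc_and_sp(pid_t pid)
  {
  	struct user_regs_struct regs;
  	struct iovec iov = {
  		.iov_base = &regs,
  		.iov_len  = sizeof(regs),
  	};

  	/* Fetch the general-purpose registers of a stopped tracee. */
  	if (ptrace(PTRACE_GETREGSET, pid, (void *)NT_PRSTATUS, &iov) == 0)
  		printf("pc=%lx sp=%lx\n", regs.pc, regs.sp);
  }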

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/mmu.h              |  26 +++
 arch/riscv/include/asm/ptrace.h           | 116 ++++++++++++
 arch/riscv/include/asm/syscall.h          |  90 ++++++++++
 arch/riscv/include/asm/syscalls.h         |  25 +++
 arch/riscv/include/asm/unistd.h           |  16 ++
 arch/riscv/include/asm/vdso.h             |  32 ++++
 arch/riscv/include/uapi/asm/auxvec.h      |  24 +++
 arch/riscv/include/uapi/asm/bitsperlong.h |  25 +++
 arch/riscv/include/uapi/asm/byteorder.h   |  23 +++
 arch/riscv/include/uapi/asm/elf.h         |  83 +++++++++
 arch/riscv/include/uapi/asm/ptrace.h      |  87 +++++++++
 arch/riscv/include/uapi/asm/sigcontext.h  |  29 +++
 arch/riscv/include/uapi/asm/siginfo.h     |  24 +++
 arch/riscv/include/uapi/asm/ucontext.h    |  35 ++++
 arch/riscv/include/uapi/asm/unistd.h      |  26 +++
 arch/riscv/kernel/module.c                | 215 ++++++++++++++++++++++
 arch/riscv/kernel/ptrace.c                | 147 +++++++++++++++
 arch/riscv/kernel/riscv_ksyms.c           |  15 ++
 arch/riscv/kernel/signal.c                | 288 ++++++++++++++++++++++++++++++
 arch/riscv/kernel/sys_riscv.c             |  87 +++++++++
 arch/riscv/kernel/syscall_table.c         |  25 +++
 arch/riscv/kernel/vdso/.gitignore         |   1 +
 arch/riscv/kernel/vdso/cmpxchg32.S        |  40 +++++
 arch/riscv/kernel/vdso/cmpxchg64.S        |  40 +++++
 arch/riscv/kernel/vdso/sigreturn.S        |  24 +++
 arch/riscv/kernel/vdso/vdso.S             |  27 +++
 26 files changed, 1570 insertions(+)
 create mode 100644 arch/riscv/include/asm/mmu.h
 create mode 100644 arch/riscv/include/asm/ptrace.h
 create mode 100644 arch/riscv/include/asm/syscall.h
 create mode 100644 arch/riscv/include/asm/syscalls.h
 create mode 100644 arch/riscv/include/asm/unistd.h
 create mode 100644 arch/riscv/include/asm/vdso.h
 create mode 100644 arch/riscv/include/uapi/asm/auxvec.h
 create mode 100644 arch/riscv/include/uapi/asm/bitsperlong.h
 create mode 100644 arch/riscv/include/uapi/asm/byteorder.h
 create mode 100644 arch/riscv/include/uapi/asm/elf.h
 create mode 100644 arch/riscv/include/uapi/asm/ptrace.h
 create mode 100644 arch/riscv/include/uapi/asm/sigcontext.h
 create mode 100644 arch/riscv/include/uapi/asm/siginfo.h
 create mode 100644 arch/riscv/include/uapi/asm/ucontext.h
 create mode 100644 arch/riscv/include/uapi/asm/unistd.h
 create mode 100644 arch/riscv/kernel/module.c
 create mode 100644 arch/riscv/kernel/ptrace.c
 create mode 100644 arch/riscv/kernel/riscv_ksyms.c
 create mode 100644 arch/riscv/kernel/signal.c
 create mode 100644 arch/riscv/kernel/sys_riscv.c
 create mode 100644 arch/riscv/kernel/syscall_table.c
 create mode 100644 arch/riscv/kernel/vdso/.gitignore
 create mode 100644 arch/riscv/kernel/vdso/cmpxchg32.S
 create mode 100644 arch/riscv/kernel/vdso/cmpxchg64.S
 create mode 100644 arch/riscv/kernel/vdso/sigreturn.S
 create mode 100644 arch/riscv/kernel/vdso/vdso.S

diff --git a/arch/riscv/include/asm/mmu.h b/arch/riscv/include/asm/mmu.h
new file mode 100644
index 000000000000..66805cba9a27
--- /dev/null
+++ b/arch/riscv/include/asm/mmu.h
@@ -0,0 +1,26 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_MMU_H
+#define _ASM_RISCV_MMU_H
+
+#ifndef __ASSEMBLY__
+
+typedef struct {
+	void *vdso;
+} mm_context_t;
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_MMU_H */
diff --git a/arch/riscv/include/asm/ptrace.h b/arch/riscv/include/asm/ptrace.h
new file mode 100644
index 000000000000..340201868842
--- /dev/null
+++ b/arch/riscv/include/asm/ptrace.h
@@ -0,0 +1,116 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PTRACE_H
+#define _ASM_RISCV_PTRACE_H
+
+#include <uapi/asm/ptrace.h>
+#include <asm/csr.h>
+
+#ifndef __ASSEMBLY__
+
+struct pt_regs {
+	unsigned long sepc;
+	unsigned long ra;
+	unsigned long sp;
+	unsigned long gp;
+	unsigned long tp;
+	unsigned long t0;
+	unsigned long t1;
+	unsigned long t2;
+	unsigned long s0;
+	unsigned long s1;
+	unsigned long a0;
+	unsigned long a1;
+	unsigned long a2;
+	unsigned long a3;
+	unsigned long a4;
+	unsigned long a5;
+	unsigned long a6;
+	unsigned long a7;
+	unsigned long s2;
+	unsigned long s3;
+	unsigned long s4;
+	unsigned long s5;
+	unsigned long s6;
+	unsigned long s7;
+	unsigned long s8;
+	unsigned long s9;
+	unsigned long s10;
+	unsigned long s11;
+	unsigned long t3;
+	unsigned long t4;
+	unsigned long t5;
+	unsigned long t6;
+	/* Supervisor CSRs */
+	unsigned long sstatus;
+	unsigned long sbadaddr;
+	unsigned long scause;
+};
+
+#ifdef CONFIG_64BIT
+#define REG_FMT "%016lx"
+#else
+#define REG_FMT "%08lx"
+#endif
+
+#define user_mode(regs) (((regs)->sstatus & SR_PS) == 0)
+
+/* Helpers for working with the instruction pointer */
+#define GET_IP(regs) ((regs)->sepc)
+#define SET_IP(regs, val) (GET_IP(regs) = (val))
+
+static inline unsigned long instruction_pointer(struct pt_regs *regs)
+{
+	return GET_IP(regs);
+}
+static inline void instruction_pointer_set(struct pt_regs *regs,
+					   unsigned long val)
+{
+	SET_IP(regs, val);
+}
+
+#define profile_pc(regs) instruction_pointer(regs)
+
+/* Helpers for working with the user stack pointer */
+#define GET_USP(regs) ((regs)->sp)
+#define SET_USP(regs, val) (GET_USP(regs) = (val))
+
+static inline unsigned long user_stack_pointer(struct pt_regs *regs)
+{
+	return GET_USP(regs);
+}
+static inline void user_stack_pointer_set(struct pt_regs *regs,
+					  unsigned long val)
+{
+	SET_USP(regs, val);
+}
+
+/* Helpers for working with the frame pointer */
+#define GET_FP(regs) ((regs)->s0)
+#define SET_FP(regs, val) (GET_FP(regs) = (val))
+
+static inline unsigned long frame_pointer(struct pt_regs *regs)
+{
+	return GET_FP(regs);
+}
+static inline void frame_pointer_set(struct pt_regs *regs,
+				     unsigned long val)
+{
+	SET_FP(regs, val);
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PTRACE_H */
diff --git a/arch/riscv/include/asm/syscall.h b/arch/riscv/include/asm/syscall.h
new file mode 100644
index 000000000000..c70f0e7a06b8
--- /dev/null
+++ b/arch/riscv/include/asm/syscall.h
@@ -0,0 +1,90 @@
+/*
+ * Copyright (C) 2008-2009 Red Hat, Inc.  All rights reserved.
+ * Copyright 2010 Tilera Corporation. All Rights Reserved.
+ * Copyright 2015 Regents of the University of California, Berkeley
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ *
+ * See asm-generic/syscall.h for descriptions of what we must do here.
+ */
+
+#ifndef _ASM_RISCV_SYSCALL_H
+#define _ASM_RISCV_SYSCALL_H
+
+#include <linux/sched.h>
+#include <linux/err.h>
+
+/* The array of function pointers for syscalls. */
+extern void *sys_call_table[];
+
+/*
+ * Only the low 32 bits of the syscall number (held in a7) are
+ * meaningful, so we return int.  This importantly ignores the high
+ * bits on 64-bit, so comparisons sign-extend the low 32 bits.
+ */
+static inline int syscall_get_nr(struct task_struct *task,
+				 struct pt_regs *regs)
+{
+	return regs->a7;
+}
+
+static inline void syscall_set_nr(struct task_struct *task,
+				  struct pt_regs *regs,
+				  int sysno)
+{
+	regs->a7 = sysno;
+}
+
+static inline void syscall_rollback(struct task_struct *task,
+				    struct pt_regs *regs)
+{
+	/* FIXME: We can't do this... */
+}
+
+static inline long syscall_get_error(struct task_struct *task,
+				     struct pt_regs *regs)
+{
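+	/*
+	 * The syscall return value lives in a0; errors are small negative
+	 * values, which IS_ERR_VALUE() distinguishes from successful
+	 * results.
+	 */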
+	unsigned long error = regs->a0;
+
+	return IS_ERR_VALUE(error) ? error : 0;
+}
+
+static inline long syscall_get_return_value(struct task_struct *task,
+					    struct pt_regs *regs)
+{
+	return regs->a0;
+}
+
+static inline void syscall_set_return_value(struct task_struct *task,
+					    struct pt_regs *regs,
+					    int error, long val)
+{
+	regs->a0 = (long) error ?: val;
+}
+
+static inline void syscall_get_arguments(struct task_struct *task,
+					 struct pt_regs *regs,
+					 unsigned int i, unsigned int n,
+					 unsigned long *args)
+{
+	BUG_ON(i + n > 6);
+	memcpy(args, &regs->a0 + i, n * sizeof(args[0]));
+}
+
+static inline void syscall_set_arguments(struct task_struct *task,
+					 struct pt_regs *regs,
+					 unsigned int i, unsigned int n,
+					 const unsigned long *args)
+{
+	BUG_ON(i + n > 6);
+	memcpy(&regs->a0 + i, args, n * sizeof(regs->a0));
+}
+
+#endif	/* _ASM_RISCV_SYSCALL_H */
diff --git a/arch/riscv/include/asm/syscalls.h b/arch/riscv/include/asm/syscalls.h
new file mode 100644
index 000000000000..d85267c4f7ea
--- /dev/null
+++ b/arch/riscv/include/asm/syscalls.h
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2014 Darius Rad <darius@bluespec.com>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SYSCALLS_H
+#define _ASM_RISCV_SYSCALLS_H
+
+#include <linux/linkage.h>
+
+#include <asm-generic/syscalls.h>
+
+/* kernel/sys_riscv.c */
+asmlinkage long sys_sysriscv(unsigned long, unsigned long,
+	unsigned long, unsigned long);
+
+#endif /* _ASM_RISCV_SYSCALLS_H */
diff --git a/arch/riscv/include/asm/unistd.h b/arch/riscv/include/asm/unistd.h
new file mode 100644
index 000000000000..9f250ed007cd
--- /dev/null
+++ b/arch/riscv/include/asm/unistd.h
@@ -0,0 +1,16 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#define __ARCH_HAVE_MMU
+#define __ARCH_WANT_SYS_CLONE
+#include <uapi/asm/unistd.h>
diff --git a/arch/riscv/include/asm/vdso.h b/arch/riscv/include/asm/vdso.h
new file mode 100644
index 000000000000..95768a3810a7
--- /dev/null
+++ b/arch/riscv/include/asm/vdso.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright (C) 2012 ARM Limited
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _ASM_RISCV_VDSO_H
+#define _ASM_RISCV_VDSO_H
+
+#include <linux/types.h>
+
+struct vdso_data {
+};
+
+#define VDSO_SYMBOL(base, name)					\
+({								\
+	extern const char __vdso_##name[];			\
+	(void __user *)((unsigned long)(base) + __vdso_##name);	\
+})
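+
+/*
+ * Example (illustrative): assuming vdso/sigreturn.S exports a
+ * __vdso_rt_sigreturn symbol, VDSO_SYMBOL(vdso_base, rt_sigreturn)
+ * yields its address inside the mapped vDSO.
+ */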
+
+#endif /* _ASM_RISCV_VDSO_H */
diff --git a/arch/riscv/include/uapi/asm/auxvec.h b/arch/riscv/include/uapi/asm/auxvec.h
new file mode 100644
index 000000000000..1376515547cd
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/auxvec.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _UAPI_ASM_RISCV_AUXVEC_H
+#define _UAPI_ASM_RISCV_AUXVEC_H
+
+/* vDSO location */
+#define AT_SYSINFO_EHDR 33
+
+#endif /* _UAPI_ASM_RISCV_AUXVEC_H */
diff --git a/arch/riscv/include/uapi/asm/bitsperlong.h b/arch/riscv/include/uapi/asm/bitsperlong.h
new file mode 100644
index 000000000000..0b3cb52fd29d
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/bitsperlong.h
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _UAPI_ASM_RISCV_BITSPERLONG_H
+#define _UAPI_ASM_RISCV_BITSPERLONG_H
+
+#define __BITS_PER_LONG (__SIZEOF_POINTER__ * 8)
+
+#include <asm-generic/bitsperlong.h>
+
+#endif /* _UAPI_ASM_RISCV_BITSPERLONG_H */
diff --git a/arch/riscv/include/uapi/asm/byteorder.h b/arch/riscv/include/uapi/asm/byteorder.h
new file mode 100644
index 000000000000..4ca38af2cd32
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/byteorder.h
@@ -0,0 +1,23 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _UAPI_ASM_RISCV_BYTEORDER_H
+#define _UAPI_ASM_RISCV_BYTEORDER_H
+
+#include <linux/byteorder/little_endian.h>
+
+#endif /* _UAPI_ASM_RISCV_BYTEORDER_H */
diff --git a/arch/riscv/include/uapi/asm/elf.h b/arch/riscv/include/uapi/asm/elf.h
new file mode 100644
index 000000000000..a510edfa8226
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/elf.h
@@ -0,0 +1,83 @@
+/*
+ * Copyright (C) 2003 Matjaz Breskvar <phoenix@bsemi.com>
+ * Copyright (C) 2010-2011 Jonas Bonn <jonas@southpole.se>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef _UAPI_ASM_ELF_H
+#define _UAPI_ASM_ELF_H
+
+#include <asm/ptrace.h>
+
+/* ELF register definitions */
+typedef unsigned long elf_greg_t;
+typedef struct user_regs_struct elf_gregset_t;
+#define ELF_NGREG (sizeof(elf_gregset_t) / sizeof(elf_greg_t))
+
+typedef union __riscv_fp_state elf_fpregset_t;
+
+#define ELF_RISCV_R_SYM(r_info) ((r_info) >> 32)
+#define ELF_RISCV_R_TYPE(r_info) ((r_info) & 0xffffffff)
+
+/*
+ * RISC-V relocation types
+ */
+
+/* Relocation types used by the dynamic linker */
+#define R_RISCV_NONE		0
+#define R_RISCV_32		1
+#define R_RISCV_64		2
+#define R_RISCV_RELATIVE	3
+#define R_RISCV_COPY		4
+#define R_RISCV_JUMP_SLOT	5
+#define R_RISCV_TLS_DTPMOD32	6
+#define R_RISCV_TLS_DTPMOD64	7
+#define R_RISCV_TLS_DTPREL32	8
+#define R_RISCV_TLS_DTPREL64	9
+#define R_RISCV_TLS_TPREL32	10
+#define R_RISCV_TLS_TPREL64	11
+
+/* Relocation types not used by the dynamic linker */
+#define R_RISCV_BRANCH		16
+#define R_RISCV_JAL		17
+#define R_RISCV_CALL		18
+#define R_RISCV_CALL_PLT	19
+#define R_RISCV_GOT_HI20	20
+#define R_RISCV_TLS_GOT_HI20	21
+#define R_RISCV_TLS_GD_HI20	22
+#define R_RISCV_PCREL_HI20	23
+#define R_RISCV_PCREL_LO12_I	24
+#define R_RISCV_PCREL_LO12_S	25
+#define R_RISCV_HI20		26
+#define R_RISCV_LO12_I		27
+#define R_RISCV_LO12_S		28
+#define R_RISCV_TPREL_HI20	29
+#define R_RISCV_TPREL_LO12_I	30
+#define R_RISCV_TPREL_LO12_S	31
+#define R_RISCV_TPREL_ADD	32
+#define R_RISCV_ADD8		33
+#define R_RISCV_ADD16		34
+#define R_RISCV_ADD32		35
+#define R_RISCV_ADD64		36
+#define R_RISCV_SUB8		37
+#define R_RISCV_SUB16		38
+#define R_RISCV_SUB32		39
+#define R_RISCV_SUB64		40
+#define R_RISCV_GNU_VTINHERIT	41
+#define R_RISCV_GNU_VTENTRY	42
+#define R_RISCV_ALIGN		43
+#define R_RISCV_RVC_BRANCH	44
+#define R_RISCV_RVC_JUMP	45
+#define R_RISCV_LUI		46
+#define R_RISCV_GPREL_I		47
+#define R_RISCV_GPREL_S		48
+#define R_RISCV_TPREL_I		49
+#define R_RISCV_TPREL_S		50
+#define R_RISCV_RELAX		51
+
+#endif /* _UAPI_ASM_ELF_H */
diff --git a/arch/riscv/include/uapi/asm/ptrace.h b/arch/riscv/include/uapi/asm/ptrace.h
new file mode 100644
index 000000000000..01aee1654eae
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/ptrace.h
@@ -0,0 +1,87 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _UAPI_ASM_RISCV_PTRACE_H
+#define _UAPI_ASM_RISCV_PTRACE_H
+
+#ifndef __ASSEMBLY__
+
+#include <linux/types.h>
+
+/* User-mode register state for core dumps, ptrace, sigcontext
+ *
+ * This decouples struct pt_regs from the userspace ABI.
+ * struct user_regs_struct must form a prefix of struct pt_regs.
+ */
+struct user_regs_struct {
+	unsigned long pc;
+	unsigned long ra;
+	unsigned long sp;
+	unsigned long gp;
+	unsigned long tp;
+	unsigned long t0;
+	unsigned long t1;
+	unsigned long t2;
+	unsigned long s0;
+	unsigned long s1;
+	unsigned long a0;
+	unsigned long a1;
+	unsigned long a2;
+	unsigned long a3;
+	unsigned long a4;
+	unsigned long a5;
+	unsigned long a6;
+	unsigned long a7;
+	unsigned long s2;
+	unsigned long s3;
+	unsigned long s4;
+	unsigned long s5;
+	unsigned long s6;
+	unsigned long s7;
+	unsigned long s8;
+	unsigned long s9;
+	unsigned long s10;
+	unsigned long s11;
+	unsigned long t3;
+	unsigned long t4;
+	unsigned long t5;
+	unsigned long t6;
+};
+
+struct __riscv_f_ext_state {
+	__u32 f[32];
+	__u32 fcsr;
+};
+
+struct __riscv_d_ext_state {
+	__u64 f[32];
+	__u32 fcsr;
+};
+
+struct __riscv_q_ext_state {
+	__u64 f[64] __attribute__((aligned(16)));
+	__u32 fcsr;
+	/* Reserved for expansion of sigcontext structure.  Currently zeroed
+	 * upon signal, and must be zero upon sigreturn.  */
+	__u32 reserved[3];
+};
+
+union __riscv_fp_state {
+	struct __riscv_f_ext_state f;
+	struct __riscv_d_ext_state d;
+	struct __riscv_q_ext_state q;
+};
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _UAPI_ASM_RISCV_PTRACE_H */
diff --git a/arch/riscv/include/uapi/asm/sigcontext.h b/arch/riscv/include/uapi/asm/sigcontext.h
new file mode 100644
index 000000000000..805b015402ef
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/sigcontext.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _UAPI_ASM_RISCV_SIGCONTEXT_H
+#define _UAPI_ASM_RISCV_SIGCONTEXT_H
+
+#include <asm/ptrace.h>
+
+/* Signal context structure
+ *
+ * This contains the context saved before a signal handler is invoked;
+ * it is restored by sys_sigreturn / sys_rt_sigreturn.
+ */
+struct sigcontext {
+	struct user_regs_struct sc_regs;
+	union __riscv_fp_state sc_fpregs;
+};
+
+#endif /* _UAPI_ASM_RISCV_SIGCONTEXT_H */
diff --git a/arch/riscv/include/uapi/asm/siginfo.h b/arch/riscv/include/uapi/asm/siginfo.h
new file mode 100644
index 000000000000..f96849aac662
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/siginfo.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2016 SiFive, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_SIGINFO_H
+#define __ASM_SIGINFO_H
+
+#define __ARCH_SI_PREAMBLE_SIZE	(__SIZEOF_POINTER__ == 4 ? 12 : 16)
+
+#include <asm-generic/siginfo.h>
+
+#endif
diff --git a/arch/riscv/include/uapi/asm/ucontext.h b/arch/riscv/include/uapi/asm/ucontext.h
new file mode 100644
index 000000000000..52eff9febcfd
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/ucontext.h
@@ -0,0 +1,35 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2017 SiFive, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * This file was copied from arch/arm64/include/uapi/asm/ucontext.h
+ */
+#ifndef _UAPI__ASM_UCONTEXT_H
+#define _UAPI__ASM_UCONTEXT_H
+
+#include <linux/types.h>
+
+struct ucontext {
+	unsigned long	  uc_flags;
+	struct ucontext	 *uc_link;
+	stack_t		  uc_stack;
+	sigset_t	  uc_sigmask;
+	/* glibc uses a 1024-bit sigset_t */
+	__u8		  __unused[1024 / 8 - sizeof(sigset_t)];
+	/* last for future expansion */
+	struct sigcontext uc_mcontext;
+};
+
+#endif /* _UAPI__ASM_UCONTEXT_H */
diff --git a/arch/riscv/include/uapi/asm/unistd.h b/arch/riscv/include/uapi/asm/unistd.h
new file mode 100644
index 000000000000..7e3909ac3c18
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/unistd.h
@@ -0,0 +1,26 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+/* FIXME: This exists for now in order to maintain compatibility with our
+ * pre-upstream glibc, and will be removed for our real Linux submission.
+ */
+#define __ARCH_WANT_RENAMEAT
+
+#include <asm-generic/unistd.h>
+
+/*
+ * These system calls add support for AMOs on RISC-V systems without support
+ * for the A extension.
+ */
+#define __NR_sysriscv_cmpxchg32		(__NR_arch_specific_syscall + 0)
+#define __NR_sysriscv_cmpxchg64		(__NR_arch_specific_syscall + 1)
diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
new file mode 100644
index 000000000000..753cb9894feb
--- /dev/null
+++ b/arch/riscv/kernel/module.c
@@ -0,0 +1,215 @@
+/*
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  Copyright (C) 2017 Zihao Yu
+ */
+
+#include <linux/elf.h>
+#include <linux/err.h>
+#include <linux/errno.h>
+#include <linux/moduleloader.h>
+
+static int apply_r_riscv_64_rela(struct module *me, u32 *location, Elf_Addr v)
+{
+	*(u64 *)location = v;
+	return 0;
+}
+
+static int apply_r_riscv_branch_rela(struct module *me, u32 *location,
+				     Elf_Addr v)
+{
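+	/*
+	 * B-type immediate encoding: offset bits [12|10:5|4:1|11] live in
+	 * instruction bits [31|30:25|11:8|7].  The shifts below move each
+	 * field from its position in the byte offset to its position in
+	 * the instruction word.
+	 */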
+	s64 offset = (void *)v - (void *)location;
+	u32 imm12 = (offset & 0x1000) << (31 - 12);
+	u32 imm11 = (offset & 0x800) >> (11 - 7);
+	u32 imm10_5 = (offset & 0x7e0) << (30 - 10);
+	u32 imm4_1 = (offset & 0x1e) << (11 - 4);
+
+	*location = (*location & 0x1fff07f) | imm12 | imm11 | imm10_5 | imm4_1;
+	return 0;
+}
+
+static int apply_r_riscv_jal_rela(struct module *me, u32 *location,
+				  Elf_Addr v)
+{
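+	/*
+	 * J-type immediate encoding: offset bits [20|10:1|11|19:12] live in
+	 * instruction bits [31|30:21|20|19:12].
+	 */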
+	s64 offset = (void *)v - (void *)location;
+	u32 imm20 = (offset & 0x100000) << (31 - 20);
+	u32 imm19_12 = (offset & 0xff000);
+	u32 imm11 = (offset & 0x800) << (20 - 11);
+	u32 imm10_1 = (offset & 0x7fe) << (30 - 10);
+
+	*location = (*location & 0xfff) | imm20 | imm19_12 | imm11 | imm10_1;
+	return 0;
+}
+
+static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location,
+					 Elf_Addr v)
+{
+	s64 offset = (void *)v - (void *)location;
+	s32 hi20;
+
+	if (offset != (s32)offset) {
+		pr_err(
+		  "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
+		  me->name, v, location);
+		return -EINVAL;
+	}
+
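+	/*
+	 * Round to the nearest 4 KiB: the paired PCREL_LO12 relocation
+	 * holds a sign-extended 12-bit value, so hi20 must satisfy
+	 * hi20 + sext(lo12) == offset.
+	 */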
+	hi20 = (offset + 0x800) & 0xfffff000;
+	*location = (*location & 0xfff) | hi20;
+	return 0;
+}
+
+static int apply_r_riscv_pcrel_lo12_i_rela(struct module *me, u32 *location,
+					   Elf_Addr v)
+{
+	/* v is the lo12 value to fill. It is calculated before calling this
+	 * handler.
+	 */
+	*location = (*location & 0xfffff) | ((v & 0xfff) << 20);
+	return 0;
+}
+
+static int apply_r_riscv_pcrel_lo12_s_rela(struct module *me, u32 *location,
+					   Elf_Addr v)
+{
+	/* v is the lo12 value to fill. It is calculated before calling this
+	 * handler.
+	 */
+	u32 imm11_5 = (v & 0xfe0) << (31 - 11);
+	u32 imm4_0 = (v & 0x1f) << (11 - 4);
+
+	*location = (*location & 0x1fff07f) | imm11_5 | imm4_0;
+	return 0;
+}
+
+static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location,
+				       Elf_Addr v)
+{
+	s64 offset = (void *)v - (void *)location;
+	s32 fill_v = offset;
+	u32 hi20, lo12;
+
+	if (offset != fill_v) {
+		pr_err(
+		  "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
+		  me->name, v, location);
+		return -EINVAL;
+	}
+
+	hi20 = (offset + 0x800) & 0xfffff000;
+	lo12 = (offset - hi20) & 0xfff;
+	*location = (*location & 0xfff) | hi20;
+	*(location + 1) = (*(location + 1) & 0xfffff) | (lo12 << 20);
+	return 0;
+}
+
+static int apply_r_riscv_relax_rela(struct module *me, u32 *location,
+				    Elf_Addr v)
+{
+	return 0;
+}
+
+static int (*reloc_handlers_rela[]) (struct module *me, u32 *location,
+				Elf_Addr v) = {
+	[R_RISCV_64]			= apply_r_riscv_64_rela,
+	[R_RISCV_BRANCH]		= apply_r_riscv_branch_rela,
+	[R_RISCV_JAL]			= apply_r_riscv_jal_rela,
+	[R_RISCV_PCREL_HI20]		= apply_r_riscv_pcrel_hi20_rela,
+	[R_RISCV_PCREL_LO12_I]		= apply_r_riscv_pcrel_lo12_i_rela,
+	[R_RISCV_PCREL_LO12_S]		= apply_r_riscv_pcrel_lo12_s_rela,
+	[R_RISCV_CALL_PLT]		= apply_r_riscv_call_plt_rela,
+	[R_RISCV_RELAX]			= apply_r_riscv_relax_rela,
+};
+
+int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
+		       unsigned int symindex, unsigned int relsec,
+		       struct module *me)
+{
+	Elf_Rela *rel = (void *) sechdrs[relsec].sh_addr;
+	int (*handler)(struct module *me, u32 *location, Elf_Addr v);
+	Elf_Sym *sym;
+	u32 *location;
+	unsigned int i, type;
+	Elf_Addr v;
+	int res;
+
+	pr_debug("Applying relocate section %u to %u\n", relsec,
+	       sechdrs[relsec].sh_info);
+
+	for (i = 0; i < sechdrs[relsec].sh_size / sizeof(*rel); i++) {
+		/* This is where to make the change */
+		location = (void *)sechdrs[sechdrs[relsec].sh_info].sh_addr
+			+ rel[i].r_offset;
+		/* This is the symbol it is referring to */
+		sym = (Elf_Sym *)sechdrs[symindex].sh_addr
+			+ ELF_RISCV_R_SYM(rel[i].r_info);
+		if (IS_ERR_VALUE(sym->st_value)) {
+			/* Ignore unresolved weak symbol */
+			if (ELF_ST_BIND(sym->st_info) == STB_WEAK)
+				continue;
+			printk(KERN_WARNING "%s: Unknown symbol %s\n",
+			       me->name, strtab + sym->st_name);
+			return -ENOENT;
+		}
+
+		type = ELF_RISCV_R_TYPE(rel[i].r_info);
+
+		if (type < ARRAY_SIZE(reloc_handlers_rela))
+			handler = reloc_handlers_rela[type];
+		else
+			handler = NULL;
+
+		if (!handler) {
+			pr_err("%s: Unknown relocation type %u\n",
+			       me->name, type);
+			return -EINVAL;
+		}
+
+		v = sym->st_value + rel[i].r_addend;
+
+		if (type == R_RISCV_PCREL_LO12_I || type == R_RISCV_PCREL_LO12_S) {
+			unsigned int j;
+
+			for (j = 0; j < sechdrs[relsec].sh_size / sizeof(*rel); j++) {
+				u64 hi20_loc =
+					sechdrs[sechdrs[relsec].sh_info].sh_addr
+					+ rel[j].r_offset;
+				/* Find the corresponding HI20 PC-relative relocation entry */
+				if (hi20_loc == sym->st_value) {
+					Elf_Sym *hi20_sym =
+						(Elf_Sym *)sechdrs[symindex].sh_addr
+						+ ELF_RISCV_R_SYM(rel[j].r_info);
+					u64 hi20_sym_val =
+						hi20_sym->st_value
+						+ rel[j].r_addend;
+					/* Calculate lo12 */
+					s64 offset = hi20_sym_val - hi20_loc;
+					s32 hi20 = (offset + 0x800) & 0xfffff000;
+					s32 lo12 = offset - hi20;
+					v = lo12;
+					break;
+				}
+			}
+			if (j == sechdrs[relsec].sh_size / sizeof(*rel)) {
+				pr_err(
+				  "%s: Can not find HI20 PC-relative relocation information\n",
+				  me->name);
+				return -EINVAL;
+			}
+		}
+
+		res = handler(me, location, v);
+		if (res)
+			return res;
+	}
+
+	return 0;
+}
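
For reference, the (offset + 0x800) & 0xfffff000 rounding used by the HI20/LO12
handlers above guarantees that the low part fits in a sign-extended 12-bit
immediate.  A minimal user-space sketch (plain C, not part of this patch) that
checks the invariant hi20 + lo12 == offset:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Same rounding as apply_r_riscv_pcrel_hi20_rela() above: round the offset
 * to the nearest 4 KiB page so that lo12 lands in [-2048, 2047]. */
static void split_pcrel(int32_t offset, int32_t *hi20, int32_t *lo12)
{
	*hi20 = (int32_t)(((uint32_t)offset + 0x800u) & 0xfffff000u);
	*lo12 = offset - *hi20;
}

int main(void)
{
	int32_t hi20, lo12, offsets[] = { 0x12345fff, -4, 0x7ff, -0x1000 };

	for (unsigned i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++) {
		split_pcrel(offsets[i], &hi20, &lo12);
		assert(hi20 + lo12 == offsets[i]);
		assert(lo12 >= -2048 && lo12 < 2048);
		printf("offset=%d hi20=0x%x lo12=%d\n",
		       offsets[i], (uint32_t)hi20, lo12);
	}
	return 0;
}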
diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
new file mode 100644
index 000000000000..69b3b2d10664
--- /dev/null
+++ b/arch/riscv/kernel/ptrace.c
@@ -0,0 +1,147 @@
+/*
+ * Copyright 2010 Tilera Corporation. All Rights Reserved.
+ * Copyright 2015 Regents of the University of California
+ * Copyright 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ *
+ * Copied from arch/tile/kernel/ptrace.c
+ */
+
+#include <asm/ptrace.h>
+#include <asm/syscall.h>
+#include <asm/thread_info.h>
+#include <linux/ptrace.h>
+#include <linux/elf.h>
+#include <linux/regset.h>
+#include <linux/sched.h>
+#include <linux/sched/task_stack.h>
+#include <linux/tracehook.h>
+#include <trace/events/syscalls.h>
+
+enum riscv_regset {
+	REGSET_X,
+};
+
+/*
+ * Get registers from task and ready the result for userspace.
+ */
+static char *getregs(struct task_struct *child, struct pt_regs *uregs)
+{
+	*uregs = *task_pt_regs(child);
+	return (char *)uregs;
+}
+
+/* Put registers back to task. */
+static void putregs(struct task_struct *child, struct pt_regs *uregs)
+{
+	struct pt_regs *regs = task_pt_regs(child);
+	*regs = *uregs;
+}
+
+static int riscv_gpr_get(struct task_struct *target,
+			 const struct user_regset *regset,
+			 unsigned int pos, unsigned int count,
+			 void *kbuf, void __user *ubuf)
+{
+	struct pt_regs regs;
+
+	getregs(target, &regs);
+
+	return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &regs, 0,
+				   sizeof(regs));
+}
+
+static int riscv_gpr_set(struct task_struct *target,
+			 const struct user_regset *regset,
+			 unsigned int pos, unsigned int count,
+			 const void *kbuf, const void __user *ubuf)
+{
+	int ret;
+	struct pt_regs regs;
+
+	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &regs, 0,
+				 sizeof(regs));
+	if (ret)
+		return ret;
+
+	putregs(target, &regs);
+
+	return 0;
+}
+
+
+static const struct user_regset riscv_user_regset[] = {
+	[REGSET_X] = {
+		.core_note_type = NT_PRSTATUS,
+		.n = ELF_NGREG,
+		.size = sizeof(elf_greg_t),
+		.align = sizeof(elf_greg_t),
+		.get = &riscv_gpr_get,
+		.set = &riscv_gpr_set,
+	},
+};
+
+static const struct user_regset_view riscv_user_native_view = {
+	.name = "riscv",
+	.e_machine = EM_RISCV,
+	.regsets = riscv_user_regset,
+	.n = ARRAY_SIZE(riscv_user_regset),
+};
+
+const struct user_regset_view *task_user_regset_view(struct task_struct *task)
+{
+	return &riscv_user_native_view;
+}
+
+void ptrace_disable(struct task_struct *child)
+{
+	clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+}
+
+long arch_ptrace(struct task_struct *child, long request,
+		 unsigned long addr, unsigned long data)
+{
+	long ret = -EIO;
+
+	switch (request) {
+	default:
+		ret = ptrace_request(child, request, addr, data);
+		break;
+	}
+
+	return ret;
+}
+
+/* Allows PTRACE_SYSCALL to work.  These are called from entry.S in
+ * {handle,ret_from}_syscall.
+ */
+void do_syscall_trace_enter(struct pt_regs *regs)
+{
+	if (test_thread_flag(TIF_SYSCALL_TRACE))
+		if (tracehook_report_syscall_entry(regs))
+			syscall_set_nr(current, regs, -1);
+
+#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
+	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
+		trace_sys_enter(regs, syscall_get_nr(current, regs));
+#endif
+}
+
+void do_syscall_trace_exit(struct pt_regs *regs)
+{
+	if (test_thread_flag(TIF_SYSCALL_TRACE))
+		tracehook_report_syscall_exit(regs, 0);
+
+#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
+	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
+		trace_sys_exit(regs, regs->regs[0]);
+#endif
+}
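
The regset above is what a debugger reaches via PTRACE_GETREGSET with
NT_PRSTATUS.  A small tracer sketch (illustrative only; it assumes the uapi
asm/ptrace.h from this series is installed) that dumps a stopped child's pc
and sp:

#include <elf.h>
#include <signal.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/uio.h>
#include <sys/wait.h>
#include <unistd.h>
#include <asm/ptrace.h>		/* struct user_regs_struct (uapi header above) */

int main(void)
{
	pid_t pid = fork();

	if (pid == 0) {
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		raise(SIGSTOP);		/* let the parent attach */
		_exit(0);
	}

	waitpid(pid, NULL, 0);

	struct user_regs_struct regs;
	struct iovec iov = { .iov_base = &regs, .iov_len = sizeof(regs) };

	/* Served by riscv_gpr_get() via the REGSET_X entry above. */
	if (ptrace(PTRACE_GETREGSET, pid, (void *)NT_PRSTATUS, &iov) == 0)
		printf("child pc=%lx sp=%lx\n", regs.pc, regs.sp);

	kill(pid, SIGKILL);
	waitpid(pid, NULL, 0);
	return 0;
}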
diff --git a/arch/riscv/kernel/riscv_ksyms.c b/arch/riscv/kernel/riscv_ksyms.c
new file mode 100644
index 000000000000..23cc81ec9e94
--- /dev/null
+++ b/arch/riscv/kernel/riscv_ksyms.c
@@ -0,0 +1,15 @@
+/*
+ * Copyright (C) 2017 Zihao Yu
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/export.h>
+#include <linux/uaccess.h>
+
+/*
+ * Assembly functions that may be used (directly or indirectly) by modules
+ */
+EXPORT_SYMBOL(__copy_user);
diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
new file mode 100644
index 000000000000..f32a568ce25b
--- /dev/null
+++ b/arch/riscv/kernel/signal.c
@@ -0,0 +1,288 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+#include <linux/signal.h>
+#include <linux/uaccess.h>
+#include <linux/syscalls.h>
+#include <linux/tracehook.h>
+#include <linux/linkage.h>
+
+#include <asm/ucontext.h>
+#include <asm/vdso.h>
+#include <asm/switch_to.h>
+#include <asm/csr.h>
+
+#define DEBUG_SIG 0
+
+struct rt_sigframe {
+	struct siginfo info;
+	struct ucontext uc;
+};
+
+static long restore_d_state(struct pt_regs *regs,
+	struct __riscv_d_ext_state __user *state)
+{
+	long err;
+	err = __copy_from_user(&current->thread.fstate, state, sizeof(*state));
+	if (likely(!err))
+		fstate_restore(current, regs);
+	return err;
+}
+
+static long save_d_state(struct pt_regs *regs,
+	struct __riscv_d_ext_state __user *state)
+{
+	fstate_save(current, regs);
+	return __copy_to_user(state, &current->thread.fstate, sizeof(*state));
+}
+
+static long restore_sigcontext(struct pt_regs *regs,
+	struct sigcontext __user *sc)
+{
+	long err;
+	size_t i;
+	/* sc_regs is structured the same as the start of pt_regs */
+	err = __copy_from_user(regs, &sc->sc_regs, sizeof(sc->sc_regs));
+	if (unlikely(err))
+		return err;
+	/* Restore the floating-point state. */
+	err = restore_d_state(regs, &sc->sc_fpregs.d);
+	if (unlikely(err))
+		return err;
+	/* We support no other extension state at this time. */
+	for (i = 0; i < ARRAY_SIZE(sc->sc_fpregs.q.reserved); i++) {
+		u32 value;
+		err = __get_user(value, &sc->sc_fpregs.q.reserved[i]);
+		if (unlikely(err))
+			break;
+		if (value != 0)
+			return -EINVAL;
+	}
+	return err;
+}
+
+SYSCALL_DEFINE0(rt_sigreturn)
+{
+	struct pt_regs *regs = current_pt_regs();
+	struct rt_sigframe __user *frame;
+	struct task_struct *task;
+	sigset_t set;
+
+	/* Always make any pending restarted system calls return -EINTR */
+	current->restart_block.fn = do_no_restart_syscall;
+
+	frame = (struct rt_sigframe __user *)regs->sp;
+
+	if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
+		goto badframe;
+
+	if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
+		goto badframe;
+
+	set_current_blocked(&set);
+
+	if (restore_sigcontext(regs, &frame->uc.uc_mcontext))
+		goto badframe;
+
+	if (restore_altstack(&frame->uc.uc_stack))
+		goto badframe;
+
+	return regs->a0;
+
+badframe:
+	task = current;
+	if (show_unhandled_signals) {
+		pr_info_ratelimited(
+			"%s[%d]: bad frame in %s: frame=%p pc=%p sp=%p\n",
+			task->comm, task_pid_nr(task), __func__,
+			frame, (void *)regs->sepc, (void *)regs->sp);
+	}
+	force_sig(SIGSEGV, task);
+	return 0;
+}
+
+static long setup_sigcontext(struct rt_sigframe __user *frame,
+	struct pt_regs *regs)
+{
+	struct sigcontext __user *sc = &frame->uc.uc_mcontext;
+	long err;
+	size_t i;
+	/* sc_regs is structured the same as the start of pt_regs */
+	err = __copy_to_user(&sc->sc_regs, regs, sizeof(sc->sc_regs));
+	/* Save the floating-point state. */
+	err |= save_d_state(regs, &sc->sc_fpregs.d);
+	/* We support no other extension state at this time. */
+	for (i = 0; i < ARRAY_SIZE(sc->sc_fpregs.q.reserved); i++)
+		err |= __put_user(0, &sc->sc_fpregs.q.reserved[i]);
+	return err;
+}
+
+static inline void __user *get_sigframe(struct ksignal *ksig,
+	struct pt_regs *regs, size_t framesize)
+{
+	unsigned long sp;
+	/* Default to using normal stack */
+	sp = regs->sp;
+
+	/*
+	 * If we are on the alternate signal stack and would overflow it, don't.
+	 * Return an always-bogus address instead so we will die with SIGSEGV.
+	 */
+	if (on_sig_stack(sp) && !likely(on_sig_stack(sp - framesize)))
+		return (void __user __force *)(-1UL);
+
+	/* This is the X/Open sanctioned signal stack switching. */
+	sp = sigsp(sp, ksig) - framesize;
+
+	/* Align the stack frame. */
+	sp &= ~0xfUL;
+
+	return (void __user *)sp;
+}
+
+
+static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
+	struct pt_regs *regs)
+{
+	struct rt_sigframe __user *frame;
+	long err = 0;
+
+	frame = get_sigframe(ksig, regs, sizeof(*frame));
+	if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+		return -EFAULT;
+
+	err |= copy_siginfo_to_user(&frame->info, &ksig->info);
+
+	/* Create the ucontext. */
+	err |= __put_user(0, &frame->uc.uc_flags);
+	err |= __put_user(NULL, &frame->uc.uc_link);
+	err |= __save_altstack(&frame->uc.uc_stack, regs->sp);
+	err |= setup_sigcontext(frame, regs);
+	err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
+	if (err)
+		return -EFAULT;
+
+	/* Set up to return from userspace. */
+	regs->ra = (unsigned long)VDSO_SYMBOL(
+		current->mm->context.vdso, rt_sigreturn);
+
+	/*
+	 * Set up registers for signal handler.
+	 * Registers that we don't modify keep the value they had from
+	 * user-space at the time we took the signal.
+	 * We always pass siginfo and mcontext, regardless of SA_SIGINFO,
+	 * since some things rely on this (e.g. glibc's debug/segfault.c).
+	 */
+	regs->sepc = (unsigned long)ksig->ka.sa.sa_handler;
+	regs->sp = (unsigned long)frame;
+	regs->a0 = ksig->sig;                     /* a0: signal number */
+	regs->a1 = (unsigned long)(&frame->info); /* a1: siginfo pointer */
+	regs->a2 = (unsigned long)(&frame->uc);   /* a2: ucontext pointer */
+
+#if DEBUG_SIG
+	pr_info("SIG deliver (%s:%d): sig=%d pc=%p ra=%p sp=%p\n",
+		current->comm, task_pid_nr(current), ksig->sig,
+		(void *)regs->sepc, (void *)regs->ra, frame);
+#endif
+
+	return 0;
+}
+
+static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
+{
+	sigset_t *oldset = sigmask_to_save();
+	int ret;
+
+	/* Are we from a system call? */
+	if (regs->scause == EXC_SYSCALL) {
+		/* If so, check system call restarting.. */
+		switch (regs->a0) {
+		case -ERESTART_RESTARTBLOCK:
+		case -ERESTARTNOHAND:
+			regs->a0 = -EINTR;
+			break;
+
+		case -ERESTARTSYS:
+			if (!(ksig->ka.sa.sa_flags & SA_RESTART)) {
+				regs->a0 = -EINTR;
+				break;
+			}
+			/* fallthrough */
+		case -ERESTARTNOINTR:
+			regs->sepc -= 0x4;
+			break;
+		}
+	}
+
+	/* Set up the stack frame */
+	ret = setup_rt_frame(ksig, oldset, regs);
+
+	signal_setup_done(ret, ksig, 0);
+}
+
+static void do_signal(struct pt_regs *regs)
+{
+	struct ksignal ksig;
+
+	if (get_signal(&ksig)) {
+		/* Actually deliver the signal */
+		handle_signal(&ksig, regs);
+		return;
+	}
+
+	/* Did we come from a system call? */
+	if (regs->scause == EXC_SYSCALL) {
+		/* Restart the system call - no handlers present */
+		switch (regs->a0) {
+		case -ERESTARTNOHAND:
+		case -ERESTARTSYS:
+		case -ERESTARTNOINTR:
+			regs->sepc -= 0x4;
+			break;
+		case -ERESTART_RESTARTBLOCK:
+			regs->a7 = __NR_restart_syscall;
+			regs->sepc -= 0x4;
+			break;
+		}
+	}
+
+	/* If there is no signal to deliver, we just put the saved
+	 * sigmask back.
+	 */
+	restore_saved_sigmask();
+}
+
+/*
+ * notification of userspace execution resumption
+ * - triggered by the _TIF_WORK_MASK flags
+ */
+asmlinkage void do_notify_resume(struct pt_regs *regs,
+	unsigned long thread_info_flags)
+{
+	/* Handle pending signal delivery */
+	if (thread_info_flags & _TIF_SIGPENDING)
+		do_signal(regs);
+
+	if (thread_info_flags & _TIF_NOTIFY_RESUME) {
+		clear_thread_flag(TIF_NOTIFY_RESUME);
+		tracehook_notify_resume(regs);
+	}
+}
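
From user space, the register setup in setup_rt_frame() is what makes a
three-argument SA_SIGINFO handler work: a0 carries the signal number, a1 the
siginfo pointer, a2 the ucontext pointer, and ra points at the
__vdso_rt_sigreturn trampoline so that a plain return re-enters the kernel.
A minimal sketch (illustrative only, standard POSIX APIs):

#include <signal.h>
#include <stdio.h>
#include <string.h>

static void handler(int sig, siginfo_t *info, void *ucontext)
{
	/* sig arrives in a0, info in a1, ucontext in a2, as set up by
	 * setup_rt_frame(); ucontext points at frame->uc on the signal
	 * stack, laid out per uapi/asm/ucontext.h and sigcontext.h in
	 * this series.  Returning jumps to ra, i.e. the vDSO trampoline,
	 * which issues the real rt_sigreturn. */
	printf("got signal %d from pid %d\n", sig, (int)info->si_pid);
	(void)ucontext;
}

int main(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_sigaction = handler;
	sa.sa_flags = SA_SIGINFO;
	sigaction(SIGUSR1, &sa, NULL);

	raise(SIGUSR1);
	return 0;
}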
diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
new file mode 100644
index 000000000000..ab699efe636e
--- /dev/null
+++ b/arch/riscv/kernel/sys_riscv.c
@@ -0,0 +1,87 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2014 Darius Rad <darius@bluespec.com>
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/syscalls.h>
+#include <asm/unistd.h>
+
+#ifdef CONFIG_64BIT
+SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
+	unsigned long, prot, unsigned long, flags,
+	unsigned long, fd, off_t, offset)
+{
+	if (unlikely(offset & (~PAGE_MASK)))
+		return -EINVAL;
+	return sys_mmap_pgoff(addr, len, prot, flags, fd, offset >> PAGE_SHIFT);
+}
+#else
+SYSCALL_DEFINE6(mmap2, unsigned long, addr, unsigned long, len,
+	unsigned long, prot, unsigned long, flags,
+	unsigned long, fd, off_t, offset)
+{
+	/* Note that the shift for mmap2 is constant (12),
+	 * regardless of PAGE_SIZE
+	 */
+	if (unlikely(offset & (~PAGE_MASK >> 12)))
+		return -EINVAL;
+	return sys_mmap_pgoff(addr, len, prot, flags, fd,
+		offset >> (PAGE_SHIFT - 12));
+}
+#endif /* !CONFIG_64BIT */
+
+SYSCALL_DEFINE3(sysriscv_cmpxchg32, unsigned long, arg1, unsigned long, arg2,
+		unsigned long, arg3)
+{
+	unsigned long flags;
+	unsigned long prev;
+	unsigned int *ptr;
+	unsigned int err;
+
+	ptr = (unsigned int *)arg1;
+	if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned int)))
+		return -EFAULT;
+
+	preempt_disable();
+	raw_local_irq_save(flags);
+	err = __get_user(prev, ptr);
+	if (likely(!err && prev == arg2))
+		err = __put_user(arg3, ptr);
+	raw_local_irq_restore(flags);
+	preempt_enable();
+
+	return unlikely(err) ? err : prev;
+}
+
+SYSCALL_DEFINE3(sysriscv_cmpxchg64, unsigned long, arg1, unsigned long, arg2,
+		unsigned long, arg3)
+{
+	unsigned long flags;
+	unsigned long prev;
+	unsigned long *ptr;
+	unsigned int err;
+
+	ptr = (unsigned long *)arg1;
+	if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned long)))
+		return -EFAULT;
+
+	preempt_disable();
+	raw_local_irq_save(flags);
+	err = __get_user(prev, ptr);
+	if (likely(!err && prev == arg2))
+		err = __put_user(arg3, ptr);
+	raw_local_irq_restore(flags);
+	preempt_enable();
+
+	return unlikely(err) ? err : prev;
+}
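
These two syscalls are only reachable on configurations without the A
extension; the vDSO routines later in this patch fall back to them.  A
user-space sketch of the calling convention (illustrative only; the raw
number below assumes __NR_arch_specific_syscall == 244 from
asm-generic/unistd.h, and the wrapper name is made up for the example):

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_sysriscv_cmpxchg32
#define __NR_sysriscv_cmpxchg32 (244 + 0)	/* __NR_arch_specific_syscall + 0 */
#endif

/* Returns the previous value of *ptr, mirroring the kernel handler above. */
static unsigned int cmpxchg32_syscall(unsigned int *ptr, unsigned int old,
				      unsigned int new)
{
	return syscall(__NR_sysriscv_cmpxchg32, ptr, old, new);
}

int main(void)
{
	unsigned int v = 1;

	if (cmpxchg32_syscall(&v, 1, 2) == 1 && v == 2)
		printf("compare-and-exchange succeeded\n");
	return 0;
}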
diff --git a/arch/riscv/kernel/syscall_table.c b/arch/riscv/kernel/syscall_table.c
new file mode 100644
index 000000000000..8fa239b67cbc
--- /dev/null
+++ b/arch/riscv/kernel/syscall_table.c
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2009 Arnd Bergmann <arnd@arndb.de>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/syscalls.h>
+
+#include <asm/syscalls.h>
+
+#undef __SYSCALL
+#define __SYSCALL(nr, call)	[nr] = (call),
+
+void *sys_call_table[__NR_syscalls] = {
+	[0 ... __NR_syscalls - 1] = sys_ni_syscall,
+#include <asm/unistd.h>
+};
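
The table relies on two tricks: a GCC range designator to default every slot
to sys_ni_syscall, and the __SYSCALL() expansions pulled in from asm/unistd.h
to override the populated slots (later designated initializers win, at the
cost of a -Woverride-init warning).  A stand-alone model of the pattern, with
made-up stand-in numbers rather than real syscall numbers:

#include <stdio.h>

#define sys_ni_syscall	((void *)0)

/* Stand-ins for what <asm/unistd.h> would normally provide. */
#define __NR_first	3
#define __NR_second	7
#define NR_syscalls	16

#define __SYSCALL(nr, call)	[nr] = (call),

static void *demo_call_table[NR_syscalls] = {
	[0 ... NR_syscalls - 1] = sys_ni_syscall,	/* GCC range designator */
	__SYSCALL(__NR_first, (void *)0x1)
	__SYSCALL(__NR_second, (void *)0x2)
};

int main(void)
{
	for (int i = 0; i < NR_syscalls; i++)
		if (demo_call_table[i] != sys_ni_syscall)
			printf("slot %d is populated\n", i);
	return 0;
}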
diff --git a/arch/riscv/kernel/vdso/.gitignore b/arch/riscv/kernel/vdso/.gitignore
new file mode 100644
index 000000000000..f8b69d84238e
--- /dev/null
+++ b/arch/riscv/kernel/vdso/.gitignore
@@ -0,0 +1 @@
+vdso.lds
diff --git a/arch/riscv/kernel/vdso/cmpxchg32.S b/arch/riscv/kernel/vdso/cmpxchg32.S
new file mode 100644
index 000000000000..387deb4e852c
--- /dev/null
+++ b/arch/riscv/kernel/vdso/cmpxchg32.S
@@ -0,0 +1,40 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/linkage.h>
+#include <asm/unistd.h>
+
+	.text
+ENTRY(__vdso_cmpxchg32)
+	.cfi_startproc
+#ifdef CONFIG_ISA_A
+/*
+ * a0: ptr
+ * a1: compare value
+ * a2: exchange value
+ */
+0:
+	lr.w.aqrl t0, 0(a0)
+	bne       t0, a1, 1f
+	sc.w.aqrl t1, a2, 0(a0)
+	bnez      t1, 0b
+1:
+	mv        a0, t0
+	ret
+#else
+	li a7, __NR_sysriscv_cmpxchg32
+	scall
+	ret
+#endif
+	.cfi_endproc
+ENDPROC(__vdso_cmpxchg32)
diff --git a/arch/riscv/kernel/vdso/cmpxchg64.S b/arch/riscv/kernel/vdso/cmpxchg64.S
new file mode 100644
index 000000000000..6286842f78c0
--- /dev/null
+++ b/arch/riscv/kernel/vdso/cmpxchg64.S
@@ -0,0 +1,40 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/linkage.h>
+#include <asm/unistd.h>
+
+	.text
+ENTRY(__vdso_cmpxchg64)
+	.cfi_startproc
+#ifdef CONFIG_ISA_A
+/*
+ * a0: ptr
+ * a1: compare value
+ * a2: exchange value
+ */
+0:
+	lr.d.aqrl t0, 0(a0)
+	bne       t0, a1, 1f
+	sc.d.aqrl t1, a2, 0(a0)
+	bnez      t1, 0b
+1:
+	mv        a0, t0
+	ret
+#else
+	li a7, __NR_sysriscv_cmpxchg64
+	scall
+	ret
+#endif
+	.cfi_endproc
+ENDPROC(__vdso_cmpxchg64)
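
On hardware that does implement the A extension, ordinary code does not need
these vDSO entry points at all: the compiler's built-in atomics expand to the
same lr/sc retry loop shown above.  A sketch of the C equivalent:

#include <stdint.h>
#include <stdio.h>

/* On RV64 with the A extension the compiler emits an lr.d/sc.d loop much
 * like __vdso_cmpxchg64; the function returns the previous value, matching
 * the vDSO's calling convention. */
static uint64_t cmpxchg64(uint64_t *ptr, uint64_t expected, uint64_t desired)
{
	__atomic_compare_exchange_n(ptr, &expected, desired, 0,
				    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
	return expected;
}

int main(void)
{
	uint64_t v = 5;

	printf("prev=%llu now=%llu\n",
	       (unsigned long long)cmpxchg64(&v, 5, 9),
	       (unsigned long long)v);
	return 0;
}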
diff --git a/arch/riscv/kernel/vdso/sigreturn.S b/arch/riscv/kernel/vdso/sigreturn.S
new file mode 100644
index 000000000000..f5aa3d72acfb
--- /dev/null
+++ b/arch/riscv/kernel/vdso/sigreturn.S
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/linkage.h>
+#include <asm/unistd.h>
+
+	.text
+ENTRY(__vdso_rt_sigreturn)
+	.cfi_startproc
+	.cfi_signal_frame
+	li a7, __NR_rt_sigreturn
+	scall
+	.cfi_endproc
+ENDPROC(__vdso_rt_sigreturn)
diff --git a/arch/riscv/kernel/vdso/vdso.S b/arch/riscv/kernel/vdso/vdso.S
new file mode 100644
index 000000000000..7055de5f9174
--- /dev/null
+++ b/arch/riscv/kernel/vdso/vdso.S
@@ -0,0 +1,27 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <asm/page.h>
+
+	__PAGE_ALIGNED_DATA
+
+	.globl vdso_start, vdso_end
+	.balign PAGE_SIZE
+vdso_start:
+	.incbin "arch/riscv/kernel/vdso/vdso.so"
+	.balign PAGE_SIZE
+vdso_end:
+
+	.previous
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 8/9] RISC-V: User-facing API
@ 2017-06-28 18:55       ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This patch contains code that is in some way visible to the user,
including the system call interface, the VDSO, module loading, and
signal handling.  It also contains some generic code that is ABI-visible.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/mmu.h              |  26 +++
 arch/riscv/include/asm/ptrace.h           | 116 ++++++++++++
 arch/riscv/include/asm/syscall.h          |  90 ++++++++++
 arch/riscv/include/asm/syscalls.h         |  25 +++
 arch/riscv/include/asm/unistd.h           |  16 ++
 arch/riscv/include/asm/vdso.h             |  32 ++++
 arch/riscv/include/uapi/asm/auxvec.h      |  24 +++
 arch/riscv/include/uapi/asm/bitsperlong.h |  25 +++
 arch/riscv/include/uapi/asm/byteorder.h   |  23 +++
 arch/riscv/include/uapi/asm/elf.h         |  83 +++++++++
 arch/riscv/include/uapi/asm/ptrace.h      |  87 +++++++++
 arch/riscv/include/uapi/asm/sigcontext.h  |  29 +++
 arch/riscv/include/uapi/asm/siginfo.h     |  24 +++
 arch/riscv/include/uapi/asm/ucontext.h    |  35 ++++
 arch/riscv/include/uapi/asm/unistd.h      |  26 +++
 arch/riscv/kernel/module.c                | 215 ++++++++++++++++++++++
 arch/riscv/kernel/ptrace.c                | 147 +++++++++++++++
 arch/riscv/kernel/riscv_ksyms.c           |  15 ++
 arch/riscv/kernel/signal.c                | 288 ++++++++++++++++++++++++++++++
 arch/riscv/kernel/sys_riscv.c             |  87 +++++++++
 arch/riscv/kernel/syscall_table.c         |  25 +++
 arch/riscv/kernel/vdso/.gitignore         |   1 +
 arch/riscv/kernel/vdso/cmpxchg32.S        |  40 +++++
 arch/riscv/kernel/vdso/cmpxchg64.S        |  40 +++++
 arch/riscv/kernel/vdso/sigreturn.S        |  24 +++
 arch/riscv/kernel/vdso/vdso.S             |  27 +++
 26 files changed, 1570 insertions(+)
 create mode 100644 arch/riscv/include/asm/mmu.h
 create mode 100644 arch/riscv/include/asm/ptrace.h
 create mode 100644 arch/riscv/include/asm/syscall.h
 create mode 100644 arch/riscv/include/asm/syscalls.h
 create mode 100644 arch/riscv/include/asm/unistd.h
 create mode 100644 arch/riscv/include/asm/vdso.h
 create mode 100644 arch/riscv/include/uapi/asm/auxvec.h
 create mode 100644 arch/riscv/include/uapi/asm/bitsperlong.h
 create mode 100644 arch/riscv/include/uapi/asm/byteorder.h
 create mode 100644 arch/riscv/include/uapi/asm/elf.h
 create mode 100644 arch/riscv/include/uapi/asm/ptrace.h
 create mode 100644 arch/riscv/include/uapi/asm/sigcontext.h
 create mode 100644 arch/riscv/include/uapi/asm/siginfo.h
 create mode 100644 arch/riscv/include/uapi/asm/ucontext.h
 create mode 100644 arch/riscv/include/uapi/asm/unistd.h
 create mode 100644 arch/riscv/kernel/module.c
 create mode 100644 arch/riscv/kernel/ptrace.c
 create mode 100644 arch/riscv/kernel/riscv_ksyms.c
 create mode 100644 arch/riscv/kernel/signal.c
 create mode 100644 arch/riscv/kernel/sys_riscv.c
 create mode 100644 arch/riscv/kernel/syscall_table.c
 create mode 100644 arch/riscv/kernel/vdso/.gitignore
 create mode 100644 arch/riscv/kernel/vdso/cmpxchg32.S
 create mode 100644 arch/riscv/kernel/vdso/cmpxchg64.S
 create mode 100644 arch/riscv/kernel/vdso/sigreturn.S
 create mode 100644 arch/riscv/kernel/vdso/vdso.S

diff --git a/arch/riscv/include/asm/mmu.h b/arch/riscv/include/asm/mmu.h
new file mode 100644
index 000000000000..66805cba9a27
--- /dev/null
+++ b/arch/riscv/include/asm/mmu.h
@@ -0,0 +1,26 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef _ASM_RISCV_MMU_H
+#define _ASM_RISCV_MMU_H
+
+#ifndef __ASSEMBLY__
+
+typedef struct {
+	void *vdso;
+} mm_context_t;
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_MMU_H */
diff --git a/arch/riscv/include/asm/ptrace.h b/arch/riscv/include/asm/ptrace.h
new file mode 100644
index 000000000000..340201868842
--- /dev/null
+++ b/arch/riscv/include/asm/ptrace.h
@@ -0,0 +1,116 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PTRACE_H
+#define _ASM_RISCV_PTRACE_H
+
+#include <uapi/asm/ptrace.h>
+#include <asm/csr.h>
+
+#ifndef __ASSEMBLY__
+
+struct pt_regs {
+	unsigned long sepc;
+	unsigned long ra;
+	unsigned long sp;
+	unsigned long gp;
+	unsigned long tp;
+	unsigned long t0;
+	unsigned long t1;
+	unsigned long t2;
+	unsigned long s0;
+	unsigned long s1;
+	unsigned long a0;
+	unsigned long a1;
+	unsigned long a2;
+	unsigned long a3;
+	unsigned long a4;
+	unsigned long a5;
+	unsigned long a6;
+	unsigned long a7;
+	unsigned long s2;
+	unsigned long s3;
+	unsigned long s4;
+	unsigned long s5;
+	unsigned long s6;
+	unsigned long s7;
+	unsigned long s8;
+	unsigned long s9;
+	unsigned long s10;
+	unsigned long s11;
+	unsigned long t3;
+	unsigned long t4;
+	unsigned long t5;
+	unsigned long t6;
+	/* Supervisor CSRs */
+	unsigned long sstatus;
+	unsigned long sbadaddr;
+	unsigned long scause;
+};
+
+#ifdef CONFIG_64BIT
+#define REG_FMT "%016lx"
+#else
+#define REG_FMT "%08lx"
+#endif
+
+#define user_mode(regs) (((regs)->sstatus & SR_PS) == 0)
+
+
+/* Helpers for working with the instruction pointer */
+#define GET_IP(regs) ((regs)->sepc)
+#define SET_IP(regs, val) (GET_IP(regs) = (val))
+
+static inline unsigned long instruction_pointer(struct pt_regs *regs)
+{
+	return GET_IP(regs);
+}
+static inline void instruction_pointer_set(struct pt_regs *regs,
+					   unsigned long val)
+{
+	SET_IP(regs, val);
+}
+
+#define profile_pc(regs) instruction_pointer(regs)
+
+/* Helpers for working with the user stack pointer */
+#define GET_USP(regs) ((regs)->sp)
+#define SET_USP(regs, val) (GET_USP(regs) = (val))
+
+static inline unsigned long user_stack_pointer(struct pt_regs *regs)
+{
+	return GET_USP(regs);
+}
+static inline void user_stack_pointer_set(struct pt_regs *regs,
+					  unsigned long val)
+{
+	SET_USP(regs, val);
+}
+
+/* Helpers for working with the frame pointer */
+#define GET_FP(regs) ((regs)->s0)
+#define SET_FP(regs, val) (GET_FP(regs) = (val))
+
+static inline unsigned long frame_pointer(struct pt_regs *regs)
+{
+	return GET_FP(regs);
+}
+static inline void frame_pointer_set(struct pt_regs *regs,
+				     unsigned long val)
+{
+	SET_FP(regs, val);
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PTRACE_H */
diff --git a/arch/riscv/include/asm/syscall.h b/arch/riscv/include/asm/syscall.h
new file mode 100644
index 000000000000..c70f0e7a06b8
--- /dev/null
+++ b/arch/riscv/include/asm/syscall.h
@@ -0,0 +1,90 @@
+/*
+ * Copyright (C) 2008-2009 Red Hat, Inc.  All rights reserved.
+ * Copyright 2010 Tilera Corporation. All Rights Reserved.
+ * Copyright 2015 Regents of the University of California, Berkeley
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ *
+ * See asm-generic/syscall.h for descriptions of what we must do here.
+ */
+
+#ifndef _ASM_RISCV_SYSCALL_H
+#define _ASM_RISCV_SYSCALL_H
+
+#include <linux/sched.h>
+#include <linux/err.h>
+
+/* The array of function pointers for syscalls. */
+extern void *sys_call_table[];
+
+/*
+ * Only the low 32 bits of the syscall number (held in a7) are
+ * meaningful, so we return int.  This importantly ignores the
+ * high bits on 64-bit, so comparisons sign-extend the low 32 bits.
+ */
+static inline int syscall_get_nr(struct task_struct *task,
+				 struct pt_regs *regs)
+{
+	return regs->a7;
+}
+
+static inline void syscall_set_nr(struct task_struct *task,
+				  struct pt_regs *regs,
+				  int sysno)
+{
+	regs->a7 = sysno;
+}
+
+static inline void syscall_rollback(struct task_struct *task,
+				    struct pt_regs *regs)
+{
+	/* FIXME: We can't do this... */
+}
+
+static inline long syscall_get_error(struct task_struct *task,
+				     struct pt_regs *regs)
+{
+	unsigned long error = regs->a0;
+
+	return IS_ERR_VALUE(error) ? error : 0;
+}
+
+static inline long syscall_get_return_value(struct task_struct *task,
+					    struct pt_regs *regs)
+{
+	return regs->a0;
+}
+
+static inline void syscall_set_return_value(struct task_struct *task,
+					    struct pt_regs *regs,
+					    int error, long val)
+{
+	regs->a0 = (long) error ?: val;
+}
+
+static inline void syscall_get_arguments(struct task_struct *task,
+					 struct pt_regs *regs,
+					 unsigned int i, unsigned int n,
+					 unsigned long *args)
+{
+	BUG_ON(i + n > 6);
+	memcpy(args, &regs->a0 + i, n * sizeof(args[0]));
+}
+
+static inline void syscall_set_arguments(struct task_struct *task,
+					 struct pt_regs *regs,
+					 unsigned int i, unsigned int n,
+					 const unsigned long *args)
+{
+	BUG_ON(i + n > 6);
+	memcpy(&regs->a0 + i, args, n * sizeof(regs->a0));
+}
+
+#endif	/* _ASM_RISCV_SYSCALL_H */
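
Since a0..a5 are consecutive unsigned longs in struct pt_regs, the argument
accessors above can use plain pointer arithmetic on &regs->a0.  A small
user-space model of that fetch (not kernel code, stand-in struct only):

#include <assert.h>
#include <string.h>

struct fake_regs { unsigned long a0, a1, a2, a3, a4, a5; };

static void get_args(struct fake_regs *regs, unsigned int i,
		     unsigned int n, unsigned long *args)
{
	/* &regs->a0 + i advances by whole registers, not bytes. */
	assert(i + n <= 6);
	memcpy(args, &regs->a0 + i, n * sizeof(args[0]));
}

int main(void)
{
	struct fake_regs regs = { 10, 11, 12, 13, 14, 15 };
	unsigned long args[3];

	get_args(&regs, 2, 3, args);
	assert(args[0] == 12 && args[2] == 14);
	return 0;
}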
diff --git a/arch/riscv/include/asm/syscalls.h b/arch/riscv/include/asm/syscalls.h
new file mode 100644
index 000000000000..d85267c4f7ea
--- /dev/null
+++ b/arch/riscv/include/asm/syscalls.h
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2014 Darius Rad <darius@bluespec.com>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SYSCALLS_H
+#define _ASM_RISCV_SYSCALLS_H
+
+#include <linux/linkage.h>
+
+#include <asm-generic/syscalls.h>
+
+/* kernel/sys_riscv.c */
+asmlinkage long sys_sysriscv_cmpxchg32(unsigned long, unsigned long,
+	unsigned long);
+asmlinkage long sys_sysriscv_cmpxchg64(unsigned long, unsigned long,
+	unsigned long);
+
+#endif /* _ASM_RISCV_SYSCALLS_H */
diff --git a/arch/riscv/include/asm/unistd.h b/arch/riscv/include/asm/unistd.h
new file mode 100644
index 000000000000..9f250ed007cd
--- /dev/null
+++ b/arch/riscv/include/asm/unistd.h
@@ -0,0 +1,16 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#define __ARCH_HAVE_MMU
+#define __ARCH_WANT_SYS_CLONE
+#include <uapi/asm/unistd.h>
diff --git a/arch/riscv/include/asm/vdso.h b/arch/riscv/include/asm/vdso.h
new file mode 100644
index 000000000000..95768a3810a7
--- /dev/null
+++ b/arch/riscv/include/asm/vdso.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright (C) 2012 ARM Limited
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _ASM_RISCV_VDSO_H
+#define _ASM_RISCV_VDSO_H
+
+#include <linux/types.h>
+
+struct vdso_data {
+};
+
+#define VDSO_SYMBOL(base, name)					\
+({								\
+	extern const char __vdso_##name[];			\
+	(void __user *)((unsigned long)(base) + __vdso_##name);	\
+})
+
+#endif /* _ASM_RISCV_VDSO_H */
diff --git a/arch/riscv/include/uapi/asm/auxvec.h b/arch/riscv/include/uapi/asm/auxvec.h
new file mode 100644
index 000000000000..1376515547cd
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/auxvec.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _UAPI_ASM_RISCV_AUXVEC_H
+#define _UAPI_ASM_RISCV_AUXVEC_H
+
+/* vDSO location */
+#define AT_SYSINFO_EHDR 33
+
+#endif /* _UAPI_ASM_RISCV_AUXVEC_H */
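
AT_SYSINFO_EHDR is how user space locates the vDSO mapped by this port: the
dynamic loader reads it from the aux vector to resolve the __vdso_* symbols.
A trivial check from a normal process (glibc's getauxval):

#include <elf.h>
#include <stdio.h>
#include <sys/auxv.h>

int main(void)
{
	/* The kernel passes the address of the vDSO's ELF header here. */
	unsigned long vdso = getauxval(AT_SYSINFO_EHDR);

	printf("vDSO mapped at %#lx\n", vdso);
	return 0;
}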
diff --git a/arch/riscv/include/uapi/asm/bitsperlong.h b/arch/riscv/include/uapi/asm/bitsperlong.h
new file mode 100644
index 000000000000..0b3cb52fd29d
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/bitsperlong.h
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _UAPI_ASM_RISCV_BITSPERLONG_H
+#define _UAPI_ASM_RISCV_BITSPERLONG_H
+
+#define __BITS_PER_LONG (__SIZEOF_POINTER__ * 8)
+
+#include <asm-generic/bitsperlong.h>
+
+#endif /* _UAPI_ASM_RISCV_BITSPERLONG_H */
diff --git a/arch/riscv/include/uapi/asm/byteorder.h b/arch/riscv/include/uapi/asm/byteorder.h
new file mode 100644
index 000000000000..4ca38af2cd32
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/byteorder.h
@@ -0,0 +1,23 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _UAPI_ASM_RISCV_BYTEORDER_H
+#define _UAPI_ASM_RISCV_BYTEORDER_H
+
+#include <linux/byteorder/little_endian.h>
+
+#endif /* _UAPI_ASM_RISCV_BYTEORDER_H */
diff --git a/arch/riscv/include/uapi/asm/elf.h b/arch/riscv/include/uapi/asm/elf.h
new file mode 100644
index 000000000000..a510edfa8226
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/elf.h
@@ -0,0 +1,83 @@
+/*
+ * Copyright (C) 2003 Matjaz Breskvar <phoenix@bsemi.com>
+ * Copyright (C) 2010-2011 Jonas Bonn <jonas@southpole.se>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef _UAPI_ASM_ELF_H
+#define _UAPI_ASM_ELF_H
+
+#include <asm/ptrace.h>
+
+/* ELF register definitions */
+typedef unsigned long elf_greg_t;
+typedef struct user_regs_struct elf_gregset_t;
+#define ELF_NGREG (sizeof(elf_gregset_t) / sizeof(elf_greg_t))
+
+typedef union __riscv_fp_state elf_fpregset_t;
+
+#define ELF_RISCV_R_SYM(r_info) ((r_info) >> 32)
+#define ELF_RISCV_R_TYPE(r_info) ((r_info) & 0xffffffff)
+
+/*
+ * RISC-V relocation types
+ */
+
+/* Relocation types used by the dynamic linker */
+#define R_RISCV_NONE		0
+#define R_RISCV_32		1
+#define R_RISCV_64		2
+#define R_RISCV_RELATIVE	3
+#define R_RISCV_COPY		4
+#define R_RISCV_JUMP_SLOT	5
+#define R_RISCV_TLS_DTPMOD32	6
+#define R_RISCV_TLS_DTPMOD64	7
+#define R_RISCV_TLS_DTPREL32	8
+#define R_RISCV_TLS_DTPREL64	9
+#define R_RISCV_TLS_TPREL32	10
+#define R_RISCV_TLS_TPREL64	11
+
+/* Relocation types not used by the dynamic linker */
+#define R_RISCV_BRANCH		16
+#define R_RISCV_JAL		17
+#define R_RISCV_CALL		18
+#define R_RISCV_CALL_PLT	19
+#define R_RISCV_GOT_HI20	20
+#define R_RISCV_TLS_GOT_HI20	21
+#define R_RISCV_TLS_GD_HI20	22
+#define R_RISCV_PCREL_HI20	23
+#define R_RISCV_PCREL_LO12_I	24
+#define R_RISCV_PCREL_LO12_S	25
+#define R_RISCV_HI20		26
+#define R_RISCV_LO12_I		27
+#define R_RISCV_LO12_S		28
+#define R_RISCV_TPREL_HI20	29
+#define R_RISCV_TPREL_LO12_I	30
+#define R_RISCV_TPREL_LO12_S	31
+#define R_RISCV_TPREL_ADD	32
+#define R_RISCV_ADD8		33
+#define R_RISCV_ADD16		34
+#define R_RISCV_ADD32		35
+#define R_RISCV_ADD64		36
+#define R_RISCV_SUB8		37
+#define R_RISCV_SUB16		38
+#define R_RISCV_SUB32		39
+#define R_RISCV_SUB64		40
+#define R_RISCV_GNU_VTINHERIT	41
+#define R_RISCV_GNU_VTENTRY	42
+#define R_RISCV_ALIGN		43
+#define R_RISCV_RVC_BRANCH	44
+#define R_RISCV_RVC_JUMP	45
+#define R_RISCV_LUI		46
+#define R_RISCV_GPREL_I		47
+#define R_RISCV_GPREL_S		48
+#define R_RISCV_TPREL_I		49
+#define R_RISCV_TPREL_S		50
+#define R_RISCV_RELAX		51
+
+#endif /* _UAPI_ASM_ELF_H */
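
ELF_RISCV_R_SYM()/ELF_RISCV_R_TYPE() are the standard ELF64 r_info split
(symbol index in the upper 32 bits, relocation type in the lower 32), which
is how module.c later in this patch indexes reloc_handlers_rela[].  A tiny
self-contained check:

#include <assert.h>
#include <stdint.h>

#define ELF_RISCV_R_SYM(r_info)		((r_info) >> 32)
#define ELF_RISCV_R_TYPE(r_info)	((r_info) & 0xffffffff)
#define R_RISCV_PCREL_HI20		23

int main(void)
{
	/* Equivalent of ELF64_R_INFO(42, R_RISCV_PCREL_HI20). */
	uint64_t r_info = ((uint64_t)42 << 32) | R_RISCV_PCREL_HI20;

	assert(ELF_RISCV_R_SYM(r_info) == 42);
	assert(ELF_RISCV_R_TYPE(r_info) == R_RISCV_PCREL_HI20);
	return 0;
}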
diff --git a/arch/riscv/include/uapi/asm/ptrace.h b/arch/riscv/include/uapi/asm/ptrace.h
new file mode 100644
index 000000000000..01aee1654eae
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/ptrace.h
@@ -0,0 +1,87 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _UAPI_ASM_RISCV_PTRACE_H
+#define _UAPI_ASM_RISCV_PTRACE_H
+
+#ifndef __ASSEMBLY__
+
+#include <linux/types.h>
+
+/* User-mode register state for core dumps, ptrace, sigcontext
+ *
+ * This decouples struct pt_regs from the userspace ABI.
+ * struct user_regs_struct must form a prefix of struct pt_regs.
+ */
+struct user_regs_struct {
+	unsigned long pc;
+	unsigned long ra;
+	unsigned long sp;
+	unsigned long gp;
+	unsigned long tp;
+	unsigned long t0;
+	unsigned long t1;
+	unsigned long t2;
+	unsigned long s0;
+	unsigned long s1;
+	unsigned long a0;
+	unsigned long a1;
+	unsigned long a2;
+	unsigned long a3;
+	unsigned long a4;
+	unsigned long a5;
+	unsigned long a6;
+	unsigned long a7;
+	unsigned long s2;
+	unsigned long s3;
+	unsigned long s4;
+	unsigned long s5;
+	unsigned long s6;
+	unsigned long s7;
+	unsigned long s8;
+	unsigned long s9;
+	unsigned long s10;
+	unsigned long s11;
+	unsigned long t3;
+	unsigned long t4;
+	unsigned long t5;
+	unsigned long t6;
+};
+
+struct __riscv_f_ext_state {
+	__u32 f[32];
+	__u32 fcsr;
+};
+
+struct __riscv_d_ext_state {
+	__u64 f[32];
+	__u32 fcsr;
+};
+
+struct __riscv_q_ext_state {
+	__u64 f[64] __attribute__((aligned(16)));
+	__u32 fcsr;
+	/* Reserved for expansion of sigcontext structure.  Currently zeroed
+	 * upon signal, and must be zero upon sigreturn.  */
+	__u32 reserved[3];
+};
+
+union __riscv_fp_state {
+	struct __riscv_f_ext_state f;
+	struct __riscv_d_ext_state d;
+	struct __riscv_q_ext_state q;
+};
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _UAPI_ASM_RISCV_PTRACE_H */
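
The comment above requires struct user_regs_struct to be a layout prefix of
the kernel's struct pt_regs (pc lines up with sepc, then the GPRs in the same
order), so the regset code can copy the register block with a single memcpy.
An illustrative compile-time check of that idea, using cut-down stand-in
structures rather than the real headers:

#include <stddef.h>

/* Stand-ins: only the first few fields, in the order used above. */
struct pt_regs_like { unsigned long sepc, ra, sp, gp, tp; };
struct user_regs_like { unsigned long pc, ra, sp, gp, tp; };

_Static_assert(sizeof(struct user_regs_like) <= sizeof(struct pt_regs_like),
	       "user regs must not outgrow pt_regs");
_Static_assert(offsetof(struct pt_regs_like, ra) ==
	       offsetof(struct user_regs_like, ra),
	       "GPRs must line up for the regset memcpy");

int main(void) { return 0; }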
diff --git a/arch/riscv/include/uapi/asm/sigcontext.h b/arch/riscv/include/uapi/asm/sigcontext.h
new file mode 100644
index 000000000000..805b015402ef
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/sigcontext.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _UAPI_ASM_RISCV_SIGCONTEXT_H
+#define _UAPI_ASM_RISCV_SIGCONTEXT_H
+
+#include <asm/ptrace.h>
+
+/* Signal context structure
+ *
+ * This contains the context saved before a signal handler is invoked;
+ * it is restored by sys_sigreturn / sys_rt_sigreturn.
+ */
+struct sigcontext {
+	struct user_regs_struct sc_regs;
+	union __riscv_fp_state sc_fpregs;
+};
+
+#endif /* _UAPI_ASM_RISCV_SIGCONTEXT_H */
diff --git a/arch/riscv/include/uapi/asm/siginfo.h b/arch/riscv/include/uapi/asm/siginfo.h
new file mode 100644
index 000000000000..f96849aac662
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/siginfo.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2016 SiFive, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_SIGINFO_H
+#define __ASM_SIGINFO_H
+
+#define __ARCH_SI_PREAMBLE_SIZE	(__SIZEOF_POINTER__ == 4 ? 12 : 16)
+
+#include <asm-generic/siginfo.h>
+
+#endif
diff --git a/arch/riscv/include/uapi/asm/ucontext.h b/arch/riscv/include/uapi/asm/ucontext.h
new file mode 100644
index 000000000000..52eff9febcfd
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/ucontext.h
@@ -0,0 +1,35 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2017 SiFive, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * This file was copied from arch/arm64/include/uapi/asm/ucontext.h
+ */
+#ifndef _UAPI__ASM_UCONTEXT_H
+#define _UAPI__ASM_UCONTEXT_H
+
+#include <linux/types.h>
+
+struct ucontext {
+	unsigned long	  uc_flags;
+	struct ucontext	 *uc_link;
+	stack_t		  uc_stack;
+	sigset_t	  uc_sigmask;
+	/* glibc uses a 1024-bit sigset_t */
+	__u8		  __unused[1024 / 8 - sizeof(sigset_t)];
+	/* last for future expansion */
+	struct sigcontext uc_mcontext;
+};
+
+#endif /* _UAPI__ASM_UCONTEXT_H */
diff --git a/arch/riscv/include/uapi/asm/unistd.h b/arch/riscv/include/uapi/asm/unistd.h
new file mode 100644
index 000000000000..7e3909ac3c18
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/unistd.h
@@ -0,0 +1,26 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+/* FIXME: This exists for now in order to maintain compatibility with our
+ * pre-upstream glibc, and will be removed for our real Linux submission.
+ */
+#define __ARCH_WANT_RENAMEAT
+
+#include <asm-generic/unistd.h>
+
+/*
+ * These system calls add support for AMOs on RISC-V systems without support
+ * for the A extension.
+ */
+#define __NR_sysriscv_cmpxchg32		(__NR_arch_specific_syscall + 0)
+#define __NR_sysriscv_cmpxchg64		(__NR_arch_specific_syscall + 1)
diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
new file mode 100644
index 000000000000..753cb9894feb
--- /dev/null
+++ b/arch/riscv/kernel/module.c
@@ -0,0 +1,215 @@
+/*
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  Copyright (C) 2017 Zihao Yu
+ */
+
+#include <linux/elf.h>
+#include <linux/err.h>
+#include <linux/errno.h>
+#include <linux/moduleloader.h>
+
+static int apply_r_riscv_64_rela(struct module *me, u32 *location, Elf_Addr v)
+{
+	*(u64 *)location = v;
+	return 0;
+}
+
+static int apply_r_riscv_branch_rela(struct module *me, u32 *location,
+				     Elf_Addr v)
+{
+	s64 offset = (void *)v - (void *)location;
+	u32 imm12 = (offset & 0x1000) << (31 - 12);
+	u32 imm11 = (offset & 0x800) >> (11 - 7);
+	u32 imm10_5 = (offset & 0x7e0) << (30 - 10);
+	u32 imm4_1 = (offset & 0x1e) << (11 - 4);
+
+	*location = (*location & 0x1fff07f) | imm12 | imm11 | imm10_5 | imm4_1;
+	return 0;
+}
+
+static int apply_r_riscv_jal_rela(struct module *me, u32 *location,
+				  Elf_Addr v)
+{
+	s64 offset = (void *)v - (void *)location;
+	u32 imm20 = (offset & 0x100000) << (31 - 20);
+	u32 imm19_12 = (offset & 0xff000);
+	u32 imm11 = (offset & 0x800) << (20 - 11);
+	u32 imm10_1 = (offset & 0x7fe) << (30 - 10);
+
+	*location = (*location & 0xfff) | imm20 | imm19_12 | imm11 | imm10_1;
+	return 0;
+}
+
+static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location,
+					 Elf_Addr v)
+{
+	s64 offset = (void *)v - (void *)location;
+	s32 hi20;
+
+	if (offset != (s32)offset) {
+		pr_err(
+		  "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
+		  me->name, v, location);
+		return -EINVAL;
+	}
+
+	hi20 = (offset + 0x800) & 0xfffff000;
+	*location = (*location & 0xfff) | hi20;
+	return 0;
+}
+
+static int apply_r_riscv_pcrel_lo12_i_rela(struct module *me, u32 *location,
+					   Elf_Addr v)
+{
+	/* v is the lo12 value to fill. It is calculated before calling this
+	 * handler.
+	 */
+	*location = (*location & 0xfffff) | ((v & 0xfff) << 20);
+	return 0;
+}
+
+static int apply_r_riscv_pcrel_lo12_s_rela(struct module *me, u32 *location,
+					   Elf_Addr v)
+{
+	/* v is the lo12 value to fill. It is calculated before calling this
+	 * handler.
+	 */
+	u32 imm11_5 = (v & 0xfe0) << (31 - 11);
+	u32 imm4_0 = (v & 0x1f) << (11 - 4);
+
+	*location = (*location & 0x1fff07f) | imm11_5 | imm4_0;
+	return 0;
+}
+
+static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location,
+				       Elf_Addr v)
+{
+	s64 offset = (void *)v - (void *)location;
+	s32 fill_v = offset;
+	u32 hi20, lo12;
+
+	if (offset != fill_v) {
+		pr_err(
+		  "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
+		  me->name, v, location);
+		return -EINVAL;
+	}
+
+	hi20 = (offset + 0x800) & 0xfffff000;
+	lo12 = (offset - hi20) & 0xfff;
+	*location = (*location & 0xfff) | hi20;
+	*(location + 1) = (*(location + 1) & 0xfffff) | (lo12 << 20);
+	return 0;
+}
+
+static int apply_r_riscv_relax_rela(struct module *me, u32 *location,
+				    Elf_Addr v)
+{
+	return 0;
+}
+
+static int (*reloc_handlers_rela[]) (struct module *me, u32 *location,
+				Elf_Addr v) = {
+	[R_RISCV_64]			= apply_r_riscv_64_rela,
+	[R_RISCV_BRANCH]		= apply_r_riscv_branch_rela,
+	[R_RISCV_JAL]			= apply_r_riscv_jal_rela,
+	[R_RISCV_PCREL_HI20]		= apply_r_riscv_pcrel_hi20_rela,
+	[R_RISCV_PCREL_LO12_I]		= apply_r_riscv_pcrel_lo12_i_rela,
+	[R_RISCV_PCREL_LO12_S]		= apply_r_riscv_pcrel_lo12_s_rela,
+	[R_RISCV_CALL_PLT]		= apply_r_riscv_call_plt_rela,
+	[R_RISCV_RELAX]			= apply_r_riscv_relax_rela,
+};
+
+int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
+		       unsigned int symindex, unsigned int relsec,
+		       struct module *me)
+{
+	Elf_Rela *rel = (void *) sechdrs[relsec].sh_addr;
+	int (*handler)(struct module *me, u32 *location, Elf_Addr v);
+	Elf_Sym *sym;
+	u32 *location;
+	unsigned int i, type;
+	Elf_Addr v;
+	int res;
+
+	pr_debug("Applying relocate section %u to %u\n", relsec,
+	       sechdrs[relsec].sh_info);
+
+	for (i = 0; i < sechdrs[relsec].sh_size / sizeof(*rel); i++) {
+		/* This is where to make the change */
+		location = (void *)sechdrs[sechdrs[relsec].sh_info].sh_addr
+			+ rel[i].r_offset;
+		/* This is the symbol it is referring to */
+		sym = (Elf_Sym *)sechdrs[symindex].sh_addr
+			+ ELF_RISCV_R_SYM(rel[i].r_info);
+		if (IS_ERR_VALUE(sym->st_value)) {
+			/* Ignore unresolved weak symbol */
+			if (ELF_ST_BIND(sym->st_info) == STB_WEAK)
+				continue;
+			printk(KERN_WARNING "%s: Unknown symbol %s\n",
+			       me->name, strtab + sym->st_name);
+			return -ENOENT;
+		}
+
+		type = ELF_RISCV_R_TYPE(rel[i].r_info);
+
+		if (type < ARRAY_SIZE(reloc_handlers_rela))
+			handler = reloc_handlers_rela[type];
+		else
+			handler = NULL;
+
+		if (!handler) {
+			pr_err("%s: Unknown relocation type %u\n",
+			       me->name, type);
+			return -EINVAL;
+		}
+
+		v = sym->st_value + rel[i].r_addend;
+
+		if (type == R_RISCV_PCREL_LO12_I || type == R_RISCV_PCREL_LO12_S) {
+			unsigned int j;
+
+			for (j = 0; j < sechdrs[relsec].sh_size / sizeof(*rel); j++) {
+				u64 hi20_loc =
+					sechdrs[sechdrs[relsec].sh_info].sh_addr
+					+ rel[j].r_offset;
+				/* Find the corresponding HI20 PC-relative relocation entry */
+				if (hi20_loc == sym->st_value) {
+					Elf_Sym *hi20_sym =
+						(Elf_Sym *)sechdrs[symindex].sh_addr
+						+ ELF_RISCV_R_SYM(rel[j].r_info);
+					u64 hi20_sym_val =
+						hi20_sym->st_value
+						+ rel[j].r_addend;
+					/* Calculate lo12 */
+					s64 offset = hi20_sym_val - hi20_loc;
+					s32 hi20 = (offset + 0x800) & 0xfffff000;
+					s32 lo12 = offset - hi20;
+					v = lo12;
+					break;
+				}
+			}
+			if (j == sechdrs[relsec].sh_size / sizeof(*rel)) {
+				pr_err(
+				  "%s: Can not find HI20 PC-relative relocation information\n",
+				  me->name);
+				return -EINVAL;
+			}
+		}
+
+		res = handler(me, location, v);
+		if (res)
+			return res;
+	}
+
+	return 0;
+}
diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
new file mode 100644
index 000000000000..69b3b2d10664
--- /dev/null
+++ b/arch/riscv/kernel/ptrace.c
@@ -0,0 +1,147 @@
+/*
+ * Copyright 2010 Tilera Corporation. All Rights Reserved.
+ * Copyright 2015 Regents of the University of California
+ * Copyright 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ *
+ * Copied from arch/tile/kernel/ptrace.c
+ */
+
+#include <asm/ptrace.h>
+#include <asm/syscall.h>
+#include <asm/thread_info.h>
+#include <linux/ptrace.h>
+#include <linux/elf.h>
+#include <linux/regset.h>
+#include <linux/sched.h>
+#include <linux/sched/task_stack.h>
+#include <linux/tracehook.h>
+#include <trace/events/syscalls.h>
+
+enum riscv_regset {
+	REGSET_X,
+};
+
+/*
+ * Get registers from task and ready the result for userspace.
+ */
+static char *getregs(struct task_struct *child, struct pt_regs *uregs)
+{
+	*uregs = *task_pt_regs(child);
+	return (char *)uregs;
+}
+
+/* Put registers back to task. */
+static void putregs(struct task_struct *child, struct pt_regs *uregs)
+{
+	struct pt_regs *regs = task_pt_regs(child);
+	*regs = *uregs;
+}
+
+static int riscv_gpr_get(struct task_struct *target,
+			 const struct user_regset *regset,
+			 unsigned int pos, unsigned int count,
+			 void *kbuf, void __user *ubuf)
+{
+	struct pt_regs regs;
+
+	getregs(target, &regs);
+
+	return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &regs, 0,
+				   sizeof(regs));
+}
+
+static int riscv_gpr_set(struct task_struct *target,
+			 const struct user_regset *regset,
+			 unsigned int pos, unsigned int count,
+			 const void *kbuf, const void __user *ubuf)
+{
+	int ret;
+	struct pt_regs regs;
+
+	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &regs, 0,
+				 sizeof(regs));
+	if (ret)
+		return ret;
+
+	putregs(target, &regs);
+
+	return 0;
+}
+
+
+static const struct user_regset riscv_user_regset[] = {
+	[REGSET_X] = {
+		.core_note_type = NT_PRSTATUS,
+		.n = ELF_NGREG,
+		.size = sizeof(elf_greg_t),
+		.align = sizeof(elf_greg_t),
+		.get = &riscv_gpr_get,
+		.set = &riscv_gpr_set,
+	},
+};
+
+static const struct user_regset_view riscv_user_native_view = {
+	.name = "riscv",
+	.e_machine = EM_RISCV,
+	.regsets = riscv_user_regset,
+	.n = ARRAY_SIZE(riscv_user_regset),
+};
+
+const struct user_regset_view *task_user_regset_view(struct task_struct *task)
+{
+	return &riscv_user_native_view;
+}
+
+void ptrace_disable(struct task_struct *child)
+{
+	clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+}
+
+long arch_ptrace(struct task_struct *child, long request,
+		 unsigned long addr, unsigned long data)
+{
+	long ret = -EIO;
+
+	switch (request) {
+	default:
+		ret = ptrace_request(child, request, addr, data);
+		break;
+	}
+
+	return ret;
+}
+
+/* Allows PTRACE_SYSCALL to work.  These are called from entry.S in
+ * {handle,ret_from}_syscall.
+ */
+void do_syscall_trace_enter(struct pt_regs *regs)
+{
+	if (test_thread_flag(TIF_SYSCALL_TRACE))
+		if (tracehook_report_syscall_entry(regs))
+			syscall_set_nr(current, regs, -1);
+
+#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
+	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
+		trace_sys_enter(regs, syscall_get_nr(current, regs));
+#endif
+}
+
+void do_syscall_trace_exit(struct pt_regs *regs)
+{
+	if (test_thread_flag(TIF_SYSCALL_TRACE))
+		tracehook_report_syscall_exit(regs, 0);
+
+#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
+	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
+		trace_sys_exit(regs, regs->regs[0]);
+#endif
+}
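
For reference, the NT_PRSTATUS regset served by riscv_gpr_get() above is what a
debugger reaches through PTRACE_GETREGSET.  A minimal user-space sketch of that
path (plain ptrace API; the buffer size below is a generous guess, not taken from
this port's headers):

  #include <sys/ptrace.h>
  #include <sys/types.h>
  #include <sys/uio.h>
  #include <elf.h>
  #include <stdio.h>

  /* Fetch the stopped tracee's GPR regset and report how many bytes
   * the kernel copied out (it trims iov_len to the regset size). */
  static int dump_gpr_regset(pid_t pid)
  {
  	unsigned long regs[64];
  	struct iovec iov = { .iov_base = regs, .iov_len = sizeof(regs) };

  	if (ptrace(PTRACE_GETREGSET, pid, (void *)(long)NT_PRSTATUS, &iov) < 0)
  		return -1;
  	printf("NT_PRSTATUS regset: %zu bytes\n", iov.iov_len);
  	return 0;
  }

The tracee has to be ptrace-attached and stopped for the call to succeed.
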
diff --git a/arch/riscv/kernel/riscv_ksyms.c b/arch/riscv/kernel/riscv_ksyms.c
new file mode 100644
index 000000000000..23cc81ec9e94
--- /dev/null
+++ b/arch/riscv/kernel/riscv_ksyms.c
@@ -0,0 +1,15 @@
+/*
+ * Copyright (C) 2017 Zihao Yu
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/export.h>
+#include <linux/uaccess.h>
+
+/*
+ * Assembly functions that may be used (directly or indirectly) by modules
+ */
+EXPORT_SYMBOL(__copy_user);
diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
new file mode 100644
index 000000000000..f32a568ce25b
--- /dev/null
+++ b/arch/riscv/kernel/signal.c
@@ -0,0 +1,288 @@
+/*
+ * Copyright (C) 2009 Sunplus Core Technology Co., Ltd.
+ *  Chen Liqin <liqin.chen@sunplusct.com>
+ *  Lennox Wu <lennox.wu@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see the file COPYING, or write
+ * to the Free Software Foundation, Inc.,
+ */
+
+#include <linux/signal.h>
+#include <linux/uaccess.h>
+#include <linux/syscalls.h>
+#include <linux/tracehook.h>
+#include <linux/linkage.h>
+
+#include <asm/ucontext.h>
+#include <asm/vdso.h>
+#include <asm/switch_to.h>
+#include <asm/csr.h>
+
+#define DEBUG_SIG 0
+
+struct rt_sigframe {
+	struct siginfo info;
+	struct ucontext uc;
+};
+
+static long restore_d_state(struct pt_regs *regs,
+	struct __riscv_d_ext_state __user *state)
+{
+	long err;
+	err = __copy_from_user(&current->thread.fstate, state, sizeof(*state));
+	if (likely(!err))
+		fstate_restore(current, regs);
+	return err;
+}
+
+static long save_d_state(struct pt_regs *regs,
+	struct __riscv_d_ext_state __user *state)
+{
+	fstate_save(current, regs);
+	return __copy_to_user(state, &current->thread.fstate, sizeof(*state));
+}
+
+static long restore_sigcontext(struct pt_regs *regs,
+	struct sigcontext __user *sc)
+{
+	long err;
+	size_t i;
+	/* sc_regs is structured the same as the start of pt_regs */
+	err = __copy_from_user(regs, &sc->sc_regs, sizeof(sc->sc_regs));
+	if (unlikely(err))
+		return err;
+	/* Restore the floating-point state. */
+	err = restore_d_state(regs, &sc->sc_fpregs.d);
+	if (unlikely(err))
+		return err;
+	/* We support no other extension state at this time. */
+	for (i = 0; i < ARRAY_SIZE(sc->sc_fpregs.q.reserved); i++) {
+		u32 value;
+		err = __get_user(value, &sc->sc_fpregs.q.reserved[i]);
+		if (unlikely(err))
+			break;
+		if (value != 0)
+			return -EINVAL;
+	}
+	return err;
+}
+
+SYSCALL_DEFINE0(rt_sigreturn)
+{
+	struct pt_regs *regs = current_pt_regs();
+	struct rt_sigframe __user *frame;
+	struct task_struct *task;
+	sigset_t set;
+
+	/* Always make any pending restarted system calls return -EINTR */
+	current->restart_block.fn = do_no_restart_syscall;
+
+	frame = (struct rt_sigframe __user *)regs->sp;
+
+	if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
+		goto badframe;
+
+	if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
+		goto badframe;
+
+	set_current_blocked(&set);
+
+	if (restore_sigcontext(regs, &frame->uc.uc_mcontext))
+		goto badframe;
+
+	if (restore_altstack(&frame->uc.uc_stack))
+		goto badframe;
+
+	return regs->a0;
+
+badframe:
+	task = current;
+	if (show_unhandled_signals) {
+		pr_info_ratelimited(
+			"%s[%d]: bad frame in %s: frame=%p pc=%p sp=%p\n",
+			task->comm, task_pid_nr(task), __func__,
+			frame, (void *)regs->sepc, (void *)regs->sp);
+	}
+	force_sig(SIGSEGV, task);
+	return 0;
+}
+
+static long setup_sigcontext(struct rt_sigframe __user *frame,
+	struct pt_regs *regs)
+{
+	struct sigcontext __user *sc = &frame->uc.uc_mcontext;
+	long err;
+	size_t i;
+	/* sc_regs is structured the same as the start of pt_regs */
+	err = __copy_to_user(&sc->sc_regs, regs, sizeof(sc->sc_regs));
+	/* Save the floating-point state. */
+	err |= save_d_state(regs, &sc->sc_fpregs.d);
+	/* We support no other extension state at this time. */
+	for (i = 0; i < ARRAY_SIZE(sc->sc_fpregs.q.reserved); i++)
+		err |= __put_user(0, &sc->sc_fpregs.q.reserved[i]);
+	return err;
+}
+
+static inline void __user *get_sigframe(struct ksignal *ksig,
+	struct pt_regs *regs, size_t framesize)
+{
+	unsigned long sp;
+	/* Default to using normal stack */
+	sp = regs->sp;
+
+	/*
+	 * If we are on the alternate signal stack and would overflow it, don't.
+	 * Return an always-bogus address instead so we will die with SIGSEGV.
+	 */
+	if (on_sig_stack(sp) && !likely(on_sig_stack(sp - framesize)))
+		return (void __user __force *)(-1UL);
+
+	/* This is the X/Open sanctioned signal stack switching. */
+	sp = sigsp(sp, ksig) - framesize;
+
+	/* Align the stack frame. */
+	sp &= ~0xfUL;
+
+	return (void __user *)sp;
+}
+
+
+static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
+	struct pt_regs *regs)
+{
+	struct rt_sigframe __user *frame;
+	long err = 0;
+
+	frame = get_sigframe(ksig, regs, sizeof(*frame));
+	if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+		return -EFAULT;
+
+	err |= copy_siginfo_to_user(&frame->info, &ksig->info);
+
+	/* Create the ucontext. */
+	err |= __put_user(0, &frame->uc.uc_flags);
+	err |= __put_user(NULL, &frame->uc.uc_link);
+	err |= __save_altstack(&frame->uc.uc_stack, regs->sp);
+	err |= setup_sigcontext(frame, regs);
+	err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
+	if (err)
+		return -EFAULT;
+
+	/* Set up to return from userspace. */
+	regs->ra = (unsigned long)VDSO_SYMBOL(
+		current->mm->context.vdso, rt_sigreturn);
+
+	/*
+	 * Set up registers for signal handler.
+	 * Registers that we don't modify keep the value they had from
+	 * user-space at the time we took the signal.
+	 * We always pass siginfo and mcontext, regardless of SA_SIGINFO,
+	 * since some things rely on this (e.g. glibc's debug/segfault.c).
+	 */
+	regs->sepc = (unsigned long)ksig->ka.sa.sa_handler;
+	regs->sp = (unsigned long)frame;
+	regs->a0 = ksig->sig;                     /* a0: signal number */
+	regs->a1 = (unsigned long)(&frame->info); /* a1: siginfo pointer */
+	regs->a2 = (unsigned long)(&frame->uc);   /* a2: ucontext pointer */
+
+#if DEBUG_SIG
+	pr_info("SIG deliver (%s:%d): sig=%d pc=%p ra=%p sp=%p\n",
+		current->comm, task_pid_nr(current), ksig->sig,
+		(void *)regs->sepc, (void *)regs->ra, frame);
+#endif
+
+	return 0;
+}
+
+static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
+{
+	sigset_t *oldset = sigmask_to_save();
+	int ret;
+
+	/* Are we from a system call? */
+	if (regs->scause == EXC_SYSCALL) {
+		/* If so, check system call restarting.. */
+		switch (regs->a0) {
+		case -ERESTART_RESTARTBLOCK:
+		case -ERESTARTNOHAND:
+			regs->a0 = -EINTR;
+			break;
+
+		case -ERESTARTSYS:
+			if (!(ksig->ka.sa.sa_flags & SA_RESTART)) {
+				regs->a0 = -EINTR;
+				break;
+			}
+			/* fallthrough */
+		case -ERESTARTNOINTR:
+			regs->sepc -= 0x4;
+			break;
+		}
+	}
+
+	/* Set up the stack frame */
+	ret = setup_rt_frame(ksig, oldset, regs);
+
+	signal_setup_done(ret, ksig, 0);
+}
+
+static void do_signal(struct pt_regs *regs)
+{
+	struct ksignal ksig;
+
+	if (get_signal(&ksig)) {
+		/* Actually deliver the signal */
+		handle_signal(&ksig, regs);
+		return;
+	}
+
+	/* Did we come from a system call? */
+	if (regs->scause == EXC_SYSCALL) {
+		/* Restart the system call - no handlers present */
+		switch (regs->a0) {
+		case -ERESTARTNOHAND:
+		case -ERESTARTSYS:
+		case -ERESTARTNOINTR:
+			regs->sepc -= 0x4;
+			break;
+		case -ERESTART_RESTARTBLOCK:
+			regs->a7 = __NR_restart_syscall;
+			regs->sepc -= 0x4;
+			break;
+		}
+	}
+
+	/* If there is no signal to deliver, we just put the saved
+	 * sigmask back.
+	 */
+	restore_saved_sigmask();
+}
+
+/*
+ * notification of userspace execution resumption
+ * - triggered by the _TIF_WORK_MASK flags
+ */
+asmlinkage void do_notify_resume(struct pt_regs *regs,
+	unsigned long thread_info_flags)
+{
+	/* Handle pending signal delivery */
+	if (thread_info_flags & _TIF_SIGPENDING)
+		do_signal(regs);
+
+	if (thread_info_flags & _TIF_NOTIFY_RESUME) {
+		clear_thread_flag(TIF_NOTIFY_RESUME);
+		tracehook_notify_resume(regs);
+	}
+}
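
The register setup in setup_rt_frame() corresponds to the ordinary three-argument
SA_SIGINFO handler on the user side, with a0/a1/a2 holding the signal number, the
siginfo pointer and the ucontext pointer.  A small sketch of that user-space view
(standard POSIX calls only; nothing here is specific to this patch):

  #include <signal.h>
  #include <unistd.h>

  /* sig arrives in a0, info in a1 and ucontext in a2, as arranged by
   * setup_rt_frame() above.  write() is used because it is
   * async-signal-safe, unlike printf(). */
  static void handler(int sig, siginfo_t *info, void *ucontext)
  {
  	(void)sig; (void)info; (void)ucontext;
  	write(STDERR_FILENO, "caught signal\n", 14);
  }

  static int install_handler(void)
  {
  	struct sigaction sa = { 0 };

  	sa.sa_sigaction = handler;
  	sa.sa_flags = SA_SIGINFO;
  	sigemptyset(&sa.sa_mask);
  	return sigaction(SIGUSR1, &sa, NULL);
  }
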
diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
new file mode 100644
index 000000000000..ab699efe636e
--- /dev/null
+++ b/arch/riscv/kernel/sys_riscv.c
@@ -0,0 +1,87 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2014 Darius Rad <darius@bluespec.com>
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/syscalls.h>
+#include <asm/unistd.h>
+
+#ifdef CONFIG_64BIT
+SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
+	unsigned long, prot, unsigned long, flags,
+	unsigned long, fd, off_t, offset)
+{
+	if (unlikely(offset & (~PAGE_MASK)))
+		return -EINVAL;
+	return sys_mmap_pgoff(addr, len, prot, flags, fd, offset >> PAGE_SHIFT);
+}
+#else
+SYSCALL_DEFINE6(mmap2, unsigned long, addr, unsigned long, len,
+	unsigned long, prot, unsigned long, flags,
+	unsigned long, fd, off_t, offset)
+{
+	/* Note that the shift for mmap2 is constant (12),
+	 * regardless of PAGE_SIZE
+	 */
+	if (unlikely(offset & (~PAGE_MASK >> 12)))
+		return -EINVAL;
+	return sys_mmap_pgoff(addr, len, prot, flags, fd,
+		offset >> (PAGE_SHIFT - 12));
+}
+#endif /* !CONFIG_64BIT */
+
+SYSCALL_DEFINE3(sysriscv_cmpxchg32, unsigned long, arg1, unsigned long, arg2,
+		unsigned long, arg3)
+{
+	unsigned long flags;
+	unsigned long prev;
+	unsigned int *ptr;
+	unsigned int err;
+
+	ptr = (unsigned int *)arg1;
+	if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned int)))
+		return -EFAULT;
+
+	preempt_disable();
+	raw_local_irq_save(flags);
+	err = __get_user(prev, ptr);
+	if (likely(!err && prev == arg2))
+		err = __put_user(arg3, ptr);
+	raw_local_irq_restore(flags);
+	preempt_enable();
+
+	return unlikely(err) ? err : prev;
+}
+
+SYSCALL_DEFINE3(sysriscv_cmpxchg64, unsigned long, arg1, unsigned long, arg2,
+		unsigned long, arg3)
+{
+	unsigned long flags;
+	unsigned long prev;
+	unsigned long *ptr;
+	unsigned int err;
+
+	ptr = (unsigned long *)arg1;
+	if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned long)))
+		return -EFAULT;
+
+	preempt_disable();
+	raw_local_irq_save(flags);
+	err = __get_user(prev, ptr);
+	if (likely(!err && prev == arg2))
+		err = __put_user(arg3, ptr);
+	raw_local_irq_restore(flags);
+	preempt_enable();
+
+	return unlikely(err) ? err : prev;
+}
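
Stripped of the access_ok()/__get_user() plumbing and of the preempt/IRQ-off
window that makes it atomic on the local CPU, the operation both syscalls
implement is the usual compare-and-swap that returns the previous value; in
plain C the contract is simply:

  /* Illustration only: the caller knows the swap happened iff the
   * returned value equals the expected one. */
  static unsigned int cmpxchg32_semantics(unsigned int *ptr,
  					unsigned int expected,
  					unsigned int desired)
  {
  	unsigned int prev = *ptr;

  	if (prev == expected)
  		*ptr = desired;
  	return prev;
  }
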
diff --git a/arch/riscv/kernel/syscall_table.c b/arch/riscv/kernel/syscall_table.c
new file mode 100644
index 000000000000..8fa239b67cbc
--- /dev/null
+++ b/arch/riscv/kernel/syscall_table.c
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2009 Arnd Bergmann <arnd@arndb.de>
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/syscalls.h>
+
+#include <asm/syscalls.h>
+
+#undef __SYSCALL
+#define __SYSCALL(nr, call)	[nr] = (call),
+
+void *sys_call_table[__NR_syscalls] = {
+	[0 ... __NR_syscalls - 1] = sys_ni_syscall,
+#include <asm/unistd.h>
+};
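
The table uses a GNU C range designator to point every slot at sys_ni_syscall
and then lets the designated entries emitted by the __SYSCALL() expansion of
<asm/unistd.h> override individual slots.  A toy example of the same pattern,
with made-up names:

  static long toy_nosys(void)  { return -38; }	/* like sys_ni_syscall (-ENOSYS) */
  static long toy_getpid(void) { return 42; }

  #define TOY_NR_getpid	3
  #define TOY_NR_syscalls	8

  static long (*toy_table[TOY_NR_syscalls])(void) = {
  	[0 ... TOY_NR_syscalls - 1] = toy_nosys,	/* default every entry */
  	[TOY_NR_getpid]             = toy_getpid,	/* later designator wins */
  };
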
diff --git a/arch/riscv/kernel/vdso/.gitignore b/arch/riscv/kernel/vdso/.gitignore
new file mode 100644
index 000000000000..f8b69d84238e
--- /dev/null
+++ b/arch/riscv/kernel/vdso/.gitignore
@@ -0,0 +1 @@
+vdso.lds
diff --git a/arch/riscv/kernel/vdso/cmpxchg32.S b/arch/riscv/kernel/vdso/cmpxchg32.S
new file mode 100644
index 000000000000..387deb4e852c
--- /dev/null
+++ b/arch/riscv/kernel/vdso/cmpxchg32.S
@@ -0,0 +1,40 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/linkage.h>
+#include <asm/unistd.h>
+
+	.text
+ENTRY(__vdso_cmpxchg32)
+	.cfi_startproc
+#ifdef CONFIG_ISA_A
+/*
+ * a0: ptr
+ * a1: compare value
+ * a2: exchange value
+ */
+0:
+	lr.w.aqrl t0, 0(a0)
+	bne       t0, a1, 1f
+	sc.w.aqrl t1, a2, 0(a0)
+	bnez      t1, 0b
+1:
+	mv        a0, t0
+	ret
+#else
+	li a7, __NR_sysriscv_cmpxchg32
+	scall
+	ret
+#endif
+	.cfi_endproc
+ENDPROC(__vdso_cmpxchg32)
diff --git a/arch/riscv/kernel/vdso/cmpxchg64.S b/arch/riscv/kernel/vdso/cmpxchg64.S
new file mode 100644
index 000000000000..6286842f78c0
--- /dev/null
+++ b/arch/riscv/kernel/vdso/cmpxchg64.S
@@ -0,0 +1,40 @@
+/*
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/linkage.h>
+#include <asm/unistd.h>
+
+	.text
+ENTRY(__vdso_cmpxchg64)
+	.cfi_startproc
+#ifdef CONFIG_ISA_A
+/*
+ * a0: ptr
+ * a1: compare value
+ * a2: exchange value
+ */
+0:
+	lr.d.aqrl t0, 0(a0)
+	bne       t0, a1, 1f
+	sc.d.aqrl t1, a2, 0(a0)
+	bnez      t1, 0b
+1:
+	mv        a0, t0
+	ret
+#else
+	li a7, __NR_sysriscv_cmpxchg64
+	scall
+	ret
+#endif
+	.cfi_endproc
+ENDPROC(__vdso_cmpxchg64)
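
The LR/SC loops in these two vDSO routines implement a plain compare-and-swap
that returns the previously observed value in a0, falling back to the
sysriscv_cmpxchg* syscalls when the kernel was built without the A extension.
A rough C equivalent of the 64-bit variant, using the GCC __atomic builtin
purely to document the intended semantics (the patch itself does not use this):

  #include <stdint.h>

  static uint64_t cmpxchg64_equiv(uint64_t *ptr, uint64_t old, uint64_t new)
  {
  	uint64_t expected = old;

  	/* On failure the builtin writes the observed value back into
  	 * 'expected', so returning it mirrors the a0 return above. */
  	__atomic_compare_exchange_n(ptr, &expected, new, 0,
  				    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
  	return expected;
  }

The 32-bit routine is identical with uint32_t.
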
diff --git a/arch/riscv/kernel/vdso/sigreturn.S b/arch/riscv/kernel/vdso/sigreturn.S
new file mode 100644
index 000000000000..f5aa3d72acfb
--- /dev/null
+++ b/arch/riscv/kernel/vdso/sigreturn.S
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/linkage.h>
+#include <asm/unistd.h>
+
+	.text
+ENTRY(__vdso_rt_sigreturn)
+	.cfi_startproc
+	.cfi_signal_frame
+	li a7, __NR_rt_sigreturn
+	scall
+	.cfi_endproc
+ENDPROC(__vdso_rt_sigreturn)
diff --git a/arch/riscv/kernel/vdso/vdso.S b/arch/riscv/kernel/vdso/vdso.S
new file mode 100644
index 000000000000..7055de5f9174
--- /dev/null
+++ b/arch/riscv/kernel/vdso/vdso.S
@@ -0,0 +1,27 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <asm/page.h>
+
+	__PAGE_ALIGNED_DATA
+
+	.globl vdso_start, vdso_end
+	.balign PAGE_SIZE
+vdso_start:
+	.incbin "arch/riscv/kernel/vdso/vdso.so"
+	.balign PAGE_SIZE
+vdso_end:
+
+	.previous
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 9/9] RISC-V: Build Infrastructure
  2017-06-28 18:55     ` Palmer Dabbelt
@ 2017-06-28 18:55       ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This patch contains all the build infrastructure that actually enables
the RISC-V port.  This includes Makefiles, linker scripts, and Kconfig
files.  It also contains the only top-level change, which adds RISC-V to
the list of architectures that need a sed run to produce the ARCH
variable when building locally.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 Makefile                                  |   3 +-
 arch/riscv/Kconfig                        | 318 ++++++++++++++++++++++++++++++
 arch/riscv/Makefile                       |  64 ++++++
 arch/riscv/configs/freedom-u500_defconfig |  53 +++++
 arch/riscv/configs/spike32_defconfig      |  50 +++++
 arch/riscv/configs/spike64_defconfig      |  46 +++++
 arch/riscv/include/asm/Kbuild             |  60 ++++++
 arch/riscv/include/uapi/asm/Kbuild        |   4 +
 arch/riscv/kernel/.gitignore              |   1 +
 arch/riscv/kernel/Makefile                |  16 ++
 arch/riscv/kernel/vdso/Makefile           |  61 ++++++
 arch/riscv/kernel/vdso/vdso.lds.S         |  77 ++++++++
 arch/riscv/kernel/vmlinux.lds.S           |  92 +++++++++
 arch/riscv/lib/Makefile                   |   5 +
 arch/riscv/mm/Makefile                    |   1 +
 15 files changed, 850 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/Kconfig
 create mode 100644 arch/riscv/Makefile
 create mode 100644 arch/riscv/configs/freedom-u500_defconfig
 create mode 100644 arch/riscv/configs/spike32_defconfig
 create mode 100644 arch/riscv/configs/spike64_defconfig
 create mode 100644 arch/riscv/include/asm/Kbuild
 create mode 100644 arch/riscv/include/uapi/asm/Kbuild
 create mode 100644 arch/riscv/kernel/.gitignore
 create mode 100644 arch/riscv/kernel/Makefile
 create mode 100644 arch/riscv/kernel/vdso/Makefile
 create mode 100644 arch/riscv/kernel/vdso/vdso.lds.S
 create mode 100644 arch/riscv/kernel/vmlinux.lds.S
 create mode 100644 arch/riscv/lib/Makefile
 create mode 100644 arch/riscv/mm/Makefile

diff --git a/Makefile b/Makefile
index 6d8a984ed9c9..b8fbaa805d56 100644
--- a/Makefile
+++ b/Makefile
@@ -232,7 +232,8 @@ SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
 				  -e s/arm.*/arm/ -e s/sa110/arm/ \
 				  -e s/s390x/s390/ -e s/parisc64/parisc/ \
 				  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
-				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ )
+				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
+				  -e s/riscv.*/riscv/)
 
 # Cross compiling and selecting different set of gcc/bin-utils
 # ---------------------------------------------------------------------------
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
new file mode 100644
index 000000000000..38c8112861fd
--- /dev/null
+++ b/arch/riscv/Kconfig
@@ -0,0 +1,318 @@
+#
+# For a description of the syntax of this configuration file,
+# see Documentation/kbuild/kconfig-language.txt.
+#
+
+config RISCV
+	def_bool y
+	select OF
+	select OF_EARLY_FLATTREE
+	select OF_IRQ
+	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
+	select ARCH_WANT_FRAME_POINTERS
+	select CLONE_BACKWARDS
+	select COMMON_CLK
+	select GENERIC_CLOCKEVENTS
+	select GENERIC_CPU_DEVICES
+	select GENERIC_IRQ_SHOW
+	select GENERIC_PCI_IOMAP
+	select GENERIC_STRNCPY_FROM_USER
+	select GENERIC_STRNLEN_USER
+	select GENERIC_SMP_IDLE_THREAD
+	select GENERIC_ATOMIC64 if !64BIT || !ISA_A
+	select ARCH_WANT_OPTIONAL_GPIOLIB
+	select HAVE_MEMBLOCK
+	select HAVE_DMA_API_DEBUG
+	select HAVE_DMA_CONTIGUOUS
+	select HAVE_GENERIC_DMA_COHERENT
+	select IRQ_DOMAIN
+	select NO_BOOTMEM
+	select ISA_A if SMP
+	select SYSRISCV_ATOMIC if !ISA_A
+	select SPARSE_IRQ
+	select SYSCTL_EXCEPTION_TRACE
+	select HAVE_ARCH_TRACEHOOK
+	select MODULES_USE_ELF_RELA if MODULES
+	select THREAD_INFO_IN_TASK
+	select RISCV_IRQ_INTC
+	select RISCV_TIMER
+
+config MMU
+	def_bool y
+
+# even on 32-bit, physical (and DMA) addresses are > 32-bits
+config ARCH_PHYS_ADDR_T_64BIT
+	def_bool y
+
+config ARCH_DMA_ADDR_T_64BIT
+	def_bool y
+
+config STACKTRACE_SUPPORT
+	def_bool y
+
+config RWSEM_GENERIC_SPINLOCK
+	def_bool y
+
+config GENERIC_BUG
+	def_bool y
+	depends on BUG
+	select GENERIC_BUG_RELATIVE_POINTERS if 64BIT
+
+config GENERIC_BUG_RELATIVE_POINTERS
+	bool
+
+config GENERIC_CALIBRATE_DELAY
+	def_bool y
+
+config GENERIC_CSUM
+	def_bool y
+
+config GENERIC_HWEIGHT
+	def_bool y
+
+config PGTABLE_LEVELS
+	int
+	default 3 if 64BIT
+	default 2
+
+config HAVE_KPROBES
+	def_bool n
+
+config DMA_NOOP_OPS
+	def_bool y
+
+menu "Platform type"
+
+config SMP
+	bool "Symmetric Multi-Processing"
+	help
+	  This enables support for systems with more than one CPU.  If
+	  you say N here, the kernel will run on single and
+	  multiprocessor machines, but will use only one CPU of a
+	  multiprocessor machine. If you say Y here, the kernel will run
+	  on many, but not all, single processor machines. On a single
+	  processor machine, the kernel will run faster if you say N
+	  here.
+
+	  If you don't know what to do here, say N.
+
+config NR_CPUS
+	int "Maximum number of CPUs (2-32)"
+	range 2 32
+	depends on SMP
+	default "8"
+
+config CPU_SUPPORTS_32BIT_KERNEL
+	bool
+config CPU_SUPPORTS_64BIT_KERNEL
+	bool
+
+choice
+	prompt "Base ISA"
+	default ARCH_RV64I
+
+config ARCH_RV32I
+	bool "RV32I"
+	select CPU_SUPPORTS_32BIT_KERNEL
+	select 32BIT
+	select GENERIC_ASHLDI3
+	select GENERIC_ASHRDI3
+	select GENERIC_LSHRDI3
+
+config ARCH_RV64I
+	bool "RV64I"
+	select CPU_SUPPORTS_64BIT_KERNEL
+	select 64BIT
+
+endchoice
+
+choice
+	prompt "CPU Tuning"
+	default TUNE_GENERIC
+
+config TUNE_GENERIC
+	bool "generic"
+
+endchoice
+
+config ISA_C
+	bool "Emit compressed instructions when building Linux"
+	default n
+	help
+	   Adds "C" to the ISA subsets that the toolchain is allowed to emit
+	   when building Linux, which results in compressed instructions in the
+	   Linux binary.
+
+	   If you don't know what to do here, say Y.
+
+config ISA_A
+	bool "Emit atomic instructions when building Linux"
+	default y
+	help
+	   Adds "A" to the ISA subsets that the toolchain is allowed to emit
+	   when building Linux, which results in atomic instructions in the
+	   Linux binary.
+
+	   If you don't know what to do here, say Y.
+
+config SYSRISCV_ATOMIC
+	bool "Include support for atomic operation syscalls"
+	default !ISA_A
+	help
+	  If atomic memory instructions are present (CONFIG_ISA_A),
+	  this option additionally includes support for the system
+	  call that provides atomic accesses.  This is only useful
+	  for running binaries that require atomic accesses but were
+	  compiled with -mno-atomic.
+
+	  If CONFIG_ISA_A is unset, this option is mandatory.
+
+	  If you don't know what to do here, say N.
+
+config RV_PUM
+	def_bool y
+	prompt "Protect User Memory" if EXPERT
+	---help---
+	  Protect User Memory (PUM) prevents the kernel from inadvertently
+	  accessing user-space memory.  There is a small performance cost
+	  and kernel size increase if this is enabled.
+
+	  If unsure, say Y.
+
+endmenu
+
+menu "Kernel type"
+
+choice
+	prompt "Kernel code model"
+	default 64BIT
+
+config 32BIT
+	bool "32-bit kernel"
+	depends on CPU_SUPPORTS_32BIT_KERNEL
+	help
+	  Select this option to build a 32-bit kernel.
+
+config 64BIT
+	bool "64-bit kernel"
+	depends on CPU_SUPPORTS_64BIT_KERNEL
+	help
+	  Select this option to build a 64-bit kernel.
+
+endchoice
+
+source "mm/Kconfig"
+
+source "kernel/Kconfig.preempt"
+
+source "kernel/Kconfig.hz"
+
+endmenu
+
+menu "Bus support"
+
+config PCI
+	bool "PCI support"
+	select PCI_MSI
+	help
+	  This feature enables support for the PCI bus system. If you say Y
+	  here, the kernel will include drivers and infrastructure code
+	  to support PCI bus devices.
+
+	  If you don't know what to do here, say Y.
+
+config PCI_DOMAINS
+	def_bool PCI
+
+config PCI_DOMAINS_GENERIC
+	def_bool PCI
+
+source "drivers/pci/Kconfig"
+
+endmenu
+
+source "init/Kconfig"
+
+source "kernel/Kconfig.freezer"
+
+menu "Executable file formats"
+
+source "fs/Kconfig.binfmt"
+
+endmenu
+
+menu "Power management options"
+
+source kernel/power/Kconfig
+
+endmenu
+
+source "net/Kconfig"
+
+source "drivers/Kconfig"
+
+source "fs/Kconfig"
+
+menu "Kernel hacking"
+
+config CMDLINE_BOOL
+	bool "Built-in kernel command line"
+	default n
+	help
+	  On most platforms, the firmware or second-stage bootloader
+	  normally specifies the kernel command line options.
+	  However, it might be necessary or advantageous to either override
+	  the default kernel command line or add a few extra options to it.
+	  For such cases, this option allows hardcoding command line options
+	  directly into the kernel.
+
+	  For that, choose 'Y' here and fill in the extra boot parameters
+	  in CONFIG_CMDLINE.
+
+	  The built-in options will be concatenated to the default command
+	  line if CMDLINE_OVERRIDE is set to 'N'. Otherwise, the default
+	  command line will be ignored and replaced by the built-in string.
+
+config CMDLINE
+	string "Built-in kernel command string"
+	depends on CMDLINE_BOOL
+	default ""
+	help
+	  Supply command-line options at build time by entering them here.
+
+config CMDLINE_OVERRIDE
+	bool "Built-in command line overrides bootloader arguments"
+	default n
+	depends on CMDLINE_BOOL
+	help
+	  Set this option to 'Y' to have the kernel ignore the bootloader
+	  or firmware command line.  Instead, the built-in command line
+	  will be used exclusively.
+
+	  If you don't know what to do here, say N.
+
+config EARLY_PRINTK
+	bool "Early printk"
+	default n
+	help
+	  This option enables special console drivers which allow the kernel
+	  to print messages very early in the bootup process.
+
+	  This is useful for kernel debugging when your machine crashes very
+	  early before the console code is initialized. For normal operation
+	  it is not recommended because it looks ugly and doesn't cooperate
+	  with klogd/syslogd or the X server. You should normally say N here,
+	  unless you want to debug such a crash.
+
+
+source "lib/Kconfig.debug"
+
+config CMDLINE_BOOL
+	bool
+endmenu
+
+source "security/Kconfig"
+
+source "crypto/Kconfig"
+
+source "lib/Kconfig"
diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
new file mode 100644
index 000000000000..66c4a5e383f9
--- /dev/null
+++ b/arch/riscv/Makefile
@@ -0,0 +1,64 @@
+# This file is included by the global makefile so that you can add your own
+# architecture-specific flags and dependencies. Remember to have actions
+# for "archclean" and "archdep" for cleaning up and making dependencies for
+# this architecture
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License.  See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+
+LDFLAGS         :=
+OBJCOPYFLAGS    := -O binary
+LDFLAGS_vmlinux :=
+KBUILD_AFLAGS_MODULE += -fPIC
+KBUILD_CFLAGS_MODULE += -fPIC
+
+KBUILD_DEFCONFIG = spike64_defconfig
+
+export BITS
+ifeq ($(CONFIG_ARCH_RV64I),y)
+	BITS := 64
+	UTS_MACHINE := riscv64
+
+	KBUILD_CFLAGS += -mabi=lp64
+	KBUILD_AFLAGS += -mabi=lp64
+	KBUILD_MARCH = rv64im
+	LDFLAGS += -melf64lriscv
+else
+	BITS := 32
+	UTS_MACHINE := riscv32
+
+	KBUILD_CFLAGS += -mabi=ilp32
+	KBUILD_AFLAGS += -mabi=ilp32
+	KBUILD_MARCH = rv32im
+	LDFLAGS += -melf32lriscv
+endif
+
+KBUILD_CFLAGS += -Wall
+
+ifeq ($(CONFIG_ISA_A),y)
+	KBUILD_ARCH_A = a
+endif
+ifeq ($(CONFIG_ISA_C),y)
+	KBUILD_ARCH_C = c
+endif
+
+KBUILD_AFLAGS += -march=$(KBUILD_MARCH)$(KBUILD_ARCH_A)fd$(KBUILD_ARCH_C)
+
+KBUILD_CFLAGS += -march=$(KBUILD_MARCH)$(KBUILD_ARCH_A)$(KBUILD_ARCH_C)
+KBUILD_CFLAGS += -mno-save-restore
+
+# GCC versions that support the "-mstrict-align" option default to allowing
+# unaligned accesses.  While unaligned accesses are explicitly allowed in the
+# RISC-V ISA, they're emulated by machine mode traps on all extant
+# architectures.  It's faster to have GCC emit only aligned accesses.
+KBUILD_CFLAGS += $(call cc-option,-mstrict-align)
+
+head-y := arch/riscv/kernel/head.o
+
+core-y += arch/riscv/kernel/ arch/riscv/mm/
+
+libs-y += arch/riscv/lib/
+
+all: vmlinux
diff --git a/arch/riscv/configs/freedom-u500_defconfig b/arch/riscv/configs/freedom-u500_defconfig
new file mode 100644
index 000000000000..b37908d45067
--- /dev/null
+++ b/arch/riscv/configs/freedom-u500_defconfig
@@ -0,0 +1,53 @@
+CONFIG_CROSS_COMPILE="riscv64-unknown-linux-gnu-"
+CONFIG_DEFAULT_HOSTNAME="ucbvax"
+# CONFIG_CROSS_MEMORY_ATTACH is not set
+# CONFIG_FHANDLE is not set
+CONFIG_NAMESPACES=y
+# CONFIG_SGETMASK_SYSCALL is not set
+CONFIG_EMBEDDED=y
+# CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_EFI_PARTITION is not set
+# CONFIG_IOSCHED_DEADLINE is not set
+# CONFIG_COMPACTION is not set
+CONFIG_HZ_100=y
+CONFIG_PCI_MSI=y
+CONFIG_NET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_FIRMWARE_IN_KERNEL is not set
+CONFIG_OF=y
+# CONFIG_BLK_DEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_VT is not set
+CONFIG_DEVKMEM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+CONFIG_FB=y
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_EXT2_FS=y
+# CONFIG_FILE_LOCKING is not set
+# CONFIG_DNOTIFY is not set
+# CONFIG_INOTIFY_USER is not set
+# CONFIG_PROC_PAGE_MONITOR is not set
+# CONFIG_SYSFS is not set
+CONFIG_TMPFS=y
+# CONFIG_MISC_FILESYSTEMS is not set
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_PRINTK_TIME=y
+# CONFIG_UNUSED_SYMBOLS is not set
+CONFIG_DEBUG_SECTION_MISMATCH=y
+# CONFIG_FRAME_POINTER is not set
+# CONFIG_EARLY_PRINTK is not set
+# CONFIG_CRYPTO_HW is not set
+CONFIG_TTY=y
diff --git a/arch/riscv/configs/spike32_defconfig b/arch/riscv/configs/spike32_defconfig
new file mode 100644
index 000000000000..cf3431e94311
--- /dev/null
+++ b/arch/riscv/configs/spike32_defconfig
@@ -0,0 +1,50 @@
+CONFIG_64BIT=n
+CONFIG_32BIT=y
+CONFIG_ARCH_RV64I=n
+CONFIG_ARCH_RV32I=y
+CONFIG_PCI=y
+CONFIG_DEFAULT_HOSTNAME="ucbvax"
+# CONFIG_CROSS_MEMORY_ATTACH is not set
+# CONFIG_FHANDLE is not set
+CONFIG_NAMESPACES=y
+CONFIG_EMBEDDED=y
+# CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_EFI_PARTITION is not set
+# CONFIG_IOSCHED_DEADLINE is not set
+CONFIG_NET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_FIRMWARE_IN_KERNEL is not set
+# CONFIG_BLK_DEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_VT is not set
+CONFIG_DEVKMEM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+CONFIG_FB=y
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_EXT2_FS=y
+# CONFIG_FILE_LOCKING is not set
+# CONFIG_DNOTIFY is not set
+# CONFIG_INOTIFY_USER is not set
+# CONFIG_PROC_PAGE_MONITOR is not set
+# CONFIG_SYSFS is not set
+CONFIG_TMPFS=y
+# CONFIG_MISC_FILESYSTEMS is not set
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_SECTION_MISMATCH=y
+# CONFIG_FRAME_POINTER is not set
+# CONFIG_CRYPTO_HW is not set
+CONFIG_TTY=y
diff --git a/arch/riscv/configs/spike64_defconfig b/arch/riscv/configs/spike64_defconfig
new file mode 100644
index 000000000000..5ad6644df541
--- /dev/null
+++ b/arch/riscv/configs/spike64_defconfig
@@ -0,0 +1,46 @@
+CONFIG_PCI=y
+CONFIG_DEFAULT_HOSTNAME="ucbvax"
+# CONFIG_CROSS_MEMORY_ATTACH is not set
+# CONFIG_FHANDLE is not set
+CONFIG_NAMESPACES=y
+CONFIG_EMBEDDED=y
+# CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_EFI_PARTITION is not set
+# CONFIG_IOSCHED_DEADLINE is not set
+CONFIG_NET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_FIRMWARE_IN_KERNEL is not set
+# CONFIG_BLK_DEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_VT is not set
+CONFIG_DEVKMEM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+CONFIG_FB=y
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_EXT2_FS=y
+# CONFIG_FILE_LOCKING is not set
+# CONFIG_DNOTIFY is not set
+# CONFIG_INOTIFY_USER is not set
+# CONFIG_PROC_PAGE_MONITOR is not set
+# CONFIG_SYSFS is not set
+CONFIG_TMPFS=y
+# CONFIG_MISC_FILESYSTEMS is not set
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_SECTION_MISMATCH=y
+# CONFIG_FRAME_POINTER is not set
+# CONFIG_CRYPTO_HW is not set
+CONFIG_TTY=y
diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
new file mode 100644
index 000000000000..deaa5ff9ebb4
--- /dev/null
+++ b/arch/riscv/include/asm/Kbuild
@@ -0,0 +1,60 @@
+generic-y += bugs.h
+generic-y += cacheflush.h
+generic-y += checksum.h
+generic-y += clkdev.h
+generic-y += cputime.h
+generic-y += div64.h
+generic-y += dma.h
+generic-y += dma-contiguous.h
+generic-y += emergency-restart.h
+generic-y += errno.h
+generic-y += exec.h
+generic-y += fb.h
+generic-y += fcntl.h
+generic-y += ftrace.h
+generic-y += futex.h
+generic-y += hardirq.h
+generic-y += hash.h
+generic-y += hw_irq.h
+generic-y += ioctl.h
+generic-y += ioctls.h
+generic-y += ipcbuf.h
+generic-y += irq_regs.h
+generic-y += irq_work.h
+generic-y += kdebug.h
+generic-y += kmap_types.h
+generic-y += kvm_para.h
+generic-y += local.h
+generic-y += mm-arch-hooks.h
+generic-y += mman.h
+generic-y += module.h
+generic-y += msgbuf.h
+generic-y += mutex.h
+generic-y += param.h
+generic-y += percpu.h
+generic-y += poll.h
+generic-y += posix_types.h
+generic-y += preempt.h
+generic-y += resource.h
+generic-y += scatterlist.h
+generic-y += sections.h
+generic-y += sembuf.h
+generic-y += setup.h
+generic-y += shmbuf.h
+generic-y += shmparam.h
+generic-y += signal.h
+generic-y += socket.h
+generic-y += sockios.h
+generic-y += stat.h
+generic-y += statfs.h
+generic-y += swab.h
+generic-y += termbits.h
+generic-y += termios.h
+generic-y += topology.h
+generic-y += trace_clock.h
+generic-y += types.h
+generic-y += unaligned.h
+generic-y += user.h
+generic-y += vga.h
+generic-y += vmlinux.lds.h
+generic-y += xor.h
diff --git a/arch/riscv/include/uapi/asm/Kbuild b/arch/riscv/include/uapi/asm/Kbuild
new file mode 100644
index 000000000000..0727570a49b2
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/Kbuild
@@ -0,0 +1,4 @@
+# UAPI Header export list
+include include/uapi/asm-generic/Kbuild.asm
+
+generic-y += setup.h
diff --git a/arch/riscv/kernel/.gitignore b/arch/riscv/kernel/.gitignore
new file mode 100644
index 000000000000..b51634f6a7cd
--- /dev/null
+++ b/arch/riscv/kernel/.gitignore
@@ -0,0 +1 @@
+/vmlinux.lds
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
new file mode 100644
index 000000000000..7f58cd251ab8
--- /dev/null
+++ b/arch/riscv/kernel/Makefile
@@ -0,0 +1,16 @@
+#
+# Makefile for the RISC-V Linux kernel
+#
+
+extra-y := head.o vmlinux.lds
+
+obj-y	:= cpu.o entry.o irq.o process.o ptrace.o reset.o setup.o \
+	   signal.o syscall_table.o sys_riscv.o time.o traps.o \
+	   riscv_ksyms.o stacktrace.o vdso.o cacheinfo.o vdso/
+
+CFLAGS_setup.o := -mcmodel=medany
+
+obj-$(CONFIG_SMP)		+= smpboot.o smp.o
+obj-$(CONFIG_MODULES)		+= module.o
+
+clean:
diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
new file mode 100644
index 000000000000..1f272425e889
--- /dev/null
+++ b/arch/riscv/kernel/vdso/Makefile
@@ -0,0 +1,61 @@
+# Derived from arch/{arm64,tile}/kernel/vdso/Makefile
+
+obj-vdso := sigreturn.o cmpxchg32.o cmpxchg64.o
+
+# Build rules
+targets := $(obj-vdso) vdso.so vdso.so.dbg
+obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
+
+#ccflags-y := -shared -fno-common -fno-builtin
+#ccflags-y += -nostdlib -Wl,-soname=linux-vdso.so.1 \
+		$(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
+
+CFLAGS_vdso.so = $(c_flags)
+CFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \
+	$(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
+CFLAGS_vdso_syms.o = -r
+
+obj-y += vdso.o
+
+# We also create a special relocatable object that should mirror the symbol
+# table and layout of the linked DSO.  With ld -R we can then refer to
+# these symbols in the kernel code rather than hand-coded addresses.
+extra-y += vdso.lds vdso-syms.o
+$(obj)/built-in.o: $(obj)/vdso-syms.o
+$(obj)/built-in.o: ld_flags += -R $(obj)/vdso-syms.o
+
+CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
+
+# Force dependency
+$(obj)/vdso.o : $(obj)/vdso.so
+
+# Link rule for the *.so file; *.lds must be first
+$(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso)
+	$(call if_changed,vdsold)
+$(obj)/vdso-syms.o: $(src)/vdso.lds $(obj-vdso)
+	$(call if_changed,vdsold)
+
+# Strip rule for the *.so file
+$(obj)/%.so: OBJCOPYFLAGS := -S
+$(obj)/%.so: $(obj)/%.so.dbg FORCE
+	$(call if_changed,objcopy)
+
+# Assembly rules for the *.S files
+$(obj-vdso): %.o: %.S
+	$(call if_changed_dep,vdsoas)
+
+# Actual build commands
+quiet_cmd_vdsold = VDSOLD  $@
+      cmd_vdsold = $(CC) $(c_flags) -nostdlib $(CFLAGS_$(@F)) -Wl,-n -Wl,-T $^ -o $@
+quiet_cmd_vdsoas = VDSOAS  $@
+      cmd_vdsoas = $(CC) $(a_flags) -c -o $@ $<
+
+# Install commands for the unstripped file
+quiet_cmd_vdso_install = INSTALL $@
+      cmd_vdso_install = cp $(obj)/$@.dbg $(MODLIB)/vdso/$@
+
+vdso.so: $(obj)/vdso.so.dbg
+	@mkdir -p $(MODLIB)/vdso
+	$(call cmd,vdso_install)
+
+vdso_install: vdso.so
diff --git a/arch/riscv/kernel/vdso/vdso.lds.S b/arch/riscv/kernel/vdso/vdso.lds.S
new file mode 100644
index 000000000000..7142e1aafc30
--- /dev/null
+++ b/arch/riscv/kernel/vdso/vdso.lds.S
@@ -0,0 +1,77 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+OUTPUT_ARCH(riscv)
+
+SECTIONS
+{
+	. = SIZEOF_HEADERS;
+
+	.hash		: { *(.hash) }			:text
+	.gnu.hash	: { *(.gnu.hash) }
+	.dynsym		: { *(.dynsym) }
+	.dynstr		: { *(.dynstr) }
+	.gnu.version	: { *(.gnu.version) }
+	.gnu.version_d	: { *(.gnu.version_d) }
+	.gnu.version_r	: { *(.gnu.version_r) }
+
+	.note		: { *(.note.*) }		:text	:note
+	.dynamic	: { *(.dynamic) }		:text	:dynamic
+
+	.eh_frame_hdr	: { *(.eh_frame_hdr) }		:text	:eh_frame_hdr
+	.eh_frame	: { KEEP (*(.eh_frame)) }	:text
+
+	.rodata		: { *(.rodata .rodata.* .gnu.linkonce.r.*) }
+
+	/*
+	 * This linker script is used both with -r and with -shared.
+	 * For the layouts to match, we need to skip more than enough
+	 * space for the dynamic symbol table, etc. If this amount is
+	 * insufficient, ld -shared will error; simply increase it here.
+	 */
+	. = 0x800;
+	.text		: { *(.text .text.*) }		:text
+
+	.data		: {
+		*(.got.plt) *(.got)
+		*(.data .data.* .gnu.linkonce.d.*)
+		*(.dynbss)
+		*(.bss .bss.* .gnu.linkonce.b.*)
+	}
+}
+
+/*
+ * We must supply the ELF program headers explicitly to get just one
+ * PT_LOAD segment, and set the flags explicitly to make segments read-only.
+ */
+PHDRS
+{
+	text		PT_LOAD		FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
+	dynamic		PT_DYNAMIC	FLAGS(4);		/* PF_R */
+	note		PT_NOTE		FLAGS(4);		/* PF_R */
+	eh_frame_hdr	PT_GNU_EH_FRAME;
+}
+
+/*
+ * This controls what symbols we export from the DSO.
+ */
+VERSION
+{
+	LINUX_2.6 {
+	global:
+		__vdso_rt_sigreturn;
+		__vdso_cmpxchg32;
+		__vdso_cmpxchg64;
+	local: *;
+	};
+}
diff --git a/arch/riscv/kernel/vmlinux.lds.S b/arch/riscv/kernel/vmlinux.lds.S
new file mode 100644
index 000000000000..ece84991609c
--- /dev/null
+++ b/arch/riscv/kernel/vmlinux.lds.S
@@ -0,0 +1,92 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#define LOAD_OFFSET PAGE_OFFSET
+#include <asm/vmlinux.lds.h>
+#include <asm/page.h>
+#include <asm/cache.h>
+#include <asm/thread_info.h>
+
+OUTPUT_ARCH(riscv)
+ENTRY(_start)
+
+jiffies = jiffies_64;
+
+SECTIONS
+{
+	/* Beginning of code and text segment */
+	. = LOAD_OFFSET;
+	_start = .;
+	__init_begin = .;
+	HEAD_TEXT_SECTION
+	INIT_TEXT_SECTION(PAGE_SIZE)
+	INIT_DATA_SECTION(16)
+	/* we have to discard exit text and such at runtime, not link time */
+	.exit.text :
+	{
+		EXIT_TEXT
+	}
+	.exit.data :
+	{
+		EXIT_DATA
+	}
+	PERCPU_SECTION(L1_CACHE_BYTES)
+	__init_end = .;
+
+	.text : {
+		_text = .;
+		_stext = .;
+		TEXT_TEXT
+		SCHED_TEXT
+		CPUIDLE_TEXT
+		LOCK_TEXT
+		KPROBES_TEXT
+		ENTRY_TEXT
+		IRQENTRY_TEXT
+		*(.fixup)
+		_etext = .;
+	}
+
+	/* Start of data section */
+	_sdata = .;
+	RO_DATA_SECTION(L1_CACHE_BYTES)
+	.srodata : {
+		*(.srodata*)
+	}
+
+	RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+	.sdata : {
+		__global_pointer$ = . + 0x800;
+		*(.sdata*)
+		/* End of data section */
+		_edata = .;
+		*(.sbss*)
+	}
+
+	BSS_SECTION(0, 0, 0)
+
+	EXCEPTION_TABLE(0x10)
+	NOTES
+
+	.rel.dyn : {
+		*(.rel.dyn*)
+	}
+
+	_end = .;
+
+	STABS_DEBUG
+	DWARF_DEBUG
+
+	DISCARDS
+}
diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
new file mode 100644
index 000000000000..120c38e77a46
--- /dev/null
+++ b/arch/riscv/lib/Makefile
@@ -0,0 +1,5 @@
+lib-y	:= delay.o memcpy.o memset.o uaccess.o
+
+ifeq ($(CONFIG_64BIT),)
+lib-y += udivdi3.o
+endif
diff --git a/arch/riscv/mm/Makefile b/arch/riscv/mm/Makefile
new file mode 100644
index 000000000000..36ebe6feb5d6
--- /dev/null
+++ b/arch/riscv/mm/Makefile
@@ -0,0 +1 @@
+obj-y := init.o fault.o extable.o ioremap.o
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* [PATCH 9/9] RISC-V: Build Infastructure
@ 2017-06-28 18:55       ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 18:55 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert
  Cc: Palmer Dabbelt

This patch contains all the build infastructure that actually enables
the RISC-V port.  This includes Makefiles, linker scripts, and Kconfig
files.  It also contains the only top-level change, which adds RISC-V to
the list of architectures that need a sed run to produce the ARCH
variable when building locally.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 Makefile                                  |   3 +-
 arch/riscv/Kconfig                        | 318 ++++++++++++++++++++++++++++++
 arch/riscv/Makefile                       |  64 ++++++
 arch/riscv/configs/freedom-u500_defconfig |  53 +++++
 arch/riscv/configs/spike32_defconfig      |  50 +++++
 arch/riscv/configs/spike64_defconfig      |  46 +++++
 arch/riscv/include/asm/Kbuild             |  60 ++++++
 arch/riscv/include/uapi/asm/Kbuild        |   4 +
 arch/riscv/kernel/.gitignore              |   1 +
 arch/riscv/kernel/Makefile                |  16 ++
 arch/riscv/kernel/vdso/Makefile           |  61 ++++++
 arch/riscv/kernel/vdso/vdso.lds.S         |  77 ++++++++
 arch/riscv/kernel/vmlinux.lds.S           |  92 +++++++++
 arch/riscv/lib/Makefile                   |   5 +
 arch/riscv/mm/Makefile                    |   1 +
 15 files changed, 850 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/Kconfig
 create mode 100644 arch/riscv/Makefile
 create mode 100644 arch/riscv/configs/freedom-u500_defconfig
 create mode 100644 arch/riscv/configs/spike32_defconfig
 create mode 100644 arch/riscv/configs/spike64_defconfig
 create mode 100644 arch/riscv/include/asm/Kbuild
 create mode 100644 arch/riscv/include/uapi/asm/Kbuild
 create mode 100644 arch/riscv/kernel/.gitignore
 create mode 100644 arch/riscv/kernel/Makefile
 create mode 100644 arch/riscv/kernel/vdso/Makefile
 create mode 100644 arch/riscv/kernel/vdso/vdso.lds.S
 create mode 100644 arch/riscv/kernel/vmlinux.lds.S
 create mode 100644 arch/riscv/lib/Makefile
 create mode 100644 arch/riscv/mm/Makefile

diff --git a/Makefile b/Makefile
index 6d8a984ed9c9..b8fbaa805d56 100644
--- a/Makefile
+++ b/Makefile
@@ -232,7 +232,8 @@ SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
 				  -e s/arm.*/arm/ -e s/sa110/arm/ \
 				  -e s/s390x/s390/ -e s/parisc64/parisc/ \
 				  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
-				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ )
+				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
+				  -e s/riscv.*/riscv/)
 
 # Cross compiling and selecting different set of gcc/bin-utils
 # ---------------------------------------------------------------------------
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
new file mode 100644
index 000000000000..38c8112861fd
--- /dev/null
+++ b/arch/riscv/Kconfig
@@ -0,0 +1,318 @@
+#
+# For a description of the syntax of this configuration file,
+# see Documentation/kbuild/kconfig-language.txt.
+#
+
+config RISCV
+	def_bool y
+	select OF
+	select OF_EARLY_FLATTREE
+	select OF_IRQ
+	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
+	select ARCH_WANT_FRAME_POINTERS
+	select CLONE_BACKWARDS
+	select COMMON_CLK
+	select GENERIC_CLOCKEVENTS
+	select GENERIC_CPU_DEVICES
+	select GENERIC_IRQ_SHOW
+	select GENERIC_PCI_IOMAP
+	select GENERIC_STRNCPY_FROM_USER
+	select GENERIC_STRNLEN_USER
+	select GENERIC_SMP_IDLE_THREAD
+	select GENERIC_ATOMIC64 if !64BIT || !ISA_A
+	select ARCH_WANT_OPTIONAL_GPIOLIB
+	select HAVE_MEMBLOCK
+	select HAVE_DMA_API_DEBUG
+	select HAVE_DMA_CONTIGUOUS
+	select HAVE_GENERIC_DMA_COHERENT
+	select IRQ_DOMAIN
+	select NO_BOOTMEM
+	select ISA_A if SMP
+	select SYSRISCV_ATOMIC if !ISA_A
+	select SPARSE_IRQ
+	select SYSCTL_EXCEPTION_TRACE
+	select HAVE_ARCH_TRACEHOOK
+	select MODULES_USE_ELF_RELA if MODULES
+	select THREAD_INFO_IN_TASK
+	select RISCV_IRQ_INTC
+	select RISCV_TIMER
+
+config MMU
+	def_bool y
+
+# even on 32-bit, physical (and DMA) addresses are > 32-bits
+config ARCH_PHYS_ADDR_T_64BIT
+	def_bool y
+
+config ARCH_DMA_ADDR_T_64BIT
+	def_bool y
+
+config STACKTRACE_SUPPORT
+	def_bool y
+
+config RWSEM_GENERIC_SPINLOCK
+	def_bool y
+
+config GENERIC_BUG
+	def_bool y
+	depends on BUG
+	select GENERIC_BUG_RELATIVE_POINTERS if 64BIT
+
+config GENERIC_BUG_RELATIVE_POINTERS
+	bool
+
+config GENERIC_CALIBRATE_DELAY
+	def_bool y
+
+config GENERIC_CSUM
+	def_bool y
+
+config GENERIC_HWEIGHT
+	def_bool y
+
+config PGTABLE_LEVELS
+	int
+	default 3 if 64BIT
+	default 2
+
+config HAVE_KPROBES
+	def_bool n
+
+config DMA_NOOP_OPS
+	def_bool y
+
+menu "Platform type"
+
+config SMP
+	bool "Symmetric Multi-Processing"
+	help
+	  This enables support for systems with more than one CPU.  If
+	  you say N here, the kernel will run on single and
+	  multiprocessor machines, but will use only one CPU of a
+	  multiprocessor machine. If you say Y here, the kernel will run
+	  on many, but not all, single processor machines. On a single
+	  processor machine, the kernel will run faster if you say N
+	  here.
+
+	  If you don't know what to do here, say N.
+
+config NR_CPUS
+	int "Maximum number of CPUs (2-32)"
+	range 2 32
+	depends on SMP
+	default "8"
+
+config CPU_SUPPORTS_32BIT_KERNEL
+	bool
+config CPU_SUPPORTS_64BIT_KERNEL
+	bool
+
+choice
+	prompt "Base ISA"
+	default ARCH_RV64I
+
+config ARCH_RV32I
+	bool "RV32I"
+	select CPU_SUPPORTS_32BIT_KERNEL
+	select 32BIT
+	select GENERIC_ASHLDI3
+	select GENERIC_ASHRDI3
+	select GENERIC_LSHRDI3
+
+config ARCH_RV64I
+	bool "RV64I"
+	select CPU_SUPPORTS_64BIT_KERNEL
+	select 64BIT
+
+endchoice
+
+choice
+	prompt "CPU Tuning"
+	default TUNE_GENERIC
+
+config TUNE_GENERIC
+	bool "generic"
+
+endchoice
+
+config ISA_C
+	bool "Emit compressed instructions when building Linux"
+	default n
+	help
+	   Adds "C" to the ISA subsets that the toolchain is allowed to emit
+	   when building Linux, which results in compressed instructions in the
+	   Linux binary.
+
+	   If you don't know what to do here, say Y.
+
+config ISA_A
+	bool "Emit atomic instructions when building Linux"
+	default y
+	help
+	   Adds "A" to the ISA subsets that the toolchain is allowed to emit
+	   when building Linux, which results in atomic instructions in the
+	   Linux binary.
+
+	   If you don't know what to do here, say Y.
+
+config SYSRISCV_ATOMIC
+	bool "Include support for atomic operation syscalls"
+	default !ISA_A
+	help
+	  If atomic memory instructions are present, i.e.,
+	  CONFIG_ISA_A, this includes support for the syscall that
+	  provides atomic accesses.  This is only useful to run
+	  binaries that require atomic access but were compiled with
+	  -mno-atomic.
+
+	  If CONFIG_ISA_A is unset, this option is mandatory.
+
+	  If you don't know what to do here, say N.
+
+config RV_PUM
+	def_bool y
+	prompt "Protect User Memory" if EXPERT
+	---help---
+	  Protect User Memory (PUM) prevents the kernel from inadvertently
+	  accessing user-space memory.  There is a small performance cost
+	  and kernel size increase if this is enabled.
+
+	  If unsure, say Y.
+
+endmenu
+
+menu "Kernel type"
+
+choice
+	prompt "Kernel code model"
+	default 64BIT
+
+config 32BIT
+	bool "32-bit kernel"
+	depends on CPU_SUPPORTS_32BIT_KERNEL
+	help
+	  Select this option to build a 32-bit kernel.
+
+config 64BIT
+	bool "64-bit kernel"
+	depends on CPU_SUPPORTS_64BIT_KERNEL
+	help
+	  Select this option to build a 64-bit kernel.
+
+endchoice
+
+source "mm/Kconfig"
+
+source "kernel/Kconfig.preempt"
+
+source "kernel/Kconfig.hz"
+
+endmenu
+
+menu "Bus support"
+
+config PCI
+	bool "PCI support"
+	select PCI_MSI
+	help
+	  This feature enables support for PCI bus system. If you say Y
+	  here, the kernel will include drivers and infrastructure code
+	  to support PCI bus devices.
+
+	  If you don't know what to do here, say Y.
+
+config PCI_DOMAINS
+	def_bool PCI
+
+config PCI_DOMAINS_GENERIC
+	def_bool PCI
+
+source "drivers/pci/Kconfig"
+
+endmenu
+
+source "init/Kconfig"
+
+source "kernel/Kconfig.freezer"
+
+menu "Executable file formats"
+
+source "fs/Kconfig.binfmt"
+
+endmenu
+
+menu "Power management options"
+
+source kernel/power/Kconfig
+
+endmenu
+
+source "net/Kconfig"
+
+source "drivers/Kconfig"
+
+source "fs/Kconfig"
+
+menu "Kernel hacking"
+
+config CMDLINE_BOOL
+	bool "Built-in kernel command line"
+	default n
+	help
+	  For most platforms, it is firmware or second stage bootloader
+	  that by default specifies the kernel command line options.
+	  However, it might be necessary or advantageous to either override
+	  the default kernel command line or add a few extra options to it.
+	  For such cases, this option allows hardcoding command line options
+	  directly into the kernel.
+
+	  For that, choose 'Y' here and fill in the extra boot parameters
+	  in CONFIG_CMDLINE.
+
+	  The built-in options will be concatenated to the default command
+	  line if CMDLINE_OVERRIDE is set to 'N'. Otherwise, the default
+	  command line will be ignored and replaced by the built-in string.
+
+config CMDLINE
+	string "Built-in kernel command string"
+	depends on CMDLINE_BOOL
+	default ""
+	help
+	  Supply command-line options at build time by entering them here.
+
+config CMDLINE_OVERRIDE
+	bool "Built-in command line overrides bootloader arguments"
+	default n
+	depends on CMDLINE_BOOL
+	help
+	  Set this option to 'Y' to have the kernel ignore the bootloader
+	  or firmware command line.  Instead, the built-in command line
+	  will be used exclusively.
+
+	  If you don't know what to do here, say N.
+
+config EARLY_PRINTK
+	bool "Early printk"
+	default n
+	help
+	  This option enables special console drivers which allow the kernel
+	  to print messages very early in the bootup process.
+
+	  This is useful for kernel debugging when your machine crashes very
+	  early before the console code is initialized. For normal operation
+	  it is not recommended because it looks ugly and doesn't cooperate
+	  with klogd/syslogd or the X server. You should normally say N here,
+	  unless you want to debug such a crash.
+
+
+source "lib/Kconfig.debug"
+
+config CMDLINE_BOOL
+	bool
+endmenu
+
+source "security/Kconfig"
+
+source "crypto/Kconfig"
+
+source "lib/Kconfig"
diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
new file mode 100644
index 000000000000..66c4a5e383f9
--- /dev/null
+++ b/arch/riscv/Makefile
@@ -0,0 +1,64 @@
+# This file is included by the global makefile so that you can add your own
+# architecture-specific flags and dependencies. Remember to have actions
+# for "archclean" and "archdep" for cleaning up and making dependencies for
+# this architecture
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License.  See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+
+LDFLAGS         :=
+OBJCOPYFLAGS    := -O binary
+LDFLAGS_vmlinux :=
+KBUILD_AFLAGS_MODULE += -fPIC
+KBUILD_CFLAGS_MODULE += -fPIC
+
+KBUILD_DEFCONFIG = spike64_defconfig
+
+export BITS
+ifeq ($(CONFIG_ARCH_RV64I),y)
+	BITS := 64
+	UTS_MACHINE := riscv64
+
+	KBUILD_CFLAGS += -mabi=lp64
+	KBUILD_AFLAGS += -mabi=lp64
+	KBUILD_MARCH = rv64im
+	LDFLAGS += -melf64lriscv
+else
+	BITS := 32
+	UTS_MACHINE := riscv32
+
+	KBUILD_CFLAGS += -mabi=ilp32
+	KBUILD_AFLAGS += -mabi=ilp32
+	KBUILD_MARCH = rv32im
+	LDFLAGS += -melf32lriscv
+endif
+
+KBUILD_CFLAGS += -Wall
+
+ifeq ($(CONFIG_ISA_A),y)
+	KBUILD_ARCH_A = a
+endif
+ifeq ($(CONFIG_ISA_C),y)
+	KBUILD_ARCH_C = c
+endif
+
+KBUILD_AFLAGS += -march=$(KBUILD_MARCH)$(KBUILD_ARCH_A)fd$(KBUILD_ARCH_C)
+
+KBUILD_CFLAGS += -march=$(KBUILD_MARCH)$(KBUILD_ARCH_A)$(KBUILD_ARCH_C)
+KBUILD_CFLAGS += -mno-save-restore
+
+# GCC versions that support the "-mstrict-align" option default to allowing
+# unaligned accesses.  While unaligned accesses are explicitly allowed in the
+# RISC-V ISA, they're emulated by machine mode traps on all extant
+# architectures.  It's faster to have GCC emit only aligned accesses.
+KBUILD_CFLAGS += $(call cc-option,-mstrict-align)
+
+head-y := arch/riscv/kernel/head.o
+
+core-y += arch/riscv/kernel/ arch/riscv/mm/
+
+libs-y += arch/riscv/lib/
+
+all: vmlinux
diff --git a/arch/riscv/configs/freedom-u500_defconfig b/arch/riscv/configs/freedom-u500_defconfig
new file mode 100644
index 000000000000..b37908d45067
--- /dev/null
+++ b/arch/riscv/configs/freedom-u500_defconfig
@@ -0,0 +1,53 @@
+CONFIG_CROSS_COMPILE="riscv64-unknown-linux-gnu-"
+CONFIG_DEFAULT_HOSTNAME="ucbvax"
+# CONFIG_CROSS_MEMORY_ATTACH is not set
+# CONFIG_FHANDLE is not set
+CONFIG_NAMESPACES=y
+# CONFIG_SGETMASK_SYSCALL is not set
+CONFIG_EMBEDDED=y
+# CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_EFI_PARTITION is not set
+# CONFIG_IOSCHED_DEADLINE is not set
+# CONFIG_COMPACTION is not set
+CONFIG_HZ_100=y
+CONFIG_PCI_MSI=y
+CONFIG_NET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_FIRMWARE_IN_KERNEL is not set
+CONFIG_OF=y
+# CONFIG_BLK_DEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_VT is not set
+CONFIG_DEVKMEM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+CONFIG_FB=y
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_EXT2_FS=y
+# CONFIG_FILE_LOCKING is not set
+# CONFIG_DNOTIFY is not set
+# CONFIG_INOTIFY_USER is not set
+# CONFIG_PROC_PAGE_MONITOR is not set
+# CONFIG_SYSFS is not set
+CONFIG_TMPFS=y
+# CONFIG_MISC_FILESYSTEMS is not set
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_PRINTK_TIME=y
+# CONFIG_UNUSED_SYMBOLS is not set
+CONFIG_DEBUG_SECTION_MISMATCH=y
+# CONFIG_FRAME_POINTER is not set
+# CONFIG_EARLY_PRINTK is not set
+# CONFIG_CRYPTO_HW is not set
+CONFIG_TTY=y
diff --git a/arch/riscv/configs/spike32_defconfig b/arch/riscv/configs/spike32_defconfig
new file mode 100644
index 000000000000..cf3431e94311
--- /dev/null
+++ b/arch/riscv/configs/spike32_defconfig
@@ -0,0 +1,50 @@
+CONFIG_64BIT=n
+CONFIG_32BIT=y
+CONFIG_ARCH_RV64I=n
+CONFIG_ARCH_RV32I=y
+CONFIG_PCI=y
+CONFIG_DEFAULT_HOSTNAME="ucbvax"
+# CONFIG_CROSS_MEMORY_ATTACH is not set
+# CONFIG_FHANDLE is not set
+CONFIG_NAMESPACES=y
+CONFIG_EMBEDDED=y
+# CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_EFI_PARTITION is not set
+# CONFIG_IOSCHED_DEADLINE is not set
+CONFIG_NET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_FIRMWARE_IN_KERNEL is not set
+# CONFIG_BLK_DEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_VT is not set
+CONFIG_DEVKMEM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+CONFIG_FB=y
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_EXT2_FS=y
+# CONFIG_FILE_LOCKING is not set
+# CONFIG_DNOTIFY is not set
+# CONFIG_INOTIFY_USER is not set
+# CONFIG_PROC_PAGE_MONITOR is not set
+# CONFIG_SYSFS is not set
+CONFIG_TMPFS=y
+# CONFIG_MISC_FILESYSTEMS is not set
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_SECTION_MISMATCH=y
+# CONFIG_FRAME_POINTER is not set
+# CONFIG_CRYPTO_HW is not set
+CONFIG_TTY=y
diff --git a/arch/riscv/configs/spike64_defconfig b/arch/riscv/configs/spike64_defconfig
new file mode 100644
index 000000000000..5ad6644df541
--- /dev/null
+++ b/arch/riscv/configs/spike64_defconfig
@@ -0,0 +1,46 @@
+CONFIG_PCI=y
+CONFIG_DEFAULT_HOSTNAME="ucbvax"
+# CONFIG_CROSS_MEMORY_ATTACH is not set
+# CONFIG_FHANDLE is not set
+CONFIG_NAMESPACES=y
+CONFIG_EMBEDDED=y
+# CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_EFI_PARTITION is not set
+# CONFIG_IOSCHED_DEADLINE is not set
+CONFIG_NET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_FIRMWARE_IN_KERNEL is not set
+# CONFIG_BLK_DEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_VT is not set
+CONFIG_DEVKMEM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+CONFIG_FB=y
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_EXT2_FS=y
+# CONFIG_FILE_LOCKING is not set
+# CONFIG_DNOTIFY is not set
+# CONFIG_INOTIFY_USER is not set
+# CONFIG_PROC_PAGE_MONITOR is not set
+# CONFIG_SYSFS is not set
+CONFIG_TMPFS=y
+# CONFIG_MISC_FILESYSTEMS is not set
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_SECTION_MISMATCH=y
+# CONFIG_FRAME_POINTER is not set
+# CONFIG_CRYPTO_HW is not set
+CONFIG_TTY=y
diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
new file mode 100644
index 000000000000..deaa5ff9ebb4
--- /dev/null
+++ b/arch/riscv/include/asm/Kbuild
@@ -0,0 +1,60 @@
+generic-y += bugs.h
+generic-y += cacheflush.h
+generic-y += checksum.h
+generic-y += clkdev.h
+generic-y += cputime.h
+generic-y += div64.h
+generic-y += dma.h
+generic-y += dma-contiguous.h
+generic-y += emergency-restart.h
+generic-y += errno.h
+generic-y += exec.h
+generic-y += fb.h
+generic-y += fcntl.h
+generic-y += ftrace.h
+generic-y += futex.h
+generic-y += hardirq.h
+generic-y += hash.h
+generic-y += hw_irq.h
+generic-y += ioctl.h
+generic-y += ioctls.h
+generic-y += ipcbuf.h
+generic-y += irq_regs.h
+generic-y += irq_work.h
+generic-y += kdebug.h
+generic-y += kmap_types.h
+generic-y += kvm_para.h
+generic-y += local.h
+generic-y += mm-arch-hooks.h
+generic-y += mman.h
+generic-y += module.h
+generic-y += msgbuf.h
+generic-y += mutex.h
+generic-y += param.h
+generic-y += percpu.h
+generic-y += poll.h
+generic-y += posix_types.h
+generic-y += preempt.h
+generic-y += resource.h
+generic-y += scatterlist.h
+generic-y += sections.h
+generic-y += sembuf.h
+generic-y += setup.h
+generic-y += shmbuf.h
+generic-y += shmparam.h
+generic-y += signal.h
+generic-y += socket.h
+generic-y += sockios.h
+generic-y += stat.h
+generic-y += statfs.h
+generic-y += swab.h
+generic-y += termbits.h
+generic-y += termios.h
+generic-y += topology.h
+generic-y += trace_clock.h
+generic-y += types.h
+generic-y += unaligned.h
+generic-y += user.h
+generic-y += vga.h
+generic-y += vmlinux.lds.h
+generic-y += xor.h
diff --git a/arch/riscv/include/uapi/asm/Kbuild b/arch/riscv/include/uapi/asm/Kbuild
new file mode 100644
index 000000000000..0727570a49b2
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/Kbuild
@@ -0,0 +1,4 @@
+# UAPI Header export list
+include include/uapi/asm-generic/Kbuild.asm
+
+generic-y += setup.h
diff --git a/arch/riscv/kernel/.gitignore b/arch/riscv/kernel/.gitignore
new file mode 100644
index 000000000000..b51634f6a7cd
--- /dev/null
+++ b/arch/riscv/kernel/.gitignore
@@ -0,0 +1 @@
+/vmlinux.lds
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
new file mode 100644
index 000000000000..7f58cd251ab8
--- /dev/null
+++ b/arch/riscv/kernel/Makefile
@@ -0,0 +1,16 @@
+#
+# Makefile for the RISC-V Linux kernel
+#
+
+extra-y := head.o vmlinux.lds
+
+obj-y	:= cpu.o entry.o irq.o process.o ptrace.o reset.o setup.o \
+	   signal.o syscall_table.o sys_riscv.o time.o traps.o \
+	   riscv_ksyms.o stacktrace.o vdso.o cacheinfo.o vdso/
+
+CFLAGS_setup.o := -mcmodel=medany
+
+obj-$(CONFIG_SMP)		+= smpboot.o smp.o
+obj-$(CONFIG_MODULES)		+= module.o
+
+clean:
diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
new file mode 100644
index 000000000000..1f272425e889
--- /dev/null
+++ b/arch/riscv/kernel/vdso/Makefile
@@ -0,0 +1,61 @@
+# Derived from arch/{arm64,tile}/kernel/vdso/Makefile
+
+obj-vdso := sigreturn.o cmpxchg32.o cmpxchg64.o
+
+# Build rules
+targets := $(obj-vdso) vdso.so vdso.so.dbg
+obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
+
+#ccflags-y := -shared -fno-common -fno-builtin
+#ccflags-y += -nostdlib -Wl,-soname=linux-vdso.so.1 \
+		$(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
+
+CFLAGS_vdso.so = $(c_flags)
+CFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \
+	$(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
+CFLAGS_vdso_syms.o = -r
+
+obj-y += vdso.o
+
+# We also create a special relocatable object that should mirror the symbol
+# table and layout of the linked DSO.  With ld -R we can then refer to
+# these symbols in the kernel code rather than hand-coded addresses.
+extra-y += vdso.lds vdso-syms.o
+$(obj)/built-in.o: $(obj)/vdso-syms.o
+$(obj)/built-in.o: ld_flags += -R $(obj)/vdso-syms.o
+
+CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
+
+# Force dependency
+$(obj)/vdso.o : $(obj)/vdso.so
+
+# Link rule for the *.so file; *.lds must be first
+$(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso)
+	$(call if_changed,vdsold)
+$(obj)/vdso-syms.o: $(src)/vdso.lds $(obj-vdso)
+	$(call if_changed,vdsold)
+
+# Strip rule for the *.so file
+$(obj)/%.so: OBJCOPYFLAGS := -S
+$(obj)/%.so: $(obj)/%.so.dbg FORCE
+	$(call if_changed,objcopy)
+
+# Assembly rules for the *.S files
+$(obj-vdso): %.o: %.S
+	$(call if_changed_dep,vdsoas)
+
+# Actual build commands
+quiet_cmd_vdsold = VDSOLD  $@
+      cmd_vdsold = $(CC) $(c_flags) -nostdlib $(CFLAGS_$(@F)) -Wl,-n -Wl,-T $^ -o $@
+quiet_cmd_vdsoas = VDSOAS  $@
+      cmd_vdsoas = $(CC) $(a_flags) -c -o $@ $<
+
+# Install commands for the unstripped file
+quiet_cmd_vdso_install = INSTALL $@
+      cmd_vdso_install = cp $(obj)/$@.dbg $(MODLIB)/vdso/$@
+
+vdso.so: $(obj)/vdso.so.dbg
+	@mkdir -p $(MODLIB)/vdso
+	$(call cmd,vdso_install)
+
+vdso_install: vdso.so
diff --git a/arch/riscv/kernel/vdso/vdso.lds.S b/arch/riscv/kernel/vdso/vdso.lds.S
new file mode 100644
index 000000000000..7142e1aafc30
--- /dev/null
+++ b/arch/riscv/kernel/vdso/vdso.lds.S
@@ -0,0 +1,77 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+OUTPUT_ARCH(riscv)
+
+SECTIONS
+{
+	. = SIZEOF_HEADERS;
+
+	.hash		: { *(.hash) }			:text
+	.gnu.hash	: { *(.gnu.hash) }
+	.dynsym		: { *(.dynsym) }
+	.dynstr		: { *(.dynstr) }
+	.gnu.version	: { *(.gnu.version) }
+	.gnu.version_d	: { *(.gnu.version_d) }
+	.gnu.version_r	: { *(.gnu.version_r) }
+
+	.note		: { *(.note.*) }		:text	:note
+	.dynamic	: { *(.dynamic) }		:text	:dynamic
+
+	.eh_frame_hdr	: { *(.eh_frame_hdr) }		:text	:eh_frame_hdr
+	.eh_frame	: { KEEP (*(.eh_frame)) }	:text
+
+	.rodata		: { *(.rodata .rodata.* .gnu.linkonce.r.*) }
+
+	/*
+	 * This linker script is used both with -r and with -shared.
+	 * For the layouts to match, we need to skip more than enough
+	 * space for the dynamic symbol table, etc. If this amount is
+	 * insufficient, ld -shared will error; simply increase it here.
+	 */
+	. = 0x800;
+	.text		: { *(.text .text.*) }		:text
+
+	.data		: {
+		*(.got.plt) *(.got)
+		*(.data .data.* .gnu.linkonce.d.*)
+		*(.dynbss)
+		*(.bss .bss.* .gnu.linkonce.b.*)
+	}
+}
+
+/*
+ * We must supply the ELF program headers explicitly to get just one
+ * PT_LOAD segment, and set the flags explicitly to make segments read-only.
+ */
+PHDRS
+{
+	text		PT_LOAD		FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
+	dynamic		PT_DYNAMIC	FLAGS(4);		/* PF_R */
+	note		PT_NOTE		FLAGS(4);		/* PF_R */
+	eh_frame_hdr	PT_GNU_EH_FRAME;
+}
+
+/*
+ * This controls what symbols we export from the DSO.
+ */
+VERSION
+{
+	LINUX_2.6 {
+	global:
+		__vdso_rt_sigreturn;
+		__vdso_cmpxchg32;
+		__vdso_cmpxchg64;
+	local: *;
+	};
+}
diff --git a/arch/riscv/kernel/vmlinux.lds.S b/arch/riscv/kernel/vmlinux.lds.S
new file mode 100644
index 000000000000..ece84991609c
--- /dev/null
+++ b/arch/riscv/kernel/vmlinux.lds.S
@@ -0,0 +1,92 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#define LOAD_OFFSET PAGE_OFFSET
+#include <asm/vmlinux.lds.h>
+#include <asm/page.h>
+#include <asm/cache.h>
+#include <asm/thread_info.h>
+
+OUTPUT_ARCH(riscv)
+ENTRY(_start)
+
+jiffies = jiffies_64;
+
+SECTIONS
+{
+	/* Beginning of code and text segment */
+	. = LOAD_OFFSET;
+	_start = .;
+	__init_begin = .;
+	HEAD_TEXT_SECTION
+	INIT_TEXT_SECTION(PAGE_SIZE)
+	INIT_DATA_SECTION(16)
+	/* we have to discard exit text and such at runtime, not link time */
+	.exit.text :
+	{
+		EXIT_TEXT
+	}
+	.exit.data :
+	{
+		EXIT_DATA
+	}
+	PERCPU_SECTION(L1_CACHE_BYTES)
+	__init_end = .;
+
+	.text : {
+		_text = .;
+		_stext = .;
+		TEXT_TEXT
+		SCHED_TEXT
+		CPUIDLE_TEXT
+		LOCK_TEXT
+		KPROBES_TEXT
+		ENTRY_TEXT
+		IRQENTRY_TEXT
+		*(.fixup)
+		_etext = .;
+	}
+
+	/* Start of data section */
+	_sdata = .;
+	RO_DATA_SECTION(L1_CACHE_BYTES)
+	.srodata : {
+		*(.srodata*)
+	}
+
+	RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+	.sdata : {
+		__global_pointer$ = . + 0x800;
+		*(.sdata*)
+		/* End of data section */
+		_edata = .;
+		*(.sbss*)
+	}
+
+	BSS_SECTION(0, 0, 0)
+
+	EXCEPTION_TABLE(0x10)
+	NOTES
+
+	.rel.dyn : {
+		*(.rel.dyn*)
+	}
+
+	_end = .;
+
+	STABS_DEBUG
+	DWARF_DEBUG
+
+	DISCARDS
+}
diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
new file mode 100644
index 000000000000..120c38e77a46
--- /dev/null
+++ b/arch/riscv/lib/Makefile
@@ -0,0 +1,5 @@
+lib-y	:= delay.o memcpy.o memset.o uaccess.o
+
+ifeq ($(CONFIG_64BIT),)
+lib-y += udivdi3.o
+endif
diff --git a/arch/riscv/mm/Makefile b/arch/riscv/mm/Makefile
new file mode 100644
index 000000000000..36ebe6feb5d6
--- /dev/null
+++ b/arch/riscv/mm/Makefile
@@ -0,0 +1 @@
+obj-y := init.o fault.o extable.o ioremap.o
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* Re: [PATCH 9/9] RISC-V: Build Infrastructure
  2017-06-28 18:55       ` Palmer Dabbelt
  (?)
@ 2017-06-28 21:05       ` Karsten Merker
  2017-06-28 21:13           ` Palmer Dabbelt
  -1 siblings, 1 reply; 283+ messages in thread
From: Karsten Merker @ 2017-06-28 21:05 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert

On Wed, Jun 28, 2017 at 11:55:38AM -0700, Palmer Dabbelt wrote:
> This patch contains all the build infrastructure that actually enables
> the RISC-V port.  This includes Makefiles, linker scripts, and Kconfig
> files.  It also contains the only top-level change, which adds RISC-V to
> the list of architectures that need a sed run to produce the ARCH
> variable when building locally.
> 
> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
> ---
[...]
> diff --git a/arch/riscv/configs/freedom-u500_defconfig b/arch/riscv/configs/freedom-u500_defconfig
> new file mode 100644
> index 000000000000..b37908d45067
> --- /dev/null
> +++ b/arch/riscv/configs/freedom-u500_defconfig
> @@ -0,0 +1,53 @@
> +CONFIG_CROSS_COMPILE="riscv64-unknown-linux-gnu-"

Hello Palmer,

just a tiny nitpick: defconfigs shouldn't predefine a cross-compiler
prefix.  That has already been fixed in the other defconfigs at
some earlier point in time, but IIRC the freedom-u500_defconfig got
added later on and it was probably overlooked then.

Regards,
Karsten
-- 
Gem. Par. 28 Abs. 4 Bundesdatenschutzgesetz widerspreche ich der Nutzung
sowie der Weitergabe meiner personenbezogenen Daten für Zwecke der
Werbung sowie der Markt- oder Meinungsforschung.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 9/9] RISC-V: Build Infrastructure
  2017-06-28 21:05       ` Karsten Merker
@ 2017-06-28 21:13           ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-28 21:13 UTC (permalink / raw)
  To: merker
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert

On Wed, 28 Jun 2017 14:05:14 PDT (-0700), merker@debian.org wrote:
> On Wed, Jun 28, 2017 at 11:55:38AM -0700, Palmer Dabbelt wrote:
>> This patch contains all the build infrastructure that actually enables
>> the RISC-V port.  This includes Makefiles, linker scripts, and Kconfig
>> files.  It also contains the only top-level change, which adds RISC-V to
>> the list of architectures that need a sed run to produce the ARCH
>> variable when building locally.
>>
>> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
>> ---
> [...]
>> diff --git a/arch/riscv/configs/freedom-u500_defconfig b/arch/riscv/configs/freedom-u500_defconfig
>> new file mode 100644
>> index 000000000000..b37908d45067
>> --- /dev/null
>> +++ b/arch/riscv/configs/freedom-u500_defconfig
>> @@ -0,0 +1,53 @@
>> +CONFIG_CROSS_COMPILE="riscv64-unknown-linux-gnu-"
>
> just a tiny nitpick: defconfigs shouldn't predefine a crosscompiler
> prefix.  That has already been fixed in the other defconfigs at
> some earlier point in time, but IIRC the freedom-u500_defconfig got
> added later on and it was probably overlooked then.

Oh, sorry about that -- you're right, I just missed that one as it wasn't in
the original patch set.

I'll squash this in when we do a v4.

commit 1a262894827be647d55016da09f61eb49962dd7d
Author: Palmer Dabbelt <palmer@dabbelt.com>
Date:   Wed Jun 28 14:11:17 2017 -0700

    Don't define CROSS_COMPILE in the u500 defconfig

diff --git a/arch/riscv/configs/freedom-u500_defconfig b/arch/riscv/configs/freedom-u500_defconfig
index b37908d45067..b580dc5c8feb 100644
--- a/arch/riscv/configs/freedom-u500_defconfig
+++ b/arch/riscv/configs/freedom-u500_defconfig
@@ -1,4 +1,3 @@
-CONFIG_CROSS_COMPILE="riscv64-unknown-linux-gnu-"
 CONFIG_DEFAULT_HOSTNAME="ucbvax"
 # CONFIG_CROSS_MEMORY_ATTACH is not set
 # CONFIG_FHANDLE is not set

^ permalink raw reply related	[flat|nested] 283+ messages in thread

* Re: [PATCH 9/9] RISC-V: Build Infrastructure
  2017-06-28 18:55       ` Palmer Dabbelt
@ 2017-06-28 21:25         ` James Hogan
  -1 siblings, 0 replies; 283+ messages in thread
From: James Hogan @ 2017-06-28 21:25 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, paul.gortmaker, linux,
	linux-kernel, linux-arch, albert

[-- Attachment #1: Type: text/plain, Size: 740 bytes --]

Hi Palmer,

On Wed, Jun 28, 2017 at 11:55:38AM -0700, Palmer Dabbelt wrote:
> +	select SYSRISCV_ATOMIC if !ISA_A
...
> +config SYSRISCV_ATOMIC
> +	bool "Include support for atomic operation syscalls"
> +	default !ISA_A
> +	help
> +	  If atomic memory instructions are present, i.e.,
> +	  CONFIG_ISA_A, this includes support for the syscall that
> +	  provides atomic accesses.  This is only useful to run
> +	  binaries that require atomic access but were compiled with
> +	  -mno-atomic.
> +
> +	  If CONFIG_ISA_A is unset, this option is mandatory.
> +
> +	  If you don't know what to do here, say N.

Can this be removed now that you mentioned the atomics syscall being
mandatory? I can't find any other references to it.

Cheers
James

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 8/9] RISC-V: User-facing API
  2017-06-28 18:55       ` Palmer Dabbelt
  (?)
@ 2017-06-28 21:49       ` Thomas Gleixner
  2017-06-28 21:52         ` Thomas Gleixner
  2017-06-29 17:22           ` Palmer Dabbelt
  -1 siblings, 2 replies; 283+ messages in thread
From: Thomas Gleixner @ 2017-06-28 21:49 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert

On Wed, 28 Jun 2017, Palmer Dabbelt wrote:
> +
> +SYSCALL_DEFINE3(sysriscv_cmpxchg32, unsigned long, arg1, unsigned long, arg2,
> +		unsigned long, arg3)
> +{
> +	unsigned long flags;
> +	unsigned long prev;
> +	unsigned int *ptr;
> +	unsigned int err;
> +
> +	ptr = (unsigned int *)arg1;

Errm. Why isn't arg1 a proper pointer type and the arguments arg2/3 u32?

And please give the arguments a proper name, so it's obvious what is what.

SYSCALL_DEFINE3(sysriscv_cmpxchg32, u32 __user *, ptr, u32 new, u32 old)

Hmm?

> +	if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned int)))
> +		return -EFAULT;
> +
> +	preempt_disable();
> +	raw_local_irq_save(flags);

Why do you want to disable interrupts here? This is thread context and
accessing user space memory, so the only protection this needs is against
preemption.
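A minimal sketch of how those two points might combine (SYSCALL_DEFINE3
commas filled in, original arg2/arg3 meaning kept: expected value, then new
value); this is illustrative only and leaves the fault-in-atomic-context and
SMP questions raised elsewhere in this thread unaddressed:

SYSCALL_DEFINE3(sysriscv_cmpxchg32, u32 __user *, ptr, u32, old, u32, new)
{
	u32 prev;
	int err;

	if (!access_ok(VERIFY_WRITE, ptr, sizeof(u32)))
		return -EFAULT;

	/*
	 * Thread context touching user memory: preemption is the only
	 * protection needed, no need to mask interrupts.
	 */
	preempt_disable();
	err = __get_user(prev, ptr);
	if (likely(!err && prev == old))
		err = __put_user(new, ptr);
	preempt_enable();

	return unlikely(err) ? err : prev;
}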

> +	err = __get_user(prev, ptr);
> +	if (likely(!err && prev == arg2))
> +		err = __put_user(arg3, ptr);
> +	raw_local_irq_restore(flags);
> +	preempt_enable();
> +
> +	return unlikely(err) ? err : prev;
> +}
> +
> +SYSCALL_DEFINE3(sysriscv_cmpxchg64, unsigned long, arg1, unsigned long, arg2,
> +		unsigned long, arg3)

This one is even worse. How does this implement cmpxchg64 on a 32bit machine?

Answer: Not at all, because arg2 and 3 are 32bit ....

> +{
> +	unsigned long flags;
> +	unsigned long prev;
> +	unsigned int *ptr;
> +	unsigned int err;
> +
> +	ptr = (unsigned int *)arg1;

Type casting to random pointer types makes the code more obvious
and safe, right? What the heck has an int pointer to do with u64?

> +	if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned long)))
> +		return -EFAULT;
> +
> +	preempt_disable();
> +	raw_local_irq_save(flags);

Same as above.

> +	err = __get_user(prev, ptr);

Sigh. Type safety is overrated, right?

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 8/9] RISC-V: User-facing API
  2017-06-28 21:49       ` Thomas Gleixner
@ 2017-06-28 21:52         ` Thomas Gleixner
  2017-06-29 17:22           ` Palmer Dabbelt
  1 sibling, 0 replies; 283+ messages in thread
From: Thomas Gleixner @ 2017-06-28 21:52 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert

On Wed, 28 Jun 2017, Thomas Gleixner wrote:
> On Wed, 28 Jun 2017, Palmer Dabbelt wrote:
> > +	err = __get_user(prev, ptr);
> 
> Sigh. Type safety is overrated, right?

But the comment above your __get_user() implementation says:

+ * @ptr must have pointer-to-simple-variable type, and the result of
+ * dereferencing @ptr must be assignable to @x without a cast.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 8/9] RISC-V: User-facing API
  2017-06-28 18:55       ` Palmer Dabbelt
@ 2017-06-28 22:42         ` James Hogan
  -1 siblings, 0 replies; 283+ messages in thread
From: James Hogan @ 2017-06-28 22:42 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, paul.gortmaker, linux,
	linux-kernel, linux-arch, albert

[-- Attachment #1: Type: text/plain, Size: 8612 bytes --]

Hi Palmer,

On Wed, Jun 28, 2017 at 11:55:37AM -0700, Palmer Dabbelt wrote:
> diff --git a/arch/riscv/include/asm/syscalls.h b/arch/riscv/include/asm/syscalls.h
> new file mode 100644
> index 000000000000..d85267c4f7ea
> --- /dev/null
> +++ b/arch/riscv/include/asm/syscalls.h
> @@ -0,0 +1,25 @@
...
> +/* kernel/sys_riscv.c */
> +asmlinkage long sys_sysriscv(unsigned long, unsigned long,
> +	unsigned long, unsigned long);

You suggested in the cover letter that this isn't muxed any longer; maybe
you should have a prototype for each of the cmpxchg syscalls instead?

> diff --git a/arch/riscv/include/uapi/asm/ptrace.h b/arch/riscv/include/uapi/asm/ptrace.h
> new file mode 100644
> index 000000000000..01aee1654eae
> --- /dev/null
> +++ b/arch/riscv/include/uapi/asm/ptrace.h
...
> +struct __riscv_f_ext_state {
> +	__u32 f[32];
> +	__u32 fcsr;
> +};
> +
> +struct __riscv_d_ext_state {
> +	__u64 f[32];
> +	__u32 fcsr;
> +};
> +
> +struct __riscv_q_ext_state {
> +	__u64 f[64] __attribute__((aligned(16)));
> +	__u32 fcsr;
> +	/* Reserved for expansion of sigcontext structure.  Currently zeroed
> +	 * upon signal, and must be zero upon sigreturn.  */
> +	__u32 reserved[3];
> +};
> +
> +union __riscv_fp_state {
> +	struct __riscv_f_ext_state f;
> +	struct __riscv_d_ext_state d;
> +	struct __riscv_q_ext_state q;
> +};

Out of interest, how does one tell which fp format is in use?

> diff --git a/arch/riscv/include/uapi/asm/ucontext.h b/arch/riscv/include/uapi/asm/ucontext.h
> new file mode 100644
> index 000000000000..52eff9febcfd
> --- /dev/null
> +++ b/arch/riscv/include/uapi/asm/ucontext.h
...
> +struct ucontext {
> +	unsigned long	  uc_flags;
> +	struct ucontext	 *uc_link;
> +	stack_t		  uc_stack;
> +	sigset_t	  uc_sigmask;
> +	/* glibc uses a 1024-bit sigset_t */
> +	__u8		  __unused[1024 / 8 - sizeof(sigset_t)];
> +	/* last for future expansion */
> +	struct sigcontext uc_mcontext;
> +};

Any particular reason not to use the asm-generic ucontext?

> diff --git a/arch/riscv/include/uapi/asm/unistd.h b/arch/riscv/include/uapi/asm/unistd.h
> new file mode 100644
> index 000000000000..7e3909ac3c18
> --- /dev/null
> +++ b/arch/riscv/include/uapi/asm/unistd.h
...
> +/* FIXME: This exists for now in order to maintain compatibility with our
> + * pre-upstream glibc, and will be removed for our real Linux submission.
> + */
> +#define __ARCH_WANT_RENAMEAT
> +

Don't forget ;-)

Have you seen the patches floating around for dropping
getrlimit/setrlimit (in favour of prlimit64) and fstatat64/fstat64 (in
favour of statx)? I guess it's no big deal.

> +#include <asm-generic/unistd.h>
> +
> +/*
> + * These system calls add support for AMOs on RISC-V systems without support
> + * for the A extension.
> + */
> +#define __NR_sysriscv_cmpxchg32		(__NR_arch_specific_syscall + 0)
> +#define __NR_sysriscv_cmpxchg64		(__NR_arch_specific_syscall + 1)

I think you need the magic __SYSCALL invocations here like in
include/uapi/asm/unistd.h, otherwise they won't get included in your
syscall table.
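Presumably something along these lines, mirroring the asm-generic style (the
handler names below assume the sys_ prefix that
SYSCALL_DEFINE3(sysriscv_cmpxchg32, ...) generates):

#define __NR_sysriscv_cmpxchg32	(__NR_arch_specific_syscall + 0)
__SYSCALL(__NR_sysriscv_cmpxchg32, sys_sysriscv_cmpxchg32)
#define __NR_sysriscv_cmpxchg64	(__NR_arch_specific_syscall + 1)
__SYSCALL(__NR_sysriscv_cmpxchg64, sys_sysriscv_cmpxchg64)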

> diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
> new file mode 100644
> index 000000000000..69b3b2d10664
> --- /dev/null
> +++ b/arch/riscv/kernel/ptrace.c
...
> +enum riscv_regset {
> +	REGSET_X,
> +};
> +
> +/*
> + * Get registers from task and ready the result for userspace.
> + */
> +static char *getregs(struct task_struct *child, struct pt_regs *uregs)
> +{
> +	*uregs = *task_pt_regs(child);
> +	return (char *)uregs;
> +}
> +
> +/* Put registers back to task. */
> +static void putregs(struct task_struct *child, struct pt_regs *uregs)
> +{
> +	struct pt_regs *regs = task_pt_regs(child);
> +	*regs = *uregs;
> +}
> +
> +static int riscv_gpr_get(struct task_struct *target,
> +			 const struct user_regset *regset,
> +			 unsigned int pos, unsigned int count,
> +			 void *kbuf, void __user *ubuf)
> +{
> +	struct pt_regs regs;
> +
> +	getregs(target, &regs);
> +
> +	return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &regs, 0,
> +				   sizeof(regs));

Shouldn't this be limited to sizeof(struct user_regs_struct)?

Why not copy straight out of task_pt_regs(target) instead of bouncing
via the stack?

> +}
> +
> +static int riscv_gpr_set(struct task_struct *target,
> +			 const struct user_regset *regset,
> +			 unsigned int pos, unsigned int count,
> +			 const void *kbuf, const void __user *ubuf)
> +{
> +	int ret;
> +	struct pt_regs regs;
> +
> +	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &regs, 0,
> +				 sizeof(regs));

likewise.

In fact if userland supplies insufficient data then this looks
vulnerable to a kernel stack data leak, since regs will remain partially
uninitialised and then get written to the target regs where it can be
read back again.

If you're going to bounce via the stack I think you need to fully
initialise before using user_regset_copyin, or you could just copy
directly into task_pt_regs(target) for now since, at least for the
current internal struct pt_regs, the beginning of pt_regs appears to
match user_regs_struct.

> +	if (ret)
> +		return ret;
> +
> +	putregs(target, &regs);

Similarly this needs to be careful not to overwrite the supervisor
registers with whatever was on kernel stack (assuming only partially
copied as suggested above)?
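A sketch of the direct-copy variant for both the get and set paths, assuming
(as above) that the leading part of struct pt_regs really does match struct
user_regs_struct; whether the supervisor fields that follow need extra care is
left aside here:

static int riscv_gpr_get(struct task_struct *target,
			 const struct user_regset *regset,
			 unsigned int pos, unsigned int count,
			 void *kbuf, void __user *ubuf)
{
	struct pt_regs *regs = task_pt_regs(target);

	/* Copy straight out of the task's pt_regs, bounded to the user view. */
	return user_regset_copyout(&pos, &count, &kbuf, &ubuf, regs, 0,
				   sizeof(struct user_regs_struct));
}

static int riscv_gpr_set(struct task_struct *target,
			 const struct user_regset *regset,
			 unsigned int pos, unsigned int count,
			 const void *kbuf, const void __user *ubuf)
{
	struct pt_regs *regs = task_pt_regs(target);

	/* A partial copy-in only touches what userspace actually supplied. */
	return user_regset_copyin(&pos, &count, &kbuf, &ubuf, regs, 0,
				  sizeof(struct user_regs_struct));
}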

> +
> +	return 0;
> +}
> +
> +
> +static const struct user_regset riscv_user_regset[] = {
> +	[REGSET_X] = {
> +		.core_note_type = NT_PRSTATUS,
> +		.n = ELF_NGREG,
> +		.size = sizeof(elf_greg_t),
> +		.align = sizeof(elf_greg_t),
> +		.get = &riscv_gpr_get,
> +		.set = &riscv_gpr_set,
> +	},

Will the FP registers get exposed at some point as well?

> diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
> new file mode 100644
> index 000000000000..ab699efe636e
> --- /dev/null
> +++ b/arch/riscv/kernel/sys_riscv.c
...
> +SYSCALL_DEFINE3(sysriscv_cmpxchg32, unsigned long, arg1, unsigned long, arg2,
> +		unsigned long, arg3)
> +{
> +	unsigned long flags;
> +	unsigned long prev;

should that be unsigned int? Else on 64-bit half of it could be left
uninitialised.

> +	unsigned int *ptr;

should that be tagged with __user?

> +	unsigned int err;
> +
> +	ptr = (unsigned int *)arg1;

I presume you'll need to cast to __user __force to keep sparse happy
here.

> +	if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned int)))
> +		return -EFAULT;
> +
> +	preempt_disable();
> +	raw_local_irq_save(flags);
> +	err = __get_user(prev, ptr);
> +	if (likely(!err && prev == arg2))
> +		err = __put_user(arg3, ptr);
> +	raw_local_irq_restore(flags);
> +	preempt_enable();

Are user accesses safe from atomic context? What if it needs paging in? 

You could disable page faults but then I think you'd have to handle the
EFAULT again outside of atomic context to try getting it paged in, and
then retry in atomic context. Or perhaps there's a cleaner way that
doesn't come to mind late at night.

I'm not sure OTOH whether copy on write (i.e. affecting the __put_user()
but not the __get_user()) would be problematic. I suppose as long as it
can safely allocate a page it should be fine... Should be possible to
test using madvise(MADV_DONTNEED) (which I think makes pages use the
zero page with copy-on-write).

Also if this is going to be included on SMP kernels (where I gather
proper atomics are available), does it need an SMP safe version too
which uses proper atomics?
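One possible shape for the fault-in-and-retry approach, purely as an untested
sketch of the body after the access_ok() check (UP case only, not an SMP-safe
variant, and the fault_in_pages_writeable() usage here is illustrative):

	for (;;) {
		preempt_disable();
		pagefault_disable();
		err = __get_user(prev, ptr);
		if (likely(!err && prev == arg2))
			err = __put_user(arg3, ptr);
		pagefault_enable();
		preempt_enable();

		if (likely(err != -EFAULT))
			break;

		/* Fault the page in outside atomic context, then retry. */
		if (fault_in_pages_writeable((char __user *)ptr, sizeof(*ptr)))
			return -EFAULT;
	}

	return unlikely(err) ? err : prev;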

> +
> +	return unlikely(err) ? err : prev;
> +}
> +
> +SYSCALL_DEFINE3(sysriscv_cmpxchg64, unsigned long, arg1, unsigned long, arg2,
> +		unsigned long, arg3)
> +{
> +	unsigned long flags;
> +	unsigned long prev;
> +	unsigned int *ptr;

should that be unsigned long __user *?

> +	unsigned int err;
> +
> +	ptr = (unsigned int *)arg1;
> +	if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned long)))
> +		return -EFAULT;
> +
> +	preempt_disable();
> +	raw_local_irq_save(flags);
> +	err = __get_user(prev, ptr);
> +	if (likely(!err && prev == arg2))
> +		err = __put_user(arg3, ptr);
> +	raw_local_irq_restore(flags);
> +	preempt_enable();

Likewise to other comments above.

This doesn't look much different to sysriscv_cmpxchg32 on 32-bit. Is it
meant to be excluded from 32-bit kernels? If so, the definition of the __NR_
constant and the __SYSCALL magic in uapi/asm/unistd.h should, I presume,
be conditional on the ABI.

> +
> +	return unlikely(err) ? err : prev;
> +}

Cheers
James

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 9/9] RISC-V: Build Infrastructure
  2017-06-28 18:55       ` Palmer Dabbelt
@ 2017-06-28 22:54         ` James Hogan
  -1 siblings, 0 replies; 283+ messages in thread
From: James Hogan @ 2017-06-28 22:54 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, paul.gortmaker, linux,
	linux-kernel, linux-arch, albert

[-- Attachment #1: Type: text/plain, Size: 1502 bytes --]

On Wed, Jun 28, 2017 at 11:55:38AM -0700, Palmer Dabbelt wrote:
> diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
> new file mode 100644
> index 000000000000..7f58cd251ab8
> --- /dev/null
> +++ b/arch/riscv/kernel/Makefile
> @@ -0,0 +1,16 @@
> +#
> +# Makefile for the RISC-V Linux kernel
> +#
> +
> +extra-y := head.o vmlinux.lds
> +
> +obj-y	:= cpu.o entry.o irq.o process.o ptrace.o reset.o setup.o \
> +	   signal.o syscall_table.o sys_riscv.o time.o traps.o \
> +	   riscv_ksyms.o stacktrace.o vdso.o cacheinfo.o vdso/

I would suggest splitting this sort of thing onto separate lines.
It'll make later changes much more readable (especially when you're
escaping newlines) as well as avoiding conflicts.
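For the list above that could look something like:

obj-y	+= cpu.o
obj-y	+= entry.o
obj-y	+= irq.o
obj-y	+= process.o
obj-y	+= ptrace.o
obj-y	+= reset.o
obj-y	+= setup.o
obj-y	+= signal.o
obj-y	+= syscall_table.o
obj-y	+= sys_riscv.o
obj-y	+= time.o
obj-y	+= traps.o
obj-y	+= riscv_ksyms.o
obj-y	+= stacktrace.o
obj-y	+= vdso.o
obj-y	+= cacheinfo.o
obj-y	+= vdso/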

> +
> +CFLAGS_setup.o := -mcmodel=medany
> +
> +obj-$(CONFIG_SMP)		+= smpboot.o smp.o

same here

> +obj-$(CONFIG_MODULES)		+= module.o
> +
> +clean:

> diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
> new file mode 100644
> index 000000000000..120c38e77a46
> --- /dev/null
> +++ b/arch/riscv/lib/Makefile
> @@ -0,0 +1,5 @@
> +lib-y	:= delay.o memcpy.o memset.o uaccess.o

and here

> +
> +ifeq ($(CONFIG_64BIT),)
> +lib-y += udivdi3.o
> +endif

Would this work?

lib-$(CONFIG_32BIT) += udivdi3.o

> diff --git a/arch/riscv/mm/Makefile b/arch/riscv/mm/Makefile
> new file mode 100644
> index 000000000000..36ebe6feb5d6
> --- /dev/null
> +++ b/arch/riscv/mm/Makefile
> @@ -0,0 +1 @@
> +obj-y := init.o fault.o extable.o ioremap.o

and here

Cheers
James

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 7/9] RISC-V: Paging and MMU
  2017-06-28 18:55       ` Palmer Dabbelt
@ 2017-06-28 23:09         ` James Hogan
  -1 siblings, 0 replies; 283+ messages in thread
From: James Hogan @ 2017-06-28 23:09 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, paul.gortmaker, linux,
	linux-kernel, linux-arch, albert

[-- Attachment #1: Type: text/plain, Size: 332 bytes --]

On Wed, Jun 28, 2017 at 11:55:36AM -0700, Palmer Dabbelt wrote:
> diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
> new file mode 100644
> index 000000000000..e1491c20d6fd
> --- /dev/null
> +++ b/arch/riscv/include/asm/page.h
...
> +#ifdef __KERNEL__

I think that's a given outside of uapi.

Cheers
James

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 5/9] RISC-V: Task implementation
  2017-06-28 18:55       ` Palmer Dabbelt
@ 2017-06-28 23:32         ` James Hogan
  -1 siblings, 0 replies; 283+ messages in thread
From: James Hogan @ 2017-06-28 23:32 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, paul.gortmaker, linux,
	linux-kernel, linux-arch, albert

[-- Attachment #1: Type: text/plain, Size: 1374 bytes --]

On Wed, Jun 28, 2017 at 11:55:34AM -0700, Palmer Dabbelt wrote:
> diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
> new file mode 100644
> index 000000000000..1190de7a0f74
> --- /dev/null
> +++ b/arch/riscv/include/asm/kprobes.h
> @@ -0,0 +1,22 @@
...
> +#ifdef CONFIG_KPROBES
> +#error "RISC-V doesn't skpport CONFIG_KPROBES"
> +#endif

I'm wondering where your fallback definition of e.g. NOKPROBE_SYMBOL
comes from then.

Could you just use the asm-generic one?
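If the (relatively new) include/asm-generic/kprobes.h covers what's needed
here, this header could presumably just become a one-liner in
arch/riscv/include/asm/Kbuild:

generic-y += kprobes.h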

> diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
> new file mode 100644
> index 000000000000..b13d3ea3bf79
> --- /dev/null
> +++ b/arch/riscv/kernel/process.c
> @@ -0,0 +1,131 @@
...
> +void show_regs(struct pt_regs *regs)
> +{
> +	show_regs_print_info(KERN_DEFAULT);
> +
> +	printk(KERN_CONT "sepc: " REG_FMT " ra : " REG_FMT " sp : " REG_FMT "\n",
> +		regs->sepc, regs->ra, regs->sp);

I've noticed inconsistent use of pr_* and printk(KERN_* in this
patchset. Maybe now would be the best time to switch everything to pr_*.
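e.g. the line quoted above would presumably become:

	pr_cont("sepc: " REG_FMT " ra : " REG_FMT " sp : " REG_FMT "\n",
		regs->sepc, regs->ra, regs->sp);

with the other printk(KERN_*...) calls converted the same way.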

> +	/* Reset FPU context
> +	 *	frm: round to nearest, ties to even (IEEE default)
> +	 *	fflags: accrued exceptions cleared
> +	 */

Similarly, there are lots of multiline comments which don't follow the
standard style in Documentation/process/coding-style.rst. Maybe now is the best
time to convert if you're going to.

Cheers
James

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 5/9] RISC-V: Task implementation
  2017-06-28 18:55       ` Palmer Dabbelt
  (?)
  (?)
@ 2017-06-29  8:22       ` Tobias Klauser
  2017-06-29 22:52           ` Palmer Dabbelt
  -1 siblings, 1 reply; 283+ messages in thread
From: Tobias Klauser @ 2017-06-29  8:22 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, will.deacon, james.hogan, paul.gortmaker, linux,
	linux-kernel, linux-arch, albert

On 2017-06-28 at 20:55:34 +0200, Palmer Dabbelt <palmer@dabbelt.com> wrote:
[...]
> diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
> new file mode 100644
> index 000000000000..1190de7a0f74
> --- /dev/null
> +++ b/arch/riscv/include/asm/kprobes.h
> @@ -0,0 +1,22 @@
> +/*
> + * Copyright (C) 2017 SiFive
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful,
> + *   but WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + *   GNU General Public License for more details.
> + */
> +
> +
> +#ifndef ASM_RISCV_KPROBES_H
> +#define ASM_RISCV_KPROBES_H
> +
> +#ifdef CONFIG_KPROBES
> +#error "RISC-V doesn't skpport CONFIG_KPROBES"

Typo: s/skpport/support/

> +#endif
> +
> +#endif
> diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
> new file mode 100644
> index 000000000000..65aa014db9b4
> --- /dev/null
> +++ b/arch/riscv/include/asm/processor.h
> @@ -0,0 +1,102 @@
> +/*
> + * Copyright (C) 2012 Regents of the University of California
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful,
> + *   but WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + *   GNU General Public License for more details.
> + */
> +
> +#ifndef _ASM_RISCV_PROCESSOR_H
> +#define _ASM_RISCV_PROCESSOR_H
> +
> +#include <linux/const.h>
> +
> +#include <asm/ptrace.h>
> +
> +/*
> + * This decides where the kernel will search for a free chunk of vm
> + * space during mmap's.
> + */
> +#define TASK_UNMAPPED_BASE	PAGE_ALIGN(TASK_SIZE >> 1)
> +
> +#ifdef __KERNEL__
> +#define STACK_TOP		TASK_SIZE
> +#define STACK_TOP_MAX		STACK_TOP
> +#define STACK_ALIGN		16
> +#endif /* __KERNEL__ */
> +
> +#ifndef __ASSEMBLY__
> +
> +struct task_struct;
> +struct pt_regs;
> +
> +/*
> + * Default implementation of macro that returns current
> + * instruction pointer ("program counter").
> + */
> +#define current_text_addr()	({ __label__ _l; _l: &&_l; })
> +
> +/* CPU-specific state of a task */
> +struct thread_struct {
> +	/* Callee-saved registers */
> +	unsigned long ra;
> +	unsigned long sp;	/* Kernel mode stack */
> +	unsigned long s[12];	/* s[0]: frame pointer */
> +	struct __riscv_d_ext_state fstate;
> +};
> +
> +#define INIT_THREAD {					\
> +	.sp = sizeof(init_stack) + (long)&init_stack,	\
> +}
> +
> +/* Return saved (kernel) PC of a blocked thread. */
> +#define thread_saved_pc(t)	((t)->thread.ra)
> +#define thread_saved_sp(t)	((t)->thread.sp)
> +#define thread_saved_fp(t)	((t)->thread.s[0])

These aren't needed outside of arch-specific code (anymore) and the
riscv port doesn't seem to be using them, so they can be omitted.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 6/9] RISC-V: Device, timer, IRQs, and the SBI
  2017-06-28 18:55       ` Palmer Dabbelt
  (?)
@ 2017-06-29  8:39       ` Tobias Klauser
  2017-06-29 22:52           ` Palmer Dabbelt
  -1 siblings, 1 reply; 283+ messages in thread
From: Tobias Klauser @ 2017-06-29  8:39 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, will.deacon, james.hogan, paul.gortmaker, linux,
	linux-kernel, linux-arch, albert

On 2017-06-28 at 20:55:35 +0200, Palmer Dabbelt <palmer@dabbelt.com> wrote:
[...]
> diff --git a/arch/riscv/include/asm/device.h b/arch/riscv/include/asm/device.h
> new file mode 100644
> index 000000000000..28975e528d2f
> --- /dev/null
> +++ b/arch/riscv/include/asm/device.h
> @@ -0,0 +1,27 @@
> +/*
> + * Copyright (C) 2016 SiFive
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation, version 2.
> + *
> + *   This program is distributed in the hope that it will be useful,
> + *   but WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + *   GNU General Public License for more details.
> + */
> +
> +
> +#ifndef _ASM_RISCV_DEVICE_H
> +#define _ASM_RISCV_DEVICE_H
> +
> +#include <linux/sysfs.h>
> +
> +struct dev_archdata {
> +	struct dma_map_ops *dma_ops;
> +};

The dma_ops member isn't used in any arch code or driver from what I can
tell (I checked against your riscv-for-submission-v3 branch). Could
device.h from asm-generic be used instead, or did I miss something?
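
For reference, the asm-generic version is (going from memory, so worth
double-checking) nothing more than the empty structs:

  /* include/asm-generic/device.h, roughly */
  struct dev_archdata {
  };

  struct pdev_archdata {
  };

so unless something is about to start using dma_ops in archdata, the arch
copy looks redundant.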

> +
> +struct pdev_archdata {
> +};
> +
> +#endif /* _ASM_RISCV_DEVICE_H */

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 1/9] RISC-V: Init and Halt Code
  2017-06-28 18:55       ` Palmer Dabbelt
@ 2017-06-29  9:44         ` Geert Uytterhoeven
  -1 siblings, 0 replies; 283+ messages in thread
From: Geert Uytterhoeven @ 2017-06-29  9:44 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Peter Zijlstra, Ingo Molnar, Luis R. Rodriguez, Al Viro,
	Stephen Rothwell, Nicolas Dichtel, Russell King, Mark Salter,
	Tobias Klauser, Will Deacon, James Hogan, Paul Gortmaker,
	Guenter Roeck, linux-kernel, Linux-Arch, albert

Hi Palmer,

On Wed, Jun 28, 2017 at 8:55 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> This contains the various __init C functions, the initial assembly
> kernel entry point, and the code to reset the system.  When a file was
> init-related, it contains

...?

> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 9/9] RISC-V: Build Infrastructure
  2017-06-28 21:25         ` James Hogan
@ 2017-06-29 16:29           ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-29 16:29 UTC (permalink / raw)
  To: james.hogan
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, paul.gortmaker, linux,
	linux-kernel, linux-arch, albert

On Wed, 28 Jun 2017 14:25:00 PDT (-0700), james.hogan@imgtec.com wrote:
> Hi Palmer,
>
> On Wed, Jun 28, 2017 at 11:55:38AM -0700, Palmer Dabbelt wrote:
>> +	select SYSRISCV_ATOMIC if !ISA_A
> ...
>> +config SYSRISCV_ATOMIC
>> +	bool "Include support for atomic operation syscalls"
>> +	default !ISA_A
>> +	help
>> +	  If atomic memory instructions are present, i.e.,
>> +	  CONFIG_ISA_A, this includes support for the syscall that
>> +	  provides atomic accesses.  This is only useful to run
>> +	  binaries that require atomic access but were compiled with
>> +	  -mno-atomic.
>> +
>> +	  If CONFIG_ISA_A is unset, this option is mandatory.
>> +
>> +	  If you don't know what to do here, say N.
>
> Can this be removed now that you mentioned the atomics syscall being
> mandatory? I can't find any other references to it.

Oh, sorry, I must have just missed it when swizzling that around.  I'll remove
it as part of our v4.

  diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
  index 38c8112861fd..9f587af28910 100644
  --- a/arch/riscv/Kconfig
  +++ b/arch/riscv/Kconfig
  @@ -155,20 +155,6 @@ config ISA_A

             If you don't know what to do here, say Y.

  -config SYSRISCV_ATOMIC
  -       bool "Include support for atomic operation syscalls"
  -       default !ISA_A
  -       help
  -         If atomic memory instructions are present, i.e.,
  -         CONFIG_ISA_A, this includes support for the syscall that
  -         provides atomic accesses.  This is only useful to run
  -         binaries that require atomic access but were compiled with
  -         -mno-atomic.
  -
  -         If CONFIG_ISA_A is unset, this option is mandatory.
  -
  -         If you don't know what to do here, say N.
  -
   config RV_PUM
          def_bool y
          prompt "Protect User Memory" if EXPERT

Thanks for catching this!

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 8/9] RISC-V: User-facing API
  2017-06-28 21:49       ` Thomas Gleixner
@ 2017-06-29 17:22           ` Palmer Dabbelt
  2017-06-29 17:22           ` Palmer Dabbelt
  1 sibling, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-29 17:22 UTC (permalink / raw)
  To: tglx
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert

On Wed, 28 Jun 2017 14:49:44 PDT (-0700), tglx@linutronix.de wrote:
> On Wed, 28 Jun 2017, Palmer Dabbelt wrote:
>> +
>> +SYSCALL_DEFINE3(sysriscv_cmpxchg32, unsigned long, arg1, unsigned long, arg2,
>> +		unsigned long, arg3)
>> +{
>> +	unsigned long flags;
>> +	unsigned long prev;
>> +	unsigned int *ptr;
>> +	unsigned int err;
>> +
>> +	ptr = (unsigned int *)arg1;
>
> Errm. Why isn't arg1 a proper pointer type and the arguments arg2/3 u32?
>
> And please give the arguments a proper name, so it's obvious what is what.
>
> SYSCALL_DEFINE3(sysriscv_cmpxchg32, u32 __user *, ptr, u32 new, u32 old)
>
> Hmm?

Sorry about that -- this used to be a multiplexed system call, and I guess I
was just being stupid when demultiplexing it.  That's much better; I've
converted these over.

>> +	if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned int)))
>> +		return -EFAULT;
>> +
>> +	preempt_disable();
>> +	raw_local_irq_save(flags);
>
> Why do you want to disable interrupts here? This is thread context and
> accessing user space memory, so the only protection this needs is against
> preemption.

OK, that makes sense.
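
Concretely, the cleaned-up 32-bit version I have pending looks roughly like
the sketch below (untested as pasted here, and it keeps the existing
(ptr, old, new) argument order; the real v4 code may differ a bit):

  SYSCALL_DEFINE3(sysriscv_cmpxchg32, u32 __user *, ptr, u32, old, u32, new)
  {
          u32 prev;
          unsigned int err;

          if (!access_ok(VERIFY_WRITE, ptr, sizeof(u32)))
                  return -EFAULT;

          /* Thread context: disabling preemption is enough, IRQs stay on. */
          preempt_disable();
          err = __get_user(prev, ptr);
          if (likely(!err && prev == old))
                  err = __put_user(new, ptr);
          preempt_enable();

          return unlikely(err) ? err : prev;
  }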

>> +	err = __get_user(prev, ptr);
>> +	if (likely(!err && prev == arg2))
>> +		err = __put_user(arg3, ptr);
>> +	raw_local_irq_restore(flags);
>> +	preempt_enable();
>> +
>> +	return unlikely(err) ? err : prev;
>> +}
>> +
>> +SYSCALL_DEFINE3(sysriscv_cmpxchg64, unsigned long, arg1, unsigned long, arg2,
>> +		unsigned long, arg3)
>
> This one is even worse. How does this implement cmpxchg64 on a 32bit machine?
>
> Answer: Not at all, because arg2 and 3 are 32bit ....

Thanks for catching that -- this was just a bit of copy-and-paste gone wrong.

>> +{
>> +	unsigned long flags;
>> +	unsigned long prev;
>> +	unsigned int *ptr;
>> +	unsigned int err;
>> +
>> +	ptr = (unsigned int *)arg1;
>
> Type casting to random pointer types makes the code more obvious
> and safe, right? What the heck has a int pointer to do with u64?
>
>> +	if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned long)))
>> +		return -EFAULT;
>> +
>> +	preempt_disable();
>> +	raw_local_irq_save(flags);
>
> Same as above.
>
>> +	err = __get_user(prev, ptr);
>
> Sigh. Type safety is overrated, right?

Again, this was due to the multiplexing that has been removed.  I've gone ahead
and cleaned up this system call here

  https://github.com/riscv/riscv-linux/commit/1af46852b968db5af044ec3a0329a73116b3e6ec

We'll include this in our v4 patch set.

Thanks!

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 8/9] RISC-V: User-facing API
  2017-06-28 22:42         ` James Hogan
@ 2017-06-29 21:42           ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-29 21:42 UTC (permalink / raw)
  To: james.hogan
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, paul.gortmaker, linux,
	linux-kernel, linux-arch, albert

On Wed, 28 Jun 2017 15:42:37 PDT (-0700), james.hogan@imgtec.com wrote:
> Hi Palmer,
>
> On Wed, Jun 28, 2017 at 11:55:37AM -0700, Palmer Dabbelt wrote:
>> diff --git a/arch/riscv/include/asm/syscalls.h b/arch/riscv/include/asm/syscalls.h
>> new file mode 100644
>> index 000000000000..d85267c4f7ea
>> --- /dev/null
>> +++ b/arch/riscv/include/asm/syscalls.h
>> @@ -0,0 +1,25 @@
> ...
>> +/* kernel/sys_riscv.c */
>> +asmlinkage long sys_sysriscv(unsigned long, unsigned long,
>> +	unsigned long, unsigned long);
>
> You suggested in the cover letter this wasn't muxed any longer, maybe
> you should have a prototype for each of the cmpxchg syscalls instead?

Sorry, I just missed that.  I'll fix it for the v4

  diff --git a/arch/riscv/include/asm/syscalls.h b/arch/riscv/include/asm/syscalls.h
  index d85267c4f7ea..6490274fbb76 100644
  --- a/arch/riscv/include/asm/syscalls.h
  +++ b/arch/riscv/include/asm/syscalls.h
  @@ -19,7 +19,7 @@
   #include <asm-generic/syscalls.h>

   /* kernel/sys_riscv.c */
  -asmlinkage long sys_sysriscv(unsigned long, unsigned long,
  -       unsigned long, unsigned long);
  +asmlinkage long sys_sysriscv_cmpxchg32(u32 __user * ptr, u32 new, u32 old);
  +asmlinkage long sys_sysriscv_cmpxchg64(u64 __user * ptr, u64 new, u64 old);

   #endif /* _ASM_RISCV_SYSCALLS_H */

>> diff --git a/arch/riscv/include/uapi/asm/ptrace.h b/arch/riscv/include/uapi/asm/ptrace.h
>> new file mode 100644
>> index 000000000000..01aee1654eae
>> --- /dev/null
>> +++ b/arch/riscv/include/uapi/asm/ptrace.h
> ...
>> +struct __riscv_f_ext_state {
>> +	__u32 f[32];
>> +	__u32 fcsr;
>> +};
>> +
>> +struct __riscv_d_ext_state {
>> +	__u64 f[32];
>> +	__u32 fcsr;
>> +};
>> +
>> +struct __riscv_q_ext_state {
>> +	__u64 f[64] __attribute__((aligned(16)));
>> +	__u32 fcsr;
>> +	/* Reserved for expansion of sigcontext structure.  Currently zeroed
>> +	 * upon signal, and must be zero upon sigreturn.  */
>> +	__u32 reserved[3];
>> +};
>> +
>> +union __riscv_fp_state {
>> +	struct __riscv_f_ext_state f;
>> +	struct __riscv_d_ext_state d;
>> +	struct __riscv_q_ext_state q;
>> +};
>
> Out of interest, how does one tell which fp format is in use?

We might need another tag here -- I'll talk to Andrew (who did the glibc side
of this) and make sure we can handle something like running F user code on a D
kernel.

>> diff --git a/arch/riscv/include/uapi/asm/ucontext.h b/arch/riscv/include/uapi/asm/ucontext.h
>> new file mode 100644
>> index 000000000000..52eff9febcfd
>> --- /dev/null
>> +++ b/arch/riscv/include/uapi/asm/ucontext.h
> ...
>> +struct ucontext {
>> +	unsigned long	  uc_flags;
>> +	struct ucontext	 *uc_link;
>> +	stack_t		  uc_stack;
>> +	sigset_t	  uc_sigmask;
>> +	/* glibc uses a 1024-bit sigset_t */
>> +	__u8		  __unused[1024 / 8 - sizeof(sigset_t)];
>> +	/* last for future expansion */
>> +	struct sigcontext uc_mcontext;
>> +};
>
> Any particular reason not to use the asm-generic ucontext?

In the generic ucontext, 'uc_sigmask' is at the end of the structure so it can
be expanded.  Since we want our mcontext to be expandable as well, we
pre-allocate some expandable space for sigmask and then put mcontext at the
end.

We stole this idea from arm64.

>> diff --git a/arch/riscv/include/uapi/asm/unistd.h b/arch/riscv/include/uapi/asm/unistd.h
>> new file mode 100644
>> index 000000000000..7e3909ac3c18
>> --- /dev/null
>> +++ b/arch/riscv/include/uapi/asm/unistd.h
> ...
>> +/* FIXME: This exists for now in order to maintain compatibility with our
>> + * pre-upstream glibc, and will be removed for our real Linux submission.
>> + */
>> +#define __ARCH_WANT_RENAMEAT
>> +
>
> Don't forget ;-)
>
> Have you seen the patches floating around for dropping
> getrlimit/setrlimit (in favour of prlimit64) and fstatat64/fstat64 (in
> favour of statx)? I guess its no big deal.

Yes, but we're trying to make this glibc release so we decided to hold off on
them.  If we can't make it then we might reconsider, but they seem like fairly
small issues.

>> +#include <asm-generic/unistd.h>
>> +
>> +/*
>> + * These system calls add support for AMOs on RISC-V systems without support
>> + * for the A extension.
>> + */
>> +#define __NR_sysriscv_cmpxchg32		(__NR_arch_specific_syscall + 0)
>> +#define __NR_sysriscv_cmpxchg64		(__NR_arch_specific_syscall + 1)
>
> I think you need the magic __SYSCALL invocations here like in
> include/uapi/asm/unistd.h, otherwise they won't get included in your
> syscall table.

OK, I've added those.

  diff --git a/arch/riscv/include/uapi/asm/unistd.h b/arch/riscv/include/uapi/asm/unistd.h
  index 7e3909ac3c18..3cdb32912ac7 100644
  --- a/arch/riscv/include/uapi/asm/unistd.h
  +++ b/arch/riscv/include/uapi/asm/unistd.h
  @@ -23,4 +23,6 @@
    * for the A extension.
    */
   #define __NR_sysriscv_cmpxchg32                (__NR_arch_specific_syscall + 0)
  +__SYSCALL(__NR_sysriscv_cmpxchg32, sys_sysriscv_cmpxchg32)
   #define __NR_sysriscv_cmpxchg64                (__NR_arch_specific_syscall + 1)
  +__SYSCALL(__NR_sysriscv_cmpxchg64, sys_sysriscv_cmpxchg64)

>> diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
>> new file mode 100644
>> index 000000000000..69b3b2d10664
>> --- /dev/null
>> +++ b/arch/riscv/kernel/ptrace.c
> ...
>> +enum riscv_regset {
>> +	REGSET_X,
>> +};
>> +
>> +/*
>> + * Get registers from task and ready the result for userspace.
>> + */
>> +static char *getregs(struct task_struct *child, struct pt_regs *uregs)
>> +{
>> +	*uregs = *task_pt_regs(child);
>> +	return (char *)uregs;
>> +}
>> +
>> +/* Put registers back to task. */
>> +static void putregs(struct task_struct *child, struct pt_regs *uregs)
>> +{
>> +	struct pt_regs *regs = task_pt_regs(child);
>> +	*regs = *uregs;
>> +}
>> +
>> +static int riscv_gpr_get(struct task_struct *target,
>> +			 const struct user_regset *regset,
>> +			 unsigned int pos, unsigned int count,
>> +			 void *kbuf, void __user *ubuf)
>> +{
>> +	struct pt_regs regs;
>> +
>> +	getregs(target, &regs);
>> +
>> +	return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &regs, 0,
>> +				   sizeof(regs));
>
> Shouldn't this be limited to sizeof(struct user_regs_struct)?
>
> Why not copy straight out of task_pt_regs(target) instead of bouncing
> via the stack?

IIRC this code used to be more complicated as it supported the two different
ptrace register APIs.  There's no reason to have this function now, so I've
just pulled it into the only caller.
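
With the bounce buffer gone, the pair ends up looking roughly like this (a
sketch of what I have locally, modulo final polish):

  static int riscv_gpr_get(struct task_struct *target,
                           const struct user_regset *regset,
                           unsigned int pos, unsigned int count,
                           void *kbuf, void __user *ubuf)
  {
          /* Copy straight out of the task's pt_regs, no stack bounce. */
          struct pt_regs *regs = task_pt_regs(target);

          return user_regset_copyout(&pos, &count, &kbuf, &ubuf, regs, 0,
                                     sizeof(*regs));
  }

  static int riscv_gpr_set(struct task_struct *target,
                           const struct user_regset *regset,
                           unsigned int pos, unsigned int count,
                           const void *kbuf, const void __user *ubuf)
  {
          /* Writes land directly in pt_regs, so a short copy can't leak
           * uninitialised kernel stack back out via a later read. */
          struct pt_regs *regs = task_pt_regs(target);

          return user_regset_copyin(&pos, &count, &kbuf, &ubuf, regs, 0,
                                    sizeof(*regs));
  }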

>> +}
>> +
>> +static int riscv_gpr_set(struct task_struct *target,
>> +			 const struct user_regset *regset,
>> +			 unsigned int pos, unsigned int count,
>> +			 const void *kbuf, const void __user *ubuf)
>> +{
>> +	int ret;
>> +	struct pt_regs regs;
>> +
>> +	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &regs, 0,
>> +				 sizeof(regs));
>
> likewise.
>
> In fact if userland supplies insufficient data then this looks
> vulnerable to a kernel stack data leak, since regs will remain partially
> uninitialised and then get written to the target regs where it can be
> read back again.
>
> If you're going to bounce via the stack I think you need to fully
> initialise before using user_regset_copyin, or you could just copy
> directly into task_pt_regs(target) for now since, at least for the
> current internal struct pt_regs, the beginning of pt_regs appears to
> match user_regs_struct.
>
>> +	if (ret)
>> +		return ret;
>> +
>> +	putregs(target, &regs);
>
> Similarly this needs to be careful not to overwrite the supervisor
> registers with whatever was on kernel stack (assuming only partially
> copied as suggested above)?
>
>> +
>> +	return 0;
>> +}
>> +
>> +
>> +static const struct user_regset riscv_user_regset[] = {
>> +	[REGSET_X] = {
>> +		.core_note_type = NT_PRSTATUS,
>> +		.n = ELF_NGREG,
>> +		.size = sizeof(elf_greg_t),
>> +		.align = sizeof(elf_greg_t),
>> +		.get = &riscv_gpr_get,
>> +		.set = &riscv_gpr_set,
>> +	},
>
> Will the FP registers get exposed at some point as well?
>
>> diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
>> new file mode 100644
>> index 000000000000..ab699efe636e
>> --- /dev/null
>> +++ b/arch/riscv/kernel/sys_riscv.c
> ...
>> +SYSCALL_DEFINE3(sysriscv_cmpxchg32, unsigned long, arg1, unsigned long, arg2,
>> +		unsigned long, arg3)
>> +{
>> +	unsigned long flags;
>> +	unsigned long prev;
>
> should that be unsigned int? Else on 64-bit half of it could be left
> uninitialised.
>
>> +	unsigned int *ptr;
>
> should that be tagged with __user?
>
>> +	unsigned int err;
>> +
>> +	ptr = (unsigned int *)arg1;
>
> I presume you'll need to cast to __user __force to keep sparse happy
> here.

This should be fixed -- I was just lazy when converting from the multiplexed
syscall version.

>
>> +	if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned int)))
>> +		return -EFAULT;
>> +
>> +	preempt_disable();
>> +	raw_local_irq_save(flags);
>> +	err = __get_user(prev, ptr);
>> +	if (likely(!err && prev == arg2))
>> +		err = __put_user(arg3, ptr);
>> +	raw_local_irq_restore(flags);
>> +	preempt_enable();
>
> Are user accesses safe from atomic context? What if it needs paging in?
>
> You could disable page faults but then I think you'd have to handle the
> EFAULT again outside of atomic context to try getting it paged in, and
> then retry in atomic context. Or perhaps there's a cleaner way that
> doesn't come to mind late at night.
>
> I'm not sure OTOH whether copy on write (i.e. affecting the __put_user()
> but not the __get_user() would be problematic. I suppose as long as it
> can safely allocate a page it should be fine... Should be possible to
> test using madvise(MADV_DONTNEED) (which I think makes pages use the
> zero page with copy-on-write).
>
> Also if this is going to be included on SMP kernels (where I gather
> proper atomics are available), does it need an SMP safe version too
> which uses proper atomics?

On 64-bit machines with the A extension (which is required for SMP), that's
the right thing to do -- we're actually doing it in the VDSO right now, but
there's no reason not to do it in the syscall as well.

On 32-bit machines, I think it's still not safe as we don't have a 64-bit CAS
even with the A extension.  I think the best thing to do is actually to
disallow the 64-bit CAS on 32-bit machines -- we could disallow this on just
SMP machines, but I think it's saner to disallow it everywhere so we don't end
up with binaries that won't run on SMP kernels.

I'll try to figure out if userspace can work without it, but I think it should
be OK as we don't have double-word CAS on 64-bit.
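
If it does end up restricted to 64-bit kernels, the uapi side would
presumably turn into something like this (sketch only -- the exact guard
isn't settled):

  #define __NR_sysriscv_cmpxchg32	(__NR_arch_specific_syscall + 0)
  __SYSCALL(__NR_sysriscv_cmpxchg32, sys_sysriscv_cmpxchg32)
  #if __BITS_PER_LONG == 64
  #define __NR_sysriscv_cmpxchg64	(__NR_arch_specific_syscall + 1)
  __SYSCALL(__NR_sysriscv_cmpxchg64, sys_sysriscv_cmpxchg64)
  #endif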

>> +
>> +	return unlikely(err) ? err : prev;
>> +}
>> +
>> +SYSCALL_DEFINE3(sysriscv_cmpxchg64, unsigned long, arg1, unsigned long, arg2,
>> +		unsigned long, arg3)
>> +{
>> +	unsigned long flags;
>> +	unsigned long prev;
>> +	unsigned int *ptr;
>
> should that be unsigned long __user *?
>
>> +	unsigned int err;
>> +
>> +	ptr = (unsigned int *)arg1;
>> +	if (!access_ok(VERIFY_WRITE, ptr, sizeof(unsigned long)))
>> +		return -EFAULT;
>> +
>> +	preempt_disable();
>> +	raw_local_irq_save(flags);
>> +	err = __get_user(prev, ptr);
>> +	if (likely(!err && prev == arg2))
>> +		err = __put_user(arg3, ptr);
>> +	raw_local_irq_restore(flags);
>> +	preempt_enable();
>
> Likewise to other comments above.
>
> This doesn't look much different to sysriscv_cmpxchg32 on 32-bit. Is it
> meant to be excluded from 32-bit kernels? If so definition of the __NR_
> constant and the __SYSCALL magic in uapi/asm/unistd.h should I presume
> be conditional on the ABI.

Sorry, that was just a copy-and-paste error.  This is intended to actually be a
64-bit CAS on 32-bit machines -- though maybe that was a bad idea.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 9/9] RISC-V: Build Infrastructure
  2017-06-28 22:54         ` James Hogan
@ 2017-06-29 22:11           ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-29 22:11 UTC (permalink / raw)
  To: james.hogan
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, paul.gortmaker, linux,
	linux-kernel, linux-arch, albert

On Wed, 28 Jun 2017 15:54:42 PDT (-0700), james.hogan@imgtec.com wrote:
> On Wed, Jun 28, 2017 at 11:55:38AM -0700, Palmer Dabbelt wrote:
>> diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
>> new file mode 100644
>> index 000000000000..7f58cd251ab8
>> --- /dev/null
>> +++ b/arch/riscv/kernel/Makefile
>> @@ -0,0 +1,16 @@
>> +#
>> +# Makefile for the RISC-V Linux kernel
>> +#
>> +
>> +extra-y := head.o vmlinux.lds
>> +
>> +obj-y	:= cpu.o entry.o irq.o process.o ptrace.o reset.o setup.o \
>> +	   signal.o syscall_table.o sys_riscv.o time.o traps.o \
>> +	   riscv_ksyms.o stacktrace.o vdso.o cacheinfo.o vdso/
>
> I would suggest splitting these sort of things onto separate lines.
> It'll make later changes much more readable (especially when you're
> escaping newlines) as well as avoiding conflicts.
>
>> +
>> +CFLAGS_setup.o := -mcmodel=medany
>> +
>> +obj-$(CONFIG_SMP)		+= smpboot.o smp.o
>
> same here
>
>> +obj-$(CONFIG_MODULES)		+= module.o
>> +
>> +clean:
>
>> diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
>> new file mode 100644
>> index 000000000000..120c38e77a46
>> --- /dev/null
>> +++ b/arch/riscv/lib/Makefile
>> @@ -0,0 +1,5 @@
>> +lib-y	:= delay.o memcpy.o memset.o uaccess.o
>
> and here
>
>> +
>> +ifeq ($(CONFIG_64BIT),)
>> +lib-y += udivdi3.o
>> +endif
>
> Would this work?
>
> lib-$(CONFIG_32BIT) += udivdi3.o
>
>> diff --git a/arch/riscv/mm/Makefile b/arch/riscv/mm/Makefile
>> new file mode 100644
>> index 000000000000..36ebe6feb5d6
>> --- /dev/null
>> +++ b/arch/riscv/mm/Makefile
>> @@ -0,0 +1 @@
>> +obj-y := init.o fault.o extable.o ioremap.o
>
> and here

Sounds good.  I'll include this in the v4

  https://github.com/riscv/riscv-linux/commit/e5a3eee8cd3e493dcd9b81509b1aa9e0302b72da

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 7/9] RISC-V: Paging and MMU
  2017-06-28 23:09         ` James Hogan
@ 2017-06-29 22:11           ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-29 22:11 UTC (permalink / raw)
  To: james.hogan
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, paul.gortmaker, linux,
	linux-kernel, linux-arch, albert

On Wed, 28 Jun 2017 16:09:59 PDT (-0700), james.hogan@imgtec.com wrote:
> On Wed, Jun 28, 2017 at 11:55:36AM -0700, Palmer Dabbelt wrote:
>> diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
>> new file mode 100644
>> index 000000000000..e1491c20d6fd
>> --- /dev/null
>> +++ b/arch/riscv/include/asm/page.h
> ...
>> +#ifdef __KERNEL__
>
> I think that's a given outside of uapi.

Thanks.  I've removed those.

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 5/9] RISC-V: Task implementation
  2017-06-28 23:32         ` James Hogan
@ 2017-06-29 22:52           ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-29 22:52 UTC (permalink / raw)
  To: james.hogan
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, paul.gortmaker, linux,
	linux-kernel, linux-arch, albert

On Wed, 28 Jun 2017 16:32:55 PDT (-0700), james.hogan@imgtec.com wrote:
> On Wed, Jun 28, 2017 at 11:55:34AM -0700, Palmer Dabbelt wrote:
>> diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
>> new file mode 100644
>> index 000000000000..1190de7a0f74
>> --- /dev/null
>> +++ b/arch/riscv/include/asm/kprobes.h
>> @@ -0,0 +1,22 @@
> ...
>> +#ifdef CONFIG_KPROBES
>> +#error "RISC-V doesn't skpport CONFIG_KPROBES"
>> +#endif
>
> I'm wondering where your fallback definition of e.g. NOKPROBE_SYMBOL
> comes from then.
>
> Could you just use the asm-generic one?

I believe so.
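
In other words the whole header can probably collapse to just pulling in the
generic version, something like (untested):

  #ifndef ASM_RISCV_KPROBES_H
  #define ASM_RISCV_KPROBES_H

  /* Picks up the !CONFIG_KPROBES fallbacks, e.g. NOKPROBE_SYMBOL(). */
  #include <asm-generic/kprobes.h>

  #endif /* ASM_RISCV_KPROBES_H */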

>> diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
>> new file mode 100644
>> index 000000000000..b13d3ea3bf79
>> --- /dev/null

> ...
>> +void show_regs(struct pt_regs *regs)
>> +{
>> +	show_regs_print_info(KERN_DEFAULT);
>> +
>> +	printk(KERN_CONT "sepc: " REG_FMT " ra : " REG_FMT " sp : " REG_FMT "\n",
>> +		regs->sepc, regs->ra, regs->sp);
>
> I've noticed inconsistent use of pr_* and printk(KERN_* in this
> patchset. Maybe now would be the best time to switch everything to pr_*.

I went through and fixed them all.
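
For the line quoted above that just means switching the continuation to
pr_cont(), i.e.:

  pr_cont("sepc: " REG_FMT " ra : " REG_FMT " sp : " REG_FMT "\n",
          regs->sepc, regs->ra, regs->sp);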

>> +	/* Reset FPU context
>> +	 *	frm: round to nearest, ties to even (IEEE default)
>> +	 *	fflags: accrued exceptions cleared
>> +	 */
>
> Similarly lots of multiline comments which don't follow the standard
> style in Documentation/process/coding-style.rst. Maybe now is the best
> time to convert if you're going to.

Someone else found one and suggested this; it's on my TODO list.

Thanks!

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 5/9] RISC-V: Task implementation
  2017-06-29  8:22       ` Tobias Klauser
@ 2017-06-29 22:52           ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-29 22:52 UTC (permalink / raw)
  To: tklauser
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, will.deacon, james.hogan, paul.gortmaker, linux,
	linux-kernel, linux-arch, albert

On Thu, 29 Jun 2017 01:22:23 PDT (-0700), tklauser@distanz.ch wrote:
> On 2017-06-28 at 20:55:34 +0200, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> [...]
>> diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
>> new file mode 100644
>> index 000000000000..1190de7a0f74
>> --- /dev/null
>> +++ b/arch/riscv/include/asm/kprobes.h
>> @@ -0,0 +1,22 @@
>> +/*
>> + * Copyright (C) 2017 SiFive
>> + *
>> + *   This program is free software; you can redistribute it and/or
>> + *   modify it under the terms of the GNU General Public License
>> + *   as published by the Free Software Foundation, version 2.
>> + *
>> + *   This program is distributed in the hope that it will be useful,
>> + *   but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + *   GNU General Public License for more details.
>> + */
>> +
>> +
>> +#ifndef ASM_RISCV_KPROBES_H
>> +#define ASM_RISCV_KPROBES_H
>> +
>> +#ifdef CONFIG_KPROBES
>> +#error "RISC-V doesn't skpport CONFIG_KPROBES"
>
> Typo: s/skpport/support/

Thanks.

>> +#endif
>> +
>> +#endif
>> diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
>> new file mode 100644
>> index 000000000000..65aa014db9b4
>> --- /dev/null
>> +++ b/arch/riscv/include/asm/processor.h
>> @@ -0,0 +1,102 @@
>> +/*
>> + * Copyright (C) 2012 Regents of the University of California
>> + *
>> + *   This program is free software; you can redistribute it and/or
>> + *   modify it under the terms of the GNU General Public License
>> + *   as published by the Free Software Foundation, version 2.
>> + *
>> + *   This program is distributed in the hope that it will be useful,
>> + *   but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + *   GNU General Public License for more details.
>> + */
>> +
>> +#ifndef _ASM_RISCV_PROCESSOR_H
>> +#define _ASM_RISCV_PROCESSOR_H
>> +
>> +#include <linux/const.h>
>> +
>> +#include <asm/ptrace.h>
>> +
>> +/*
>> + * This decides where the kernel will search for a free chunk of vm
>> + * space during mmap's.
>> + */
>> +#define TASK_UNMAPPED_BASE	PAGE_ALIGN(TASK_SIZE >> 1)
>> +
>> +#ifdef __KERNEL__
>> +#define STACK_TOP		TASK_SIZE
>> +#define STACK_TOP_MAX		STACK_TOP
>> +#define STACK_ALIGN		16
>> +#endif /* __KERNEL__ */
>> +
>> +#ifndef __ASSEMBLY__
>> +
>> +struct task_struct;
>> +struct pt_regs;
>> +
>> +/*
>> + * Default implementation of macro that returns current
>> + * instruction pointer ("program counter").
>> + */
>> +#define current_text_addr()	({ __label__ _l; _l: &&_l; })
>> +
>> +/* CPU-specific state of a task */
>> +struct thread_struct {
>> +	/* Callee-saved registers */
>> +	unsigned long ra;
>> +	unsigned long sp;	/* Kernel mode stack */
>> +	unsigned long s[12];	/* s[0]: frame pointer */
>> +	struct __riscv_d_ext_state fstate;
>> +};
>> +
>> +#define INIT_THREAD {					\
>> +	.sp = sizeof(init_stack) + (long)&init_stack,	\
>> +}
>> +
>> +/* Return saved (kernel) PC of a blocked thread. */
>> +#define thread_saved_pc(t)	((t)->thread.ra)
>> +#define thread_saved_sp(t)	((t)->thread.sp)
>> +#define thread_saved_fp(t)	((t)->thread.s[0])
>
> These aren't needed outside of arch-specific code (anymore) and the
> riscv port doesn't seem to be using them, so they can be omitted.

Great.  I've removed them; I'll include that in the v4:

  diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
  index 53768048b9de..b62873633bde 100644
  --- a/arch/riscv/include/asm/processor.h
  +++ b/arch/riscv/include/asm/processor.h
  @@ -52,11 +52,6 @@ struct thread_struct {
          .sp = sizeof(init_stack) + (long)&init_stack,   \
   }

  -/* Return saved (kernel) PC of a blocked thread. */
  -#define thread_saved_pc(t)     ((t)->thread.ra)
  -#define thread_saved_sp(t)     ((t)->thread.sp)
  -#define thread_saved_fp(t)     ((t)->thread.s[0])
  -
   #define task_pt_regs(tsk)                                              \
          ((struct pt_regs *)(task_stack_page(tsk) + THREAD_SIZE          \
                              - ALIGN(sizeof(struct pt_regs), STACK_ALIGN)))

Thanks!

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 6/9] RISC-V: Device, timer, IRQs, and the SBI
  2017-06-29  8:39       ` Tobias Klauser
@ 2017-06-29 22:52           ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-29 22:52 UTC (permalink / raw)
  To: tklauser
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, will.deacon, james.hogan, paul.gortmaker, linux,
	linux-kernel, linux-arch, albert

On Thu, 29 Jun 2017 01:39:25 PDT (-0700), tklauser@distanz.ch wrote:
> On 2017-06-28 at 20:55:35 +0200, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> [...]
>> diff --git a/arch/riscv/include/asm/device.h b/arch/riscv/include/asm/device.h
>> new file mode 100644
>> index 000000000000..28975e528d2f
>> --- /dev/null
>> +++ b/arch/riscv/include/asm/device.h
>> @@ -0,0 +1,27 @@
>> +/*
>> + * Copyright (C) 2016 SiFive
>> + *
>> + *   This program is free software; you can redistribute it and/or
>> + *   modify it under the terms of the GNU General Public License
>> + *   as published by the Free Software Foundation, version 2.
>> + *
>> + *   This program is distributed in the hope that it will be useful,
>> + *   but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + *   GNU General Public License for more details.
>> + */
>> +
>> +
>> +#ifndef _ASM_RISCV_DEVICE_H
>> +#define _ASM_RISCV_DEVICE_H
>> +
>> +#include <linux/sysfs.h>
>> +
>> +struct dev_archdata {
>> +	struct dma_map_ops *dma_ops;
>> +};
>
> The dma_ops member isn't used in any arch code or driver from what I can
> tell (I checked against your riscv-for-submission-v3 branch). Could
> device.h from asm-generic be used instead, or did I miss something?

I think you're right.  I'll include this

  diff --git a/arch/riscv/include/asm/device.h b/arch/riscv/include/asm/device.h
  index 28975e528d2f..a310c2c26101 100644
  --- a/arch/riscv/include/asm/device.h
  +++ b/arch/riscv/include/asm/device.h
  @@ -11,17 +11,9 @@
    *   GNU General Public License for more details.
    */

  -
   #ifndef _ASM_RISCV_DEVICE_H
   #define _ASM_RISCV_DEVICE_H

  -#include <linux/sysfs.h>
  -
  -struct dev_archdata {
  -       struct dma_map_ops *dma_ops;
  -};
  -
  -struct pdev_archdata {
  -};
  +#include <asm-generic/device.h>

   #endif /* _ASM_RISCV_DEVICE_H */

in the v4

>
>> +
>> +struct pdev_archdata {
>> +};
>> +
>> +#endif /* _ASM_RISCV_DEVICE_H */

Thanks!

^ permalink raw reply	[flat|nested] 283+ messages in thread
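
For reference, the asm-generic version being switched to is essentially a
pair of empty structs, so dropping the unused dma_ops member does not change
behaviour.  A from-memory sketch of include/asm-generic/device.h:

  struct dev_archdata {
  };

  struct pdev_archdata {
  };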

* Re: [PATCH 1/9] RISC-V: Init and Halt Code
  2017-06-29  9:44         ` Geert Uytterhoeven
@ 2017-06-29 22:52           ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-06-29 22:52 UTC (permalink / raw)
  To: geert
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert

On Thu, 29 Jun 2017 02:44:32 PDT (-0700), geert@linux-m68k.org wrote:
> Hi Palmer,
>
> On Wed, Jun 28, 2017 at 8:55 PM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>> This contains the various __init C functions, the initial assembly
>> kernel entry point, and the code to reset the system.  When a file was
>> init-related, it contains
>
> ...?

Whoops.  I'm not even sure how to make a sentence out of that -- I guess I've
been mangling too many patch sets...

I'll try and be coherent next time :).

>
>> Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
>
> Gr{oetje,eeting}s,
>
>                         Geert

^ permalink raw reply	[flat|nested] 283+ messages in thread

* Re: [PATCH 6/9] RISC-V: Device, timer, IRQs, and the SBI
  2017-06-29 22:52           ` Palmer Dabbelt
  (?)
@ 2017-06-30  7:57           ` Tobias Klauser
  -1 siblings, 0 replies; 283+ messages in thread
From: Tobias Klauser @ 2017-06-30  7:57 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, will.deacon, james.hogan, paul.gortmaker, linux,
	linux-kernel, linux-arch, albert

On 2017-06-30 at 00:52:44 +0200, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> On Thu, 29 Jun 2017 01:39:25 PDT (-0700), tklauser@distanz.ch wrote:
> > On 2017-06-28 at 20:55:35 +0200, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> > [...]
> >> diff --git a/arch/riscv/include/asm/device.h b/arch/riscv/include/asm/device.h
> >> new file mode 100644
> >> index 000000000000..28975e528d2f
> >> --- /dev/null
> >> +++ b/arch/riscv/include/asm/device.h
> >> @@ -0,0 +1,27 @@
> >> +/*
> >> + * Copyright (C) 2016 SiFive
> >> + *
> >> + *   This program is free software; you can redistribute it and/or
> >> + *   modify it under the terms of the GNU General Public License
> >> + *   as published by the Free Software Foundation, version 2.
> >> + *
> >> + *   This program is distributed in the hope that it will be useful,
> >> + *   but WITHOUT ANY WARRANTY; without even the implied warranty of
> >> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> >> + *   GNU General Public License for more details.
> >> + */
> >> +
> >> +
> >> +#ifndef _ASM_RISCV_DEVICE_H
> >> +#define _ASM_RISCV_DEVICE_H
> >> +
> >> +#include <linux/sysfs.h>
> >> +
> >> +struct dev_archdata {
> >> +	struct dma_map_ops *dma_ops;
> >> +};
> >
> > The dma_ops member isn't used in any arch code or driver from what I can
> > tell (I checked against your riscv-for-submission-v3 branch). Could
> > device.h from asm-generic be used instead, or did I miss something?
> 
> I think you're right.  I'll include this
> 
>   diff --git a/arch/riscv/include/asm/device.h b/arch/riscv/include/asm/device.h
>   index 28975e528d2f..a310c2c26101 100644
>   --- a/arch/riscv/include/asm/device.h
>   +++ b/arch/riscv/include/asm/device.h
>   @@ -11,17 +11,9 @@
>     *   GNU General Public License for more details.
>     */
> 
>   -
>    #ifndef _ASM_RISCV_DEVICE_H
>    #define _ASM_RISCV_DEVICE_H
> 
>   -#include <linux/sysfs.h>
>   -
>   -struct dev_archdata {
>   -       struct dma_map_ops *dma_ops;
>   -};
>   -
>   -struct pdev_archdata {
>   -};
>   +#include <asm-generic/device.h>
> 
>    #endif /* _ASM_RISCV_DEVICE_H */
> 
> in the v4

Better yet, remove arch/riscv/include/asm/device.h altogether and add device.h
to generic-y in the asm/Kbuild file, as follows:

diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
index 710397395981..52b254e26378 100644
--- a/arch/riscv/include/asm/Kbuild
+++ b/arch/riscv/include/asm/Kbuild
@@ -3,6 +3,7 @@ generic-y += cacheflush.h
 generic-y += checksum.h
 generic-y += clkdev.h
 generic-y += cputime.h
+generic-y += device.h
 generic-y += div64.h
 generic-y += dma.h
 generic-y += dma-contiguous.h
diff --git a/arch/riscv/include/asm/device.h b/arch/riscv/include/asm/device.h
deleted file mode 100644
index 28975e528d2f..000000000000
--- a/arch/riscv/include/asm/device.h
+++ /dev/null
@@ -1,27 +0,0 @@
-/*
- * Copyright (C) 2016 SiFive
- *
- *   This program is free software; you can redistribute it and/or
- *   modify it under the terms of the GNU General Public License
- *   as published by the Free Software Foundation, version 2.
- *
- *   This program is distributed in the hope that it will be useful,
- *   but WITHOUT ANY WARRANTY; without even the implied warranty of
- *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- *   GNU General Public License for more details.
- */
-
-
-#ifndef _ASM_RISCV_DEVICE_H
-#define _ASM_RISCV_DEVICE_H
-
-#include <linux/sysfs.h>
-
-struct dev_archdata {
-       struct dma_map_ops *dma_ops;
-};
-
-struct pdev_archdata {
-};
-
-#endif /* _ASM_RISCV_DEVICE_H */

> 
> >
> >> +
> >> +struct pdev_archdata {
> >> +};
> >> +
> >> +#endif /* _ASM_RISCV_DEVICE_H */
> 
> Thanks!
> 

^ permalink raw reply related	[flat|nested] 283+ messages in thread
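
For readers unfamiliar with the generic-y mechanism: listing a header there
makes Kbuild emit a one-line wrapper under arch/riscv/include/generated/asm/
instead of the hand-written copy, roughly:

  /* generated arch/riscv/include/generated/asm/device.h (sketch) */
  #include <asm-generic/device.h>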

* Re: [PATCH 8/9] RISC-V: User-facing API
  2017-06-29 21:42           ` Palmer Dabbelt
@ 2017-07-03 23:06             ` James Hogan
  -1 siblings, 0 replies; 283+ messages in thread
From: James Hogan @ 2017-07-03 23:06 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, paul.gortmaker, linux,
	linux-kernel, linux-arch, albert

[-- Attachment #1: Type: text/plain, Size: 1677 bytes --]

On Thu, Jun 29, 2017 at 02:42:38PM -0700, Palmer Dabbelt wrote:
> On Wed, 28 Jun 2017 15:42:37 PDT (-0700), james.hogan@imgtec.com wrote:
> > On Wed, Jun 28, 2017 at 11:55:37AM -0700, Palmer Dabbelt wrote:
> >> diff --git a/arch/riscv/include/uapi/asm/ucontext.h b/arch/riscv/include/uapi/asm/ucontext.h
> >> new file mode 100644
> >> index 000000000000..52eff9febcfd
> >> --- /dev/null
> >> +++ b/arch/riscv/include/uapi/asm/ucontext.h
> > ...
> >> +struct ucontext {
> >> +	unsigned long	  uc_flags;
> >> +	struct ucontext	 *uc_link;
> >> +	stack_t		  uc_stack;
> >> +	sigset_t	  uc_sigmask;
> >> +	/* glibc uses a 1024-bit sigset_t */
> >> +	__u8		  __unused[1024 / 8 - sizeof(sigset_t)];
> >> +	/* last for future expansion */
> >> +	struct sigcontext uc_mcontext;
> >> +};
> >
> > Any particular reason not to use the asm-generic ucontext?
> 
> In the generic ucontext, 'uc_sigmask' is at the end of the structure so it can
> be expanded.  Since we want our mcontext to be expandable as well, we
> pre-allocate some expandable space for sigmask and then put mcontext at the
> end.
> 
> We stole this idea from arm64.

Curious. __unused seems like overkill, to be honest, given that expanding
the number of signals up to 128 causes other issues (as discovered on
MIPS, e.g. with the waitpid() status, where stopsig doesn't fit below the
exit code (shift 8) and the core dump flag (bit 7)), but perhaps it could
be carefully expanded by splitting the stopsig field.

Looks harmless here, I suppose, so I defer to others. If it is the
preferred approach, does it make sense to make it the "default" for new
architectures at some point?

Cheers
James

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 283+ messages in thread
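
For context on the packing constraint mentioned above, a rough sketch of the
classic wait(2) status word (from memory, not part of this patch set):

  /*
   *   bits 0-6   terminating signal, or 0x7f to mark a stopped child
   *   bit  7     core-dump flag
   *   bits 8-15  exit code, or the stopping signal for a stopped child
   *
   * Only seven bits are available in the low byte, so growing the signal
   * space towards 128 signals collides with the 0x7f "stopped" marker and
   * the core-dump flag unless these fields are repacked.
   */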

* Re: [PATCH 8/9] RISC-V: User-facing API
  2017-07-03 23:06             ` James Hogan
@ 2017-07-05 16:49               ` Palmer Dabbelt
  -1 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-07-05 16:49 UTC (permalink / raw)
  To: james.hogan
  Cc: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, paul.gortmaker, linux,
	linux-kernel, linux-arch, albert

On Mon, 03 Jul 2017 16:06:39 PDT (-0700), james.hogan@imgtec.com wrote:
> On Thu, Jun 29, 2017 at 02:42:38PM -0700, Palmer Dabbelt wrote:
>> On Wed, 28 Jun 2017 15:42:37 PDT (-0700), james.hogan@imgtec.com wrote:
>> > On Wed, Jun 28, 2017 at 11:55:37AM -0700, Palmer Dabbelt wrote:
>> >> diff --git a/arch/riscv/include/uapi/asm/ucontext.h b/arch/riscv/include/uapi/asm/ucontext.h
>> >> new file mode 100644
>> >> index 000000000000..52eff9febcfd
>> >> --- /dev/null
>> >> +++ b/arch/riscv/include/uapi/asm/ucontext.h
>> > ...
>> >> +struct ucontext {
>> >> +	unsigned long	  uc_flags;
>> >> +	struct ucontext	 *uc_link;
>> >> +	stack_t		  uc_stack;
>> >> +	sigset_t	  uc_sigmask;
>> >> +	/* glibc uses a 1024-bit sigset_t */
>> >> +	__u8		  __unused[1024 / 8 - sizeof(sigset_t)];
>> >> +	/* last for future expansion */
>> >> +	struct sigcontext uc_mcontext;
>> >> +};
>> >
>> > Any particular reason not to use the asm-generic ucontext?
>>
>> In the generic ucontext, 'uc_sigmask' is at the end of the structure so it can
>> be expanded.  Since we want our mcontext to be expandable as well, we
>> pre-allocate some expandable space for sigmask and then put mcontext at the
>> end.
>>
>> We stole this idea from arm64.
>
> Curious. __unused seems like overkill to be honest given that expanding
> the number of signals up to 128 causes other issues (as discovered on
> MIPS e.g. the waitpid() status, with stopsig not fitting below the exit
> code (shift 8) and core dump flag (bit 7)), but perhaps it could be
> carefully expanded by splitting the stopsig field.

Sorry, I don't understand the intricacies of this in the slightest.  In general
we try to avoid surprises in software land in RISC-V, so whenever we do
something we go look at the most popular architectures (Intel and ARM) and try
to ensure we don't paint ourselves into any corners that they managed to avoid.

> Looks harmless here I suppose so I defer to others. If it is the
> preferred approach does it make sense to make it the "default" for new
> architectures at some point?

Again, this isn't really my thing, but we chose this because we thought it was
the sane way to do it.  Unless we're doing something silly, I don't see why it
wouldn't be a reasonable default.  This is predicated on having expandable
architectural state; otherwise, putting sigmask at the end seems sane.

^ permalink raw reply	[flat|nested] 283+ messages in thread
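
For comparison, the asm-generic ucontext keeps the signal mask as its final
member; the sketch below approximates include/uapi/asm-generic/ucontext.h and
is shown only to make the layout difference visible:

  struct ucontext {
  	unsigned long	  uc_flags;
  	struct ucontext	 *uc_link;
  	stack_t		  uc_stack;
  	struct sigcontext uc_mcontext;
  	sigset_t	  uc_sigmask;	/* mask last for extensibility */
  };

The RISC-V (and arm64) variant flips the last two members and reserves padding
after uc_sigmask, so that uc_mcontext is the part that can grow.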

* [PATCH 6/9] RISC-V: Device, timer, IRQs, and the SBI
  2017-07-04 19:50 RISC-V Linux Port v4 Palmer Dabbelt
@ 2017-07-04 19:50   ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-07-04 19:50 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert, patches
  Cc: Palmer Dabbelt

This patch contains code that interfaces with devices that are mandated
by the RISC-V supervisor specification and that don't have explicit
drivers anywhere else in the tree.  This includes the statically defined
interrupts, the CSR-mapped timer, and virtualized SBI devices.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/delay.h       |  28 +++++++++
 arch/riscv/include/asm/dma-mapping.h |  38 ++++++++++++
 arch/riscv/include/asm/irq.h         |  31 ++++++++++
 arch/riscv/include/asm/irqflags.h    |  63 ++++++++++++++++++++
 arch/riscv/include/asm/pci.h         |  48 +++++++++++++++
 arch/riscv/include/asm/sbi.h         | 100 +++++++++++++++++++++++++++++++
 arch/riscv/include/asm/timex.h       |  59 +++++++++++++++++++
 arch/riscv/lib/delay.c               | 110 +++++++++++++++++++++++++++++++++++
 arch/riscv/mm/ioremap.c              |  92 +++++++++++++++++++++++++++++
 9 files changed, 569 insertions(+)
 create mode 100644 arch/riscv/include/asm/delay.h
 create mode 100644 arch/riscv/include/asm/dma-mapping.h
 create mode 100644 arch/riscv/include/asm/irq.h
 create mode 100644 arch/riscv/include/asm/irqflags.h
 create mode 100644 arch/riscv/include/asm/pci.h
 create mode 100644 arch/riscv/include/asm/sbi.h
 create mode 100644 arch/riscv/include/asm/timex.h
 create mode 100644 arch/riscv/lib/delay.c
 create mode 100644 arch/riscv/mm/ioremap.c

diff --git a/arch/riscv/include/asm/delay.h b/arch/riscv/include/asm/delay.h
new file mode 100644
index 000000000000..cbb0c9eb96cb
--- /dev/null
+++ b/arch/riscv/include/asm/delay.h
@@ -0,0 +1,28 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2016 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_DELAY_H
+#define _ASM_RISCV_DELAY_H
+
+extern unsigned long riscv_timebase;
+
+#define udelay udelay
+extern void udelay(unsigned long usecs);
+
+#define ndelay ndelay
+extern void ndelay(unsigned long nsecs);
+
+extern void __delay(unsigned long cycles);
+
+#endif /* _ASM_RISCV_DELAY_H */
diff --git a/arch/riscv/include/asm/dma-mapping.h b/arch/riscv/include/asm/dma-mapping.h
new file mode 100644
index 000000000000..3eec1000196d
--- /dev/null
+++ b/arch/riscv/include/asm/dma-mapping.h
@@ -0,0 +1,38 @@
+/*
+ * Copyright (C) 2003-2004 Hewlett-Packard Co
+ *	David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2016 SiFive, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_RISCV_DMA_MAPPING_H
+#define __ASM_RISCV_DMA_MAPPING_H
+
+/* Use ops->dma_mapping_error (if it exists) or assume success */
+// #undef DMA_ERROR_CODE
+
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
+{
+	return &dma_noop_ops;
+}
+
+static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
+{
+	if (!dev->dma_mask)
+		return false;
+
+	return addr + size - 1 <= *dev->dma_mask;
+}
+
+#endif	/* __ASM_RISCV_DMA_MAPPING_H */
diff --git a/arch/riscv/include/asm/irq.h b/arch/riscv/include/asm/irq.h
new file mode 100644
index 000000000000..e64c61e89f76
--- /dev/null
+++ b/arch/riscv/include/asm/irq.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_IRQ_H
+#define _ASM_RISCV_IRQ_H
+
+#define NR_IRQS         0
+
+#define INTERRUPT_CAUSE_SOFTWARE    1
+#define INTERRUPT_CAUSE_TIMER       5
+#define INTERRUPT_CAUSE_EXTERNAL    9
+
+void riscv_timer_interrupt(void);
+
+#include <asm-generic/irq.h>
+
+/* The value of csr sie before init_traps runs (core is up) */
+DECLARE_PER_CPU(atomic_long_t, riscv_early_sie);
+
+#endif /* _ASM_RISCV_IRQ_H */
diff --git a/arch/riscv/include/asm/irqflags.h b/arch/riscv/include/asm/irqflags.h
new file mode 100644
index 000000000000..6fdc860d7f84
--- /dev/null
+++ b/arch/riscv/include/asm/irqflags.h
@@ -0,0 +1,63 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef _ASM_RISCV_IRQFLAGS_H
+#define _ASM_RISCV_IRQFLAGS_H
+
+#include <asm/processor.h>
+#include <asm/csr.h>
+
+/* read interrupt enabled status */
+static inline unsigned long arch_local_save_flags(void)
+{
+	return csr_read(sstatus);
+}
+
+/* unconditionally enable interrupts */
+static inline void arch_local_irq_enable(void)
+{
+	csr_set(sstatus, SR_IE);
+}
+
+/* unconditionally disable interrupts */
+static inline void arch_local_irq_disable(void)
+{
+	csr_clear(sstatus, SR_IE);
+}
+
+/* get status and disable interrupts */
+static inline unsigned long arch_local_irq_save(void)
+{
+	return csr_read_clear(sstatus, SR_IE);
+}
+
+/* test flags */
+static inline int arch_irqs_disabled_flags(unsigned long flags)
+{
+	return !(flags & SR_IE);
+}
+
+/* test hardware interrupt enable bit */
+static inline int arch_irqs_disabled(void)
+{
+	return arch_irqs_disabled_flags(arch_local_save_flags());
+}
+
+/* set interrupt enabled status */
+static inline void arch_local_irq_restore(unsigned long flags)
+{
+	csr_set(sstatus, flags & SR_IE);
+}
+
+#endif /* _ASM_RISCV_IRQFLAGS_H */
diff --git a/arch/riscv/include/asm/pci.h b/arch/riscv/include/asm/pci.h
new file mode 100644
index 000000000000..285747fa2ef0
--- /dev/null
+++ b/arch/riscv/include/asm/pci.h
@@ -0,0 +1,48 @@
+/*
+ * Copyright (C) 2016 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef __ASM_RISCV_PCI_H
+#define __ASM_RISCV_PCI_H
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/io.h>
+
+#define PCIBIOS_MIN_IO		0
+#define PCIBIOS_MIN_MEM		0
+
+/* RISC-V shim does not initialize PCI bus */
+#define pcibios_assign_all_busses() 1
+
+/* RISC-V TileLink and PCIe share the same address space */
+#define PCI_DMA_BUS_IS_PHYS 1
+
+extern int isa_dma_bridge_buggy;
+
+#ifdef CONFIG_PCI
+static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
+{
+	/* no legacy IRQ on risc-v */
+	return -ENODEV;
+}
+
+static inline int pci_proc_domain(struct pci_bus *bus)
+{
+	/* always show the domain in /proc */
+	return 1;
+}
+#endif  /* CONFIG_PCI */
+
+#endif  /* __ASM_PCI_H */
diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
new file mode 100644
index 000000000000..b6bb10b92fe2
--- /dev/null
+++ b/arch/riscv/include/asm/sbi.h
@@ -0,0 +1,100 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SBI_H
+#define _ASM_RISCV_SBI_H
+
+#include <linux/types.h>
+
+#define SBI_SET_TIMER 0
+#define SBI_CONSOLE_PUTCHAR 1
+#define SBI_CONSOLE_GETCHAR 2
+#define SBI_CLEAR_IPI 3
+#define SBI_SEND_IPI 4
+#define SBI_REMOTE_FENCE_I 5
+#define SBI_REMOTE_SFENCE_VMA 6
+#define SBI_REMOTE_SFENCE_VMA_ASID 7
+#define SBI_SHUTDOWN 8
+
+#define SBI_CALL(which, arg0, arg1, arg2) ({			\
+	register uintptr_t a0 asm ("a0") = (uintptr_t)(arg0);	\
+	register uintptr_t a1 asm ("a1") = (uintptr_t)(arg1);	\
+	register uintptr_t a2 asm ("a2") = (uintptr_t)(arg2);	\
+	register uintptr_t a7 asm ("a7") = (uintptr_t)(which);	\
+	asm volatile ("ecall"					\
+		      : "+r" (a0)				\
+		      : "r" (a1), "r" (a2), "r" (a7)		\
+		      : "memory");				\
+	a0;							\
+})
+
+/* Lazy implementations until SBI is finalized */
+#define SBI_CALL_0(which) SBI_CALL(which, 0, 0, 0)
+#define SBI_CALL_1(which, arg0) SBI_CALL(which, arg0, 0, 0)
+#define SBI_CALL_2(which, arg0, arg1) SBI_CALL(which, arg0, arg1, 0)
+
+static inline void sbi_console_putchar(int ch)
+{
+	SBI_CALL_1(SBI_CONSOLE_PUTCHAR, ch);
+}
+
+static inline int sbi_console_getchar(void)
+{
+	return SBI_CALL_0(SBI_CONSOLE_GETCHAR);
+}
+
+static inline void sbi_set_timer(uint64_t stime_value)
+{
+#if __riscv_xlen == 32
+	SBI_CALL_2(SBI_SET_TIMER, stime_value, stime_value >> 32);
+#else
+	SBI_CALL_1(SBI_SET_TIMER, stime_value);
+#endif
+}
+
+static inline void sbi_shutdown(void)
+{
+	SBI_CALL_0(SBI_SHUTDOWN);
+}
+
+static inline void sbi_clear_ipi(void)
+{
+	SBI_CALL_0(SBI_CLEAR_IPI);
+}
+
+static inline void sbi_send_ipi(const unsigned long *hart_mask)
+{
+	SBI_CALL_1(SBI_SEND_IPI, hart_mask);
+}
+
+static inline void sbi_remote_fence_i(const unsigned long *hart_mask)
+{
+	SBI_CALL_1(SBI_REMOTE_FENCE_I, hart_mask);
+}
+
+static inline void sbi_remote_sfence_vma(const unsigned long *hart_mask,
+					 unsigned long start,
+					 unsigned long size)
+{
+	SBI_CALL_1(SBI_REMOTE_SFENCE_VMA, hart_mask);
+}
+
+static inline void sbi_remote_sfence_vma_asid(const unsigned long *hart_mask,
+					      unsigned long start,
+					      unsigned long size,
+					      unsigned long asid)
+{
+	SBI_CALL_1(SBI_REMOTE_SFENCE_VMA_ASID, hart_mask);
+}
+
+#endif
diff --git a/arch/riscv/include/asm/timex.h b/arch/riscv/include/asm/timex.h
new file mode 100644
index 000000000000..3df4932d8964
--- /dev/null
+++ b/arch/riscv/include/asm/timex.h
@@ -0,0 +1,59 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_TIMEX_H
+#define _ASM_RISCV_TIMEX_H
+
+#include <asm/param.h>
+
+typedef unsigned long cycles_t;
+
+static inline cycles_t get_cycles(void)
+{
+	cycles_t n;
+
+	__asm__ __volatile__ (
+		"rdtime %0"
+		: "=r" (n));
+	return n;
+}
+
+#ifdef CONFIG_64BIT
+static inline uint64_t get_cycles64(void)
+{
+        return get_cycles();
+}
+#else
+static inline uint64_t get_cycles64(void)
+{
+	u32 lo, hi, tmp;
+	__asm__ __volatile__ (
+		"1:\n"
+		"rdtimeh %0\n"
+		"rdtime %1\n"
+		"rdtimeh %2\n"
+		"bne %0, %2, 1b"
+		: "=&r" (hi), "=&r" (lo), "=&r" (tmp));
+	return ((u64)hi << 32) | lo;
+}
+#endif
+
+#define ARCH_HAS_READ_CURRENT_TIMER
+
+static inline int read_current_timer(unsigned long *timer_val)
+{
+	*timer_val = get_cycles();
+	return 0;
+}
+
+#endif /* _ASM_RISCV_TIMEX_H */
diff --git a/arch/riscv/lib/delay.c b/arch/riscv/lib/delay.c
new file mode 100644
index 000000000000..1cc4ac3964b4
--- /dev/null
+++ b/arch/riscv/lib/delay.c
@@ -0,0 +1,110 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/delay.h>
+#include <linux/param.h>
+#include <linux/timex.h>
+#include <linux/export.h>
+
+/*
+ * This is copied from arch/arm/include/asm/delay.h
+ *
+ * Loop (or tick) based delay:
+ *
+ * loops = loops_per_jiffy * jiffies_per_sec * delay_us / us_per_sec
+ *
+ * where:
+ *
+ * jiffies_per_sec = HZ
+ * us_per_sec = 1000000
+ *
+ * Therefore the constant part is HZ / 1000000 which is a small
+ * fractional number. To make this usable with integer math, we
+ * scale up this constant by 2^31, perform the actual multiplication,
+ * and scale the result back down by 2^31 with a simple shift:
+ *
+ * loops = (loops_per_jiffy * delay_us * UDELAY_MULT) >> 31
+ *
+ * where:
+ *
+ * UDELAY_MULT = 2^31 * HZ / 1000000
+ *             = (2^31 / 1000000) * HZ
+ *             = 2147.483648 * HZ
+ *             = 2147 * HZ + 483648 * HZ / 1000000
+ *
+ * 31 is the biggest scale shift value that won't overflow 32 bits for
+ * delay_us * UDELAY_MULT assuming HZ <= 1000 and delay_us <= 2000.
+ */
+#define MAX_UDELAY_US	2000
+#define MAX_UDELAY_HZ	1000
+#define UDELAY_MULT	(2147UL * HZ + 483648UL * HZ / 1000000UL)
+#define UDELAY_SHIFT	31
+
+#if HZ > MAX_UDELAY_HZ
+#error "HZ > MAX_UDELAY_HZ"
+#endif
+
+/*
+ * RISC-V supports both UDELAY and NDELAY.  This is largely the same as above,
+ * but with different constants.  I added 10 bits to the shift to get this, but
+ * the result is that I need a 64-bit multiply, which is slow on 32-bit
+ * platforms.
+ *
+ * NDELAY_MULT = 2^41 * HZ / 1000000000
+ *             = (2^41 / 1000000000) * HZ
+ *             = 2199.02325555 * HZ
+ *             = 2199 * HZ + 23255550 * HZ / 1000000000
+ *
+ * The maximum here is to avoid 64-bit overflow, but it isn't checked as it
+ * won't happen.
+ */
+#define MAX_NDELAY_NS   (1ULL << 42)
+#define MAX_NDELAY_HZ	MAX_UDELAY_HZ
+#define NDELAY_MULT	((unsigned long long)(2199ULL * HZ + 23255550ULL * HZ / 1000000000ULL))
+#define NDELAY_SHIFT	41
+
+#if HZ > MAX_NDELAY_HZ
+#error "HZ > MAX_NDELAY_HZ"
+#endif
+
+void __delay(unsigned long cycles)
+{
+	u64 t0 = get_cycles();
+
+	while ((unsigned long)(get_cycles() - t0) < cycles)
+		cpu_relax();
+}
+
+void udelay(unsigned long usecs)
+{
+	unsigned long ucycles = usecs * lpj_fine * UDELAY_MULT;
+
+	if (unlikely(usecs > MAX_UDELAY_US)) {
+		__delay((u64)usecs * riscv_timebase / 1000000ULL);
+		return;
+	}
+
+	__delay(ucycles >> UDELAY_SHIFT);
+}
+EXPORT_SYMBOL(udelay);
+
+void ndelay(unsigned long nsecs)
+{
+	/*
+	 * This doesn't bother checking for overflow, as it won't happen
+	 * (that would be more than an hour of delay).
+	 */
+	unsigned long long ncycles = nsecs * lpj_fine * NDELAY_MULT;
+	__delay(ncycles >> NDELAY_SHIFT);
+}
+EXPORT_SYMBOL(ndelay);
diff --git a/arch/riscv/mm/ioremap.c b/arch/riscv/mm/ioremap.c
new file mode 100644
index 000000000000..e99194a4077e
--- /dev/null
+++ b/arch/riscv/mm/ioremap.c
@@ -0,0 +1,92 @@
+/*
+ * (C) Copyright 1995 1996 Linus Torvalds
+ * (C) Copyright 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/export.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <linux/io.h>
+
+#include <asm/pgtable.h>
+
+/*
+ * Remap an arbitrary physical address space into the kernel virtual
+ * address space. Needed when the kernel wants to access high addresses
+ * directly.
+ *
+ * NOTE! We need to allow non-page-aligned mappings too: we will obviously
+ * have to convert them into an offset in a page-aligned mapping, but the
+ * caller shouldn't need to know that small detail.
+ */
+static void __iomem *__ioremap_caller(phys_addr_t addr, size_t size,
+	pgprot_t prot, void *caller)
+{
+	phys_addr_t last_addr;
+	unsigned long offset, vaddr;
+	struct vm_struct *area;
+
+	/* Disallow wrap-around or zero size */
+	last_addr = addr + size - 1;
+	if (!size || last_addr < addr)
+		return NULL;
+
+	/* Page-align mappings */
+	offset = addr & (~PAGE_MASK);
+	addr &= PAGE_MASK;
+	size = PAGE_ALIGN(size + offset);
+
+	area = get_vm_area_caller(size, VM_IOREMAP, caller);
+	if (!area)
+		return NULL;
+	vaddr = (unsigned long)area->addr;
+
+	if (ioremap_page_range(vaddr, vaddr + size, addr, prot)) {
+		free_vm_area(area);
+		return NULL;
+	}
+
+	return (void __iomem *)(vaddr + offset);
+}
+
+/*
+ * ioremap     -   map bus memory into CPU space
+ * @offset:    bus address of the memory
+ * @size:      size of the resource to map
+ *
+ * ioremap performs a platform specific sequence of operations to
+ * make bus memory CPU accessible via the readb/readw/readl/writeb/
+ * writew/writel functions and the other mmio helpers. The returned
+ * address is not guaranteed to be usable directly as a virtual
+ * address.
+ *
+ * Must be freed with iounmap.
+ */
+void __iomem *ioremap(phys_addr_t offset, unsigned long size)
+{
+	return __ioremap_caller(offset, size, PAGE_KERNEL,
+		__builtin_return_address(0));
+}
+EXPORT_SYMBOL(ioremap);
+
+
+/**
+ * iounmap - Free a IO remapping
+ * @addr: virtual address from ioremap_*
+ *
+ * Caller must ensure there is only one unmapping for the same pointer.
+ */
+void iounmap(void __iomem *addr)
+{
+	vunmap((void *)((unsigned long)addr & PAGE_MASK));
+}
+EXPORT_SYMBOL(iounmap);
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread
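
As an aside on the fixed-point scaling in lib/delay.c above, a small
standalone check shows the scaled multiply landing on the expected cycle
count.  The HZ value and the 1 MHz timebase below are assumptions for
illustration, not taken from any real configuration:

  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>

  #define HZ		250UL	/* assumed; must stay <= MAX_UDELAY_HZ */
  #define UDELAY_MULT	(2147UL * HZ + 483648UL * HZ / 1000000UL)
  #define UDELAY_SHIFT	31

  int main(void)
  {
  	/* Pretend a 1 MHz timebase, so lpj_fine is cycles per jiffy. */
  	uint64_t lpj_fine = 1000000UL / HZ;
  	uint64_t usecs = 2000;	/* MAX_UDELAY_US */

  	/* usecs * UDELAY_MULT must fit in 32 bits on the kernel's fast path. */
  	assert(usecs * UDELAY_MULT < (1ULL << 32));

  	uint64_t cycles = (usecs * lpj_fine * UDELAY_MULT) >> UDELAY_SHIFT;

  	/* At one cycle per microsecond this prints roughly 2000 cycles. */
  	printf("udelay(%llu us) ~ %llu cycles\n",
  	       (unsigned long long)usecs, (unsigned long long)cycles);
  	return 0;
  }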

* [PATCH 6/9] RISC-V: Device, timer, IRQs, and the SBI
@ 2017-07-04 19:50   ` Palmer Dabbelt
  0 siblings, 0 replies; 283+ messages in thread
From: Palmer Dabbelt @ 2017-07-04 19:50 UTC (permalink / raw)
  To: peterz, mingo, mcgrof, viro, sfr, nicolas.dichtel, rmk+kernel,
	msalter, tklauser, will.deacon, james.hogan, paul.gortmaker,
	linux, linux-kernel, linux-arch, albert, patches
  Cc: Palmer Dabbelt

This patch contains code that interfaces with devices that are mandated
by the RISC-V supervisor specification and that don't have explicit
drivers anywhere else in the tree.  This includes the statically defined
interrupts, the CSR-mapped timer, and virtualized SBI devices.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/include/asm/delay.h       |  28 +++++++++
 arch/riscv/include/asm/dma-mapping.h |  38 ++++++++++++
 arch/riscv/include/asm/irq.h         |  31 ++++++++++
 arch/riscv/include/asm/irqflags.h    |  63 ++++++++++++++++++++
 arch/riscv/include/asm/pci.h         |  48 +++++++++++++++
 arch/riscv/include/asm/sbi.h         | 100 +++++++++++++++++++++++++++++++
 arch/riscv/include/asm/timex.h       |  59 +++++++++++++++++++
 arch/riscv/lib/delay.c               | 110 +++++++++++++++++++++++++++++++++++
 arch/riscv/mm/ioremap.c              |  92 +++++++++++++++++++++++++++++
 9 files changed, 569 insertions(+)
 create mode 100644 arch/riscv/include/asm/delay.h
 create mode 100644 arch/riscv/include/asm/dma-mapping.h
 create mode 100644 arch/riscv/include/asm/irq.h
 create mode 100644 arch/riscv/include/asm/irqflags.h
 create mode 100644 arch/riscv/include/asm/pci.h
 create mode 100644 arch/riscv/include/asm/sbi.h
 create mode 100644 arch/riscv/include/asm/timex.h
 create mode 100644 arch/riscv/lib/delay.c
 create mode 100644 arch/riscv/mm/ioremap.c

diff --git a/arch/riscv/include/asm/delay.h b/arch/riscv/include/asm/delay.h
new file mode 100644
index 000000000000..cbb0c9eb96cb
--- /dev/null
+++ b/arch/riscv/include/asm/delay.h
@@ -0,0 +1,28 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2016 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_DELAY_H
+#define _ASM_RISCV_DELAY_H
+
+extern unsigned long riscv_timebase;
+
+#define udelay udelay
+extern void udelay(unsigned long usecs);
+
+#define ndelay ndelay
+extern void ndelay(unsigned long nsecs);
+
+extern void __delay(unsigned long cycles);
+
+#endif /* _ASM_RISCV_DELAY_H */
diff --git a/arch/riscv/include/asm/dma-mapping.h b/arch/riscv/include/asm/dma-mapping.h
new file mode 100644
index 000000000000..3eec1000196d
--- /dev/null
+++ b/arch/riscv/include/asm/dma-mapping.h
@@ -0,0 +1,38 @@
+/*
+ * Copyright (C) 2003-2004 Hewlett-Packard Co
+ *	David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2016 SiFive, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_RISCV_DMA_MAPPING_H
+#define __ASM_RISCV_DMA_MAPPING_H
+
+/* Use ops->dma_mapping_error (if it exists) or assume success */
+// #undef DMA_ERROR_CODE
+
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
+{
+	return &dma_noop_ops;
+}
+
+static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
+{
+	if (!dev->dma_mask)
+		return false;
+
+	return addr + size - 1 <= *dev->dma_mask;
+}
+
+#endif	/* __ASM_RISCV_DMA_MAPPING_H */
diff --git a/arch/riscv/include/asm/irq.h b/arch/riscv/include/asm/irq.h
new file mode 100644
index 000000000000..e64c61e89f76
--- /dev/null
+++ b/arch/riscv/include/asm/irq.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_IRQ_H
+#define _ASM_RISCV_IRQ_H
+
+#define NR_IRQS         0
+
+#define INTERRUPT_CAUSE_SOFTWARE    1
+#define INTERRUPT_CAUSE_TIMER       5
+#define INTERRUPT_CAUSE_EXTERNAL    9
+
+void riscv_timer_interrupt(void);
+
+#include <asm-generic/irq.h>
+
+/* The value of csr sie before init_traps runs (core is up) */
+DECLARE_PER_CPU(atomic_long_t, riscv_early_sie);
+
+#endif /* _ASM_RISCV_IRQ_H */
diff --git a/arch/riscv/include/asm/irqflags.h b/arch/riscv/include/asm/irqflags.h
new file mode 100644
index 000000000000..6fdc860d7f84
--- /dev/null
+++ b/arch/riscv/include/asm/irqflags.h
@@ -0,0 +1,63 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+
+#ifndef _ASM_RISCV_IRQFLAGS_H
+#define _ASM_RISCV_IRQFLAGS_H
+
+#include <asm/processor.h>
+#include <asm/csr.h>
+
+/* read interrupt enabled status */
+static inline unsigned long arch_local_save_flags(void)
+{
+	return csr_read(sstatus);
+}
+
+/* unconditionally enable interrupts */
+static inline void arch_local_irq_enable(void)
+{
+	csr_set(sstatus, SR_IE);
+}
+
+/* unconditionally disable interrupts */
+static inline void arch_local_irq_disable(void)
+{
+	csr_clear(sstatus, SR_IE);
+}
+
+/* get status and disable interrupts */
+static inline unsigned long arch_local_irq_save(void)
+{
+	return csr_read_clear(sstatus, SR_IE);
+}
+
+/* test flags */
+static inline int arch_irqs_disabled_flags(unsigned long flags)
+{
+	return !(flags & SR_IE);
+}
+
+/* test hardware interrupt enable bit */
+static inline int arch_irqs_disabled(void)
+{
+	return arch_irqs_disabled_flags(arch_local_save_flags());
+}
+
+/* set interrupt enabled status */
+static inline void arch_local_irq_restore(unsigned long flags)
+{
+	csr_set(sstatus, flags & SR_IE);
+}
+
+#endif /* _ASM_RISCV_IRQFLAGS_H */
diff --git a/arch/riscv/include/asm/pci.h b/arch/riscv/include/asm/pci.h
new file mode 100644
index 000000000000..285747fa2ef0
--- /dev/null
+++ b/arch/riscv/include/asm/pci.h
@@ -0,0 +1,48 @@
+/*
+ * Copyright (C) 2016 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef __ASM_RISCV_PCI_H
+#define __ASM_RISCV_PCI_H
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/io.h>
+
+#define PCIBIOS_MIN_IO		0
+#define PCIBIOS_MIN_MEM		0
+
+/* RISC-V shim does not initialize PCI bus */
+#define pcibios_assign_all_busses() 1
+
+/* RISC-V TileLink and PCIe share the same address space */
+#define PCI_DMA_BUS_IS_PHYS 1
+
+extern int isa_dma_bridge_buggy;
+
+#ifdef CONFIG_PCI
+static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
+{
+	/* no legacy IRQ on risc-v */
+	return -ENODEV;
+}
+
+static inline int pci_proc_domain(struct pci_bus *bus)
+{
+	/* always show the domain in /proc */
+	return 1;
+}
+#endif  /* CONFIG_PCI */
+
+#endif  /* __ASM_PCI_H */
diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
new file mode 100644
index 000000000000..b6bb10b92fe2
--- /dev/null
+++ b/arch/riscv/include/asm/sbi.h
@@ -0,0 +1,100 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_SBI_H
+#define _ASM_RISCV_SBI_H
+
+#include <linux/types.h>
+
+#define SBI_SET_TIMER 0
+#define SBI_CONSOLE_PUTCHAR 1
+#define SBI_CONSOLE_GETCHAR 2
+#define SBI_CLEAR_IPI 3
+#define SBI_SEND_IPI 4
+#define SBI_REMOTE_FENCE_I 5
+#define SBI_REMOTE_SFENCE_VMA 6
+#define SBI_REMOTE_SFENCE_VMA_ASID 7
+#define SBI_SHUTDOWN 8
+
+#define SBI_CALL(which, arg0, arg1, arg2) ({			\
+	register uintptr_t a0 asm ("a0") = (uintptr_t)(arg0);	\
+	register uintptr_t a1 asm ("a1") = (uintptr_t)(arg1);	\
+	register uintptr_t a2 asm ("a2") = (uintptr_t)(arg2);	\
+	register uintptr_t a7 asm ("a7") = (uintptr_t)(which);	\
+	asm volatile ("ecall"					\
+		      : "+r" (a0)				\
+		      : "r" (a1), "r" (a2), "r" (a7)		\
+		      : "memory");				\
+	a0;							\
+})
+
+/* Lazy implementations until SBI is finalized */
+#define SBI_CALL_0(which) SBI_CALL(which, 0, 0, 0)
+#define SBI_CALL_1(which, arg0) SBI_CALL(which, arg0, 0, 0)
+#define SBI_CALL_2(which, arg0, arg1) SBI_CALL(which, arg0, arg1, 0)
+
+static inline void sbi_console_putchar(int ch)
+{
+	SBI_CALL_1(SBI_CONSOLE_PUTCHAR, ch);
+}
+
+static inline int sbi_console_getchar(void)
+{
+	return SBI_CALL_0(SBI_CONSOLE_GETCHAR);
+}
+
+static inline void sbi_set_timer(uint64_t stime_value)
+{
+#if __riscv_xlen == 32
+	SBI_CALL_2(SBI_SET_TIMER, stime_value, stime_value >> 32);
+#else
+	SBI_CALL_1(SBI_SET_TIMER, stime_value);
+#endif
+}
+
+static inline void sbi_shutdown(void)
+{
+	SBI_CALL_0(SBI_SHUTDOWN);
+}
+
+static inline void sbi_clear_ipi(void)
+{
+	SBI_CALL_0(SBI_CLEAR_IPI);
+}
+
+static inline void sbi_send_ipi(const unsigned long *hart_mask)
+{
+	SBI_CALL_1(SBI_SEND_IPI, hart_mask);
+}
+
+static inline void sbi_remote_fence_i(const unsigned long *hart_mask)
+{
+	SBI_CALL_1(SBI_REMOTE_FENCE_I, hart_mask);
+}
+
+static inline void sbi_remote_sfence_vma(const unsigned long *hart_mask,
+					 unsigned long start,
+					 unsigned long size)
+{
+	SBI_CALL_1(SBI_REMOTE_SFENCE_VMA, hart_mask);
+}
+
+static inline void sbi_remote_sfence_vma_asid(const unsigned long *hart_mask,
+					      unsigned long start,
+					      unsigned long size,
+					      unsigned long asid)
+{
+	SBI_CALL_1(SBI_REMOTE_SFENCE_VMA_ASID, hart_mask);
+}
+
+#endif
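
To make the calling convention concrete, here is a minimal sketch (not part
of the patch; the helper name is hypothetical).  Per the SBI_CALL macro above,
the call number goes in a7, the arguments in a0-a2, and the result comes back
in a0, so writing a string to the SBI console is just a loop of ecalls:

#include <asm/sbi.h>

/* hypothetical helper: write a NUL-terminated string to the SBI console */
static void example_sbi_puts(const char *s)
{
	while (*s)
		sbi_console_putchar(*s++);
}
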
diff --git a/arch/riscv/include/asm/timex.h b/arch/riscv/include/asm/timex.h
new file mode 100644
index 000000000000..3df4932d8964
--- /dev/null
+++ b/arch/riscv/include/asm/timex.h
@@ -0,0 +1,59 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_TIMEX_H
+#define _ASM_RISCV_TIMEX_H
+
+#include <asm/param.h>
+
+typedef unsigned long cycles_t;
+
+static inline cycles_t get_cycles(void)
+{
+	cycles_t n;
+
+	__asm__ __volatile__ (
+		"rdtime %0"
+		: "=r" (n));
+	return n;
+}
+
+#ifdef CONFIG_64BIT
+static inline uint64_t get_cycles64(void)
+{
+	return get_cycles();
+}
+#else
+static inline uint64_t get_cycles64(void)
+{
+	u32 lo, hi, tmp;
+	__asm__ __volatile__ (
+		"1:\n"
+		"rdtimeh %0\n"
+		"rdtime %1\n"
+		"rdtimeh %2\n"
+		"bne %0, %2, 1b"
+		: "=&r" (hi), "=&r" (lo), "=&r" (tmp));
+	return ((u64)hi << 32) | lo;
+}
+#endif
+
+#define ARCH_HAS_READ_CURRENT_TIMER
+
+static inline int read_current_timer(unsigned long *timer_val)
+{
+	*timer_val = get_cycles();
+	return 0;
+}
+
+#endif /* _ASM_RISCV_TIMEX_H */
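
A quick usage sketch (not part of the patch; the function name is invented):
timing a code region with get_cycles().  The unsigned subtraction keeps the
result correct even if the counter wraps once during the region:

#include <asm/timex.h>

/* hypothetical helper: cycles spent running fn() once */
static cycles_t example_time_region(void (*fn)(void))
{
	cycles_t start = get_cycles();

	fn();
	return get_cycles() - start;
}
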
diff --git a/arch/riscv/lib/delay.c b/arch/riscv/lib/delay.c
new file mode 100644
index 000000000000..1cc4ac3964b4
--- /dev/null
+++ b/arch/riscv/lib/delay.c
@@ -0,0 +1,110 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/delay.h>
+#include <linux/param.h>
+#include <linux/timex.h>
+#include <linux/export.h>
+
+/*
+ * This is copied from arch/arm/include/asm/delay.h
+ *
+ * Loop (or tick) based delay:
+ *
+ * loops = loops_per_jiffy * jiffies_per_sec * delay_us / us_per_sec
+ *
+ * where:
+ *
+ * jiffies_per_sec = HZ
+ * us_per_sec = 1000000
+ *
+ * Therefore the constant part is HZ / 1000000 which is a small
+ * fractional number. To make this usable with integer math, we
+ * scale up this constant by 2^31, perform the actual multiplication,
+ * and scale the result back down by 2^31 with a simple shift:
+ *
+ * loops = (loops_per_jiffy * delay_us * UDELAY_MULT) >> 31
+ *
+ * where:
+ *
+ * UDELAY_MULT = 2^31 * HZ / 1000000
+ *             = (2^31 / 1000000) * HZ
+ *             = 2147.483648 * HZ
+ *             = 2147 * HZ + 483648 * HZ / 1000000
+ *
+ * 31 is the biggest scale shift value that won't overflow 32 bits for
+ * delay_us * UDELAY_MULT assuming HZ <= 1000 and delay_us <= 2000.
+ */
+#define MAX_UDELAY_US	2000
+#define MAX_UDELAY_HZ	1000
+#define UDELAY_MULT	(2147UL * HZ + 483648UL * HZ / 1000000UL)
+#define UDELAY_SHIFT	31
+
+#if HZ > MAX_UDELAY_HZ
+#error "HZ > MAX_UDELAY_HZ"
+#endif
+
+/*
+ * RISC-V supports both UDELAY and NDELAY.  This is largely the same as above,
+ * but with different constants.  The scale shift is widened by 10 bits, at
+ * the cost of a 64-bit multiply in the fast path, which is slow on 32-bit
+ * platforms.
+ *
+ * NDELAY_MULT = 2^41 * HZ / 1000000000
+ *             = (2^41 / 1000000000) * HZ
+ *             = 2199.023255552 * HZ
+ *             = 2199 * HZ + 23255552 * HZ / 1000000000
+ *
+ * The maximum here bounds the product against 64-bit overflow; it isn't
+ * checked because delays that long never happen in practice.
+ */
+#define MAX_NDELAY_NS   (1ULL << 42)
+#define MAX_NDELAY_HZ	MAX_UDELAY_HZ
+#define NDELAY_MULT	((unsigned long long)(2199ULL * HZ + 23255552ULL * HZ / 1000000000ULL))
+#define NDELAY_SHIFT	41
+
+#if HZ > MAX_NDELAY_HZ
+#error "HZ > MAX_NDELAY_HZ"
+#endif
+
+void __delay(unsigned long cycles)
+{
+	u64 t0 = get_cycles();
+
+	while ((unsigned long)(get_cycles() - t0) < cycles)
+		cpu_relax();
+}
+
+void udelay(unsigned long usecs)
+{
+	u64 ucycles = (u64)usecs * lpj_fine * UDELAY_MULT;
+
+	if (unlikely(usecs > MAX_UDELAY_US)) {
+		__delay((u64)usecs * riscv_timebase / 1000000ULL);
+		return;
+	}
+
+	__delay(ucycles >> UDELAY_SHIFT);
+}
+EXPORT_SYMBOL(udelay);
+
+void ndelay(unsigned long nsecs)
+{
+	/*
+	 * This doesn't bother checking for overflow: the product can only
+	 * overflow for delays on the order of an hour, which never happens.
+	 */
+	u64 ncycles = (u64)nsecs * lpj_fine * NDELAY_MULT;
+	__delay(ncycles >> NDELAY_SHIFT);
+}
+EXPORT_SYMBOL(ndelay);
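
The fixed-point math above can be sanity-checked with a small standalone
userspace program; the HZ and lpj_fine values below are made-up examples
(roughly a 1 GHz timebase with HZ=100), not values taken from the patch:

#include <stdio.h>
#include <stdint.h>

#define HZ		100UL
#define UDELAY_MULT	(2147UL * HZ + 483648UL * HZ / 1000000UL)
#define UDELAY_SHIFT	31

int main(void)
{
	unsigned long lpj_fine = 10000000UL;	/* assumed cycles per jiffy */
	unsigned long usecs = 50;

	/* loops = lpj_fine * HZ * usecs / 1000000, computed without a divide */
	uint64_t loops = ((uint64_t)usecs * lpj_fine * UDELAY_MULT)
			 >> UDELAY_SHIFT;

	printf("%lu us -> %llu loops (exact %.1f)\n", usecs,
	       (unsigned long long)loops,
	       (double)lpj_fine * HZ * usecs / 1000000.0);
	return 0;
}
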
diff --git a/arch/riscv/mm/ioremap.c b/arch/riscv/mm/ioremap.c
new file mode 100644
index 000000000000..e99194a4077e
--- /dev/null
+++ b/arch/riscv/mm/ioremap.c
@@ -0,0 +1,92 @@
+/*
+ * (C) Copyright 1995 1996 Linus Torvalds
+ * (C) Copyright 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#include <linux/export.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <linux/io.h>
+
+#include <asm/pgtable.h>
+
+/*
+ * Remap an arbitrary physical address space into the kernel virtual
+ * address space. Needed when the kernel wants to access high addresses
+ * directly.
+ *
+ * NOTE! We need to allow non-page-aligned mappings too: we will obviously
+ * have to convert them into an offset in a page-aligned mapping, but the
+ * caller shouldn't need to know that small detail.
+ */
+static void __iomem *__ioremap_caller(phys_addr_t addr, size_t size,
+	pgprot_t prot, void *caller)
+{
+	phys_addr_t last_addr;
+	unsigned long offset, vaddr;
+	struct vm_struct *area;
+
+	/* Disallow wrap-around or zero size */
+	last_addr = addr + size - 1;
+	if (!size || last_addr < addr)
+		return NULL;
+
+	/* Page-align mappings */
+	offset = addr & (~PAGE_MASK);
+	addr &= PAGE_MASK;
+	size = PAGE_ALIGN(size + offset);
+
+	area = get_vm_area_caller(size, VM_IOREMAP, caller);
+	if (!area)
+		return NULL;
+	vaddr = (unsigned long)area->addr;
+
+	if (ioremap_page_range(vaddr, vaddr + size, addr, prot)) {
+		free_vm_area(area);
+		return NULL;
+	}
+
+	return (void __iomem *)(vaddr + offset);
+}
+
+/**
+ * ioremap - map bus memory into CPU space
+ * @offset:    bus address of the memory
+ * @size:      size of the resource to map
+ *
+ * ioremap performs a platform specific sequence of operations to
+ * make bus memory CPU accessible via the readb/readw/readl/writeb/
+ * writew/writel functions and the other mmio helpers. The returned
+ * address is not guaranteed to be usable directly as a virtual
+ * address.
+ *
+ * Must be freed with iounmap.
+ */
+void __iomem *ioremap(phys_addr_t offset, unsigned long size)
+{
+	return __ioremap_caller(offset, size, PAGE_KERNEL,
+		__builtin_return_address(0));
+}
+EXPORT_SYMBOL(ioremap);
+
+
+/**
+ * iounmap - Free an I/O remapping
+ * @addr: virtual address from ioremap_*
+ *
+ * Caller must ensure there is only one unmapping for the same pointer.
+ */
+void iounmap(void __iomem *addr)
+{
+	vunmap((void *)((unsigned long)addr & PAGE_MASK));
+}
+EXPORT_SYMBOL(iounmap);
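
For completeness, a rough driver-side usage sketch (not part of the patch;
the physical base address and register offset are invented for illustration):

#include <linux/io.h>
#include <linux/printk.h>
#include <linux/errno.h>

static int example_map_device(void)
{
	/* hypothetical MMIO base and span */
	void __iomem *regs = ioremap(0x10013000, 0x1000);
	u32 id;

	if (!regs)
		return -ENOMEM;

	id = readl(regs + 0x4);		/* hypothetical ID register */
	pr_info("example device id: %08x\n", id);

	iounmap(regs);
	return 0;
}
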
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 283+ messages in thread

end of thread, other threads:[~2017-07-05 16:49 UTC | newest]

Thread overview: 283+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-05-23  0:41 RISC-V Linux Port v1 Palmer Dabbelt
2017-05-23  0:41 ` [PATCH 1/7] RISC-V: Top-Level Makefile for riscv{32,64} Palmer Dabbelt
2017-05-23 11:30   ` Arnd Bergmann
2017-05-27  0:57     ` Palmer Dabbelt
2017-05-29 10:50       ` Arnd Bergmann
2017-06-06  4:56         ` Palmer Dabbelt
2017-06-06 17:39           ` Karsten Merker
2017-06-06 17:57             ` Palmer Dabbelt
2017-05-23  0:41 ` [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs Palmer Dabbelt
2017-05-23  1:27   ` Olof Johansson
2017-05-23  1:31     ` Randy Dunlap
2017-05-23  4:49       ` Palmer Dabbelt
2017-05-23  4:49     ` Palmer Dabbelt
2017-05-23  5:16       ` [patches] " Olof Johansson
2017-05-23 21:07         ` Benjamin Herrenschmidt
2017-05-23  5:23   ` Olof Johansson
2017-05-23 15:29     ` Palmer Dabbelt
2017-05-23 10:51   ` Geert Uytterhoeven
2017-05-25  1:59     ` Palmer Dabbelt
2017-05-23 11:46   ` Arnd Bergmann
2017-05-27  0:57     ` Palmer Dabbelt
2017-05-29 11:17       ` Arnd Bergmann
2017-06-06  4:56         ` Palmer Dabbelt
2017-06-06  9:20           ` Arnd Bergmann
2017-06-06 20:38             ` Palmer Dabbelt
2017-05-23  0:41 ` [PATCH 3/7] RISC-V: Device Tree Documentation Palmer Dabbelt
2017-05-23 12:03   ` Arnd Bergmann
2017-05-27  0:57     ` Palmer Dabbelt
2017-05-23  0:41 ` [PATCH 4/7] RISC-V: arch/riscv/include Palmer Dabbelt
2017-05-23 12:55   ` Arnd Bergmann
2017-05-23 21:23     ` Benjamin Herrenschmidt
2017-06-03  2:00       ` Palmer Dabbelt
2017-06-01  0:56     ` Palmer Dabbelt
2017-06-01  9:00       ` Arnd Bergmann
2017-06-06  4:56         ` Palmer Dabbelt
2017-06-06  8:54           ` Arnd Bergmann
2017-06-06 19:07             ` Palmer Dabbelt
2017-05-23  0:41 ` [PATCH 5/7] RISC-V: arch/riscv/lib Palmer Dabbelt
2017-05-23 10:47   ` Geert Uytterhoeven
2017-05-23 22:07     ` Palmer Dabbelt
2017-05-23 11:19   ` Arnd Bergmann
2017-05-25  1:59     ` Palmer Dabbelt
2017-05-26  9:06       ` Arnd Bergmann
2017-06-06  4:56         ` Palmer Dabbelt
2017-06-06  9:31           ` Arnd Bergmann
2017-06-06 20:53             ` Palmer Dabbelt
2017-06-07  7:35               ` Arnd Bergmann
2017-06-23 23:24                 ` Palmer Dabbelt
2017-05-23  0:41 ` [PATCH 6/7] RISC-V: arch/riscv/kernel Palmer Dabbelt
2017-05-23  2:11   ` Olof Johansson
2017-05-25  1:59     ` Palmer Dabbelt
2017-05-25 19:51       ` Arnd Bergmann
2017-06-06  4:56         ` Palmer Dabbelt
2017-06-06  9:03           ` Arnd Bergmann
2017-06-06 20:38             ` Palmer Dabbelt
2017-05-23 13:35   ` Arnd Bergmann
2017-06-02 23:56     ` Palmer Dabbelt
2017-06-06  9:01       ` Arnd Bergmann
2017-06-06 20:37         ` Palmer Dabbelt
2017-05-25 17:05   ` Pavel Machek
2017-06-03  3:32     ` Palmer Dabbelt
2017-05-23  0:41 ` [PATCH 7/7] RISC-V: arch/riscv/mm Palmer Dabbelt
2017-05-23  1:28   ` Randy Dunlap
2017-05-23  2:17     ` Olof Johansson
2017-05-23  3:36       ` Palmer Dabbelt
2017-05-23  1:16 ` RISC-V Linux Port v1 Olof Johansson
2017-05-23  1:25   ` Randy Dunlap
2017-05-23  3:36     ` Palmer Dabbelt
2017-05-23  3:36   ` Palmer Dabbelt
2017-05-23  6:45     ` Tobias Klauser
2017-05-23 15:44       ` Palmer Dabbelt
2017-05-23  2:16 ` Randy Dunlap
2017-05-23  4:49   ` Palmer Dabbelt
2017-06-06 22:59 ` RISC-V Linux Port v2 Palmer Dabbelt
2017-06-06 22:59   ` Palmer Dabbelt
2017-06-06 22:59   ` [PATCH 01/17] drivers: support PCIe in RISCV Palmer Dabbelt
2017-06-06 22:59     ` Palmer Dabbelt
2017-06-07  7:17     ` Geert Uytterhoeven
2017-06-07 14:25     ` Christoph Hellwig
     [not found]       ` <CAMgXwTjXZ5dsxmJ2FyWhCRWo-3nyvKUDfhfV0nNC+oakF=AEsA@mail.gmail.com>
2017-06-07 17:40         ` Olof Johansson
2017-06-23 21:47           ` [patches] " Palmer Dabbelt
2017-06-23 21:47             ` Palmer Dabbelt
2017-06-06 22:59   ` [PATCH 02/17] pcie-xilinx: add missing 5th legacy interrupt Palmer Dabbelt
2017-06-06 22:59   ` Palmer Dabbelt
2017-06-06 22:59     ` Palmer Dabbelt
2017-06-07  7:18     ` Geert Uytterhoeven
2017-06-07  9:24     ` Marc Zyngier
2017-06-07 19:03       ` Wesley Terpstra
2017-06-06 22:59   ` [PATCH 03/17] base: fix order of OF initialization Palmer Dabbelt
2017-06-06 22:59     ` Palmer Dabbelt
2017-06-07  7:07     ` Geert Uytterhoeven
2017-06-07  9:35       ` Mark Rutland
2017-06-07  9:35         ` Mark Rutland
2017-06-07 18:39         ` Wesley Terpstra
2017-06-07 21:10           ` Benjamin Herrenschmidt
2017-06-08  3:49           ` Frank Rowand
2017-06-08  9:05             ` Mark Rutland
2017-06-08  9:05               ` Mark Rutland
2017-06-08  9:05               ` Mark Rutland
2017-06-09  0:37               ` Frank Rowand
2017-06-06 22:59   ` [PATCH 04/17] Documentation: atomic_ops.txt is core-api/atomic_ops.rst Palmer Dabbelt
2017-06-06 22:59     ` Palmer Dabbelt
2017-06-07  7:19     ` Geert Uytterhoeven
2017-06-07  9:20     ` Will Deacon
2017-06-06 22:59   ` [PATCH 05/17] MAINTAINERS: Add RISC-V Palmer Dabbelt
2017-06-06 22:59   ` Palmer Dabbelt
2017-06-06 22:59     ` Palmer Dabbelt
2017-06-06 22:59   ` [PATCH 06/17] pci: Add generic pcibios_{fixup_bus,align_resource} Palmer Dabbelt
2017-06-06 22:59     ` Palmer Dabbelt
2017-06-07  7:19     ` Geert Uytterhoeven
2017-06-07  8:01       ` Arnd Bergmann
2017-06-24  2:01         ` Palmer Dabbelt
2017-06-24  2:01           ` Palmer Dabbelt
2017-06-08  8:12       ` Christoph Hellwig
2017-06-08  8:35         ` Arnd Bergmann
2017-06-24  2:01           ` Palmer Dabbelt
2017-06-24  2:01             ` Palmer Dabbelt
2017-06-06 22:59   ` [PATCH 07/17] lib: Add shared copies of some GCC library routines Palmer Dabbelt
2017-06-06 22:59     ` Palmer Dabbelt
2017-06-06 22:59   ` Palmer Dabbelt
2017-06-06 22:59   ` [PATCH 08/17] dts: include documentation for the RISC-V interrupt controllers Palmer Dabbelt
2017-06-06 22:59     ` Palmer Dabbelt
2017-06-07  7:11     ` Geert Uytterhoeven
2017-06-07 10:13       ` Mark Rutland
2017-06-07 18:57         ` Wesley Terpstra
2017-06-07 19:57           ` Rob Herring
2017-06-07 20:31             ` Wesley Terpstra
2017-06-08 10:52           ` Mark Rutland
2017-06-09 21:46             ` Wesley Terpstra
2017-06-09 21:46               ` Wesley Terpstra
2017-06-09 21:58             ` Wesley Terpstra
2017-06-19 14:30               ` Mark Rutland
2017-06-07 22:27     ` Luis R. Rodriguez
2017-06-06 22:59   ` Palmer Dabbelt
2017-06-06 22:59   ` [PATCH 09/17] clocksource/timer-riscv: New RISC-V Clocksource Palmer Dabbelt
2017-06-06 22:59     ` Palmer Dabbelt
2017-06-07  7:12     ` Geert Uytterhoeven
2017-06-07  7:25       ` Arnd Bergmann
2017-06-23 23:24         ` Palmer Dabbelt
2017-06-23 23:24           ` Palmer Dabbelt
2017-06-07  9:43     ` Marc Zyngier
2017-06-24  2:02       ` Palmer Dabbelt
2017-06-24  2:02         ` Palmer Dabbelt
2017-06-06 23:00   ` [PATCH 10/17] irqchip: New RISC-V PLIC Driver Palmer Dabbelt
2017-06-06 23:00     ` Palmer Dabbelt
2017-06-07  7:13     ` Geert Uytterhoeven
2017-06-07  7:55       ` Arnd Bergmann
2017-06-24  0:45         ` Palmer Dabbelt
2017-06-24  0:45           ` Palmer Dabbelt
2017-06-07 10:52     ` Marc Zyngier
2017-06-09 13:47       ` Will Deacon
2017-06-27  1:09         ` Palmer Dabbelt
2017-06-27  1:09           ` Palmer Dabbelt
2017-06-25 20:49       ` Palmer Dabbelt
2017-06-25 20:49         ` Palmer Dabbelt
2017-06-06 23:00   ` Palmer Dabbelt
2017-06-06 23:00   ` [PATCH 11/17] irqchip: RISC-V Local Interrupt Controller Driver Palmer Dabbelt
2017-06-06 23:00     ` Palmer Dabbelt
2017-06-07  7:14     ` Geert Uytterhoeven
2017-06-06 23:00   ` [PATCH 12/17] tty: New RISC-V SBI Console Driver Palmer Dabbelt
2017-06-06 23:00     ` Palmer Dabbelt
2017-06-07  7:15     ` Geert Uytterhoeven
2017-06-07  7:58       ` Arnd Bergmann
2017-06-24  0:45         ` Palmer Dabbelt
2017-06-24  0:45           ` Palmer Dabbelt
2017-06-06 23:00   ` Palmer Dabbelt
2017-06-06 23:00   ` [PATCH 13/17] RISC-V: Add include subdirectory Palmer Dabbelt
2017-06-06 23:00     ` Palmer Dabbelt
2017-06-07  8:12     ` Arnd Bergmann
2017-06-24  2:01       ` Palmer Dabbelt
2017-06-24  2:01         ` Palmer Dabbelt
2017-06-24 15:42         ` Benjamin Herrenschmidt
2017-06-24 21:32           ` [patches] " Palmer Dabbelt
2017-06-24 21:32             ` Palmer Dabbelt
2017-06-25  3:01             ` Benjamin Herrenschmidt
2017-06-07 11:54     ` Peter Zijlstra
2017-06-07 12:25       ` Peter Zijlstra
2017-06-07 12:06     ` Peter Zijlstra
2017-06-07 12:18       ` Peter Zijlstra
2017-06-07 12:36       ` Peter Zijlstra
2017-06-07 12:58         ` Peter Zijlstra
2017-06-07 13:16           ` Will Deacon
2017-06-26 20:07             ` Palmer Dabbelt
2017-06-26 20:07               ` Palmer Dabbelt
2017-06-27  0:07               ` Daniel Lustig
2017-06-27  8:48                 ` Will Deacon
2017-06-07 16:35           ` Peter Zijlstra
2017-06-26 20:07           ` Palmer Dabbelt
2017-06-26 20:07             ` Palmer Dabbelt
2017-06-07 12:42     ` Peter Zijlstra
2017-06-07 13:17     ` Peter Zijlstra
2017-06-09  8:16       ` Peter Zijlstra
2017-06-26 20:07       ` Palmer Dabbelt
2017-06-26 20:07         ` Palmer Dabbelt
2017-06-06 23:00   ` [PATCH 14/17] RISC-V: lib files Palmer Dabbelt
2017-06-06 23:00     ` Palmer Dabbelt
2017-06-06 23:00   ` [PATCH 15/17] RISC-V: Add mm subdirectory Palmer Dabbelt
2017-06-06 23:00     ` Palmer Dabbelt
2017-06-06 23:00   ` [PATCH 16/17] RISC-V: Add kernel subdirectory Palmer Dabbelt
2017-06-06 23:00     ` Palmer Dabbelt
2017-06-06 23:00   ` Palmer Dabbelt
2017-06-06 23:00   ` [PATCH 17/17] RISC-V: Makefile and Kconfig Palmer Dabbelt
2017-06-06 23:00   ` Palmer Dabbelt
2017-06-06 23:00     ` Palmer Dabbelt
2017-06-07  9:23   ` RISC-V Linux Port v2 Will Deacon
2017-06-07 21:54     ` Palmer Dabbelt
2017-06-07 21:54       ` Palmer Dabbelt
2017-06-08 10:26       ` Will Deacon
2017-06-08 18:16         ` Palmer Dabbelt
2017-06-08 18:16           ` Palmer Dabbelt
2017-06-28 18:55   ` RISC-V Linux Port v3 Palmer Dabbelt
2017-06-28 18:55     ` Palmer Dabbelt
2017-06-28 18:55     ` [PATCH 1/9] RISC-V: Init and Halt Code Palmer Dabbelt
2017-06-28 18:55       ` Palmer Dabbelt
2017-06-29  9:44       ` Geert Uytterhoeven
2017-06-29  9:44         ` Geert Uytterhoeven
2017-06-29 22:52         ` Palmer Dabbelt
2017-06-29 22:52           ` Palmer Dabbelt
2017-06-28 18:55     ` Palmer Dabbelt
2017-06-28 18:55     ` [PATCH 2/9] RISC-V: Atomic and Locking Code Palmer Dabbelt
2017-06-28 18:55       ` Palmer Dabbelt
2017-06-28 18:55     ` [PATCH 3/9] RISC-V: Generic library routines and assembly Palmer Dabbelt
2017-06-28 18:55       ` Palmer Dabbelt
2017-06-28 18:55     ` [PATCH 4/9] RISC-V: ELF and module implementation Palmer Dabbelt
2017-06-28 18:55     ` Palmer Dabbelt
2017-06-28 18:55       ` Palmer Dabbelt
2017-06-28 18:55     ` [PATCH 5/9] RISC-V: Task implementation Palmer Dabbelt
2017-06-28 18:55       ` Palmer Dabbelt
2017-06-28 23:32       ` James Hogan
2017-06-28 23:32         ` James Hogan
2017-06-29 22:52         ` Palmer Dabbelt
2017-06-29 22:52           ` Palmer Dabbelt
2017-06-29  8:22       ` Tobias Klauser
2017-06-29 22:52         ` Palmer Dabbelt
2017-06-29 22:52           ` Palmer Dabbelt
2017-06-28 18:55     ` Palmer Dabbelt
2017-06-28 18:55     ` [PATCH 6/9] RISC-V: Device, timer, IRQs, and the SBI Palmer Dabbelt
2017-06-28 18:55       ` Palmer Dabbelt
2017-06-29  8:39       ` Tobias Klauser
2017-06-29 22:52         ` Palmer Dabbelt
2017-06-29 22:52           ` Palmer Dabbelt
2017-06-30  7:57           ` Tobias Klauser
2017-06-28 18:55     ` [PATCH 7/9] RISC-V: Paging and MMU Palmer Dabbelt
2017-06-28 18:55       ` Palmer Dabbelt
2017-06-28 23:09       ` James Hogan
2017-06-28 23:09         ` James Hogan
2017-06-29 22:11         ` Palmer Dabbelt
2017-06-29 22:11           ` Palmer Dabbelt
2017-06-28 18:55     ` Palmer Dabbelt
2017-06-28 18:55     ` [PATCH 8/9] RISC-V: User-facing API Palmer Dabbelt
2017-06-28 18:55       ` Palmer Dabbelt
2017-06-28 21:49       ` Thomas Gleixner
2017-06-28 21:52         ` Thomas Gleixner
2017-06-29 17:22         ` Palmer Dabbelt
2017-06-29 17:22           ` Palmer Dabbelt
2017-06-28 22:42       ` James Hogan
2017-06-28 22:42         ` James Hogan
2017-06-29 21:42         ` Palmer Dabbelt
2017-06-29 21:42           ` Palmer Dabbelt
2017-07-03 23:06           ` James Hogan
2017-07-03 23:06             ` James Hogan
2017-07-05 16:49             ` Palmer Dabbelt
2017-07-05 16:49               ` Palmer Dabbelt
2017-06-28 18:55     ` [PATCH 9/9] RISC-V: Build Infastructure Palmer Dabbelt
2017-06-28 18:55       ` Palmer Dabbelt
2017-06-28 21:05       ` Karsten Merker
2017-06-28 21:13         ` Palmer Dabbelt
2017-06-28 21:13           ` Palmer Dabbelt
2017-06-28 21:25       ` James Hogan
2017-06-28 21:25         ` James Hogan
2017-06-29 16:29         ` Palmer Dabbelt
2017-06-29 16:29           ` Palmer Dabbelt
2017-06-28 22:54       ` James Hogan
2017-06-28 22:54         ` James Hogan
2017-06-29 22:11         ` Palmer Dabbelt
2017-06-29 22:11           ` Palmer Dabbelt
2017-06-06 22:59 ` RISC-V Linux Port v2 Palmer Dabbelt
2017-06-07  7:29 ` David Howells
2017-06-07 21:54   ` Palmer Dabbelt
2017-06-07 21:54     ` Palmer Dabbelt
2017-06-07 21:54     ` Palmer Dabbelt
2017-07-04 19:50 RISC-V Linux Port v4 Palmer Dabbelt
2017-07-04 19:50 ` [PATCH 6/9] RISC-V: Device, timer, IRQs, and the SBI Palmer Dabbelt
2017-07-04 19:50   ` Palmer Dabbelt
