* [PATCH v3 0/2] scs: switch to vmapped shadow stacks
@ 2020-11-30 23:34 Sami Tolvanen
From: Sami Tolvanen @ 2020-11-30 23:34 UTC
  To: Will Deacon, Catalin Marinas
  Cc: Mark Rutland, James Morse, Ard Biesheuvel, Kees Cook,
	linux-arm-kernel, linux-kernel, Sami Tolvanen

As discussed a few months ago [1][2], virtually mapped shadow call stacks
are better for safety and robustness. This series dusts off the VMAP
option from the original SCS patch series and switches the kernel to use
virtually mapped shadow stacks unconditionally when SCS is enabled.

 [1] https://lore.kernel.org/lkml/20200515172355.GD23334@willie-the-truck/
 [2] https://lore.kernel.org/lkml/20200427220942.GB80713@google.com/

Changes in v3:
- Split the actual allocation to __scs_alloc().
- Moved SDEI SCS initialization to init_sdei_scs().

Changes in v2:
- Added SCS_ORDER and used it to define SCS_SIZE, switched vmalloc() to
  use SCS_SIZE and removed the alignment.
- Moved the kasan_unpoison_vmalloc() call to scs_alloc() when using a
  cached shadow stack, instead of calling it in scs_free().
- Added a comment to scs_free().
- Moved arm64 IRQ and SDEI shadow stack initialization to irq/sdei.c,
  and removed the now unneeded scs.c.

Sami Tolvanen (2):
  scs: switch to vmapped shadow stacks
  arm64: scs: use vmapped IRQ and SDEI shadow stacks

 arch/arm64/kernel/Makefile |  1 -
 arch/arm64/kernel/entry.S  |  6 ++--
 arch/arm64/kernel/irq.c    | 19 ++++++++++
 arch/arm64/kernel/scs.c    | 16 ---------
 arch/arm64/kernel/sdei.c   | 70 +++++++++++++++++++++++++++++++++++++
 include/linux/scs.h        | 16 ++++-----
 kernel/scs.c               | 71 ++++++++++++++++++++++++++++++++------
 7 files changed, 158 insertions(+), 41 deletions(-)
 delete mode 100644 arch/arm64/kernel/scs.c


base-commit: b65054597872ce3aefbc6a666385eabdf9e288da
-- 
2.29.2.454.gaff20da3a2-goog



* [PATCH v3 1/2] scs: switch to vmapped shadow stacks
From: Sami Tolvanen @ 2020-11-30 23:34 UTC
  To: Will Deacon, Catalin Marinas
  Cc: Mark Rutland, James Morse, Ard Biesheuvel, Kees Cook,
	linux-arm-kernel, linux-kernel, Sami Tolvanen

The kernel currently uses kmem_cache to allocate shadow call stacks,
which means an overflow may not be detected immediately and can
potentially result in another task's shadow stack being overwritten.

This change switches SCS to use virtually mapped shadow stacks for
tasks, which increases the shadow stack size to a full page and
provides more robust overflow detection, similar to VMAP_STACK.
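
For context, vmalloc-area allocations are surrounded by unmapped guard
pages, so a write past SCS_SIZE faults immediately instead of silently
corrupting an adjacent slab object. A minimal sketch of the underlying
allocation (the real scs_alloc() below additionally adds a per-CPU
cache, KASAN poisoning and vmstat accounting; scs_alloc_sketch() is an
illustrative name only):

/*
 * Illustrative sketch only: allocate a page-sized shadow stack in the
 * vmalloc area. SCS_SIZE and GFP_SCS come from <linux/scs.h>, the
 * allocator from <linux/vmalloc.h>.
 */
static void *scs_alloc_sketch(int node)
{
	return __vmalloc_node_range(SCS_SIZE, 1, VMALLOC_START, VMALLOC_END,
				    GFP_SCS, PAGE_KERNEL, 0, node,
				    __builtin_return_address(0));
}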

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Acked-by: Will Deacon <will@kernel.org>
---
 include/linux/scs.h | 12 ++++----
 kernel/scs.c        | 71 ++++++++++++++++++++++++++++++++++++++-------
 2 files changed, 66 insertions(+), 17 deletions(-)

diff --git a/include/linux/scs.h b/include/linux/scs.h
index 6dec390cf154..2a506c2a16f4 100644
--- a/include/linux/scs.h
+++ b/include/linux/scs.h
@@ -15,12 +15,8 @@
 
 #ifdef CONFIG_SHADOW_CALL_STACK
 
-/*
- * In testing, 1 KiB shadow stack size (i.e. 128 stack frames on a 64-bit
- * architecture) provided ~40% safety margin on stack usage while keeping
- * memory allocation overhead reasonable.
- */
-#define SCS_SIZE		SZ_1K
+#define SCS_ORDER		0
+#define SCS_SIZE		(PAGE_SIZE << SCS_ORDER)
 #define GFP_SCS			(GFP_KERNEL | __GFP_ZERO)
 
 /* An illegal pointer value to mark the end of the shadow stack. */
@@ -33,6 +29,8 @@
 #define task_scs(tsk)		(task_thread_info(tsk)->scs_base)
 #define task_scs_sp(tsk)	(task_thread_info(tsk)->scs_sp)
 
+void *scs_alloc(int node);
+void scs_free(void *s);
 void scs_init(void);
 int scs_prepare(struct task_struct *tsk, int node);
 void scs_release(struct task_struct *tsk);
@@ -61,6 +59,8 @@ static inline bool task_scs_end_corrupted(struct task_struct *tsk)
 
 #else /* CONFIG_SHADOW_CALL_STACK */
 
+static inline void *scs_alloc(int node) { return NULL; }
+static inline void scs_free(void *s) {}
 static inline void scs_init(void) {}
 static inline void scs_task_reset(struct task_struct *tsk) {}
 static inline int scs_prepare(struct task_struct *tsk, int node) { return 0; }
diff --git a/kernel/scs.c b/kernel/scs.c
index 4ff4a7ba0094..e2a71fc82fa0 100644
--- a/kernel/scs.c
+++ b/kernel/scs.c
@@ -5,26 +5,49 @@
  * Copyright (C) 2019 Google LLC
  */
 
+#include <linux/cpuhotplug.h>
 #include <linux/kasan.h>
 #include <linux/mm.h>
 #include <linux/scs.h>
-#include <linux/slab.h>
+#include <linux/vmalloc.h>
 #include <linux/vmstat.h>
 
-static struct kmem_cache *scs_cache;
-
 static void __scs_account(void *s, int account)
 {
-	struct page *scs_page = virt_to_page(s);
+	struct page *scs_page = vmalloc_to_page(s);
 
 	mod_node_page_state(page_pgdat(scs_page), NR_KERNEL_SCS_KB,
 			    account * (SCS_SIZE / SZ_1K));
 }
 
-static void *scs_alloc(int node)
+/* Matches NR_CACHED_STACKS for VMAP_STACK */
+#define NR_CACHED_SCS 2
+static DEFINE_PER_CPU(void *, scs_cache[NR_CACHED_SCS]);
+
+static void *__scs_alloc(int node)
 {
-	void *s = kmem_cache_alloc_node(scs_cache, GFP_SCS, node);
+	int i;
+	void *s;
+
+	for (i = 0; i < NR_CACHED_SCS; i++) {
+		s = this_cpu_xchg(scs_cache[i], NULL);
+		if (s) {
+			kasan_unpoison_vmalloc(s, SCS_SIZE);
+			memset(s, 0, SCS_SIZE);
+			return s;
+		}
+	}
+
+	return __vmalloc_node_range(SCS_SIZE, 1, VMALLOC_START, VMALLOC_END,
+				    GFP_SCS, PAGE_KERNEL, 0, node,
+				    __builtin_return_address(0));
+}
 
+void *scs_alloc(int node)
+{
+	void *s;
+
+	s = __scs_alloc(node);
 	if (!s)
 		return NULL;
 
@@ -34,21 +57,47 @@ static void *scs_alloc(int node)
 	 * Poison the allocation to catch unintentional accesses to
 	 * the shadow stack when KASAN is enabled.
 	 */
-	kasan_poison_object_data(scs_cache, s);
+	kasan_poison_vmalloc(s, SCS_SIZE);
 	__scs_account(s, 1);
 	return s;
 }
 
-static void scs_free(void *s)
+void scs_free(void *s)
 {
+	int i;
+
 	__scs_account(s, -1);
-	kasan_unpoison_object_data(scs_cache, s);
-	kmem_cache_free(scs_cache, s);
+
+	/*
+	 * We cannot sleep as this can be called in interrupt context,
+	 * so use this_cpu_cmpxchg to update the cache, and vfree_atomic
+	 * to free the stack.
+	 */
+
+	for (i = 0; i < NR_CACHED_SCS; i++)
+		if (this_cpu_cmpxchg(scs_cache[i], 0, s) == NULL)
+			return;
+
+	vfree_atomic(s);
+}
+
+static int scs_cleanup(unsigned int cpu)
+{
+	int i;
+	void **cache = per_cpu_ptr(scs_cache, cpu);
+
+	for (i = 0; i < NR_CACHED_SCS; i++) {
+		vfree(cache[i]);
+		cache[i] = NULL;
+	}
+
+	return 0;
 }
 
 void __init scs_init(void)
 {
-	scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, 0, 0, NULL);
+	cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "scs:scs_cache", NULL,
+			  scs_cleanup);
 }
 
 int scs_prepare(struct task_struct *tsk, int node)
-- 
2.29.2.454.gaff20da3a2-goog



* [PATCH v3 2/2] arm64: scs: use vmapped IRQ and SDEI shadow stacks
From: Sami Tolvanen @ 2020-11-30 23:34 UTC
  To: Will Deacon, Catalin Marinas
  Cc: Mark Rutland, James Morse, Ard Biesheuvel, Kees Cook,
	linux-arm-kernel, linux-kernel, Sami Tolvanen

Use scs_alloc() to allocate the IRQ and SDEI shadow stacks as well,
instead of using statically allocated stacks.
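
Since the stacks are now allocated dynamically, the entry code loads a
per-CPU pointer (ldr_this_cpu) instead of taking the address of a
static per-CPU array (adr_this_cpu). A simplified sketch of the
boot-time setup, mirroring init_irq_scs() in the diff below:

/* Sketch: one vmapped shadow stack pointer per CPU, filled in at boot. */
DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);

static void init_irq_scs(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		per_cpu(irq_shadow_call_stack_ptr, cpu) =
			scs_alloc(cpu_to_node(cpu));
}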

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Acked-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/Makefile |  1 -
 arch/arm64/kernel/entry.S  |  6 ++--
 arch/arm64/kernel/irq.c    | 19 +++++++++++
 arch/arm64/kernel/scs.c    | 16 ---------
 arch/arm64/kernel/sdei.c   | 70 ++++++++++++++++++++++++++++++++++++++
 include/linux/scs.h        |  4 ---
 6 files changed, 92 insertions(+), 24 deletions(-)
 delete mode 100644 arch/arm64/kernel/scs.c

diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index bbaf0bc4ad60..86364ab6f13f 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -58,7 +58,6 @@ obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
 obj-$(CONFIG_CRASH_CORE)		+= crash_core.o
 obj-$(CONFIG_ARM_SDE_INTERFACE)		+= sdei.o
 obj-$(CONFIG_ARM64_PTR_AUTH)		+= pointer_auth.o
-obj-$(CONFIG_SHADOW_CALL_STACK)		+= scs.o
 obj-$(CONFIG_ARM64_MTE)			+= mte.o
 
 obj-y					+= vdso/ probes/
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index b295fb912b12..5c2ac4b5b2da 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -441,7 +441,7 @@ SYM_CODE_END(__swpan_exit_el0)
 
 #ifdef CONFIG_SHADOW_CALL_STACK
 	/* also switch to the irq shadow stack */
-	adr_this_cpu scs_sp, irq_shadow_call_stack, x26
+	ldr_this_cpu scs_sp, irq_shadow_call_stack_ptr, x26
 #endif
 
 9998:
@@ -1097,9 +1097,9 @@ SYM_CODE_START(__sdei_asm_handler)
 #ifdef CONFIG_SHADOW_CALL_STACK
 	/* Use a separate shadow call stack for normal and critical events */
 	cbnz	w4, 3f
-	adr_this_cpu dst=scs_sp, sym=sdei_shadow_call_stack_normal, tmp=x6
+	ldr_this_cpu dst=scs_sp, sym=sdei_shadow_call_stack_normal_ptr, tmp=x6
 	b	4f
-3:	adr_this_cpu dst=scs_sp, sym=sdei_shadow_call_stack_critical, tmp=x6
+3:	ldr_this_cpu dst=scs_sp, sym=sdei_shadow_call_stack_critical_ptr, tmp=x6
 4:
 #endif
 
diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
index 9cf2fb87584a..5b7ada9d9559 100644
--- a/arch/arm64/kernel/irq.c
+++ b/arch/arm64/kernel/irq.c
@@ -17,6 +17,7 @@
 #include <linux/init.h>
 #include <linux/irqchip.h>
 #include <linux/kprobes.h>
+#include <linux/scs.h>
 #include <linux/seq_file.h>
 #include <linux/vmalloc.h>
 #include <asm/daifflags.h>
@@ -27,6 +28,22 @@ DEFINE_PER_CPU(struct nmi_ctx, nmi_contexts);
 
 DEFINE_PER_CPU(unsigned long *, irq_stack_ptr);
 
+
+DECLARE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
+#endif
+
+static void init_irq_scs(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu)
+		per_cpu(irq_shadow_call_stack_ptr, cpu) =
+			scs_alloc(cpu_to_node(cpu));
+}
+
 #ifdef CONFIG_VMAP_STACK
 static void init_irq_stacks(void)
 {
@@ -54,6 +71,8 @@ static void init_irq_stacks(void)
 void __init init_IRQ(void)
 {
 	init_irq_stacks();
+	if (IS_ENABLED(CONFIG_SHADOW_CALL_STACK))
+		init_irq_scs();
 	irqchip_init();
 	if (!handle_arch_irq)
 		panic("No interrupt controller found.");
diff --git a/arch/arm64/kernel/scs.c b/arch/arm64/kernel/scs.c
deleted file mode 100644
index e8f7ff45dd8f..000000000000
--- a/arch/arm64/kernel/scs.c
+++ /dev/null
@@ -1,16 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Shadow Call Stack support.
- *
- * Copyright (C) 2019 Google LLC
- */
-
-#include <linux/percpu.h>
-#include <linux/scs.h>
-
-DEFINE_SCS(irq_shadow_call_stack);
-
-#ifdef CONFIG_ARM_SDE_INTERFACE
-DEFINE_SCS(sdei_shadow_call_stack_normal);
-DEFINE_SCS(sdei_shadow_call_stack_critical);
-#endif
diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index 7689f2031c0c..d12fd786b267 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -7,6 +7,7 @@
 #include <linux/hardirq.h>
 #include <linux/irqflags.h>
 #include <linux/sched/task_stack.h>
+#include <linux/scs.h>
 #include <linux/uaccess.h>
 
 #include <asm/alternative.h>
@@ -37,6 +38,14 @@ DEFINE_PER_CPU(unsigned long *, sdei_stack_normal_ptr);
 DEFINE_PER_CPU(unsigned long *, sdei_stack_critical_ptr);
 #endif
 
+DECLARE_PER_CPU(unsigned long *, sdei_shadow_call_stack_normal_ptr);
+DECLARE_PER_CPU(unsigned long *, sdei_shadow_call_stack_critical_ptr);
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+DEFINE_PER_CPU(unsigned long *, sdei_shadow_call_stack_normal_ptr);
+DEFINE_PER_CPU(unsigned long *, sdei_shadow_call_stack_critical_ptr);
+#endif
+
 static void _free_sdei_stack(unsigned long * __percpu *ptr, int cpu)
 {
 	unsigned long *p;
@@ -90,6 +99,59 @@ static int init_sdei_stacks(void)
 	return err;
 }
 
+static void _free_sdei_scs(unsigned long * __percpu *ptr, int cpu)
+{
+	void *s;
+
+	s = per_cpu(*ptr, cpu);
+	if (s) {
+		per_cpu(*ptr, cpu) = NULL;
+		scs_free(s);
+	}
+}
+
+static void free_sdei_scs(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		_free_sdei_scs(&sdei_shadow_call_stack_normal_ptr, cpu);
+		_free_sdei_scs(&sdei_shadow_call_stack_critical_ptr, cpu);
+	}
+}
+
+static int _init_sdei_scs(unsigned long * __percpu *ptr, int cpu)
+{
+	void *s;
+
+	s = scs_alloc(cpu_to_node(cpu));
+	if (!s)
+		return -ENOMEM;
+	per_cpu(*ptr, cpu) = s;
+
+	return 0;
+}
+
+static int init_sdei_scs(void)
+{
+	int cpu;
+	int err = 0;
+
+	for_each_possible_cpu(cpu) {
+		err = _init_sdei_scs(&sdei_shadow_call_stack_normal_ptr, cpu);
+		if (err)
+			break;
+		err = _init_sdei_scs(&sdei_shadow_call_stack_critical_ptr, cpu);
+		if (err)
+			break;
+	}
+
+	if (err)
+		free_sdei_scs();
+
+	return err;
+}
+
 static bool on_sdei_normal_stack(unsigned long sp, struct stack_info *info)
 {
 	unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_normal_ptr);
@@ -138,6 +200,14 @@ unsigned long sdei_arch_get_entry_point(int conduit)
 			return 0;
 	}
 
+	if (IS_ENABLED(CONFIG_SHADOW_CALL_STACK)) {
+		if (init_sdei_scs()) {
+			if (IS_ENABLED(CONFIG_VMAP_STACK))
+				free_sdei_stacks();
+			return 0;
+		}
+	}
+
 	sdei_exit_mode = (conduit == SMCCC_CONDUIT_HVC) ? SDEI_EXIT_HVC : SDEI_EXIT_SMC;
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
diff --git a/include/linux/scs.h b/include/linux/scs.h
index 2a506c2a16f4..18122d9e17ff 100644
--- a/include/linux/scs.h
+++ b/include/linux/scs.h
@@ -22,10 +22,6 @@
 /* An illegal pointer value to mark the end of the shadow stack. */
 #define SCS_END_MAGIC		(0x5f6UL + POISON_POINTER_DELTA)
 
-/* Allocate a static per-CPU shadow stack */
-#define DEFINE_SCS(name)						\
-	DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], name)	\
-
 #define task_scs(tsk)		(task_thread_info(tsk)->scs_base)
 #define task_scs_sp(tsk)	(task_thread_info(tsk)->scs_sp)
 
-- 
2.29.2.454.gaff20da3a2-goog



* Re: [PATCH v3 0/2] scs: switch to vmapped shadow stacks
From: Will Deacon @ 2020-12-01 11:40 UTC
  To: Catalin Marinas, Sami Tolvanen
  Cc: kernel-team, Will Deacon, linux-kernel, James Morse,
	Mark Rutland, Ard Biesheuvel, linux-arm-kernel, Kees Cook

On Mon, 30 Nov 2020 15:34:40 -0800, Sami Tolvanen wrote:
> As discussed a few months ago [1][2], virtually mapped shadow call stacks
> are better for safety and robustness. This series dusts off the VMAP
> option from the original SCS patch series and switches the kernel to use
> virtually mapped shadow stacks unconditionally when SCS is enabled.
> 
>  [1] https://lore.kernel.org/lkml/20200515172355.GD23334@willie-the-truck/
>  [2] https://lore.kernel.org/lkml/20200427220942.GB80713@google.com/
> 
> [...]

Applied to arm64 (for-next/scs), thanks!

[1/2] scs: switch to vmapped shadow stacks
      https://git.kernel.org/arm64/c/a2abe7cbd8fe
[2/2] arm64: scs: use vmapped IRQ and SDEI shadow stacks
      https://git.kernel.org/arm64/c/ac20ffbb0279

I also threw a patch on top implementing the suggestion I made on v2, so
please take a look if you get a chance.

Cheers,
-- 
Will

https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev


* Re: [PATCH v3 0/2] scs: switch to vmapped shadow stacks
From: Sami Tolvanen @ 2020-12-01 18:37 UTC
  To: Will Deacon
  Cc: Catalin Marinas, Android Kernel Team, LKML, James Morse,
	Mark Rutland, Ard Biesheuvel, linux-arm-kernel, Kees Cook

On Tue, Dec 1, 2020 at 3:40 AM Will Deacon <will@kernel.org> wrote:
>
> On Mon, 30 Nov 2020 15:34:40 -0800, Sami Tolvanen wrote:
> > As discussed a few months ago [1][2], virtually mapped shadow call stacks
> > are better for safety and robustness. This series dusts off the VMAP
> > option from the original SCS patch series and switches the kernel to use
> > virtually mapped shadow stacks unconditionally when SCS is enabled.
> >
> >  [1] https://lore.kernel.org/lkml/20200515172355.GD23334@willie-the-truck/
> >  [2] https://lore.kernel.org/lkml/20200427220942.GB80713@google.com/
> >
> > [...]
>
> Applied to arm64 (for-next/scs), thanks!
>
> [1/2] scs: switch to vmapped shadow stacks
>       https://git.kernel.org/arm64/c/a2abe7cbd8fe
> [2/2] arm64: scs: use vmapped IRQ and SDEI shadow stacks
>       https://git.kernel.org/arm64/c/ac20ffbb0279
>
> I also threw a patch on top implementing the suggestion I made on v2, so
> please take a look if you get a chance.

Looks good to me, thanks for cleaning that up!

Sami

