* [PATCH v2 0/2] arm64: Only call into preempt_schedule() if need_resched()
@ 2018-11-30 17:34 Will Deacon
  2018-11-30 17:34 ` [PATCH v2 1/2] preempt: Move PREEMPT_NEED_RESCHED definition into arch code Will Deacon
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Will Deacon @ 2018-11-30 17:34 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, ard.biesheuvel, catalin.marinas, rml, tglx, peterz,
	schwidefsky, Will Deacon

Hi all,

This is version two of the patches I originally posted here:

  http://lkml.kernel.org/r/1543347902-21170-1-git-send-email-will.deacon@arm.com

The only change since v1 is that __preempt_count_dec_and_test() now
reloads the need_resched flag if it initially saw that it was set. This
resolves the issue spotted by Peter, where an IRQ coming in during the
decrement can cause a reschedule to be missed.
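
To make the race easier to see outside the kernel, here's a rough
userspace sketch of the new dec-and-test logic (illustrative only:
volatile stands in for READ_ONCE/WRITE_ONCE, and the names are
simplified relative to the real patch below):

  #include <stdbool.h>
  #include <stdint.h>

  /* Little-endian stand-in for the arm64 thread_info layout. */
  union preempt {
          uint64_t preempt_count;
          struct {
                  uint32_t count;        /* preempt count proper */
                  uint32_t need_resched; /* inverted: 0 => resched due */
          };
  };

  static volatile union preempt ti = { .preempt_count = (1ULL << 32) | 1 };

  static bool dec_and_test(void)
  {
          uint64_t pc = ti.preempt_count; /* one 64-bit load */

          pc--;
          ti.count = (uint32_t)pc;        /* write back the count only */

          /*
           * v1 returned !pc here. An IRQ landing between the load
           * and the store can zero need_resched in the high word to
           * request a reschedule; pc is then stale, so v2 reloads
           * before declaring that there's no work to do.
           */
          return !pc || !ti.preempt_count;
  }

  int main(void)
  {
          return dec_and_test(); /* 0 here: nothing to reschedule */
  }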

Feedback welcome.

Will

--->8

Will Deacon (2):
  preempt: Move PREEMPT_NEED_RESCHED definition into arch code
  arm64: preempt: Provide our own implementation of asm/preempt.h

 arch/arm64/include/asm/Kbuild        |  1 -
 arch/arm64/include/asm/preempt.h     | 88 ++++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/thread_info.h | 13 +++++-
 arch/s390/include/asm/preempt.h      |  2 +
 arch/x86/include/asm/preempt.h       |  3 ++
 include/linux/preempt.h              |  3 --
 6 files changed, 105 insertions(+), 5 deletions(-)
 create mode 100644 arch/arm64/include/asm/preempt.h

-- 
2.1.4


* [PATCH v2 1/2] preempt: Move PREEMPT_NEED_RESCHED definition into arch code
  2018-11-30 17:34 [PATCH v2 0/2] arm64: Only call into preempt_schedule() if need_resched() Will Deacon
@ 2018-11-30 17:34 ` Will Deacon
  2018-11-30 17:34 ` [PATCH v2 2/2] arm64: preempt: Provide our own implementation of asm/preempt.h Will Deacon
  2018-12-06 15:08 ` [PATCH v2 0/2] arm64: Only call into preempt_schedule() if need_resched() Peter Zijlstra
  2 siblings, 0 replies; 6+ messages in thread
From: Will Deacon @ 2018-11-30 17:34 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, ard.biesheuvel, catalin.marinas, rml, tglx, peterz,
	schwidefsky, Will Deacon

PREEMPT_NEED_RESCHED is never used directly, so move it into the arch
code, where it can potentially be implemented either as a different bit
in the preempt count or as an entirely separate entity.
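
For background, the reason x86 and s390 fold an inverted need_resched
flag into the preempt count at all is so that the preempt_enable()
path can test "count hit zero and a reschedule is wanted" with a
single decrement. A minimal sketch of the idea (plain C, not the real
arch helpers):

  #include <stdbool.h>

  /* Inverted flag: bit set means "no resched wanted", so a raw
   * count of zero means preemptible AND need_resched at once. */
  #define PREEMPT_NEED_RESCHED 0x80000000u

  static inline bool preempt_dec_and_test(unsigned int *pc)
  {
          return --(*pc) == 0; /* one decrement tests both conditions */
  }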

Cc: Robert Love <rml@tech9.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/s390/include/asm/preempt.h | 2 ++
 arch/x86/include/asm/preempt.h  | 3 +++
 include/linux/preempt.h         | 3 ---
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/s390/include/asm/preempt.h b/arch/s390/include/asm/preempt.h
index 23a14d187fb1..b5ea9e14c017 100644
--- a/arch/s390/include/asm/preempt.h
+++ b/arch/s390/include/asm/preempt.h
@@ -8,6 +8,8 @@
 
 #ifdef CONFIG_HAVE_MARCH_Z196_FEATURES
 
+/* We use the MSB mostly because it's available */
+#define PREEMPT_NEED_RESCHED	0x80000000
 #define PREEMPT_ENABLED	(0 + PREEMPT_NEED_RESCHED)
 
 static inline int preempt_count(void)
diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index 90cb2f36c042..99a7fa9ab0a3 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -8,6 +8,9 @@
 
 DECLARE_PER_CPU(int, __preempt_count);
 
+/* We use the MSB mostly because it's available */
+#define PREEMPT_NEED_RESCHED	0x80000000
+
 /*
  * We use the PREEMPT_NEED_RESCHED bit as an inverted NEED_RESCHED such
  * that a decrement hitting 0 means we can and should reschedule.
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index c01813c3fbe9..dd92b1a93919 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -53,9 +53,6 @@
 
 #define SOFTIRQ_DISABLE_OFFSET	(2 * SOFTIRQ_OFFSET)
 
-/* We use the MSB mostly because it's available */
-#define PREEMPT_NEED_RESCHED	0x80000000
-
 #define PREEMPT_DISABLED	(PREEMPT_DISABLE_OFFSET + PREEMPT_ENABLED)
 
 /*
-- 
2.1.4


* [PATCH v2 2/2] arm64: preempt: Provide our own implementation of asm/preempt.h
  2018-11-30 17:34 [PATCH v2 0/2] arm64: Only call into preempt_schedule() if need_resched() Will Deacon
  2018-11-30 17:34 ` [PATCH v2 1/2] preempt: Move PREEMPT_NEED_RESCHED definition into arch code Will Deacon
@ 2018-11-30 17:34 ` Will Deacon
  2018-12-04 16:09   ` Ard Biesheuvel
  2018-12-06 15:08 ` [PATCH v2 0/2] arm64: Only call into preempt_schedule() if need_resched() Peter Zijlstra
  2 siblings, 1 reply; 6+ messages in thread
From: Will Deacon @ 2018-11-30 17:34 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, ard.biesheuvel, catalin.marinas, rml, tglx, peterz,
	schwidefsky, Will Deacon

The asm-generic/preempt.h implementation doesn't make use of the
PREEMPT_NEED_RESCHED flag, since this can interact badly with load/store
architectures which rely on the preempt_count word being unchanged across
an interrupt.

However, since we're a 64-bit architecture and the preempt count is
only 32 bits wide, we can simply pack it next to the resched flag and
load the whole thing in one go, so that a dec-and-test operation doesn't
need to load twice.
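
Concretely, with the little-endian layout used below (count in bits
31:0, inverted need_resched flag in bit 32, so a set bit means no
reschedule is pending):

  preempt_count  count  bit 32  meaning             after decrement
  0x100000001        1       1  no resched pending  0x100000000 != 0: skip
  0x000000001        1       0  resched pending     0x000000000 == 0: preempt_schedule()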

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/Kbuild        |  1 -
 arch/arm64/include/asm/preempt.h     | 88 ++++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/thread_info.h | 13 +++++-
 3 files changed, 100 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/include/asm/preempt.h

diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild
index 6cd5d77b6b44..33498f900390 100644
--- a/arch/arm64/include/asm/Kbuild
+++ b/arch/arm64/include/asm/Kbuild
@@ -14,7 +14,6 @@ generic-y += local64.h
 generic-y += mcs_spinlock.h
 generic-y += mm-arch-hooks.h
 generic-y += msi.h
-generic-y += preempt.h
 generic-y += qrwlock.h
 generic-y += qspinlock.h
 generic-y += rwsem.h
diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
new file mode 100644
index 000000000000..f1c1398cf065
--- /dev/null
+++ b/arch/arm64/include/asm/preempt.h
@@ -0,0 +1,88 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_PREEMPT_H
+#define __ASM_PREEMPT_H
+
+#include <linux/thread_info.h>
+
+#define PREEMPT_NEED_RESCHED	BIT(32)
+#define PREEMPT_ENABLED	(PREEMPT_NEED_RESCHED)
+
+static inline int preempt_count(void)
+{
+	return READ_ONCE(current_thread_info()->preempt.count);
+}
+
+static inline void preempt_count_set(u64 pc)
+{
+	/* Preserve existing value of PREEMPT_NEED_RESCHED */
+	WRITE_ONCE(current_thread_info()->preempt.count, pc);
+}
+
+#define init_task_preempt_count(p) do { \
+	task_thread_info(p)->preempt_count = FORK_PREEMPT_COUNT; \
+} while (0)
+
+#define init_idle_preempt_count(p, cpu) do { \
+	task_thread_info(p)->preempt_count = PREEMPT_ENABLED; \
+} while (0)
+
+static inline void set_preempt_need_resched(void)
+{
+	current_thread_info()->preempt.need_resched = 0;
+}
+
+static inline void clear_preempt_need_resched(void)
+{
+	current_thread_info()->preempt.need_resched = 1;
+}
+
+static inline bool test_preempt_need_resched(void)
+{
+	return !current_thread_info()->preempt.need_resched;
+}
+
+static inline void __preempt_count_add(int val)
+{
+	u32 pc = READ_ONCE(current_thread_info()->preempt.count);
+	pc += val;
+	WRITE_ONCE(current_thread_info()->preempt.count, pc);
+}
+
+static inline void __preempt_count_sub(int val)
+{
+	u32 pc = READ_ONCE(current_thread_info()->preempt.count);
+	pc -= val;
+	WRITE_ONCE(current_thread_info()->preempt.count, pc);
+}
+
+static inline bool __preempt_count_dec_and_test(void)
+{
+	struct thread_info *ti = current_thread_info();
+	u64 pc = READ_ONCE(ti->preempt_count);
+
+	WRITE_ONCE(ti->preempt.count, --pc);
+
+	/*
+	 * If we wrote back all zeroes, then we're preemptible and in
+	 * need of a reschedule. Otherwise, we need to reload the
+	 * preempt_count in case the need_resched flag was cleared by an
+	 * interrupt occurring between the non-atomic READ_ONCE/WRITE_ONCE
+	 * pair.
+	 */
+	return !pc || !READ_ONCE(ti->preempt_count);
+}
+
+static inline bool should_resched(int preempt_offset)
+{
+	u64 pc = READ_ONCE(current_thread_info()->preempt_count);
+	return pc == preempt_offset;
+}
+
+#ifdef CONFIG_PREEMPT
+void preempt_schedule(void);
+#define __preempt_schedule() preempt_schedule()
+void preempt_schedule_notrace(void);
+#define __preempt_schedule_notrace() preempt_schedule_notrace()
+#endif /* CONFIG_PREEMPT */
+
+#endif /* __ASM_PREEMPT_H */
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index cb2c10a8f0a8..bbca68b54732 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -42,7 +42,18 @@ struct thread_info {
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	u64			ttbr0;		/* saved TTBR0_EL1 */
 #endif
-	int			preempt_count;	/* 0 => preemptable, <0 => bug */
+	union {
+		u64		preempt_count;	/* 0 => preemptible, <0 => bug */
+		struct {
+#ifdef CONFIG_CPU_BIG_ENDIAN
+			u32	need_resched;
+			u32	count;
+#else
+			u32	count;
+			u32	need_resched;
+#endif
+		} preempt;
+	};
 };
 
 #define thread_saved_pc(tsk)	\
-- 
2.1.4


* Re: [PATCH v2 2/2] arm64: preempt: Provide our own implementation of asm/preempt.h
  2018-11-30 17:34 ` [PATCH v2 2/2] arm64: preempt: Provide our own implementation of asm/preempt.h Will Deacon
@ 2018-12-04 16:09   ` Ard Biesheuvel
  0 siblings, 0 replies; 6+ messages in thread
From: Ard Biesheuvel @ 2018-12-04 16:09 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, Linux Kernel Mailing List, Catalin Marinas,
	rml, Thomas Gleixner, Peter Zijlstra, Martin Schwidefsky

On Fri, 30 Nov 2018 at 18:34, Will Deacon <will.deacon@arm.com> wrote:
>
> The asm-generic/preempt.h implementation doesn't make use of the
> PREEMPT_NEED_RESCHED flag, since this can interact badly with load/store
> architectures which rely on the preempt_count word being unchanged across
> an interrupt.
>
> However, since we're a 64-bit architecture and the preempt count is
> only 32 bits wide, we can simply pack it next to the resched flag and
> load the whole thing in one go, so that a dec-and-test operation doesn't
> need to load twice.
>
> Signed-off-by: Will Deacon <will.deacon@arm.com>

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

One nit below.

> ---
>  arch/arm64/include/asm/Kbuild        |  1 -
>  arch/arm64/include/asm/preempt.h     | 88 ++++++++++++++++++++++++++++++++++++
>  arch/arm64/include/asm/thread_info.h | 13 +++++-
>  3 files changed, 100 insertions(+), 2 deletions(-)
>  create mode 100644 arch/arm64/include/asm/preempt.h
>
> diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild
> index 6cd5d77b6b44..33498f900390 100644
> --- a/arch/arm64/include/asm/Kbuild
> +++ b/arch/arm64/include/asm/Kbuild
> @@ -14,7 +14,6 @@ generic-y += local64.h
>  generic-y += mcs_spinlock.h
>  generic-y += mm-arch-hooks.h
>  generic-y += msi.h
> -generic-y += preempt.h
>  generic-y += qrwlock.h
>  generic-y += qspinlock.h
>  generic-y += rwsem.h
> diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
> new file mode 100644
> index 000000000000..f1c1398cf065
> --- /dev/null
> +++ b/arch/arm64/include/asm/preempt.h
> @@ -0,0 +1,88 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __ASM_PREEMPT_H
> +#define __ASM_PREEMPT_H
> +
> +#include <linux/thread_info.h>
> +
> +#define PREEMPT_NEED_RESCHED   BIT(32)
> +#define PREEMPT_ENABLED        (PREEMPT_NEED_RESCHED)
> +
> +static inline int preempt_count(void)
> +{
> +       return READ_ONCE(current_thread_info()->preempt.count);
> +}
> +
> +static inline void preempt_count_set(u64 pc)
> +{
> +       /* Preserve existing value of PREEMPT_NEED_RESCHED */
> +       WRITE_ONCE(current_thread_info()->preempt.count, pc);
> +}
> +
> +#define init_task_preempt_count(p) do { \
> +       task_thread_info(p)->preempt_count = FORK_PREEMPT_COUNT; \
> +} while (0)
> +
> +#define init_idle_preempt_count(p, cpu) do { \
> +       task_thread_info(p)->preempt_count = PREEMPT_ENABLED; \
> +} while (0)
> +
> +static inline void set_preempt_need_resched(void)
> +{
> +       current_thread_info()->preempt.need_resched = 0;
> +}
> +
> +static inline void clear_preempt_need_resched(void)
> +{
> +       current_thread_info()->preempt.need_resched = 1;
> +}
> +
> +static inline bool test_preempt_need_resched(void)
> +{
> +       return !current_thread_info()->preempt.need_resched;
> +}
> +
> +static inline void __preempt_count_add(int val)
> +{
> +       u32 pc = READ_ONCE(current_thread_info()->preempt.count);
> +       pc += val;
> +       WRITE_ONCE(current_thread_info()->preempt.count, pc);
> +}
> +
> +static inline void __preempt_count_sub(int val)
> +{
> +       u32 pc = READ_ONCE(current_thread_info()->preempt.count);
> +       pc -= val;
> +       WRITE_ONCE(current_thread_info()->preempt.count, pc);
> +}
> +
> +static inline bool __preempt_count_dec_and_test(void)
> +{
> +       struct thread_info *ti = current_thread_info();
> +       u64 pc = READ_ONCE(ti->preempt_count);
> +
> +       WRITE_ONCE(ti->preempt.count, --pc);

The difference between preempt.count and preempt_count doesn't really
stand out visually, so perhaps add a comment here that the truncation
is intentional?
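
Something like this, perhaps:

      /*
       * Intentional truncation: write back only the 32-bit count,
       * so an IRQ updating need_resched in the other half of the
       * word can't be lost.
       */
      WRITE_ONCE(ti->preempt.count, --pc);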

> +
> +       /*
> +        * If we wrote back all zeroes, then we're preemptible and in
> +        * need of a reschedule. Otherwise, we need to reload the
> +        * preempt_count in case the need_resched flag was cleared by an
> +        * interrupt occurring between the non-atomic READ_ONCE/WRITE_ONCE
> +        * pair.
> +        */
> +       return !pc || !READ_ONCE(ti->preempt_count);
> +}
> +
> +static inline bool should_resched(int preempt_offset)
> +{
> +       u64 pc = READ_ONCE(current_thread_info()->preempt_count);
> +       return pc == preempt_offset;
> +}
> +
> +#ifdef CONFIG_PREEMPT
> +void preempt_schedule(void);
> +#define __preempt_schedule() preempt_schedule()
> +void preempt_schedule_notrace(void);
> +#define __preempt_schedule_notrace() preempt_schedule_notrace()
> +#endif /* CONFIG_PREEMPT */
> +
> +#endif /* __ASM_PREEMPT_H */
> diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
> index cb2c10a8f0a8..bbca68b54732 100644
> --- a/arch/arm64/include/asm/thread_info.h
> +++ b/arch/arm64/include/asm/thread_info.h
> @@ -42,7 +42,18 @@ struct thread_info {
>  #ifdef CONFIG_ARM64_SW_TTBR0_PAN
>         u64                     ttbr0;          /* saved TTBR0_EL1 */
>  #endif
> -       int                     preempt_count;  /* 0 => preemptable, <0 => bug */
> +       union {
> +               u64             preempt_count;  /* 0 => preemptible, <0 => bug */
> +               struct {
> +#ifdef CONFIG_CPU_BIG_ENDIAN
> +                       u32     need_resched;
> +                       u32     count;
> +#else
> +                       u32     count;
> +                       u32     need_resched;
> +#endif
> +               } preempt;
> +       };
>  };
>
>  #define thread_saved_pc(tsk)   \
> --
> 2.1.4
>

* Re: [PATCH v2 0/2] arm64: Only call into preempt_schedule() if need_resched()
  2018-11-30 17:34 [PATCH v2 0/2] arm64: Only call into preempt_schedule() if need_resched() Will Deacon
  2018-11-30 17:34 ` [PATCH v2 1/2] preempt: Move PREEMPT_NEED_RESCHED definition into arch code Will Deacon
  2018-11-30 17:34 ` [PATCH v2 2/2] arm64: preempt: Provide our own implementation of asm/preempt.h Will Deacon
@ 2018-12-06 15:08 ` Peter Zijlstra
  2018-12-06 19:18   ` Will Deacon
  2 siblings, 1 reply; 6+ messages in thread
From: Peter Zijlstra @ 2018-12-06 15:08 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-kernel, ard.biesheuvel, catalin.marinas,
	rml, tglx, schwidefsky

On Fri, Nov 30, 2018 at 05:34:29PM +0000, Will Deacon wrote:
> Hi all,
> 
> This is version two of the patches I originally posted here:
> 
>   http://lkml.kernel.org/r/1543347902-21170-1-git-send-email-will.deacon@arm.com
> 
> The only change since v1 is that __preempt_count_dec_and_test() now
> reloads the need_resched flag if it initially saw that it was set. This
> resolves the issue spotted by Peter, where an IRQ coming in during the
> decrement can cause a reschedule to be missed.

Yes, I think this one will work, so:

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>

However, this leaves me wondering whether the sequence is actually much
better than what you had.

I suppose there's a win due to cache locality -- you only have to load a
single line -- but I'm thinking that on pure instruction count, you're
not actually winning much.

* Re: [PATCH v2 0/2] arm64: Only call into preempt_schedule() if need_resched()
  2018-12-06 15:08 ` [PATCH v2 0/2] arm64: Only call into preempt_schedule() if need_resched() Peter Zijlstra
@ 2018-12-06 19:18   ` Will Deacon
  0 siblings, 0 replies; 6+ messages in thread
From: Will Deacon @ 2018-12-06 19:18 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-arm-kernel, linux-kernel, ard.biesheuvel, catalin.marinas,
	rml, tglx, schwidefsky

Hi Peter,

On Thu, Dec 06, 2018 at 04:08:50PM +0100, Peter Zijlstra wrote:
> On Fri, Nov 30, 2018 at 05:34:29PM +0000, Will Deacon wrote:
> > This is version two of the patches I originally posted here:
> > 
> >   http://lkml.kernel.org/r/1543347902-21170-1-git-send-email-will.deacon@arm.com
> > 
> > The only change since v1 is that __preempt_count_dec_and_test() now
> > reloads the need_resched flag if it initially saw that it was set. This
> > resolves the issue spotted by Peter, where an IRQ coming in during the
> > decrement can cause a reschedule to be missed.
> 
> Yes, I think this one will work, so:
> 
> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Thanks!

> However, this leaves me wondering whether the sequence is actually
> much better than what you had.
> 
> I suppose there's a win due to cache locality -- you only have to load a
> single line -- but I'm thinking that on pure instruction count, you're
> not actually winning much.

The fast path is still slightly shorter in terms of executed
instructions (by my count, eleven against twelve in the common 1 -> 0,
no-resched case shown below), but you're right that the win is likely
to come from cache locality: everything hits in the cache or the store
buffer when we're not preempting, so we should run through the code
reasonably quickly and avoid an unnecessary call to preempt_schedule().

Will

--->8

// Before
  20:   a9bf7bfd        stp     x29, x30, [sp, #-16]!
  24:   910003fd        mov     x29, sp
  28:   d5384101        mrs     x1, sp_el0
  2c:   b9401020        ldr     w0, [x1, #16]
  30:   51000400        sub     w0, w0, #0x1
  34:   b9001020        str     w0, [x1, #16]
  38:   350000a0        cbnz    w0, 4c <preempt_enable+0x2c>
  3c:   f9400020        ldr     x0, [x1]
  40:   721f001f        tst     w0, #0x2
  44:   54000040        b.eq    4c <preempt_enable+0x2c>  // b.none
  48:   94000000        bl      0 <preempt_schedule>
  4c:   a8c17bfd        ldp     x29, x30, [sp], #16
  50:   d65f03c0        ret

// After
  20:   a9bf7bfd        stp     x29, x30, [sp, #-16]!
  24:   910003fd        mov     x29, sp
  28:   d5384101        mrs     x1, sp_el0
  2c:   f9400820        ldr     x0, [x1, #16]
  30:   d1000400        sub     x0, x0, #0x1
  34:   b9001020        str     w0, [x1, #16]
  38:   b5000080        cbnz    x0, 48 <preempt_enable+0x28>
  3c:   94000000        bl      0 <preempt_schedule>
  40:   a8c17bfd        ldp     x29, x30, [sp], #16
  44:   d65f03c0        ret
  48:   f9400820        ldr     x0, [x1, #16]
  4c:   b5ffffa0        cbnz    x0, 40 <preempt_enable+0x20>
  50:   94000000        bl      0 <preempt_schedule>
  54:   17fffffb        b       40 <will_preempt_enable+0x20>
