All of lore.kernel.org
* [PATCH v2 1/3] drivers/xen/preempt: use need_resched() instead of should_resched()
@ 2015-07-15  9:52 ` Konstantin Khlebnikov
  0 siblings, 0 replies; 24+ messages in thread
From: Konstantin Khlebnikov @ 2015-07-15  9:52 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-arch, x86, linux-kernel, kvm-ppc, Alexander Graf,
	Paul Mackerras, David Vrabel, xen-devel, Boris Ostrovsky,
	linuxppc-dev

This code is used only when CONFIG_PREEMPT=n, and only in non-atomic context:
xen_in_preemptible_hcall is set only in privcmd_ioctl_hypercall().
Thus preempt_count is zero there, so should_resched() is equivalent to need_resched().

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
---
 drivers/xen/preempt.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/preempt.c b/drivers/xen/preempt.c
index a1800c150839..08cb419eb4e6 100644
--- a/drivers/xen/preempt.c
+++ b/drivers/xen/preempt.c
@@ -31,7 +31,7 @@ EXPORT_SYMBOL_GPL(xen_in_preemptible_hcall);
 asmlinkage __visible void xen_maybe_preempt_hcall(void)
 {
 	if (unlikely(__this_cpu_read(xen_in_preemptible_hcall)
-		     && should_resched())) {
+		     && need_resched())) {
 		/*
 		 * Clear flag as we may be rescheduled on a different
 		 * cpu.


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 2/3] KVM: PPC: Book3S HV: Use need_resched() instead of should_resched()
  2015-07-15  9:52 ` Konstantin Khlebnikov
@ 2015-07-15  9:52   ` Konstantin Khlebnikov
  -1 siblings, 0 replies; 24+ messages in thread
From: Konstantin Khlebnikov @ 2015-07-15  9:52 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-arch, x86, linux-kernel, kvm-ppc, Alexander Graf,
	Paul Mackerras, David Vrabel, xen-devel, Boris Ostrovsky,
	linuxppc-dev

should_resched() is equivalent to (!preempt_count() && need_resched()).
In a preemptive kernel, preempt_count is non-zero here because vc->lock is
held, so should_resched() can never return true.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
---
 arch/powerpc/kvm/book3s_hv.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 68d067ad4222..a9f753fb73a8 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2178,7 +2178,7 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 		vc->runner = vcpu;
 		if (n_ceded == vc->n_runnable) {
 			kvmppc_vcore_blocked(vc);
-		} else if (should_resched()) {
+		} else if (need_resched()) {
 			vc->vcore_state = VCORE_PREEMPT;
 			/* Let something else run */
 			cond_resched_lock(&vc->lock);


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 3/3] sched/preempt: fix cond_resched_lock() and cond_resched_softirq()
  2015-07-15  9:52 ` Konstantin Khlebnikov
@ 2015-07-15  9:52   ` Konstantin Khlebnikov
  -1 siblings, 0 replies; 24+ messages in thread
From: Konstantin Khlebnikov @ 2015-07-15  9:52 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-arch, x86, linux-kernel, kvm-ppc, Alexander Graf,
	Paul Mackerras, David Vrabel, xen-devel, Boris Ostrovsky,
	linuxppc-dev

These functions check should_resched() before unlocking the spinlock /
re-enabling bottom halves: preempt_count is always non-zero there, so
should_resched() always returns false. cond_resched_lock() only worked
when spin_needbreak was set.

This patch adds a "preempt_offset" argument to should_resched().

The preempt_count offset constants used for it:

PREEMPT_DISABLE_OFFSET  - offset after preempt_disable()
PREEMPT_LOCK_OFFSET     - offset after spin_lock()
SOFTIRQ_DISABLE_OFFSET  - offset after local_bh_disable()
SOFTIRQ_LOCK_OFFSET     - offset after spin_lock_bh()

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
---
 arch/x86/include/asm/preempt.h |    4 ++--
 include/asm-generic/preempt.h  |    5 +++--
 include/linux/preempt.h        |   19 ++++++++++++++-----
 include/linux/sched.h          |    6 ------
 kernel/sched/core.c            |    6 +++---
 5 files changed, 22 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index dca71714f860..b12f81022a6b 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -90,9 +90,9 @@ static __always_inline bool __preempt_count_dec_and_test(void)
 /*
  * Returns true when we need to resched and can (barring IRQ state).
  */
-static __always_inline bool should_resched(void)
+static __always_inline bool should_resched(int preempt_offset)
 {
-	return unlikely(!raw_cpu_read_4(__preempt_count));
+	return unlikely(raw_cpu_read_4(__preempt_count) == preempt_offset);
 }
 
 #ifdef CONFIG_PREEMPT
diff --git a/include/asm-generic/preempt.h b/include/asm-generic/preempt.h
index d0a7a4753db2..0bec580a4885 100644
--- a/include/asm-generic/preempt.h
+++ b/include/asm-generic/preempt.h
@@ -71,9 +71,10 @@ static __always_inline bool __preempt_count_dec_and_test(void)
 /*
  * Returns true when we need to resched and can (barring IRQ state).
  */
-static __always_inline bool should_resched(void)
+static __always_inline bool should_resched(int preempt_offset)
 {
-	return unlikely(!preempt_count() && tif_need_resched());
+	return unlikely(preempt_count() == preempt_offset &&
+			tif_need_resched());
 }
 
 #ifdef CONFIG_PREEMPT
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 84991f185173..bea8dd8ff5e0 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -84,13 +84,21 @@
  */
 #define in_nmi()	(preempt_count() & NMI_MASK)
 
+/*
+ * The preempt_count offset after preempt_disable();
+ */
 #if defined(CONFIG_PREEMPT_COUNT)
-# define PREEMPT_DISABLE_OFFSET 1
+# define PREEMPT_DISABLE_OFFSET	PREEMPT_OFFSET
 #else
-# define PREEMPT_DISABLE_OFFSET 0
+# define PREEMPT_DISABLE_OFFSET	0
 #endif
 
 /*
+ * The preempt_count offset after spin_lock()
+ */
+#define PREEMPT_LOCK_OFFSET	PREEMPT_DISABLE_OFFSET
+
+/*
  * The preempt_count offset needed for things like:
  *
  *  spin_lock_bh()
@@ -103,7 +111,7 @@
  *
  * Work as expected.
  */
-#define SOFTIRQ_LOCK_OFFSET (SOFTIRQ_DISABLE_OFFSET + PREEMPT_DISABLE_OFFSET)
+#define SOFTIRQ_LOCK_OFFSET (SOFTIRQ_DISABLE_OFFSET + PREEMPT_LOCK_OFFSET)
 
 /*
  * Are we running in atomic context?  WARNING: this macro cannot
@@ -124,7 +132,8 @@
 #if defined(CONFIG_DEBUG_PREEMPT) || defined(CONFIG_PREEMPT_TRACER)
 extern void preempt_count_add(int val);
 extern void preempt_count_sub(int val);
-#define preempt_count_dec_and_test() ({ preempt_count_sub(1); should_resched(); })
+#define preempt_count_dec_and_test() \
+	({ preempt_count_sub(1); should_resched(0); })
 #else
 #define preempt_count_add(val)	__preempt_count_add(val)
 #define preempt_count_sub(val)	__preempt_count_sub(val)
@@ -184,7 +193,7 @@ do { \
 
 #define preempt_check_resched() \
 do { \
-	if (should_resched()) \
+	if (should_resched(0)) \
 		__preempt_schedule(); \
 } while (0)
 
diff --git a/include/linux/sched.h b/include/linux/sched.h
index ae21f1591615..a8e9b17acdee 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2885,12 +2885,6 @@ extern int _cond_resched(void);
 
 extern int __cond_resched_lock(spinlock_t *lock);
 
-#ifdef CONFIG_PREEMPT_COUNT
-#define PREEMPT_LOCK_OFFSET	PREEMPT_OFFSET
-#else
-#define PREEMPT_LOCK_OFFSET	0
-#endif
-
 #define cond_resched_lock(lock) ({				\
 	___might_sleep(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);\
 	__cond_resched_lock(lock);				\
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 78b4bad10081..d9a4d93dc879 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4492,7 +4492,7 @@ SYSCALL_DEFINE0(sched_yield)
 
 int __sched _cond_resched(void)
 {
-	if (should_resched()) {
+	if (should_resched(0)) {
 		preempt_schedule_common();
 		return 1;
 	}
@@ -4510,7 +4510,7 @@ EXPORT_SYMBOL(_cond_resched);
  */
 int __cond_resched_lock(spinlock_t *lock)
 {
-	int resched = should_resched();
+	int resched = should_resched(PREEMPT_LOCK_OFFSET);
 	int ret = 0;
 
 	lockdep_assert_held(lock);
@@ -4532,7 +4532,7 @@ int __sched __cond_resched_softirq(void)
 {
 	BUG_ON(!in_softirq());
 
-	if (should_resched()) {
+	if (should_resched(SOFTIRQ_DISABLE_OFFSET)) {
 		local_bh_enable();
 		preempt_schedule_common();
 		local_bh_disable();


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 3/3] sched/preempt: fix cond_resched_lock() and cond_resched_softirq()
  2015-07-15  9:52   ` Konstantin Khlebnikov
@ 2015-07-15 12:16     ` Eric Dumazet
  -1 siblings, 0 replies; 24+ messages in thread
From: Eric Dumazet @ 2015-07-15 12:16 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: Peter Zijlstra, linux-arch, x86, linux-kernel, kvm-ppc,
	Alexander Graf, Paul Mackerras, David Vrabel, xen-devel,
	Boris Ostrovsky, linuxppc-dev

On Wed, 2015-07-15 at 12:52 +0300, Konstantin Khlebnikov wrote:
> These functions check should_resched() before unlocking spinlock/bh-enable:
> preempt_count always non-zero => should_resched() always returns false.
> cond_resched_lock() worked iff spin_needbreak is set.

Interesting, this definitely used to work (linux-3.11)

Any idea of which commit broke things ?




^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 3/3] sched/preempt: fix cond_resched_lock() and cond_resched_softirq()
  2015-07-15 12:16     ` Eric Dumazet
@ 2015-07-15 12:52       ` Konstantin Khlebnikov
  -1 siblings, 0 replies; 24+ messages in thread
From: Konstantin Khlebnikov @ 2015-07-15 12:52 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Peter Zijlstra, linux-arch, x86, linux-kernel, kvm-ppc,
	Alexander Graf, Paul Mackerras, David Vrabel, xen-devel,
	Boris Ostrovsky, linuxppc-dev

On 15.07.2015 15:16, Eric Dumazet wrote:
> On Wed, 2015-07-15 at 12:52 +0300, Konstantin Khlebnikov wrote:
>> These functions check should_resched() before unlocking spinlock/bh-enable:
>> preempt_count always non-zero => should_resched() always returns false.
>> cond_resched_lock() worked iff spin_needbreak is set.
>
> Interesting, this definitely used to work (linux-3.11)
>
> Any idea of which commit broke things ?
>

Searching... done

This one: bdb43806589096ac4272fe1307e789846ac08d7c in v3.13

before

-static inline int should_resched(void)
-{
-       return need_resched() && !(preempt_count() & PREEMPT_ACTIVE);
-}

after

+static __always_inline bool should_resched(void)
+{
+       return unlikely(!*preempt_count_ptr());
+}


So,

Fixes: bdb438065890 ("sched: Extract the basic add/sub preempt_count 
modifiers")

-- 
Konstantin

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 3/3] sched/preempt: fix cond_resched_lock() and cond_resched_softirq()
  2015-07-15 12:52       ` Konstantin Khlebnikov
@ 2015-07-15 13:35         ` Peter Zijlstra
  -1 siblings, 0 replies; 24+ messages in thread
From: Peter Zijlstra @ 2015-07-15 13:35 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: Eric Dumazet, linux-arch, x86, linux-kernel, kvm-ppc,
	Alexander Graf, Paul Mackerras, David Vrabel, xen-devel,
	Boris Ostrovsky, linuxppc-dev

On Wed, Jul 15, 2015 at 03:52:34PM +0300, Konstantin Khlebnikov wrote:
> On 15.07.2015 15:16, Eric Dumazet wrote:
> >On Wed, 2015-07-15 at 12:52 +0300, Konstantin Khlebnikov wrote:
> >>These functions check should_resched() before unlocking spinlock/bh-enable:
> >>preempt_count always non-zero => should_resched() always returns false.
> >>cond_resched_lock() worked iff spin_needbreak is set.
> >
> >Interesting, this definitely used to work (linux-3.11)
> >
> >Any idea of which commit broke things ?
> >
> 
> Searching... done
> 
> This one: bdb43806589096ac4272fe1307e789846ac08d7c in v3.13
> 
> before
> 
> -static inline int should_resched(void)
> -{
> -       return need_resched() && !(preempt_count() & PREEMPT_ACTIVE);
> -}
> 
> after
> 
> +static __always_inline bool should_resched(void)
> +{
> +       return unlikely(!*preempt_count_ptr());
> +}

Argh, indeed!

> 
> So,
> 
> Fixes: bdb438065890 ("sched: Extract the basic add/sub preempt_count
> modifiers")

Thanks!

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Xen-devel] [PATCH v2 1/3] drivers/xen/preempt: use need_resched() instead of should_resched()
  2015-07-15  9:52 ` Konstantin Khlebnikov
  (?)
@ 2015-07-20 13:41   ` David Vrabel
  -1 siblings, 0 replies; 24+ messages in thread
From: David Vrabel @ 2015-07-20 13:41 UTC (permalink / raw)
  To: Konstantin Khlebnikov, Peter Zijlstra
  Cc: linux-arch, x86, linux-kernel, kvm-ppc, Alexander Graf,
	Paul Mackerras, David Vrabel, xen-devel, Boris Ostrovsky,
	linuxppc-dev

On 15/07/15 10:52, Konstantin Khlebnikov wrote:
> This code is used only when CONFIG_PREEMPT=n and only in non-atomic context:
> xen_in_preemptible_hcall is set only in privcmd_ioctl_hypercall().
> Thus preempt_count is zero and should_resched() is equal to need_resched().

Applied to for-linus-4.3, thanks.

David

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [tip:sched/core] sched/preempt, xen: Use need_resched() instead of should_resched()
  2015-07-15  9:52 ` Konstantin Khlebnikov
                   ` (6 preceding siblings ...)
  (?)
@ 2015-08-03 17:07 ` tip-bot for Konstantin Khlebnikov
  -1 siblings, 0 replies; 24+ messages in thread
From: tip-bot for Konstantin Khlebnikov @ 2015-08-03 17:07 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: paulus, boris.ostrovsky, mingo, hpa, tglx, linux-kernel,
	david.vrabel, peterz, agraf, torvalds, khlebnikov, efault

Commit-ID:  0fa2f5cb2b0ecd8d56baa51f35f09aab234eb0bf
Gitweb:     http://git.kernel.org/tip/0fa2f5cb2b0ecd8d56baa51f35f09aab234eb0bf
Author:     Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
AuthorDate: Wed, 15 Jul 2015 12:52:01 +0300
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Aug 2015 12:21:23 +0200

sched/preempt, xen: Use need_resched() instead of should_resched()

This code is used only when CONFIG_PREEMPT=n and only in non-atomic context:
xen_in_preemptible_hcall is set only in privcmd_ioctl_hypercall().
Thus preempt_count is zero and should_resched() is equal to need_resched().

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Graf <agraf@suse.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150715095201.12246.49283.stgit@buzz
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 drivers/xen/preempt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/preempt.c b/drivers/xen/preempt.c
index a1800c1..08cb419 100644
--- a/drivers/xen/preempt.c
+++ b/drivers/xen/preempt.c
@@ -31,7 +31,7 @@ EXPORT_SYMBOL_GPL(xen_in_preemptible_hcall);
 asmlinkage __visible void xen_maybe_preempt_hcall(void)
 {
 	if (unlikely(__this_cpu_read(xen_in_preemptible_hcall)
-		     && should_resched())) {
+		     && need_resched())) {
 		/*
 		 * Clear flag as we may be rescheduled on a different
 		 * cpu.

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [tip:sched/core] sched/preempt, powerpc, kvm: Use need_resched() instead of should_resched()
  2015-07-15  9:52   ` Konstantin Khlebnikov
  (?)
@ 2015-08-03 17:08   ` tip-bot for Konstantin Khlebnikov
  -1 siblings, 0 replies; 24+ messages in thread
From: tip-bot for Konstantin Khlebnikov @ 2015-08-03 17:08 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: efault, david.vrabel, linux-kernel, hpa, boris.ostrovsky, paulus,
	agraf, mingo, tglx, torvalds, khlebnikov, peterz

Commit-ID:  c56dadf39761a6157239cac39e3988998c994f98
Gitweb:     http://git.kernel.org/tip/c56dadf39761a6157239cac39e3988998c994f98
Author:     Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
AuthorDate: Wed, 15 Jul 2015 12:52:03 +0300
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Aug 2015 12:21:24 +0200

sched/preempt, powerpc, kvm: Use need_resched() instead of should_resched()

Function should_resched() is equal to (!preempt_count() && need_resched()).
In a preemptible kernel, preempt_count is non-zero here because vc->lock is
held, so should_resched() can never fire; need_resched() is the right check.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Graf <agraf@suse.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150715095203.12246.72922.stgit@buzz
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/powerpc/kvm/book3s_hv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 68d067a..a9f753f 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2178,7 +2178,7 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 		vc->runner = vcpu;
 		if (n_ceded == vc->n_runnable) {
 			kvmppc_vcore_blocked(vc);
-		} else if (should_resched()) {
+		} else if (need_resched()) {
 			vc->vcore_state = VCORE_PREEMPT;
 			/* Let something else run */
 			cond_resched_lock(&vc->lock);

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [tip:sched/core] sched/preempt: Fix cond_resched_lock() and cond_resched_softirq()
  2015-07-15  9:52   ` Konstantin Khlebnikov
                     ` (2 preceding siblings ...)
  (?)
@ 2015-08-03 17:08   ` tip-bot for Konstantin Khlebnikov
  -1 siblings, 0 replies; 24+ messages in thread
From: tip-bot for Konstantin Khlebnikov @ 2015-08-03 17:08 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: torvalds, paulus, peterz, linux-kernel, efault, hpa, mingo,
	agraf, boris.ostrovsky, david.vrabel, tglx, khlebnikov

Commit-ID:  fe32d3cd5e8eb0f82e459763374aa80797023403
Gitweb:     http://git.kernel.org/tip/fe32d3cd5e8eb0f82e459763374aa80797023403
Author:     Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
AuthorDate: Wed, 15 Jul 2015 12:52:04 +0300
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Aug 2015 12:21:24 +0200

sched/preempt: Fix cond_resched_lock() and cond_resched_softirq()

These functions check should_resched() before unlocking the spinlock or
re-enabling bottom halves: preempt_count is always non-zero at that point,
so should_resched() always returns false. As a result, cond_resched_lock()
dropped the lock only when spin_needbreak was set.

This patch adds argument "preempt_offset" to should_resched().

preempt_count offset constants for that:

  PREEMPT_DISABLE_OFFSET  - offset after preempt_disable()
  PREEMPT_LOCK_OFFSET     - offset after spin_lock()
  SOFTIRQ_DISABLE_OFFSET  - offset after local_bh_disable()
  SOFTIRQ_LOCK_OFFSET     - offset after spin_lock_bh()

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Graf <agraf@suse.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: bdb438065890 ("sched: Extract the basic add/sub preempt_count modifiers")
Link: http://lkml.kernel.org/r/20150715095204.12246.98268.stgit@buzz
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/preempt.h |  4 ++--
 include/asm-generic/preempt.h  |  5 +++--
 include/linux/preempt.h        | 19 ++++++++++++++-----
 include/linux/sched.h          |  6 ------
 kernel/sched/core.c            |  6 +++---
 5 files changed, 22 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index dca7171..b12f810 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -90,9 +90,9 @@ static __always_inline bool __preempt_count_dec_and_test(void)
 /*
  * Returns true when we need to resched and can (barring IRQ state).
  */
-static __always_inline bool should_resched(void)
+static __always_inline bool should_resched(int preempt_offset)
 {
-	return unlikely(!raw_cpu_read_4(__preempt_count));
+	return unlikely(raw_cpu_read_4(__preempt_count) == preempt_offset);
 }
 
 #ifdef CONFIG_PREEMPT
diff --git a/include/asm-generic/preempt.h b/include/asm-generic/preempt.h
index d0a7a47..0bec580 100644
--- a/include/asm-generic/preempt.h
+++ b/include/asm-generic/preempt.h
@@ -71,9 +71,10 @@ static __always_inline bool __preempt_count_dec_and_test(void)
 /*
  * Returns true when we need to resched and can (barring IRQ state).
  */
-static __always_inline bool should_resched(void)
+static __always_inline bool should_resched(int preempt_offset)
 {
-	return unlikely(!preempt_count() && tif_need_resched());
+	return unlikely(preempt_count() == preempt_offset &&
+			tif_need_resched());
 }
 
 #ifdef CONFIG_PREEMPT
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 84991f1..bea8dd8 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -84,13 +84,21 @@
  */
 #define in_nmi()	(preempt_count() & NMI_MASK)
 
+/*
+ * The preempt_count offset after preempt_disable();
+ */
 #if defined(CONFIG_PREEMPT_COUNT)
-# define PREEMPT_DISABLE_OFFSET 1
+# define PREEMPT_DISABLE_OFFSET	PREEMPT_OFFSET
 #else
-# define PREEMPT_DISABLE_OFFSET 0
+# define PREEMPT_DISABLE_OFFSET	0
 #endif
 
 /*
+ * The preempt_count offset after spin_lock()
+ */
+#define PREEMPT_LOCK_OFFSET	PREEMPT_DISABLE_OFFSET
+
+/*
  * The preempt_count offset needed for things like:
  *
  *  spin_lock_bh()
@@ -103,7 +111,7 @@
  *
  * Work as expected.
  */
-#define SOFTIRQ_LOCK_OFFSET (SOFTIRQ_DISABLE_OFFSET + PREEMPT_DISABLE_OFFSET)
+#define SOFTIRQ_LOCK_OFFSET (SOFTIRQ_DISABLE_OFFSET + PREEMPT_LOCK_OFFSET)
 
 /*
  * Are we running in atomic context?  WARNING: this macro cannot
@@ -124,7 +132,8 @@
 #if defined(CONFIG_DEBUG_PREEMPT) || defined(CONFIG_PREEMPT_TRACER)
 extern void preempt_count_add(int val);
 extern void preempt_count_sub(int val);
-#define preempt_count_dec_and_test() ({ preempt_count_sub(1); should_resched(); })
+#define preempt_count_dec_and_test() \
+	({ preempt_count_sub(1); should_resched(0); })
 #else
 #define preempt_count_add(val)	__preempt_count_add(val)
 #define preempt_count_sub(val)	__preempt_count_sub(val)
@@ -184,7 +193,7 @@ do { \
 
 #define preempt_check_resched() \
 do { \
-	if (should_resched()) \
+	if (should_resched(0)) \
 		__preempt_schedule(); \
 } while (0)
 
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 65a8a86..9c14465 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2891,12 +2891,6 @@ extern int _cond_resched(void);
 
 extern int __cond_resched_lock(spinlock_t *lock);
 
-#ifdef CONFIG_PREEMPT_COUNT
-#define PREEMPT_LOCK_OFFSET	PREEMPT_OFFSET
-#else
-#define PREEMPT_LOCK_OFFSET	0
-#endif
-
 #define cond_resched_lock(lock) ({				\
 	___might_sleep(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);\
 	__cond_resched_lock(lock);				\
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fa5826c..f5fad2b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4496,7 +4496,7 @@ SYSCALL_DEFINE0(sched_yield)
 
 int __sched _cond_resched(void)
 {
-	if (should_resched()) {
+	if (should_resched(0)) {
 		preempt_schedule_common();
 		return 1;
 	}
@@ -4514,7 +4514,7 @@ EXPORT_SYMBOL(_cond_resched);
  */
 int __cond_resched_lock(spinlock_t *lock)
 {
-	int resched = should_resched();
+	int resched = should_resched(PREEMPT_LOCK_OFFSET);
 	int ret = 0;
 
 	lockdep_assert_held(lock);
@@ -4536,7 +4536,7 @@ int __sched __cond_resched_softirq(void)
 {
 	BUG_ON(!in_softirq());
 
-	if (should_resched()) {
+	if (should_resched(SOFTIRQ_DISABLE_OFFSET)) {
 		local_bh_enable();
 		preempt_schedule_common();
 		local_bh_disable();

^ permalink raw reply related	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2015-08-03 17:09 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-07-15  9:52 [PATCH v2 1/3] drivers/xen/preempt: use need_resched() instead of should_resched() Konstantin Khlebnikov
2015-07-15  9:52 ` Konstantin Khlebnikov
2015-07-15  9:52 ` [PATCH v2 2/3] KVM: PPC: Book3S HV: Use " Konstantin Khlebnikov
2015-07-15  9:52 ` Konstantin Khlebnikov
2015-07-15  9:52   ` Konstantin Khlebnikov
2015-08-03 17:08   ` [tip:sched/core] sched/preempt, powerpc, kvm: " tip-bot for Konstantin Khlebnikov
2015-07-15  9:52 ` [PATCH v2 3/3] sched/preempt: fix cond_resched_lock() and cond_resched_softirq() Konstantin Khlebnikov
2015-07-15  9:52   ` Konstantin Khlebnikov
2015-07-15 12:16   ` Eric Dumazet
2015-07-15 12:16   ` Eric Dumazet
2015-07-15 12:16     ` Eric Dumazet
2015-07-15 12:52     ` Konstantin Khlebnikov
2015-07-15 12:52       ` Konstantin Khlebnikov
2015-07-15 13:35       ` Peter Zijlstra
2015-07-15 13:35       ` Peter Zijlstra
2015-07-15 13:35         ` Peter Zijlstra
2015-07-15 12:52     ` Konstantin Khlebnikov
2015-08-03 17:08   ` [tip:sched/core] sched/preempt: Fix " tip-bot for Konstantin Khlebnikov
2015-07-15  9:52 ` [PATCH v2 3/3] sched/preempt: fix " Konstantin Khlebnikov
2015-07-20 13:41 ` [PATCH v2 1/3] drivers/xen/preempt: use need_resched() instead of should_resched() David Vrabel
2015-07-20 13:41 ` [Xen-devel] " David Vrabel
2015-07-20 13:41   ` David Vrabel
2015-07-20 13:41   ` David Vrabel
2015-08-03 17:07 ` [tip:sched/core] sched/preempt, xen: Use " tip-bot for Konstantin Khlebnikov
