linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/3] smp: Allow smp_call_function_single_async() to insert locked csd
@ 2019-12-16 21:31 Peter Xu
  2019-12-16 21:31 ` [PATCH v2 1/3] " Peter Xu
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Peter Xu @ 2019-12-16 21:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Thomas Gleixner, Juri Lelli, peterx, Marcelo Tosatti, Peter Zijlstra

This v2 introduces two more patches that convert mips/kernel/smp.c and
kernel/sched/core.c to the new feature, so that we can drop their
custom implementations.

One thing to mention is that cpuidle_coupled_poke_pending is another
candidate we could consider; however, that cpumask is special in that
it is not only used as a singleton test on the per-CPU csd when
injecting new calls, but also in cpuidle_coupled_any_pokes_pending()
to check whether there are any pending pokes.  In that sense it is
better to keep the mask, since it can be faster than looping over
each per-CPU csd.

Patch 1 is the same as in v1, with no change.  Patches 2-3 are new.

Smoke tested on x86_64 only.

Please review, thanks.

Peter Xu (3):
  smp: Allow smp_call_function_single_async() to insert locked csd
  MIPS: smp: Remove tick_broadcast_count
  sched: Remove rq.hrtick_csd_pending

 arch/mips/kernel/smp.c |  8 +-------
 kernel/sched/core.c    |  9 ++-------
 kernel/sched/sched.h   |  1 -
 kernel/smp.c           | 14 +++++++++++---
 4 files changed, 14 insertions(+), 18 deletions(-)

-- 
2.23.0


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH v2 1/3] smp: Allow smp_call_function_single_async() to insert locked csd
  2019-12-16 21:31 [PATCH v2 0/3] smp: Allow smp_call_function_single_async() to insert locked csd Peter Xu
@ 2019-12-16 21:31 ` Peter Xu
  2020-03-06 14:42   ` [tip: smp/core] " tip-bot2 for Peter Xu
  2019-12-16 21:31 ` [PATCH v2 2/3] MIPS: smp: Remove tick_broadcast_count Peter Xu
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Peter Xu @ 2019-12-16 21:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Thomas Gleixner, Juri Lelli, peterx, Marcelo Tosatti, Peter Zijlstra

Previously we would raise a warning when asked to insert a csd object
that already has the LOCK flag set, and we would also wait for the
lock to be released.  However, this does not match the function's
name: the "_async" suffix hints that the function should not block,
yet it would.

Change this behavior to simply return -EBUSY instead of waiting, and
at the same time drop the warning, turning the operation into a
feature for callers that want to "insert a csd object, and if one is
already pending, just rely on that one".

This is safe because in flush_smp_call_function_queue(), for async csd
objects (where csd->flags & SYNC is zero), we first unlock the csd and
only then call csd->func().  So if smp_call_function_single_async()
sees csd->flags & LOCK set, it is guaranteed that csd->func() will
still be called after smp_call_function_single_async() returns -EBUSY.

Update the function's comment to reflect this.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 kernel/smp.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 7dbcb402c2fc..dd31e8228218 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -329,6 +329,11 @@ EXPORT_SYMBOL(smp_call_function_single);
  * (ie: embedded in an object) and is responsible for synchronizing it
  * such that the IPIs performed on the @csd are strictly serialized.
  *
+ * If this function is called with a csd that has not yet been
+ * processed by a previous call to smp_call_function_single_async(),
+ * it will return -EBUSY immediately, indicating that the csd object
+ * is still in flight.
+ *
  * NOTE: Be careful, there is unfortunately no current debugging facility to
  * validate the correctness of this serialization.
  */
@@ -338,14 +343,17 @@ int smp_call_function_single_async(int cpu, call_single_data_t *csd)
 
 	preempt_disable();
 
-	/* We could deadlock if we have to wait here with interrupts disabled! */
-	if (WARN_ON_ONCE(csd->flags & CSD_FLAG_LOCK))
-		csd_lock_wait(csd);
+	if (csd->flags & CSD_FLAG_LOCK) {
+		err = -EBUSY;
+		goto out;
+	}
 
 	csd->flags = CSD_FLAG_LOCK;
 	smp_wmb();
 
 	err = generic_exec_single(cpu, csd, csd->func, csd->info);
+
+out:
 	preempt_enable();
 
 	return err;
-- 
2.23.0



* [PATCH v2 2/3] MIPS: smp: Remove tick_broadcast_count
  2019-12-16 21:31 [PATCH v2 0/3] smp: Allow smp_call_function_single_async() to insert locked csd Peter Xu
  2019-12-16 21:31 ` [PATCH v2 1/3] " Peter Xu
@ 2019-12-16 21:31 ` Peter Xu
  2020-03-06 14:42   ` [tip: smp/core] " tip-bot2 for Peter Xu
  2019-12-16 21:31 ` [PATCH v2 3/3] sched: Remove rq.hrtick_csd_pending Peter Xu
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Peter Xu @ 2019-12-16 21:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Thomas Gleixner, Juri Lelli, peterx, Marcelo Tosatti, Peter Zijlstra

Now that smp_call_function_single_async() returns -EBUSY when the csd
object is still pending, the tick_broadcast_count counter is no
longer needed.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/mips/kernel/smp.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index f510c00bda88..0678901c214d 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -696,21 +696,16 @@ EXPORT_SYMBOL(flush_tlb_one);
 
 #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
 
-static DEFINE_PER_CPU(atomic_t, tick_broadcast_count);
 static DEFINE_PER_CPU(call_single_data_t, tick_broadcast_csd);
 
 void tick_broadcast(const struct cpumask *mask)
 {
-	atomic_t *count;
 	call_single_data_t *csd;
 	int cpu;
 
 	for_each_cpu(cpu, mask) {
-		count = &per_cpu(tick_broadcast_count, cpu);
 		csd = &per_cpu(tick_broadcast_csd, cpu);
-
-		if (atomic_inc_return(count) == 1)
-			smp_call_function_single_async(cpu, csd);
+		smp_call_function_single_async(cpu, csd);
 	}
 }
 
@@ -718,7 +713,6 @@ static void tick_broadcast_callee(void *info)
 {
 	int cpu = smp_processor_id();
 	tick_receive_broadcast();
-	atomic_set(&per_cpu(tick_broadcast_count, cpu), 0);
 }
 
 static int __init tick_broadcast_init(void)
-- 
2.23.0



* [PATCH v2 3/3] sched: Remove rq.hrtick_csd_pending
  2019-12-16 21:31 [PATCH v2 0/3] smp: Allow smp_call_function_single_async() to insert locked csd Peter Xu
  2019-12-16 21:31 ` [PATCH v2 1/3] " Peter Xu
  2019-12-16 21:31 ` [PATCH v2 2/3] MIPS: smp: Remove tick_broadcast_count Peter Xu
@ 2019-12-16 21:31 ` Peter Xu
  2020-03-06 14:42   ` [tip: smp/core] sched/core: " tip-bot2 for Peter Xu
  2020-01-06 16:40 ` [PATCH v2 0/3] smp: Allow smp_call_function_single_async() to insert locked csd Peter Xu
  2020-02-17 16:04 ` Peter Xu
  4 siblings, 1 reply; 9+ messages in thread
From: Peter Xu @ 2019-12-16 21:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: Thomas Gleixner, Juri Lelli, peterx, Marcelo Tosatti, Peter Zijlstra

Now that smp_call_function_single_async() returns -EBUSY when the csd
object is still pending, rq.hrtick_csd_pending is no longer needed.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 kernel/sched/core.c  | 9 ++-------
 kernel/sched/sched.h | 1 -
 2 files changed, 2 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 44123b4d14e8..ef527545d349 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -268,7 +268,6 @@ static void __hrtick_start(void *arg)
 
 	rq_lock(rq, &rf);
 	__hrtick_restart(rq);
-	rq->hrtick_csd_pending = 0;
 	rq_unlock(rq, &rf);
 }
 
@@ -292,12 +291,10 @@ void hrtick_start(struct rq *rq, u64 delay)
 
 	hrtimer_set_expires(timer, time);
 
-	if (rq == this_rq()) {
+	if (rq == this_rq())
 		__hrtick_restart(rq);
-	} else if (!rq->hrtick_csd_pending) {
+	else
 		smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
-		rq->hrtick_csd_pending = 1;
-	}
 }
 
 #else
@@ -321,8 +318,6 @@ void hrtick_start(struct rq *rq, u64 delay)
 static void hrtick_rq_init(struct rq *rq)
 {
 #ifdef CONFIG_SMP
-	rq->hrtick_csd_pending = 0;
-
 	rq->hrtick_csd.flags = 0;
 	rq->hrtick_csd.func = __hrtick_start;
 	rq->hrtick_csd.info = rq;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c8870c5bd7df..79b435bbe129 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -967,7 +967,6 @@ struct rq {
 
 #ifdef CONFIG_SCHED_HRTICK
 #ifdef CONFIG_SMP
-	int			hrtick_csd_pending;
 	call_single_data_t	hrtick_csd;
 #endif
 	struct hrtimer		hrtick_timer;
-- 
2.23.0



* Re: [PATCH v2 0/3] smp: Allow smp_call_function_single_async() to insert locked csd
  2019-12-16 21:31 [PATCH v2 0/3] smp: Allow smp_call_function_single_async() to insert locked csd Peter Xu
                   ` (2 preceding siblings ...)
  2019-12-16 21:31 ` [PATCH v2 3/3] sched: Remove rq.hrtick_csd_pending Peter Xu
@ 2020-01-06 16:40 ` Peter Xu
  2020-02-17 16:04 ` Peter Xu
  4 siblings, 0 replies; 9+ messages in thread
From: Peter Xu @ 2020-01-06 16:40 UTC (permalink / raw)
  To: linux-kernel; +Cc: Thomas Gleixner, Juri Lelli, Marcelo Tosatti, Peter Zijlstra

Ping - Would anyone like to review/pick this series?

Peter, is this series ok for you?

Thanks,

On Mon, Dec 16, 2019 at 04:31:22PM -0500, Peter Xu wrote:
> This v2 introduces two more patches that convert mips/kernel/smp.c and
> kernel/sched/core.c to the new feature, so that we can drop their
> custom implementations.
> 
> One thing to mention is that cpuidle_coupled_poke_pending is another
> candidate we could consider; however, that cpumask is special in that
> it is not only used as a singleton test on the per-CPU csd when
> injecting new calls, but also in cpuidle_coupled_any_pokes_pending()
> to check whether there are any pending pokes.  In that sense it is
> better to keep the mask, since it can be faster than looping over
> each per-CPU csd.
> 
> Patch 1 is the same as in v1, with no change.  Patches 2-3 are new.
> 
> Smoke tested on x86_64 only.
> 
> Please review, thanks.
> 
> Peter Xu (3):
>   smp: Allow smp_call_function_single_async() to insert locked csd
>   MIPS: smp: Remove tick_broadcast_count
>   sched: Remove rq.hrtick_csd_pending
> 
>  arch/mips/kernel/smp.c |  8 +-------
>  kernel/sched/core.c    |  9 ++-------
>  kernel/sched/sched.h   |  1 -
>  kernel/smp.c           | 14 +++++++++++---
>  4 files changed, 14 insertions(+), 18 deletions(-)
> 
> -- 
> 2.23.0
> 

-- 
Peter Xu



* Re: [PATCH v2 0/3] smp: Allow smp_call_function_single_async() to insert locked csd
  2019-12-16 21:31 [PATCH v2 0/3] smp: Allow smp_call_function_single_async() to insert locked csd Peter Xu
                   ` (3 preceding siblings ...)
  2020-01-06 16:40 ` [PATCH v2 0/3] smp: Allow smp_call_function_single_async() to insert locked csd Peter Xu
@ 2020-02-17 16:04 ` Peter Xu
  4 siblings, 0 replies; 9+ messages in thread
From: Peter Xu @ 2020-02-17 16:04 UTC (permalink / raw)
  To: linux-kernel; +Cc: Thomas Gleixner, Juri Lelli, Marcelo Tosatti, Peter Zijlstra

On Mon, Dec 16, 2019 at 04:31:22PM -0500, Peter Xu wrote:
> This v2 introduces two more patches that convert mips/kernel/smp.c and
> kernel/sched/core.c to the new feature, so that we can drop their
> custom implementations.
> 
> One thing to mention is that cpuidle_coupled_poke_pending is another
> candidate we could consider; however, that cpumask is special in that
> it is not only used as a singleton test on the per-CPU csd when
> injecting new calls, but also in cpuidle_coupled_any_pokes_pending()
> to check whether there are any pending pokes.  In that sense it is
> better to keep the mask, since it can be faster than looping over
> each per-CPU csd.
> 
> Patch 1 is the same as in v1, with no change.  Patches 2-3 are new.
> 
> Smoke tested on x86_64 only.
> 
> Please review, thanks.

Ping?

-- 
Peter Xu



* [tip: smp/core] MIPS: smp: Remove tick_broadcast_count
  2019-12-16 21:31 ` [PATCH v2 2/3] MIPS: smp: Remove tick_broadcast_count Peter Xu
@ 2020-03-06 14:42   ` tip-bot2 for Peter Xu
  0 siblings, 0 replies; 9+ messages in thread
From: tip-bot2 for Peter Xu @ 2020-03-06 14:42 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Xu, Peter Zijlstra (Intel), Ingo Molnar, x86, LKML

The following commit has been merged into the smp/core branch of tip:

Commit-ID:     e188f0a50f637391f440b9bf0a1066db71a20889
Gitweb:        https://git.kernel.org/tip/e188f0a50f637391f440b9bf0a1066db71a20889
Author:        Peter Xu <peterx@redhat.com>
AuthorDate:    Mon, 16 Dec 2019 16:31:24 -05:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 06 Mar 2020 13:42:28 +01:00

MIPS: smp: Remove tick_broadcast_count

Now that smp_call_function_single_async() returns -EBUSY when the csd
object is still pending, the tick_broadcast_count counter is no
longer needed.

Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20191216213125.9536-3-peterx@redhat.com
---
 arch/mips/kernel/smp.c |  9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index f510c00..0def624 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -696,29 +696,22 @@ EXPORT_SYMBOL(flush_tlb_one);
 
 #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
 
-static DEFINE_PER_CPU(atomic_t, tick_broadcast_count);
 static DEFINE_PER_CPU(call_single_data_t, tick_broadcast_csd);
 
 void tick_broadcast(const struct cpumask *mask)
 {
-	atomic_t *count;
 	call_single_data_t *csd;
 	int cpu;
 
 	for_each_cpu(cpu, mask) {
-		count = &per_cpu(tick_broadcast_count, cpu);
 		csd = &per_cpu(tick_broadcast_csd, cpu);
-
-		if (atomic_inc_return(count) == 1)
-			smp_call_function_single_async(cpu, csd);
+		smp_call_function_single_async(cpu, csd);
 	}
 }
 
 static void tick_broadcast_callee(void *info)
 {
-	int cpu = smp_processor_id();
 	tick_receive_broadcast();
-	atomic_set(&per_cpu(tick_broadcast_count, cpu), 0);
 }
 
 static int __init tick_broadcast_init(void)


* [tip: smp/core] smp: Allow smp_call_function_single_async() to insert locked csd
  2019-12-16 21:31 ` [PATCH v2 1/3] " Peter Xu
@ 2020-03-06 14:42   ` tip-bot2 for Peter Xu
  0 siblings, 0 replies; 9+ messages in thread
From: tip-bot2 for Peter Xu @ 2020-03-06 14:42 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Xu, Peter Zijlstra (Intel), Ingo Molnar, x86, LKML

The following commit has been merged into the smp/core branch of tip:

Commit-ID:     5a18ceca63502546d6c0cab1f3f79cb6900f947a
Gitweb:        https://git.kernel.org/tip/5a18ceca63502546d6c0cab1f3f79cb6900f947a
Author:        Peter Xu <peterx@redhat.com>
AuthorDate:    Mon, 16 Dec 2019 16:31:23 -05:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 06 Mar 2020 13:42:28 +01:00

smp: Allow smp_call_function_single_async() to insert locked csd

Previously we would raise a warning when asked to insert a csd object
that already has the LOCK flag set, and we would also wait for the
lock to be released.  However, this does not match the function's
name: the "_async" suffix hints that the function should not block,
yet it would.

Change this behavior to simply return -EBUSY instead of waiting, and
at the same time drop the warning, turning the operation into a
feature for callers that want to "insert a csd object, and if one is
already pending, just rely on that one".

This is safe because in flush_smp_call_function_queue(), for async csd
objects (where csd->flags & SYNC is zero), we first unlock the csd and
only then call csd->func().  So if smp_call_function_single_async()
sees csd->flags & LOCK set, it is guaranteed that csd->func() will
still be called after smp_call_function_single_async() returns -EBUSY.

Update the function's comment to reflect this.

Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20191216213125.9536-2-peterx@redhat.com
---
 kernel/smp.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index d0ada39..97f1d97 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -329,6 +329,11 @@ EXPORT_SYMBOL(smp_call_function_single);
  * (ie: embedded in an object) and is responsible for synchronizing it
  * such that the IPIs performed on the @csd are strictly serialized.
  *
+ * If this function is called with a csd that has not yet been
+ * processed by a previous call to smp_call_function_single_async(),
+ * it will return -EBUSY immediately, indicating that the csd object
+ * is still in flight.
+ *
  * NOTE: Be careful, there is unfortunately no current debugging facility to
  * validate the correctness of this serialization.
  */
@@ -338,14 +343,17 @@ int smp_call_function_single_async(int cpu, call_single_data_t *csd)
 
 	preempt_disable();
 
-	/* We could deadlock if we have to wait here with interrupts disabled! */
-	if (WARN_ON_ONCE(csd->flags & CSD_FLAG_LOCK))
-		csd_lock_wait(csd);
+	if (csd->flags & CSD_FLAG_LOCK) {
+		err = -EBUSY;
+		goto out;
+	}
 
 	csd->flags = CSD_FLAG_LOCK;
 	smp_wmb();
 
 	err = generic_exec_single(cpu, csd, csd->func, csd->info);
+
+out:
 	preempt_enable();
 
 	return err;


* [tip: smp/core] sched/core: Remove rq.hrtick_csd_pending
  2019-12-16 21:31 ` [PATCH v2 3/3] sched: Remove rq.hrtick_csd_pending Peter Xu
@ 2020-03-06 14:42   ` tip-bot2 for Peter Xu
  0 siblings, 0 replies; 9+ messages in thread
From: tip-bot2 for Peter Xu @ 2020-03-06 14:42 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Xu, Peter Zijlstra (Intel), Ingo Molnar, x86, LKML

The following commit has been merged into the smp/core branch of tip:

Commit-ID:     fd3eafda8f146d4ad8f95f91a8c2b9a5319ff6b2
Gitweb:        https://git.kernel.org/tip/fd3eafda8f146d4ad8f95f91a8c2b9a5319ff6b2
Author:        Peter Xu <peterx@redhat.com>
AuthorDate:    Mon, 16 Dec 2019 16:31:25 -05:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 06 Mar 2020 13:42:28 +01:00

sched/core: Remove rq.hrtick_csd_pending

Now that smp_call_function_single_async() returns -EBUSY when the csd
object is still pending, rq.hrtick_csd_pending is no longer needed.

Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20191216213125.9536-4-peterx@redhat.com
---
 kernel/sched/core.c  |  9 ++-------
 kernel/sched/sched.h |  1 -
 2 files changed, 2 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1a9983d..b70ec38 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -269,7 +269,6 @@ static void __hrtick_start(void *arg)
 
 	rq_lock(rq, &rf);
 	__hrtick_restart(rq);
-	rq->hrtick_csd_pending = 0;
 	rq_unlock(rq, &rf);
 }
 
@@ -293,12 +292,10 @@ void hrtick_start(struct rq *rq, u64 delay)
 
 	hrtimer_set_expires(timer, time);
 
-	if (rq == this_rq()) {
+	if (rq == this_rq())
 		__hrtick_restart(rq);
-	} else if (!rq->hrtick_csd_pending) {
+	else
 		smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
-		rq->hrtick_csd_pending = 1;
-	}
 }
 
 #else
@@ -322,8 +319,6 @@ void hrtick_start(struct rq *rq, u64 delay)
 static void hrtick_rq_init(struct rq *rq)
 {
 #ifdef CONFIG_SMP
-	rq->hrtick_csd_pending = 0;
-
 	rq->hrtick_csd.flags = 0;
 	rq->hrtick_csd.func = __hrtick_start;
 	rq->hrtick_csd.info = rq;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 9ea6478..38e60b8 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -967,7 +967,6 @@ struct rq {
 
 #ifdef CONFIG_SCHED_HRTICK
 #ifdef CONFIG_SMP
-	int			hrtick_csd_pending;
 	call_single_data_t	hrtick_csd;
 #endif
 	struct hrtimer		hrtick_timer;

