* [PATCH] kernel/smp: Fix an off by one in csd_lock_wait_toolong()
@ 2020-07-09 10:48 Dan Carpenter
2020-07-09 10:59 ` Peter Zijlstra
2020-07-09 16:59 ` Paul E. McKenney
0 siblings, 2 replies; 6+ messages in thread
From: Dan Carpenter @ 2020-07-09 10:48 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Ingo Molnar, Thomas Gleixner, Sebastian Andrzej Siewior,
Paul E. McKenney, Kaitao Cheng, linux-kernel, kernel-janitors
The __per_cpu_offset[] array has "nr_cpu_ids" elements, so change the >
comparison to >= to prevent reading one element beyond the end of the array.
Fixes: 0504bc41a62c ("kernel/smp: Provide CSD lock timeout diagnostics")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
---
kernel/smp.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/smp.c b/kernel/smp.c
index 78b602cae6c2..f49966713ac3 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -171,7 +171,7 @@ static __always_inline bool csd_lock_wait_toolong(call_single_data_t *csd, u64 t
*bug_id = atomic_inc_return(&csd_bug_count);
cpu = csd_lock_wait_getcpu(csd);
smp_mb(); // No stale cur_csd values!
- if (WARN_ONCE(cpu < 0 || cpu > nr_cpu_ids, "%s: cpu = %d\n", __func__, cpu))
+ if (WARN_ONCE(cpu < 0 || cpu >= nr_cpu_ids, "%s: cpu = %d\n", __func__, cpu))
cpu_cur_csd = READ_ONCE(per_cpu(cur_csd, 0));
else
cpu_cur_csd = READ_ONCE(per_cpu(cur_csd, cpu));
--
2.27.0
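
As an aside, a minimal standalone sketch of the off-by-one (hypothetical names,
not the kernel code): an array declared with nr_cpu_ids entries has valid
indices 0 through nr_cpu_ids - 1, so a "cpu > nr_cpu_ids" test still admits
cpu == nr_cpu_ids and indexes one element past the end, while the corrected
test rejects it.

/*
 * Hypothetical illustration of the bounds check.  NR_CPU_IDS and
 * per_cpu_offset_sketch[] stand in for nr_cpu_ids and __per_cpu_offset[].
 */
#include <stdio.h>

#define NR_CPU_IDS 4

static long per_cpu_offset_sketch[NR_CPU_IDS];	/* valid indices: 0 .. NR_CPU_IDS - 1 */

static int cpu_index_ok(int cpu)
{
	/* Equivalent to !(cpu < 0 || cpu >= NR_CPU_IDS), the corrected test. */
	return cpu >= 0 && cpu < NR_CPU_IDS;
}

int main(void)
{
	per_cpu_offset_sketch[NR_CPU_IDS - 1] = 1;	/* last valid element */

	/* Prints "1 0": the last index is accepted, one past the end is not. */
	printf("%d %d\n", cpu_index_ok(NR_CPU_IDS - 1), cpu_index_ok(NR_CPU_IDS));
	return 0;
}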
* Re: [PATCH] kernel/smp: Fix an off by one in csd_lock_wait_toolong()
2020-07-09 10:48 [PATCH] kernel/smp: Fix an off by one in csd_lock_wait_toolong() Dan Carpenter
@ 2020-07-09 10:59 ` Peter Zijlstra
2020-07-09 11:49 ` Sebastian Andrzej Siewior
2020-07-09 14:32 ` Paul E. McKenney
2020-07-09 16:59 ` Paul E. McKenney
1 sibling, 2 replies; 6+ messages in thread
From: Peter Zijlstra @ 2020-07-09 10:59 UTC (permalink / raw)
To: Dan Carpenter
Cc: Ingo Molnar, Thomas Gleixner, Sebastian Andrzej Siewior,
Paul E. McKenney, Kaitao Cheng, linux-kernel, kernel-janitors
On Thu, Jul 09, 2020 at 01:48:18PM +0300, Dan Carpenter wrote:
> The __per_cpu_offset[] array has "nr_cpu_ids" elements, so change the >
> comparison to >= to prevent reading one element beyond the end of the array.
>
> Fixes: 0504bc41a62c ("kernel/smp: Provide CSD lock timeout diagnostics")
I don't have a copy of that patch in my inbox, even though it says Cc:
me.
Paul, where do you expect that patch to go? The version I see from my
next tree needs a _lot_ of work.
> Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
> ---
> kernel/smp.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/smp.c b/kernel/smp.c
> index 78b602cae6c2..f49966713ac3 100644
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -171,7 +171,7 @@ static __always_inline bool csd_lock_wait_toolong(call_single_data_t *csd, u64 t
> *bug_id = atomic_inc_return(&csd_bug_count);
> cpu = csd_lock_wait_getcpu(csd);
> smp_mb(); // No stale cur_csd values!
> - if (WARN_ONCE(cpu < 0 || cpu > nr_cpu_ids, "%s: cpu = %d\n", __func__, cpu))
> + if (WARN_ONCE(cpu < 0 || cpu >= nr_cpu_ids, "%s: cpu = %d\n", __func__, cpu))
> cpu_cur_csd = READ_ONCE(per_cpu(cur_csd, 0));
> else
> cpu_cur_csd = READ_ONCE(per_cpu(cur_csd, cpu));
> --
> 2.27.0
>
* Re: [PATCH] kernel/smp: Fix an off by one in csd_lock_wait_toolong()
2020-07-09 10:59 ` Peter Zijlstra
@ 2020-07-09 11:49 ` Sebastian Andrzej Siewior
2020-07-09 14:36 ` Paul E. McKenney
2020-07-09 14:32 ` Paul E. McKenney
1 sibling, 1 reply; 6+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-07-09 11:49 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Dan Carpenter, Ingo Molnar, Thomas Gleixner, Paul E. McKenney,
Kaitao Cheng, linux-kernel, kernel-janitors
On 2020-07-09 12:59:06 [+0200], Peter Zijlstra wrote:
> On Thu, Jul 09, 2020 at 01:48:18PM +0300, Dan Carpenter wrote:
> > The __per_cpu_offset[] array has "nr_cpu_ids" elements, so change the >
> > comparison to >= to prevent reading one element beyond the end of the array.
> >
> > Fixes: 0504bc41a62c ("kernel/smp: Provide CSD lock timeout diagnostics")
>
> I don't have a copy of that patch in my inbox, even though it says Cc:
> me.
>
> Paul, where do you expect that patch to go? The version I see from my
> next tree needs a _lot_ of work.
There is also
https://lkml.kernel.org/r/20200705082603.GX3874@shao2-debian
https://lkml.kernel.org/r/00000000000042f21905a991ecea@google.com
it might be the same thing.
Sebastian
* Re: [PATCH] kernel/smp: Fix an off by one in csd_lock_wait_toolong()
2020-07-09 11:49 ` Sebastian Andrzej Siewior
@ 2020-07-09 14:36 ` Paul E. McKenney
0 siblings, 0 replies; 6+ messages in thread
From: Paul E. McKenney @ 2020-07-09 14:36 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: Peter Zijlstra, Dan Carpenter, Ingo Molnar, Thomas Gleixner,
Kaitao Cheng, linux-kernel, kernel-janitors
On Thu, Jul 09, 2020 at 01:49:00PM +0200, Sebastian Andrzej Siewior wrote:
> On 2020-07-09 12:59:06 [+0200], Peter Zijlstra wrote:
> > On Thu, Jul 09, 2020 at 01:48:18PM +0300, Dan Carpenter wrote:
> > > The __per_cpu_offset[] array has "nr_cpu_ids" elements, so change the >
> > > comparison to >= to prevent reading one element beyond the end of the array.
> > >
> > > Fixes: 0504bc41a62c ("kernel/smp: Provide CSD lock timeout diagnostics")
> >
> > I don't have a copy of that patch in my inbox, even though it says Cc:
> > me.
> >
> > Paul, where do you expect that patch to go? The version I see from my
> > next tree needs a _lot_ of work.
>
> There is also
>
> https://lkml.kernel.org/r/20200705082603.GX3874@shao2-debian
> https://lkml.kernel.org/r/00000000000042f21905a991ecea@google.com
>
> it might be the same thing.
Same commit, different bug, but the fix should be in -next by now.
For these two reports, the problem was that I had debug-recording code
on the wrong side of a csd_unlock().
Thanx, Paul
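
A hypothetical sketch of the ordering issue described above (stub names, not
the actual kernel code): clearing the lock flag in csd_unlock() hands the
call_single_data structure back to its owner, which may immediately reuse or
free it, so any debug recording that dereferences the structure has to sit
before the unlock.

#include <stdio.h>

struct csd_sketch {
	unsigned int flags;		/* stand-in for the CSD lock flag */
	void (*func)(void *info);
	void *info;
};

/* After this store the owner may reuse or free the csd at any time
 * (in the kernel the flag is cleared with release semantics). */
static void csd_unlock_sketch(struct csd_sketch *csd)
{
	csd->flags = 0;
}

static void record_debug_sketch(struct csd_sketch *csd)
{
	printf("ran csd with info %p\n", csd->info);
}

static void noop(void *info)
{
	(void)info;
}

static void handle_csd_sketch(struct csd_sketch *csd)
{
	csd->func(csd->info);
	record_debug_sketch(csd);	/* correct side: we still own the csd here */
	csd_unlock_sketch(csd);
	/* Calling record_debug_sketch(csd) here instead would race with the
	 * owner once the flag is cleared -- the "wrong side" described above. */
}

int main(void)
{
	struct csd_sketch csd = { .flags = 1, .func = noop, .info = NULL };

	handle_csd_sketch(&csd);
	return 0;
}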
* Re: [PATCH] kernel/smp: Fix an off by one in csd_lock_wait_toolong()
2020-07-09 10:59 ` Peter Zijlstra
2020-07-09 11:49 ` Sebastian Andrzej Siewior
@ 2020-07-09 14:32 ` Paul E. McKenney
1 sibling, 0 replies; 6+ messages in thread
From: Paul E. McKenney @ 2020-07-09 14:32 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Dan Carpenter, Ingo Molnar, Thomas Gleixner,
Sebastian Andrzej Siewior, Kaitao Cheng, linux-kernel,
kernel-janitors
On Thu, Jul 09, 2020 at 12:59:06PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 09, 2020 at 01:48:18PM +0300, Dan Carpenter wrote:
> > The __per_cpu_offset[] array has "nr_cpu_ids" elements, so change the >
> > comparison to >= to prevent reading one element beyond the end of the array.
> >
> > Fixes: 0504bc41a62c ("kernel/smp: Provide CSD lock timeout diagnostics")
Good catch, will apply, thank you!
> I don't have a copy of that patch in my inbox, even though it says Cc:
> me.
I wasn't going to bother you with it until a bit after v5.9-rc1.
But it appears that this don't-bother-you strategy failed.
> Paul, where do you expect that patch to go? The version I see from my
> next tree needs a _lot_ of work.
I expect it to go nowhere until at least the v5.10 merge window.
Thanx, Paul
> > Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
> > ---
> > kernel/smp.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/smp.c b/kernel/smp.c
> > index 78b602cae6c2..f49966713ac3 100644
> > --- a/kernel/smp.c
> > +++ b/kernel/smp.c
> > @@ -171,7 +171,7 @@ static __always_inline bool csd_lock_wait_toolong(call_single_data_t *csd, u64 t
> > *bug_id = atomic_inc_return(&csd_bug_count);
> > cpu = csd_lock_wait_getcpu(csd);
> > smp_mb(); // No stale cur_csd values!
> > - if (WARN_ONCE(cpu < 0 || cpu > nr_cpu_ids, "%s: cpu = %d\n", __func__, cpu))
> > + if (WARN_ONCE(cpu < 0 || cpu >= nr_cpu_ids, "%s: cpu = %d\n", __func__, cpu))
> > cpu_cur_csd = READ_ONCE(per_cpu(cur_csd, 0));
> > else
> > cpu_cur_csd = READ_ONCE(per_cpu(cur_csd, cpu));
> > --
> > 2.27.0
> >
* Re: [PATCH] kernel/smp: Fix an off by one in csd_lock_wait_toolong()
2020-07-09 10:48 [PATCH] kernel/smp: Fix an off by one in csd_lock_wait_toolong() Dan Carpenter
2020-07-09 10:59 ` Peter Zijlstra
@ 2020-07-09 16:59 ` Paul E. McKenney
1 sibling, 0 replies; 6+ messages in thread
From: Paul E. McKenney @ 2020-07-09 16:59 UTC (permalink / raw)
To: Dan Carpenter
Cc: Peter Zijlstra, Ingo Molnar, Thomas Gleixner,
Sebastian Andrzej Siewior, Kaitao Cheng, linux-kernel,
kernel-janitors
On Thu, Jul 09, 2020 at 01:48:18PM +0300, Dan Carpenter wrote:
> The __per_cpu_offset[] array has "nr_cpu_ids" elements, so change the >
> comparison to >= to prevent reading one element beyond the end of the array.
>
> Fixes: 0504bc41a62c ("kernel/smp: Provide CSD lock timeout diagnostics")
> Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Good eyes, thank you! Folding this into the original with
attribution.
Thanx, Paul
> ---
> kernel/smp.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/smp.c b/kernel/smp.c
> index 78b602cae6c2..f49966713ac3 100644
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -171,7 +171,7 @@ static __always_inline bool csd_lock_wait_toolong(call_single_data_t *csd, u64 t
> *bug_id = atomic_inc_return(&csd_bug_count);
> cpu = csd_lock_wait_getcpu(csd);
> smp_mb(); // No stale cur_csd values!
> - if (WARN_ONCE(cpu < 0 || cpu > nr_cpu_ids, "%s: cpu = %d\n", __func__, cpu))
> + if (WARN_ONCE(cpu < 0 || cpu >= nr_cpu_ids, "%s: cpu = %d\n", __func__, cpu))
> cpu_cur_csd = READ_ONCE(per_cpu(cur_csd, 0));
> else
> cpu_cur_csd = READ_ONCE(per_cpu(cur_csd, cpu));
> --
> 2.27.0
>