rcu.vger.kernel.org archive mirror
* Re: Re: [PATCH V2] rcu: Make sure new krcp free business is handled after the wanted rcu grace period.
@ 2023-04-04 13:08 代子为 (Ziwei Dai)
  2023-04-04 13:54 ` Paul E. McKenney
  0 siblings, 1 reply; 5+ messages in thread
From: 代子为 (Ziwei Dai) @ 2023-04-04 13:08 UTC (permalink / raw)
  To: paulmck
  Cc: urezki, frederic, quic_neeraju, josh, rostedt, mathieu.desnoyers,
	jiangshanlai, joel, rcu, linux-kernel,
	王双 (Shuang Wang),
	辛依凡 (Yifan Xin), 王科 (Ke Wang),
	闫学文 (Xuewen Yan),
	牛志国 (Zhiguo Niu),
	黄朝阳 (Zhaoyang Huang)

Corrected the broken line formatting of my mail content and added comments.

> -----Original Message-----
> From: Paul E. McKenney <paulmck@kernel.org>
> Sent: April 4, 2023 11:23
> To: 代子为 (Ziwei Dai) <Ziwei.Dai@unisoc.com>
> Cc: urezki@gmail.com; frederic@kernel.org; quic_neeraju@quicinc.com; josh@joshtriplett.org; rostedt@goodmis.org;
> mathieu.desnoyers@efficios.com; jiangshanlai@gmail.com; joel@joelfernandes.org; rcu@vger.kernel.org; linux-kernel@vger.kernel.org;
> 王双 (Shuang Wang) <shuang.wang@unisoc.com>; 辛依凡 (Yifan Xin) <Yifan.Xin@unisoc.com>; 王科 (Ke Wang)
> <Ke.Wang@unisoc.com>; 闫学文 (Xuewen Yan) <Xuewen.Yan@unisoc.com>; 牛志国 (Zhiguo Niu) <Zhiguo.Niu@unisoc.com>;
> 黄朝阳 (Zhaoyang Huang) <zhaoyang.huang@unisoc.com>
> Subject: Re: Re: [PATCH V2] rcu: Make sure new krcp free business is handled after the wanted rcu grace period.
> 
> 
> 
> On Tue, Apr 04, 2023 at 02:49:15AM +0000, 代子为 (Ziwei Dai) wrote:
> > Hello Paul!
> >
> > > -----Original Message-----
> > > From: Paul E. McKenney <paulmck@kernel.org>
> > > Sent: April 4, 2023 6:58
> > > To: 代子为 (Ziwei Dai) <Ziwei.Dai@unisoc.com>
> > > Cc: urezki@gmail.com; frederic@kernel.org; quic_neeraju@quicinc.com;
> > > josh@joshtriplett.org; rostedt@goodmis.org;
> > > mathieu.desnoyers@efficios.com; jiangshanlai@gmail.com;
> > > joel@joelfernandes.org; rcu@vger.kernel.org; linux-kernel@vger.kernel.org;
> > > 王双 (Shuang Wang) <shuang.wang@unisoc.com>; 辛依凡 (Yifan Xin)
> > > <Yifan.Xin@unisoc.com>; 王科 (Ke Wang) <Ke.Wang@unisoc.com>; 闫学文
> > > (Xuewen Yan) <Xuewen.Yan@unisoc.com>; 牛志国 (Zhiguo Niu)
> > > <Zhiguo.Niu@unisoc.com>; 黄朝阳 (Zhaoyang Huang)
> > > <zhaoyang.huang@unisoc.com>
> > > Subject: Re: [PATCH V2] rcu: Make sure new krcp free business is handled after
> > > the wanted rcu grace period.
> > >
> > >
> > > CAUTION: This email originated from outside of the organization. Do not click
> > > links or open attachments unless you recognize the sender and know the
> > > content is safe.
> > >
> > >
> > >
> > > On Fri, Mar 31, 2023 at 08:42:09PM +0800, Ziwei Dai wrote:
> > > > In kfree_rcu_monitor(), new free business at krcp is attached to any
> > > > free channel at krwp. kfree_rcu_monitor() is responsible for making sure
> > > > new free business is handled after the rcu grace period. But if there
> > > > is any non-free channel at krwp already, that means there is an
> > > > on-going rcu work, which will cause the kvfree_call_rcu()-triggered
> > > > free business to be done before the wanted rcu grace period ends.
> > > >
> > > > This commit makes kfree_rcu_monitor() ignore any krwp that has a
> > > > non-free channel, to fix the issue that kvfree_call_rcu() loses effectiveness.
> > > >
> > > > Below is the css_set obj "from_cset" use-after-free case caused by
> > > > kvfree_call_rcu() losing effectiveness.
> > > > CPU 0 calls rcu_read_lock(), then uses "from_cset", then a hard irq
> > > > comes and the task is scheduled out.
> > > > CPU 1 calls kfree_rcu(cset, rcu_head), intending to free "from_cset" after a new gp.
> > > > But "from_cset" is freed right after the current gp ends, and "from_cset" is reallocated.
> > > > CPU 0's task is scheduled back in and references "from_cset"'s member, which causes a crash.
> > > >
> > > > CPU 0                                 CPU 1
> > > > count_memcg_event_mm()
> > > > |rcu_read_lock()  <---
> > > > |mem_cgroup_from_task()
> > > >  |// css_set_ptr is the "from_cset" mentioned on CPU 1
> > > >  |css_set_ptr = rcu_dereference((task)->cgroups)
> > > >  |// Hard irq comes, current task is scheduled out.
> > > >
> > > >                                       cgroup_attach_task()
> > > >                                       |cgroup_migrate()
> > > >                                       |cgroup_migrate_execute()
> > > >                                       |css_set_move_task(task, from_cset, to_cset, true)
> > > >                                       |cgroup_move_task(task, to_cset)
> > > >                                       |rcu_assign_pointer(.., to_cset)
> > > >                                       |...
> > > >                                       |cgroup_migrate_finish()
> > > >                                       |put_css_set_locked(from_cset)
> > > >                                       |from_cset->refcount return 0
> > > >                                       |kfree_rcu(cset, rcu_head) // means to free from_cset after new gp
> > > >                                       |add_ptr_to_bulk_krc_lock()
> > > >                                       |schedule_delayed_work(&krcp->monitor_work, ..)
> > > >
> > > >                                       kfree_rcu_monitor()
> > > >                                       |krcp->bulk_head[0]'s work attached to krwp->bulk_head_free[]
> > > >                                       |queue_rcu_work(system_wq, &krwp->rcu_work)
> > > >                                       |if rwork->rcu.work is not in WORK_STRUCT_PENDING_BIT state,
> > > >                                       |call_rcu(&rwork->rcu, rcu_work_rcufn) <--- request a new gp
> > > >
> > > >                                       // There is a previous call_rcu(.., rcu_work_rcufn).
> > > >                                       // That gp ends, and rcu_work_rcufn() is called.
> > > >                                       rcu_work_rcufn()
> > > >                                       |__queue_work(.., rwork->wq, &rwork->work);
> > > >
> > > >                                       |kfree_rcu_work()
> > > >                                       |krwp->bulk_head_free[0] bulk is freed before new gp end!!!
> > > >                                       |The "from_cset" is freed before new gp end.
> > > >
> > > > // the task is scheduled in after many ms.
> > > >  |css_set_ptr->subsys[subsys_id] <--- Causes a kernel crash, because css_set_ptr is freed.
> > > >
> > > > v2: Use helper function instead of inserted code block at kfree_rcu_monitor().
> > > >
> > > > Fixes: c014efeef76a ("rcu: Add multiple in-flight batches of
> > > > kfree_rcu() work")
> > > > Signed-off-by: Ziwei Dai <ziwei.dai@unisoc.com>
> > >
> > > Good catch, thank you!!!
> > >
> > > How difficult was this to trigger?  If it can be triggered easily, this of course
> > > needs to go into mainline sooner rather than later.
> >
> > Roughly, we can reproduce this issue within two rounds of a 48h stress test,
> > with 20 k5.15 devices. If KASAN is enabled, the reproduction rate is higher.
> > So I think sooner is better.
> 
> Thank you for the info!  This is in theory an old bug, but if you can
> easily find out, does it trigger for you on v6.2 or earlier?
> 

We haven't ported v6.2 to our device yet...

> > > Longer term, would it make sense to run the three channels through RCU
> > > separately, in order to avoid one channel refraining from starting a grace
> > > period just because some other channel has callbacks waiting for a grace
> > > period to complete?  One argument against might be energy efficiency, but
> > > perhaps the ->gp_snap field could be used to get the best of both worlds.
> >
> > I see that kvfree_rcu_drain_ready(krcp) is already called at the beginning of
> > kfree_rcu_monitor(), which polls the ->gp_snap field to decide
> > whether to free channel objects immediately or after a gp.
> > Both energy efficiency and timing seem to be considered?
> 
> My concern is that running the channels separately might mean more grace
> periods (and thus more energy draw) on nearly idle devices, such devices
> usually being the ones for which energy efficiency matters most.
> 
> But perhaps Vlad, Neeraj, or Joel has some insight on this, given
> that they are the ones working on battery-powered devices.
> 
> > > Either way, this fixes only one bug of two.  The second bug is in the
> > > kfree_rcu() tests, which should have caught this bug.  Thoughts on a good fix
> > > for those tests?
> >
> > I inserted a msleep() between "rcu_read_lock(), get pointer via rcu_dereference()"
> > and "reference pointer, using the member" in the rcu read-side scenario; then we can
> > reproduce this issue very quickly in stress testing. Can the kfree_rcu() tests insert msleep()?
> 
> Another approach is to separate concerns, so that readers interact with
> grace periods in the rcutorture.c tests, and to add the interaction
> of to-be-freed memory with grace periods in the rcuscale kvfree tests.
> I took a step in this direction with this commit on the -rcu tree's
> "dev" branch:
> 
> efbe7927f479 ("rcu/kvfree: Add debug to check grace periods")
> 
> Given this, might it be possible to make rcuscale.c's kfree_rcu()
> testing create patterns of usage of the three channels so as to
> catch this bug that way?
> 

I can try it on my k5.15 device, but I need some time.
I have a question: do you mean adding code in tree.c to create the pattern
while channel data is being freed?
If so, both rcuscale.c and tree.c need to be modified for the test case.

> > > I have applied Uladzislau's and Mukesh's tags, and done the usual
> > > wordsmithing as shown at the end of this message.  Please let me know if I
> > > messed anything up.
> >
> > Thank you for the improvement on the patch! It seems better now.
> 
> No problem and thank you again for the debugging and the fix!
> 
>                                                         Thanx, Paul
> 
> > > > ---
> > > >  kernel/rcu/tree.c | 27 +++++++++++++++++++--------
> > > >  1 file changed, 19 insertions(+), 8 deletions(-)
> > > >
> > > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index
> > > > 8e880c0..7b95ee9 100644
> > > > --- a/kernel/rcu/tree.c
> > > > +++ b/kernel/rcu/tree.c
> > > > @@ -3024,6 +3024,18 @@ static void kfree_rcu_work(struct work_struct *work)
> > > >       return !!READ_ONCE(krcp->head);
> > > >  }
> > > >
> > > > +static bool
> > > > +need_wait_for_krwp_work(struct kfree_rcu_cpu_work *krwp) {
> > > > +     int i;
> > > > +
> > > > +     for (i = 0; i < FREE_N_CHANNELS; i++)
> > > > +             if (!list_empty(&krwp->bulk_head_free[i]))
> > > > +                     return true;
> > > > +
> > > > +     return !!krwp->head_free;
> > >
> > > This is fixed from v1, good!
> > >
> > > > +}
> > > > +
> > > >  static int krc_count(struct kfree_rcu_cpu *krcp)  {
> > > >       int sum = atomic_read(&krcp->head_count); @@ -3107,15 +3119,14
> > > > @@ static void kfree_rcu_monitor(struct work_struct *work)
> > > >       for (i = 0; i < KFREE_N_BATCHES; i++) {
> > > >               struct kfree_rcu_cpu_work *krwp = &(krcp->krw_arr[i]);
> > > >
> > > > -             // Try to detach bulk_head or head and attach it over any
> > > > -             // available corresponding free channel. It can be that
> > > > -             // a previous RCU batch is in progress, it means that
> > > > -             // immediately to queue another one is not possible so
> > > > -             // in that case the monitor work is rearmed.
> > > > -             if ((!list_empty(&krcp->bulk_head[0]) && list_empty(&krwp->bulk_head_free[0])) ||
> > > > -                     (!list_empty(&krcp->bulk_head[1]) && list_empty(&krwp->bulk_head_free[1])) ||
> > > > -                             (READ_ONCE(krcp->head) && !krwp->head_free)) {
> > > > +             // Try to detach bulk_head or head and attach it, only when
> > > > +             // all channels are free.  Any channel is not free means at krwp
> > > > +             // there is on-going rcu work to handle krwp's free business.
> > > > +             if (need_wait_for_krwp_work(krwp))
> > > > +                     continue;
> > > >
> > > > +             // kvfree_rcu_drain_ready() might handle this krcp, if so give up.
> > > > +             if (need_offload_krc(krcp)) {
> > > >                       // Channel 1 corresponds to the SLAB-pointer bulk path.
> > > >                       // Channel 2 corresponds to vmalloc-pointer bulk path.
> > > >                       for (j = 0; j < FREE_N_CHANNELS; j++) {
> > > > --
> > > > 1.9.1
> > >
> > > ------------------------------------------------------------------------
> > >
> > > commit e222f9a512539c3f4093a55d16624d9da614800b
> > > Author: Ziwei Dai <ziwei.dai@unisoc.com>
> > > Date:   Fri Mar 31 20:42:09 2023 +0800
> > >
> > >     rcu: Avoid freeing new kfree_rcu() memory after old grace period
> > >
> > >     Memory passed to kvfree_rcu() that is to be freed is tracked by a
> > >     per-CPU kfree_rcu_cpu structure, which in turn contains pointers
> > >     to kvfree_rcu_bulk_data structures that contain pointers to memory
> > >     that has not yet been handed to RCU, along with a kfree_rcu_cpu_work
> > >     structure that tracks the memory that has already been handed to RCU.
> > >     These structures track three categories of memory: (1) Memory for
> > >     kfree(), (2) Memory for kvfree(), and (3) Memory for both that arrived
> > >     during an OOM episode.  The first two categories are tracked in a
> > >     cache-friendly manner involving a dynamically allocated page of pointers
> > >     (the aforementioned kvfree_rcu_bulk_data structures), while the third
> > >     uses a simple (but decidedly cache-unfriendly) linked list through the
> > >     rcu_head structures in each block of memory.
> > >
> > >     On a given CPU, these three categories are handled as a unit, with that
> > >     CPU's kfree_rcu_cpu_work structure having one pointer for each of the
> > >     three categories.  Clearly, new memory for a given category cannot be
> > >     placed in the corresponding kfree_rcu_cpu_work structure until any old
> > >     memory has had its grace period elapse and thus has been removed. And
> > >     the kfree_rcu_monitor() function does in fact check for this.
> > >
> > >     Except that the kfree_rcu_monitor() function checks these pointers one
> > >     at a time.  This means that if the previous kfree_rcu() memory passed
> > >     to RCU had only category 1 and the current one has only category 2, the
> > >     kfree_rcu_monitor() function will send that current category-2 memory
> > >     along immediately.  This can result in memory being freed too soon,
> > >     that is, out from under unsuspecting RCU readers.
> > >
> > >     To see this, consider the following sequence of events, in which:
> > >
> > >     o       Task A on CPU 0 calls rcu_read_lock(), then uses "from_cset",
> > >             then is preempted.
> > >
> > >     o       CPU 1 calls kfree_rcu(cset, rcu_head) in order to free "from_cset"
> > >             after a later grace period.  Except that "from_cset" is freed
> > >             right after the previous grace period ended, so that "from_cset"
> > >             is immediately freed.  Task A resumes and references "from_cset"'s
> > >             member, after which nothing good happens.
> > >
> > >     In full detail:
> > >
> > >     CPU 0                                   CPU 1
> > >     ----------------------                  ----------------------
> > >     count_memcg_event_mm()
> > >     |rcu_read_lock()  <---
> > >     |mem_cgroup_from_task()
> > >      |// css_set_ptr is the "from_cset" mentioned on CPU 1
> > >      |css_set_ptr = rcu_dereference((task)->cgroups)
> > >      |// Hard irq comes, current task is scheduled out.
> > >
> > >                                             cgroup_attach_task()
> > >                                             |cgroup_migrate()
> > >                                             |cgroup_migrate_execute()
> > >                                             |css_set_move_task(task, from_cset, to_cset, true)
> > >                                             |cgroup_move_task(task, to_cset)
> > >                                             |rcu_assign_pointer(.., to_cset)
> > >                                             |...
> > >                                             |cgroup_migrate_finish()
> > >                                             |put_css_set_locked(from_cset)
> > >                                             |from_cset->refcount return 0
> > >                                             |kfree_rcu(cset, rcu_head) // free from_cset after new gp
> > >                                             |add_ptr_to_bulk_krc_lock()
> > >                                             |schedule_delayed_work(&krcp->monitor_work, ..)
> > >
> > >                                             kfree_rcu_monitor()
> > >                                             |krcp->bulk_head[0]'s work attached to krwp->bulk_head_free[]
> > >                                             |queue_rcu_work(system_wq, &krwp->rcu_work)
> > >                                             |if rwork->rcu.work is not in WORK_STRUCT_PENDING_BIT state,
> > >                                             |call_rcu(&rwork->rcu, rcu_work_rcufn) <--- request new gp
> > >
> > >                                             // There is a previous call_rcu(.., rcu_work_rcufn).
> > >                                             // That gp ends, and rcu_work_rcufn() is called.
> > >                                             rcu_work_rcufn()
> > >                                             |__queue_work(.., rwork->wq, &rwork->work);
> > >
> > >                                             |kfree_rcu_work()
> > >                                             |krwp->bulk_head_free[0] bulk is freed before new gp end!!!
> > >                                             |The "from_cset" is freed before new gp end.
> > >
> > >     // the task resumes some time later.
> > >      |css_set_ptr->subsys[subsys_id] <--- Causes a kernel crash, because css_set_ptr is freed.
> > >
> > >     This commit therefore causes kfree_rcu_monitor() to refrain from moving
> > >     kfree_rcu() memory to the kfree_rcu_cpu_work structure until the RCU
> > >     grace period has completed for all three categories.
> > >
> > >     v2: Use helper function instead of inserted code block at kfree_rcu_monitor().
> > >
> > >     Fixes: c014efeef76a ("rcu: Add multiple in-flight batches of kfree_rcu() work")
> > >     Reported-by: Mukesh Ojha <quic_mojha@quicinc.com>
> > >     Signed-off-by: Ziwei Dai <ziwei.dai@unisoc.com>
> > >     Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> > >     Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > >
> > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index
> > > 859ee02f6614..e2dbea6cee4b 100644
> > > --- a/kernel/rcu/tree.c
> > > +++ b/kernel/rcu/tree.c
> > > @@ -3051,6 +3051,18 @@ need_offload_krc(struct kfree_rcu_cpu *krcp)
> > >         return !!READ_ONCE(krcp->head);
> > >  }
> > >
> > > +static bool
> > > +need_wait_for_krwp_work(struct kfree_rcu_cpu_work *krwp) {
> > > +       int i;
> > > +
> > > +       for (i = 0; i < FREE_N_CHANNELS; i++)
> > > +               if (!list_empty(&krwp->bulk_head_free[i]))
> > > +                       return true;
> > > +
> > > +       return !!krwp->head_free;
> > > +}
> > > +
> > >  static int krc_count(struct kfree_rcu_cpu *krcp)  {
> > >         int sum = atomic_read(&krcp->head_count); @@ -3134,15
> > > +3146,14 @@ static void kfree_rcu_monitor(struct work_struct *work)
> > >         for (i = 0; i < KFREE_N_BATCHES; i++) {
> > >                 struct kfree_rcu_cpu_work *krwp = &(krcp->krw_arr[i]);
> > >
> > > -               // Try to detach bulk_head or head and attach it over any
> > > -               // available corresponding free channel. It can be that
> > > -               // a previous RCU batch is in progress, it means that
> > > -               // immediately to queue another one is not possible so
> > > -               // in that case the monitor work is rearmed.
> > > -               if ((!list_empty(&krcp->bulk_head[0]) && list_empty(&krwp->bulk_head_free[0])) ||
> > > -                       (!list_empty(&krcp->bulk_head[1]) && list_empty(&krwp->bulk_head_free[1])) ||
> > > -                               (READ_ONCE(krcp->head) && !krwp->head_free)) {
> > > +               // Try to detach bulk_head or head and attach it, only when
> > > +               // all channels are free.  Any channel is not free means at krwp
> > > +               // there is on-going rcu work to handle krwp's free business.
> > > +               if (need_wait_for_krwp_work(krwp))
> > > +                       continue;
> > >
> > > +               // kvfree_rcu_drain_ready() might handle this krcp, if so give up.
> > > +               if (need_offload_krc(krcp)) {
> > >                         // Channel 1 corresponds to the SLAB-pointer bulk path.
> > >                         // Channel 2 corresponds to vmalloc-pointer bulk path.
> > >                         for (j = 0; j < FREE_N_CHANNELS; j++) {


* Re: Re: [PATCH V2] rcu: Make sure new krcp free business is handled after the wanted rcu grace period.
  2023-04-04 13:08 Re: [PATCH V2] rcu: Make sure new krcp free business is handled after the wanted rcu grace period 代子为 (Ziwei Dai)
@ 2023-04-04 13:54 ` Paul E. McKenney
  2023-04-04 19:33   ` Uladzislau Rezki
  0 siblings, 1 reply; 5+ messages in thread
From: Paul E. McKenney @ 2023-04-04 13:54 UTC (permalink / raw)
  To: 代子为 (Ziwei Dai)
  Cc: urezki, frederic, quic_neeraju, josh, rostedt, mathieu.desnoyers,
	jiangshanlai, joel, rcu, linux-kernel,
	王双 (Shuang Wang),
	辛依凡 (Yifan Xin), 王科 (Ke Wang),
	闫学文 (Xuewen Yan),
	牛志国 (Zhiguo Niu),
	黄朝阳 (Zhaoyang Huang)

On Tue, Apr 04, 2023 at 01:08:39PM +0000, 代子为 (Ziwei Dai) wrote:
> Corrected the broken line formatting of my mail content and added comments.
> 
> > -----Original Message-----
> > From: Paul E. McKenney <paulmck@kernel.org>
> > Sent: April 4, 2023 11:23
> > To: 代子为 (Ziwei Dai) <Ziwei.Dai@unisoc.com>
> > Cc: urezki@gmail.com; frederic@kernel.org; quic_neeraju@quicinc.com; josh@joshtriplett.org; rostedt@goodmis.org;
> > mathieu.desnoyers@efficios.com; jiangshanlai@gmail.com; joel@joelfernandes.org; rcu@vger.kernel.org; linux-kernel@vger.kernel.org;
> > 王双 (Shuang Wang) <shuang.wang@unisoc.com>; 辛依凡 (Yifan Xin) <Yifan.Xin@unisoc.com>; 王科 (Ke Wang)
> > <Ke.Wang@unisoc.com>; 闫学文 (Xuewen Yan) <Xuewen.Yan@unisoc.com>; 牛志国 (Zhiguo Niu) <Zhiguo.Niu@unisoc.com>;
> > 黄朝阳 (Zhaoyang Huang) <zhaoyang.huang@unisoc.com>
> > Subject: Re: Re: [PATCH V2] rcu: Make sure new krcp free business is handled after the wanted rcu grace period.
> > 
> > 
> > 
> > On Tue, Apr 04, 2023 at 02:49:15AM +0000, 代子为 (Ziwei Dai) wrote:
> > > Hello Paul!
> > >
> > > > -----Original Message-----
> > > > From: Paul E. McKenney <paulmck@kernel.org>
> > > > Sent: April 4, 2023 6:58
> > > > To: 代子为 (Ziwei Dai) <Ziwei.Dai@unisoc.com>
> > > > Cc: urezki@gmail.com; frederic@kernel.org; quic_neeraju@quicinc.com;
> > > > josh@joshtriplett.org; rostedt@goodmis.org;
> > > > mathieu.desnoyers@efficios.com; jiangshanlai@gmail.com;
> > > > joel@joelfernandes.org; rcu@vger.kernel.org; linux-kernel@vger.kernel.org;
> > > > 王双 (Shuang Wang) <shuang.wang@unisoc.com>; 辛依凡 (Yifan Xin)
> > > > <Yifan.Xin@unisoc.com>; 王科 (Ke Wang) <Ke.Wang@unisoc.com>; 闫学文
> > > > (Xuewen Yan) <Xuewen.Yan@unisoc.com>; 牛志国 (Zhiguo Niu)
> > > > <Zhiguo.Niu@unisoc.com>; 黄朝阳 (Zhaoyang Huang)
> > > > <zhaoyang.huang@unisoc.com>
> > > > Subject: Re: [PATCH V2] rcu: Make sure new krcp free business is handled after
> > > > the wanted rcu grace period.
> > > >
> > > >
> > > > CAUTION: This email originated from outside of the organization. Do not click
> > > > links or open attachments unless you recognize the sender and know the
> > > > content is safe.
> > > >
> > > >
> > > >
> > > > On Fri, Mar 31, 2023 at 08:42:09PM +0800, Ziwei Dai wrote:
> > > > > In kfree_rcu_monitor(), new free business at krcp is attached to any
> > > > > free channel at krwp. kfree_rcu_monitor() is responsible for making sure
> > > > > new free business is handled after the rcu grace period. But if there
> > > > > is any non-free channel at krwp already, that means there is an
> > > > > on-going rcu work, which will cause the kvfree_call_rcu()-triggered
> > > > > free business to be done before the wanted rcu grace period ends.
> > > > >
> > > > > This commit makes kfree_rcu_monitor() ignore any krwp that has a
> > > > > non-free channel, to fix the issue that kvfree_call_rcu() loses effectiveness.
> > > > >
> > > > > Below is the css_set obj "from_cset" use-after-free case caused by
> > > > > kvfree_call_rcu() losing effectiveness.
> > > > > CPU 0 calls rcu_read_lock(), then uses "from_cset", then a hard irq
> > > > > comes and the task is scheduled out.
> > > > > CPU 1 calls kfree_rcu(cset, rcu_head), intending to free "from_cset" after a new gp.
> > > > > But "from_cset" is freed right after the current gp ends, and "from_cset" is reallocated.
> > > > > CPU 0's task is scheduled back in and references "from_cset"'s member, which causes a crash.
> > > > >
> > > > > CPU 0                                 CPU 1
> > > > > count_memcg_event_mm()
> > > > > |rcu_read_lock()  <---
> > > > > |mem_cgroup_from_task()
> > > > >  |// css_set_ptr is the "from_cset" mentioned on CPU 1
> > > > >  |css_set_ptr = rcu_dereference((task)->cgroups)
> > > > >  |// Hard irq comes, current task is scheduled out.
> > > > >
> > > > >                                       cgroup_attach_task()
> > > > >                                       |cgroup_migrate()
> > > > >                                       |cgroup_migrate_execute()
> > > > >                                       |css_set_move_task(task, from_cset, to_cset, true)
> > > > >                                       |cgroup_move_task(task, to_cset)
> > > > >                                       |rcu_assign_pointer(.., to_cset)
> > > > >                                       |...
> > > > >                                       |cgroup_migrate_finish()
> > > > >                                       |put_css_set_locked(from_cset)
> > > > >                                       |from_cset->refcount return 0
> > > > >                                       |kfree_rcu(cset, rcu_head) // means to free from_cset after new gp
> > > > >                                       |add_ptr_to_bulk_krc_lock()
> > > > >                                       |schedule_delayed_work(&krcp->monitor_work, ..)
> > > > >
> > > > >                                       kfree_rcu_monitor()
> > > > >                                       |krcp->bulk_head[0]'s work attached to krwp->bulk_head_free[]
> > > > >                                       |queue_rcu_work(system_wq, &krwp->rcu_work)
> > > > >                                       |if rwork->rcu.work is not in WORK_STRUCT_PENDING_BIT state,
> > > > >                                       |call_rcu(&rwork->rcu, rcu_work_rcufn) <--- request a new gp
> > > > >
> > > > >                                       // There is a previous call_rcu(.., rcu_work_rcufn).
> > > > >                                       // That gp ends, and rcu_work_rcufn() is called.
> > > > >                                       rcu_work_rcufn()
> > > > >                                       |__queue_work(.., rwork->wq, &rwork->work);
> > > > >
> > > > >                                       |kfree_rcu_work()
> > > > >                                       |krwp->bulk_head_free[0] bulk is freed before new gp end!!!
> > > > >                                       |The "from_cset" is freed before new gp end.
> > > > >
> > > > > // the task is scheduled in after many ms.
> > > > >  |css_set_ptr->subsys[subsys_id] <--- Causes a kernel crash, because css_set_ptr is freed.
> > > > >
> > > > > v2: Use helper function instead of inserted code block at kfree_rcu_monitor().
> > > > >
> > > > > Fixes: c014efeef76a ("rcu: Add multiple in-flight batches of
> > > > > kfree_rcu() work")
> > > > > Signed-off-by: Ziwei Dai <ziwei.dai@unisoc.com>
> > > >
> > > > Good catch, thank you!!!
> > > >
> > > > How difficult was this to trigger?  If it can be triggered easily, this of course
> > > > needs to go into mainline sooner rather than later.
> > >
> > > Roughly, we can reproduce this issue within two rounds of a 48h stress test,
> > > with 20 k5.15 devices. If KASAN is enabled, the reproduction rate is higher.
> > > So I think sooner is better.
> > 
> > Thank you for the info!  This is in theory an old bug, but if you can
> > easily find out, does it trigger for you on v6.2 or earlier?
> > 
> 
> We haven't ported v6.2 to our device yet...
> 
> > > > Longer term, would it make sense to run the three channels through RCU
> > > > separately, in order to avoid one channel refraining from starting a grace
> > > > period just because some other channel has callbacks waiting for a grace
> > > > period to complete?  One argument against might be energy efficiency, but
> > > > perhaps the ->gp_snap field could be used to get the best of both worlds.
> > >
> > > I see that kvfree_rcu_drain_ready(krcp) is already called at the beginning of
> > > kfree_rcu_monitor(), which polls the ->gp_snap field to decide
> > > whether to free channel objects immediately or after a gp.
> > > Both energy efficiency and timing seem to be considered?
> > 
> > My concern is that running the channels separately might mean more grace
> > periods (and thus more energy draw) on nearly idle devices, such devices
> > usually being the ones for which energy efficiency matters most.
> > 
> > But perhaps Vlad, Neeraj, or Joel has some insight on this, given
> > that they are the ones working on battery-powered devices.
> > 
> > > > Either way, this fixes only one bug of two.  The second bug is in the
> > > > kfree_rcu() tests, which should have caught this bug.  Thoughts on a good fix
> > > > for those tests?
> > >
> > > I inserted a msleep() between "rcu_read_lock(), get pointer via rcu_dereference()"
> > > and "reference pointer, using the member" in the rcu read-side scenario; then we can
> > > reproduce this issue very quickly in stress testing. Can the kfree_rcu() tests insert msleep()?
> > 
> > Another approach is to separate concerns, so that readers interact with
> > grace periods in the rcutorture.c tests, and to add the interaction
> > of to-be-freed memory with grace periods in the rcuscale kvfree tests.
> > I took a step in this direction with this commit on the -rcu tree's
> > "dev" branch:
> > 
> > efbe7927f479 ("rcu/kvfree: Add debug to check grace periods")
> > 
> > Given this, might it be possible to make rcuscale.c's kfree_rcu()
> > testing create patterns of usage of the three channels so as to
> > catch this bug that way?
> > 
> 
> I can try it on my k5.15 device, but I need some time.
> I have a question: do you mean adding code in tree.c to create the pattern
> while channel data is being freed?
> If so, both rcuscale.c and tree.c need to be modified for the test case.

My thought is to run the test on a system where very little else is
happening, and then to create the temporal pattern only in rcuscale.c.
One way would be to modify kfree_scale_thread(), perhaps using an
additional module parameter using torture_param().

But just out of curiosity, what changes were you thinking of making
in tree.c?
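For reference, the rcuscale.c change suggested above might take roughly the following shape. To be clear, this is a hypothetical sketch only: the parameter name "kfree_alternate" and the loop body below are invented for illustration and do not exist in rcuscale.c, though torture_param() and kfree_scale_thread() are the real macro and function being discussed.

```c
/* Hypothetical sketch only: "kfree_alternate" is an invented parameter
 * name, not an existing rcuscale.c module parameter. */
torture_param(bool, kfree_alternate, false,
	      "Alternate kmalloc/vmalloc bulk channels on successive loops");

/* Inside kfree_scale_thread()'s allocation loop: on odd iterations free
 * a vmalloc'ed object via kvfree_rcu() (vmalloc-pointer bulk channel);
 * on even iterations keep the existing kmalloc + kfree_rcu() path
 * (SLAB-pointer bulk channel).  A new batch then always arrives on a
 * channel that is idle while the other channel's batch is still waiting
 * for its grace period -- the interleaving that exposed this bug. */
if (kfree_alternate && (i & 1)) {
	alloc_ptr = vmalloc(sizeof(struct kfree_obj));
	if (alloc_ptr)
		kvfree_rcu(alloc_ptr, rh);
} else {
	/* existing kmalloc_node() + kfree_rcu(alloc_ptr, rh) path */
}
```

Whether the alternation should be per loop or per object, and whether it needs to be synchronized with the monitor's batching, is exactly the kind of question the experiment would have to answer.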

							Thanx, Paul

> > > > I have applied Uladzislau's and Mukesh's tags, and done the usual
> > > > wordsmithing as shown at the end of this message.  Please let me know if I
> > > > messed anything up.
> > >
> > > Thank you for the improvement on the patch! It seems better now.
> > 
> > No problem and thank you again for the debugging and the fix!
> > 
> >                                                         Thanx, Paul
> > 
> > > > > ---
> > > > >  kernel/rcu/tree.c | 27 +++++++++++++++++++--------
> > > > >  1 file changed, 19 insertions(+), 8 deletions(-)
> > > > >
> > > > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index
> > > > > 8e880c0..7b95ee9 100644
> > > > > --- a/kernel/rcu/tree.c
> > > > > +++ b/kernel/rcu/tree.c
> > > > > @@ -3024,6 +3024,18 @@ static void kfree_rcu_work(struct work_struct *work)
> > > > >       return !!READ_ONCE(krcp->head);
> > > > >  }
> > > > >
> > > > > +static bool
> > > > > +need_wait_for_krwp_work(struct kfree_rcu_cpu_work *krwp) {
> > > > > +     int i;
> > > > > +
> > > > > +     for (i = 0; i < FREE_N_CHANNELS; i++)
> > > > > +             if (!list_empty(&krwp->bulk_head_free[i]))
> > > > > +                     return true;
> > > > > +
> > > > > +     return !!krwp->head_free;
> > > >
> > > > This is fixed from v1, good!
> > > >
> > > > > +}
> > > > > +
> > > > >  static int krc_count(struct kfree_rcu_cpu *krcp)  {
> > > > >       int sum = atomic_read(&krcp->head_count); @@ -3107,15 +3119,14
> > > > > @@ static void kfree_rcu_monitor(struct work_struct *work)
> > > > >       for (i = 0; i < KFREE_N_BATCHES; i++) {
> > > > >               struct kfree_rcu_cpu_work *krwp = &(krcp->krw_arr[i]);
> > > > >
> > > > > -             // Try to detach bulk_head or head and attach it over any
> > > > > -             // available corresponding free channel. It can be that
> > > > > -             // a previous RCU batch is in progress, it means that
> > > > > -             // immediately to queue another one is not possible so
> > > > > -             // in that case the monitor work is rearmed.
> > > > > -             if ((!list_empty(&krcp->bulk_head[0]) && list_empty(&krwp->bulk_head_free[0])) ||
> > > > > -                     (!list_empty(&krcp->bulk_head[1]) && list_empty(&krwp->bulk_head_free[1])) ||
> > > > > -                             (READ_ONCE(krcp->head) && !krwp->head_free)) {
> > > > > +             // Try to detach bulk_head or head and attach it only when
> > > > > +             // all channels are free.  If any channel is not free, there
> > > > > +             // is ongoing RCU work handling this krwp's free business.
> > > > > +             if (need_wait_for_krwp_work(krwp))
> > > > > +                     continue;
> > > > >
> > > > > +             // kvfree_rcu_drain_ready() might handle this krcp, if so give up.
> > > > > +             if (need_offload_krc(krcp)) {
> > > > >                       // Channel 1 corresponds to the SLAB-pointer bulk path.
> > > > >                       // Channel 2 corresponds to vmalloc-pointer bulk path.
> > > > >                       for (j = 0; j < FREE_N_CHANNELS; j++) {
> > > > > --
> > > > > 1.9.1
> > > >
> > > > ------------------------------------------------------------------------
> > > >
> > > > commit e222f9a512539c3f4093a55d16624d9da614800b
> > > > Author: Ziwei Dai <ziwei.dai@unisoc.com>
> > > > Date:   Fri Mar 31 20:42:09 2023 +0800
> > > >
> > > >     rcu: Avoid freeing new kfree_rcu() memory after old grace period
> > > >
> > > >     Memory passed to kvfree_rcu() that is to be freed is tracked by a
> > > >     per-CPU kfree_rcu_cpu structure, which in turn contains pointers
> > > >     to kvfree_rcu_bulk_data structures that contain pointers to memory
> > > >     that has not yet been handed to RCU, along with a kfree_rcu_cpu_work
> > > >     structure that tracks the memory that has already been handed to RCU.
> > > >     These structures track three categories of memory: (1) Memory for
> > > >     kfree(), (2) Memory for kvfree(), and (3) Memory for both that arrived
> > > >     during an OOM episode.  The first two categories are tracked in a
> > > >     cache-friendly manner involving a dynamically allocated page of pointers
> > > >     (the aforementioned kvfree_rcu_bulk_data structures), while the third
> > > >     uses a simple (but decidedly cache-unfriendly) linked list through the
> > > >     rcu_head structures in each block of memory.
> > > >
> > > >     On a given CPU, these three categories are handled as a unit, with that
> > > >     CPU's kfree_rcu_cpu_work structure having one pointer for each of the
> > > >     three categories.  Clearly, new memory for a given category cannot be
> > > >     placed in the corresponding kfree_rcu_cpu_work structure until any old
> > > >     memory has had its grace period elapse and thus has been removed. And
> > > >     the kfree_rcu_monitor() function does in fact check for this.
> > > >
> > > >     Except that the kfree_rcu_monitor() function checks these pointers one
> > > >     at a time.  This means that if the previous kfree_rcu() memory passed
> > > >     to RCU had only category 1 and the current one has only category 2, the
> > > >     kfree_rcu_monitor() function will send that current category-2 memory
> > > >     along immediately.  This can result in memory being freed too soon,
> > > >     that is, out from under unsuspecting RCU readers.
> > > >
> > > >     To see this, consider the following sequence of events, in which:
> > > >
> > > >     o       Task A on CPU 0 calls rcu_read_lock(), then uses "from_cset",
> > > >             then is preempted.
> > > >
> > > >     o       CPU 1 calls kfree_rcu(cset, rcu_head) in order to free "from_cset"
> > > >             after a later grace period.  Except that "from_cset" is freed
> > > >             right after the previous grace period ended, so that "from_cset"
> > > >             is immediately freed.  Task A resumes and references "from_cset"'s
> > > >             member, after which nothing good happens.
> > > >
> > > >     In full detail:
> > > >
> > > >     CPU 0                                   CPU 1
> > > >     ----------------------                  ----------------------
> > > >     count_memcg_event_mm()
> > > >     |rcu_read_lock()  <---
> > > >     |mem_cgroup_from_task()
> > > >      |// css_set_ptr is the "from_cset" mentioned on CPU 1
> > > >      |css_set_ptr = rcu_dereference((task)->cgroups)
> > > >      |// Hard irq comes, current task is scheduled out.
> > > >
> > > >                                             cgroup_attach_task()
> > > >                                             |cgroup_migrate()
> > > >                                             |cgroup_migrate_execute()
> > > >                                             |css_set_move_task(task, from_cset, to_cset, true)
> > > >                                             |cgroup_move_task(task, to_cset)
> > > >                                             |rcu_assign_pointer(.., to_cset)
> > > >                                             |...
> > > >                                             |cgroup_migrate_finish()
> > > >                                             |put_css_set_locked(from_cset)
> > > >                                             |from_cset->refcount return 0
> > > >                                             |kfree_rcu(cset, rcu_head) // free from_cset after new gp
> > > >                                             |add_ptr_to_bulk_krc_lock()
> > > >                                             |schedule_delayed_work(&krcp->monitor_work, ..)
> > > >
> > > >                                             kfree_rcu_monitor()
> > > >                                             |krcp->bulk_head[0]'s work attached to krwp->bulk_head_free[]
> > > >                                             |queue_rcu_work(system_wq, &krwp->rcu_work)
> > > >                                             |if rwork->rcu.work is not in WORK_STRUCT_PENDING_BIT state,
> > > >                                             |call_rcu(&rwork->rcu, rcu_work_rcufn) <--- request new gp
> > > >
> > > >                                             // There is a previous call_rcu(.., rcu_work_rcufn); when its
> > > >                                             // gp ends, rcu_work_rcufn() is called.
> > > >                                             rcu_work_rcufn()
> > > >                                             |__queue_work(.., rwork->wq, &rwork->work);
> > > >
> > > >                                             |kfree_rcu_work()
> > > >                                             |krwp->bulk_head_free[0] bulk is freed before new gp end!!!
> > > >                                             |The "from_cset" is freed before new gp end.
> > > >
> > > >     // the task resumes some time later.
> > > >      |css_set_ptr->subsys[subsys_id] <--- Causes a kernel crash, because css_set_ptr has been freed.
> > > >
> > > >     This commit therefore causes kfree_rcu_monitor() to refrain from moving
> > > >     kfree_rcu() memory to the kfree_rcu_cpu_work structure until the RCU
> > > >     grace period has completed for all three categories.
> > > >
> > > >     v2: Use helper function instead of inserted code block at kfree_rcu_monitor().
> > > >
> > > >     Fixes: c014efeef76a ("rcu: Add multiple in-flight batches of kfree_rcu() work")
> > > >     Reported-by: Mukesh Ojha <quic_mojha@quicinc.com>
> > > >     Signed-off-by: Ziwei Dai <ziwei.dai@unisoc.com>
> > > >     Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> > > >     Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > > >
> > > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index
> > > > 859ee02f6614..e2dbea6cee4b 100644
> > > > --- a/kernel/rcu/tree.c
> > > > +++ b/kernel/rcu/tree.c
> > > > @@ -3051,6 +3051,18 @@ need_offload_krc(struct kfree_rcu_cpu *krcp)
> > > >         return !!READ_ONCE(krcp->head);
> > > >  }
> > > >
> > > > +static bool
> > > > +need_wait_for_krwp_work(struct kfree_rcu_cpu_work *krwp) {
> > > > +       int i;
> > > > +
> > > > +       for (i = 0; i < FREE_N_CHANNELS; i++)
> > > > +               if (!list_empty(&krwp->bulk_head_free[i]))
> > > > +                       return true;
> > > > +
> > > > +       return !!krwp->head_free;
> > > > +}
> > > > +
> > > >  static int krc_count(struct kfree_rcu_cpu *krcp)  {
> > > >         int sum = atomic_read(&krcp->head_count); @@ -3134,15
> > > > +3146,14 @@ static void kfree_rcu_monitor(struct work_struct *work)
> > > >         for (i = 0; i < KFREE_N_BATCHES; i++) {
> > > >                 struct kfree_rcu_cpu_work *krwp = &(krcp->krw_arr[i]);
> > > >
> > > > -               // Try to detach bulk_head or head and attach it over any
> > > > -               // available corresponding free channel. It can be that
> > > > -               // a previous RCU batch is in progress, it means that
> > > > -               // immediately to queue another one is not possible so
> > > > -               // in that case the monitor work is rearmed.
> > > > -               if ((!list_empty(&krcp->bulk_head[0]) && list_empty(&krwp->bulk_head_free[0])) ||
> > > > -                       (!list_empty(&krcp->bulk_head[1]) && list_empty(&krwp->bulk_head_free[1])) ||
> > > > -                               (READ_ONCE(krcp->head) && !krwp->head_free)) {
> > > > +               // Try to detach bulk_head or head and attach it only when
> > > > +               // all channels are free.  If any channel is not free, there
> > > > +               // is ongoing RCU work handling this krwp's free business.
> > > > +               if (need_wait_for_krwp_work(krwp))
> > > > +                       continue;
> > > >
> > > > +               // kvfree_rcu_drain_ready() might handle this krcp, if so give up.
> > > > +               if (need_offload_krc(krcp)) {
> > > >                         // Channel 1 corresponds to the SLAB-pointer bulk path.
> > > >                         // Channel 2 corresponds to vmalloc-pointer bulk path.
> > > >                         for (j = 0; j < FREE_N_CHANNELS; j++) {


* Re: Re: [PATCH V2] rcu: Make sure new krcp free business is handled after the wanted rcu grace period.
  2023-04-04 13:54 ` Paul E. McKenney
@ 2023-04-04 19:33   ` Uladzislau Rezki
  2023-04-04 20:08     ` Paul E. McKenney
  0 siblings, 1 reply; 5+ messages in thread
From: Uladzislau Rezki @ 2023-04-04 19:33 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: 代子为 (Ziwei Dai),
	urezki, frederic, quic_neeraju, josh, rostedt, mathieu.desnoyers,
	jiangshanlai, joel, rcu, linux-kernel,
	王双 (Shuang Wang),
	辛依凡 (Yifan Xin), 王科 (Ke Wang),
	闫学文 (Xuewen Yan),
	牛志国 (Zhiguo Niu),
	黄朝阳 (Zhaoyang Huang)

> > > My concern is that running the channels separately might mean more grace
> > > periods (and thus more energy draw) on nearly idle devices, such devices
> > > usually being the ones for which energy efficiency matters most.
> > > 
> > > But perhaps Vlad, Neeraj, or Joel has some insight on this, given
> > > that they are the ones working on battery-powered devices.
> > > 
> > > > > Either way, this fixes only one bug of two.  The second bug is in the
> > > > > kfree_rcu() tests, which should have caught this bug.  Thoughts on a good fix
> > > > > for those tests?
> > > >
> > > > I inserted a msleep() between "rcu_read_lock(); get the pointer via rcu_dereference()"
> > > > and "dereference the pointer's member" in the RCU reader scenario; with that, we can
> > > > reproduce this issue very quickly in stress testing. Can the kfree_rcu() tests insert an msleep()?
> > > 
> > > Another approach is to separate concerns, so that readers interact with
> > > grace periods in the rcutorture.c tests, and to add the interaction
> > > of to-be-freed memory with grace periods in the rcuscale kvfree tests.
> > > I took a step in this direction with this commit on the -rcu tree's
> > > "dev" branch:
> > > 
> > > efbe7927f479 ("rcu/kvfree: Add debug to check grace periods")
> > > 
> > > Given this, might it be possible to make rcuscale.c's kfree_rcu()
> > > testing create patterns of usage of the three channels so as to
> > > catch this bug that way?
> > > 
> > 
> > I can try it on my k5.15 device, but I need some time.
> > I have a question: do you mean adding code in tree.c to create the pattern
> > while channel data is being freed?
> > If so, both rcuscale.c and tree.c would need to be modified for the test case.
> 
> My thought is to run the test on a system where very little else is
> happening, and then to create the temporal pattern only in rcuscale.c.
> One way would be to modify kfree_scale_thread(), perhaps adding a
> module parameter via torture_param().
> 
> But just out of curiosity, what changes were you thinking of making
> in tree.c?
> 
OK. I can reproduce it on latest rcu-dev:

<snip>
[   75.302795] ------------[ cut here ]------------
[   75.302801] WARNING: CPU: 50 PID: 721 at kernel/rcu/tree.c:3043 kfree_rcu_work+0x157/0x1a0
[   75.302808] Modules linked in: test_vmalloc(E+) bochs(E) drm_vram_helper(E) snd_pcm(E) drm_ttm_helper(E) ppdev(E) snd_timer(E) joydev(E) ttm(E) drm_kms_helper(E) snd(E) parport_pc(E) soundcore(E) evdev(E) pcspkr(E) sg(E) serio_raw(E) parport(E) drm(E) qemu_fw_cfg(E) button(E) ip_tables(E) x_tables(E) autofs4(E) ext4(E) crc32c_generic(E) crc16(E) mbcache(E) jbd2(E) sd_mod(E) t10_pi(E) crc64_rocksoft(E) crc64(E) crc_t10dif(E) crct10dif_generic(E) sr_mod(E) cdrom(E) crct10dif_common(E) ata_generic(E) ata_piix(E) libata(E) scsi_mod(E) psmouse(E) e1000(E) scsi_common(E) i2c_piix4(E) floppy(E)
[   75.302865] CPU: 50 PID: 721 Comm: kworker/50:1 Kdump: loaded Tainted: G            E      6.3.0-rc1+ #58
[   75.302868] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
[   75.302870] Workqueue: events kfree_rcu_work
[   75.302905] RIP: 0010:kfree_rcu_work+0x157/0x1a0
[   75.302907] Code: 8b 05 75 f9 37 01 4c 29 e8 48 83 f8 f8 76 40 48 8b 4c 24 08 48 83 f9 01 74 35 48 8b 05 ca b4 44 01 48 29 c8 48 83 f8 f8 76 25 <0f> 0b 48 8b 44 24 38 65 48 2b 04 25 28 00 00 00 75 23 48 83 c4 40
[   75.302910] RSP: 0018:ffffbd4642d8bde8 EFLAGS: 00010202
[   75.302913] RAX: fffffffffffffffc RBX: ffff9f693d5dd140 RCX: 000000000000003c
[   75.302914] RDX: 0000000000000002 RSI: ffffbd4642d8be08 RDI: ffff9f5a4d608000
[   75.302916] RBP: ffffbd4642d8be08 R08: 0000001188654ff5 R09: 0000000000000000
[   75.302918] R10: 0000000000000001 R11: 0000000000000001 R12: ffffbd46812d7000
[   75.302919] R13: 0000000000000260 R14: ffffbd4642d8bdf8 R15: ffff9f5a47637000
[   75.302922] FS:  0000000000000000(0000) GS:ffff9f693e200000(0000) knlGS:0000000000000000
[   75.302924] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   75.302926] CR2: 0000562dfe4307d0 CR3: 000000054ba26000 CR4: 00000000000006e0
[   75.302930] Call Trace:
[   75.302937]  <TASK>
[   75.302942]  ? lock_acquire+0xc8/0x1a0
[   75.302949]  process_one_work+0x29d/0x560
[   75.302957]  ? __pfx_worker_thread+0x10/0x10
[   75.302960]  worker_thread+0x52/0x3a0
[   75.302964]  ? __pfx_worker_thread+0x10/0x10
[   75.302967]  kthread+0xe7/0x110
[   75.302970]  ? __pfx_kthread+0x10/0x10
[   75.302973]  ret_from_fork+0x2c/0x50
[   75.302984]  </TASK>
[   75.302986] ---[ end trace 0000000000000000 ]---
<snip>

This is with:

<snip>
commit 8f6414680a0d539ca0e7fde80556c71b7b3da88a (HEAD -> dev)
Author: Uladzislau Rezki (Sony) <urezki@gmail.com>
Date:   Tue Apr 4 15:51:56 2023 +0200

    rcu/kvfree: Add debug check of GP ready for ptrs in a list

commit efbe7927f47958a6805da5560d9a5f469ba51e73 (origin/dev)
Author: Paul E. McKenney <paulmck@kernel.org>
Date:   Mon Apr 3 16:49:14 2023 -0700

    rcu/kvfree: Add debug to check grace periods

+ below revert

commit 6b4fef6ec689b1dda9c63be77e9a81a52cc39dc1
Author: Ziwei Dai <ziwei.dai@unisoc.com>
Date:   Fri Mar 31 20:42:09 2023 +0800

    rcu/kvfree: Avoid freeing new kfree_rcu() memory after old grace period
<snip>

The test is "sudo ./test_vmalloc.sh run_test_mask=768 nr_threads=64&"

It runs the single-argument and double-argument tests that free vmalloc
pointers, with 64 threads.

Without the revert (that is, with the patch in question applied), I am
no longer able to reproduce it.
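For reference, `run_test_mask` is a bitmask over the tests in lib/test_vmalloc.c; assuming the v6.3 bit assignments (a sketch, not authoritative), 768 = 256 + 512 selects exactly the two kvfree_rcu vmalloc tests:

```shell
# run_test_mask decodes as a bitmask; assuming lib/test_vmalloc.c's
# v6.3 ordering:
#   256 -> kvfree_rcu_1_arg_vmalloc_test (single-argument kvfree_rcu)
#   512 -> kvfree_rcu_2_arg_vmalloc_test (double-argument kvfree_rcu)
mask=768
echo $(( (mask & 256) != 0 ))   # single-arg test enabled
echo $(( (mask & 512) != 0 ))   # double-arg test enabled
```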

--
Uladzislau Rezki


* Re: Re: [PATCH V2] rcu: Make sure new krcp free business is handled after the wanted rcu grace period.
  2023-04-04 19:33   ` Uladzislau Rezki
@ 2023-04-04 20:08     ` Paul E. McKenney
  2023-04-05  9:05       ` Uladzislau Rezki
  0 siblings, 1 reply; 5+ messages in thread
From: Paul E. McKenney @ 2023-04-04 20:08 UTC (permalink / raw)
  To: Uladzislau Rezki
  Cc: 代子为 (Ziwei Dai),
	frederic, quic_neeraju, josh, rostedt, mathieu.desnoyers,
	jiangshanlai, joel, rcu, linux-kernel,
	王双 (Shuang Wang),
	辛依凡 (Yifan Xin), 王科 (Ke Wang),
	闫学文 (Xuewen Yan),
	牛志国 (Zhiguo Niu),
	黄朝阳 (Zhaoyang Huang)

On Tue, Apr 04, 2023 at 09:33:07PM +0200, Uladzislau Rezki wrote:
> > > > My concern is that running the channels separately might mean more grace
> > > > periods (and thus more energy draw) on nearly idle devices, such devices
> > > > usually being the ones for which energy efficiency matters most.
> > > > 
> > > > But perhaps Vlad, Neeraj, or Joel has some insight on this, given
> > > > that they are the ones working on battery-powered devices.
> > > > 
> > > > > > Either way, this fixes only one bug of two.  The second bug is in the
> > > > > > kfree_rcu() tests, which should have caught this bug.  Thoughts on a good fix
> > > > > > for those tests?
> > > > >
> > > > > I inserted a msleep() between "rcu_read_lock(); get the pointer via rcu_dereference()"
> > > > > and "dereference the pointer's member" in the RCU reader scenario; with that, we can
> > > > > reproduce this issue very quickly in stress testing. Can the kfree_rcu() tests insert an msleep()?
> > > > 
> > > > Another approach is to separate concerns, so that readers interact with
> > > > grace periods in the rcutorture.c tests, and to add the interaction
> > > > of to-be-freed memory with grace periods in the rcuscale kvfree tests.
> > > > I took a step in this direction with this commit on the -rcu tree's
> > > > "dev" branch:
> > > > 
> > > > efbe7927f479 ("rcu/kvfree: Add debug to check grace periods")
> > > > 
> > > > Given this, might it be possible to make rcuscale.c's kfree_rcu()
> > > > testing create patterns of usage of the three channels so as to
> > > > catch this bug that way?
> > > > 
> > > 
> > > I can try it on my k5.15 device, but I need some time.
> > > I have a question: do you mean adding code in tree.c to create the pattern
> > > while channel data is being freed?
> > > If so, both rcuscale.c and tree.c would need to be modified for the test case.
> > 
> > My thought is to run the test on a system where very little else is
> > happening, and then to create the temporal pattern only in rcuscale.c.
> > One way would be to modify kfree_scale_thread(), perhaps adding a
> > module parameter via torture_param().
> > 
> > But just out of curiosity, what changes were you thinking of making
> > in tree.c?
> > 
> OK. I can reproduce it on latest rcu-dev:
> 
> <snip>
> [   75.302795] ------------[ cut here ]------------
> [   75.302801] WARNING: CPU: 50 PID: 721 at kernel/rcu/tree.c:3043 kfree_rcu_work+0x157/0x1a0
> [   75.302808] Modules linked in: test_vmalloc(E+) bochs(E) drm_vram_helper(E) snd_pcm(E) drm_ttm_helper(E) ppdev(E) snd_timer(E) joydev(E) ttm(E) drm_kms_helper(E) snd(E) parport_pc(E) soundcore(E) evdev(E) pcspkr(E) sg(E) serio_raw(E) parport(E) drm(E) qemu_fw_cfg(E) button(E) ip_tables(E) x_tables(E) autofs4(E) ext4(E) crc32c_generic(E) crc16(E) mbcache(E) jbd2(E) sd_mod(E) t10_pi(E) crc64_rocksoft(E) crc64(E) crc_t10dif(E) crct10dif_generic(E) sr_mod(E) cdrom(E) crct10dif_common(E) ata_generic(E) ata_piix(E) libata(E) scsi_mod(E) psmouse(E) e1000(E) scsi_common(E) i2c_piix4(E) floppy(E)
> [   75.302865] CPU: 50 PID: 721 Comm: kworker/50:1 Kdump: loaded Tainted: G            E      6.3.0-rc1+ #58
> [   75.302868] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
> [   75.302870] Workqueue: events kfree_rcu_work
> [   75.302905] RIP: 0010:kfree_rcu_work+0x157/0x1a0
> [   75.302907] Code: 8b 05 75 f9 37 01 4c 29 e8 48 83 f8 f8 76 40 48 8b 4c 24 08 48 83 f9 01 74 35 48 8b 05 ca b4 44 01 48 29 c8 48 83 f8 f8 76 25 <0f> 0b 48 8b 44 24 38 65 48 2b 04 25 28 00 00 00 75 23 48 83 c4 40
> [   75.302910] RSP: 0018:ffffbd4642d8bde8 EFLAGS: 00010202
> [   75.302913] RAX: fffffffffffffffc RBX: ffff9f693d5dd140 RCX: 000000000000003c
> [   75.302914] RDX: 0000000000000002 RSI: ffffbd4642d8be08 RDI: ffff9f5a4d608000
> [   75.302916] RBP: ffffbd4642d8be08 R08: 0000001188654ff5 R09: 0000000000000000
> [   75.302918] R10: 0000000000000001 R11: 0000000000000001 R12: ffffbd46812d7000
> [   75.302919] R13: 0000000000000260 R14: ffffbd4642d8bdf8 R15: ffff9f5a47637000
> [   75.302922] FS:  0000000000000000(0000) GS:ffff9f693e200000(0000) knlGS:0000000000000000
> [   75.302924] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [   75.302926] CR2: 0000562dfe4307d0 CR3: 000000054ba26000 CR4: 00000000000006e0
> [   75.302930] Call Trace:
> [   75.302937]  <TASK>
> [   75.302942]  ? lock_acquire+0xc8/0x1a0
> [   75.302949]  process_one_work+0x29d/0x560
> [   75.302957]  ? __pfx_worker_thread+0x10/0x10
> [   75.302960]  worker_thread+0x52/0x3a0
> [   75.302964]  ? __pfx_worker_thread+0x10/0x10
> [   75.302967]  kthread+0xe7/0x110
> [   75.302970]  ? __pfx_kthread+0x10/0x10
> [   75.302973]  ret_from_fork+0x2c/0x50
> [   75.302984]  </TASK>
> [   75.302986] ---[ end trace 0000000000000000 ]---
> <snip>
> 
> This is with:
> 
> <snip>
> commit 8f6414680a0d539ca0e7fde80556c71b7b3da88a (HEAD -> dev)
> Author: Uladzislau Rezki (Sony) <urezki@gmail.com>
> Date:   Tue Apr 4 15:51:56 2023 +0200
> 
>     rcu/kvfree: Add debug check of GP ready for ptrs in a list
> 
> commit efbe7927f47958a6805da5560d9a5f469ba51e73 (origin/dev)
> Author: Paul E. McKenney <paulmck@kernel.org>
> Date:   Mon Apr 3 16:49:14 2023 -0700
> 
>     rcu/kvfree: Add debug to check grace periods
> 
> + below revert
> 
> commit 6b4fef6ec689b1dda9c63be77e9a81a52cc39dc1
> Author: Ziwei Dai <ziwei.dai@unisoc.com>
> Date:   Fri Mar 31 20:42:09 2023 +0800
> 
>     rcu/kvfree: Avoid freeing new kfree_rcu() memory after old grace period
> <snip>
> 
> The test is "sudo ./test_vmalloc.sh run_test_mask=768 nr_threads=64&"
> 
> It runs the single-argument and double-argument tests that free vmalloc
> pointers, with 64 threads.
> 
> Without the revert (that is, with the patch in question applied), I am
> no longer able to reproduce it.

Very good!!!

This test does not fit very well into the rcutorture script framework,
but might it be able to guide changes to rcuscale.c?

							Thanx, Paul


* Re: Re: [PATCH V2] rcu: Make sure new krcp free business is handled after the wanted rcu grace period.
  2023-04-04 20:08     ` Paul E. McKenney
@ 2023-04-05  9:05       ` Uladzislau Rezki
  0 siblings, 0 replies; 5+ messages in thread
From: Uladzislau Rezki @ 2023-04-05  9:05 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Uladzislau Rezki, 代子为 (Ziwei Dai),
	frederic, quic_neeraju, josh, rostedt, mathieu.desnoyers,
	jiangshanlai, joel, rcu, linux-kernel,
	王双 (Shuang Wang),
	辛依凡 (Yifan Xin), 王科 (Ke Wang),
	闫学文 (Xuewen Yan),
	牛志国 (Zhiguo Niu),
	黄朝阳 (Zhaoyang Huang)

On Tue, Apr 04, 2023 at 01:08:50PM -0700, Paul E. McKenney wrote:
> On Tue, Apr 04, 2023 at 09:33:07PM +0200, Uladzislau Rezki wrote:
> > > > > My concern is that running the channels separately might mean more grace
> > > > > periods (and thus more energy draw) on nearly idle devices, such devices
> > > > > usually being the ones for which energy efficiency matters most.
> > > > > 
> > > > > But perhaps Vlad, Neeraj, or Joel has some insight on this, given
> > > > > that they are the ones working on battery-powered devices.
> > > > > 
> > > > > > > Either way, this fixes only one bug of two.  The second bug is in the
> > > > > > > kfree_rcu() tests, which should have caught this bug.  Thoughts on a good fix
> > > > > > > for those tests?
> > > > > >
> > > > > > I inserted a msleep() between "rcu_read_lock(); get the pointer via rcu_dereference()"
> > > > > > and "dereference the pointer's member" in the RCU reader scenario; with that, we can
> > > > > > reproduce this issue very quickly in stress testing. Can the kfree_rcu() tests insert an msleep()?
> > > > > 
> > > > > Another approach is to separate concerns, so that readers interact with
> > > > > grace periods in the rcutorture.c tests, and to add the interaction
> > > > > of to-be-freed memory with grace periods in the rcuscale kvfree tests.
> > > > > I took a step in this direction with this commit on the -rcu tree's
> > > > > "dev" branch:
> > > > > 
> > > > > efbe7927f479 ("rcu/kvfree: Add debug to check grace periods")
> > > > > 
> > > > > Given this, might it be possible to make rcuscale.c's kfree_rcu()
> > > > > testing create patterns of usage of the three channels so as to
> > > > > catch this bug that way?
> > > > > 
> > > > 
> > > > I can try it on my k5.15 device, but I need some time.
> > > > I have a question: do you mean adding code in tree.c to create the pattern
> > > > while channel data is being freed?
> > > > If so, both rcuscale.c and tree.c would need to be modified for the test case.
> > > 
> > > My thought is to run the test on a system where very little else is
> > > happening, and then to create the temporal pattern only in rcuscale.c.
> > > One way would be to modify kfree_scale_thread(), perhaps adding a
> > > module parameter via torture_param().
> > > 
> > > But just out of curiosity, what changes were you thinking of making
> > > in tree.c?
> > > 
> > OK. I can reproduce it on latest rcu-dev:
> > 
> > <snip>
> > [   75.302795] ------------[ cut here ]------------
> > [   75.302801] WARNING: CPU: 50 PID: 721 at kernel/rcu/tree.c:3043 kfree_rcu_work+0x157/0x1a0
> > [   75.302808] Modules linked in: test_vmalloc(E+) bochs(E) drm_vram_helper(E) snd_pcm(E) drm_ttm_helper(E) ppdev(E) snd_timer(E) joydev(E) ttm(E) drm_kms_helper(E) snd(E) parport_pc(E) soundcore(E) evdev(E) pcspkr(E) sg(E) serio_raw(E) parport(E) drm(E) qemu_fw_cfg(E) button(E) ip_tables(E) x_tables(E) autofs4(E) ext4(E) crc32c_generic(E) crc16(E) mbcache(E) jbd2(E) sd_mod(E) t10_pi(E) crc64_rocksoft(E) crc64(E) crc_t10dif(E) crct10dif_generic(E) sr_mod(E) cdrom(E) crct10dif_common(E) ata_generic(E) ata_piix(E) libata(E) scsi_mod(E) psmouse(E) e1000(E) scsi_common(E) i2c_piix4(E) floppy(E)
> > [   75.302865] CPU: 50 PID: 721 Comm: kworker/50:1 Kdump: loaded Tainted: G            E      6.3.0-rc1+ #58
> > [   75.302868] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
> > [   75.302870] Workqueue: events kfree_rcu_work
> > [   75.302905] RIP: 0010:kfree_rcu_work+0x157/0x1a0
> > [   75.302907] Code: 8b 05 75 f9 37 01 4c 29 e8 48 83 f8 f8 76 40 48 8b 4c 24 08 48 83 f9 01 74 35 48 8b 05 ca b4 44 01 48 29 c8 48 83 f8 f8 76 25 <0f> 0b 48 8b 44 24 38 65 48 2b 04 25 28 00 00 00 75 23 48 83 c4 40
> > [   75.302910] RSP: 0018:ffffbd4642d8bde8 EFLAGS: 00010202
> > [   75.302913] RAX: fffffffffffffffc RBX: ffff9f693d5dd140 RCX: 000000000000003c
> > [   75.302914] RDX: 0000000000000002 RSI: ffffbd4642d8be08 RDI: ffff9f5a4d608000
> > [   75.302916] RBP: ffffbd4642d8be08 R08: 0000001188654ff5 R09: 0000000000000000
> > [   75.302918] R10: 0000000000000001 R11: 0000000000000001 R12: ffffbd46812d7000
> > [   75.302919] R13: 0000000000000260 R14: ffffbd4642d8bdf8 R15: ffff9f5a47637000
> > [   75.302922] FS:  0000000000000000(0000) GS:ffff9f693e200000(0000) knlGS:0000000000000000
> > [   75.302924] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [   75.302926] CR2: 0000562dfe4307d0 CR3: 000000054ba26000 CR4: 00000000000006e0
> > [   75.302930] Call Trace:
> > [   75.302937]  <TASK>
> > [   75.302942]  ? lock_acquire+0xc8/0x1a0
> > [   75.302949]  process_one_work+0x29d/0x560
> > [   75.302957]  ? __pfx_worker_thread+0x10/0x10
> > [   75.302960]  worker_thread+0x52/0x3a0
> > [   75.302964]  ? __pfx_worker_thread+0x10/0x10
> > [   75.302967]  kthread+0xe7/0x110
> > [   75.302970]  ? __pfx_kthread+0x10/0x10
> > [   75.302973]  ret_from_fork+0x2c/0x50
> > [   75.302984]  </TASK>
> > [   75.302986] ---[ end trace 0000000000000000 ]---
> > <snip>
> > 
> > This is with:
> > 
> > <snip>
> > commit 8f6414680a0d539ca0e7fde80556c71b7b3da88a (HEAD -> dev)
> > Author: Uladzislau Rezki (Sony) <urezki@gmail.com>
> > Date:   Tue Apr 4 15:51:56 2023 +0200
> > 
> >     rcu/kvfree: Add debug check of GP ready for ptrs in a list
> > 
> > commit efbe7927f47958a6805da5560d9a5f469ba51e73 (origin/dev)
> > Author: Paul E. McKenney <paulmck@kernel.org>
> > Date:   Mon Apr 3 16:49:14 2023 -0700
> > 
> >     rcu/kvfree: Add debug to check grace periods
> > 
> > + below revert
> > 
> > commit 6b4fef6ec689b1dda9c63be77e9a81a52cc39dc1
> > Author: Ziwei Dai <ziwei.dai@unisoc.com>
> > Date:   Fri Mar 31 20:42:09 2023 +0800
> > 
> >     rcu/kvfree: Avoid freeing new kfree_rcu() memory after old grace period
> > <snip>
> > 
> > The test is "sudo ./test_vmalloc.sh run_test_mask=768 nr_threads=64&"
> > 
> > It runs the single-argument and double-argument tests that free vmalloc
> > pointers, with 64 threads.
> > 
> > Without the revert (that is, with the patch in question applied), I am
> > no longer able to reproduce it.
> 
> Very good!!!
> 
> This test does not fit very well into the rcutorture script framework,
> but might it be able to guide changes to rcuscale.c?
> 
Today I managed to reproduce it with "rcuscale". Same logic: we need to
use both the single- and double-argument variants in parallel quite
heavily, so that at least two channels come into use. I can trigger it
if I apply flooding of kfree_rcu().

<snip>
tools/testing/selftests/rcutorture/bin/kvm.sh --memory 10G --torture rcuscale \
    --allcpus --duration 1 \
      --kconfig CONFIG_NR_CPUS=64 \
      --kconfig CONFIG_RCU_NOCB_CPU=y \
      --kconfig CONFIG_RCU_NOCB_CPU_DEFAULT_ALL=y \
      --kconfig CONFIG_RCU_LAZY=n \
      --bootargs "rcuscale.kfree_rcu_test=1 rcuscale.kfree_nthreads=64 \
      			 rcuscale.holdoff=20 rcuscale.kfree_alloc_num=1000000 \
      			 torture.disable_onoff_at_boot" --trust-make
<snip>

--
Uladzislau Rezki


end of thread, other threads:[~2023-04-05  9:05 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-04 13:08 Re: [PATCH V2] rcu: Make sure new krcp free business is handled after the wanted rcu grace period 代子为 (Ziwei Dai)
2023-04-04 13:54 ` Paul E. McKenney
2023-04-04 19:33   ` Uladzislau Rezki
2023-04-04 20:08     ` Paul E. McKenney
2023-04-05  9:05       ` Uladzislau Rezki
