linux-kernel.vger.kernel.org archive mirror
* [RCU] kernel hangs in wait_rcu_gp during suspend path
@ 2014-12-15 17:04 Arun KS
  2014-12-16  6:29 ` Arun KS
  2014-12-16 20:19 ` Paul E. McKenney
  0 siblings, 2 replies; 10+ messages in thread
From: Arun KS @ 2014-12-15 17:04 UTC (permalink / raw)
  To: linux-kernel; +Cc: paulmck

Hi,

Here is the backtrace of the process hanging in wait_rcu_gp,

PID: 247    TASK: e16e7380  CPU: 4   COMMAND: "kworker/u16:5"
 #0 [<c09fead0>] (__schedule) from [<c09fcab0>]
 #1 [<c09fcab0>] (schedule_timeout) from [<c09fe050>]
 #2 [<c09fe050>] (wait_for_common) from [<c013b2b4>]
 #3 [<c013b2b4>] (wait_rcu_gp) from [<c0142f50>]
 #4 [<c0142f50>] (atomic_notifier_chain_unregister) from [<c06b2ab8>]
 #5 [<c06b2ab8>] (cpufreq_interactive_disable_sched_input) from [<c06b32a8>]
 #6 [<c06b32a8>] (cpufreq_governor_interactive) from [<c06abbf8>]
 #7 [<c06abbf8>] (__cpufreq_governor) from [<c06ae474>]
 #8 [<c06ae474>] (__cpufreq_remove_dev_finish) from [<c06ae8c0>]
 #9 [<c06ae8c0>] (cpufreq_cpu_callback) from [<c0a0185c>]
#10 [<c0a0185c>] (notifier_call_chain) from [<c0121888>]
#11 [<c0121888>] (__cpu_notify) from [<c0121a04>]
#12 [<c0121a04>] (cpu_notify_nofail) from [<c09ee7f0>]
#13 [<c09ee7f0>] (_cpu_down) from [<c0121b70>]
#14 [<c0121b70>] (disable_nonboot_cpus) from [<c016788c>]
#15 [<c016788c>] (suspend_devices_and_enter) from [<c0167bcc>]
#16 [<c0167bcc>] (pm_suspend) from [<c0167d94>]
#17 [<c0167d94>] (try_to_suspend) from [<c0138460>]
#18 [<c0138460>] (process_one_work) from [<c0138b18>]
#19 [<c0138b18>] (worker_thread) from [<c013dc58>]
#20 [<c013dc58>] (kthread) from [<c01061b8>]

Will this patch help here?
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1d74d14e98a6be740a6f12456c7d9ad47be9c9c

I couldn't really understand why it got stuck in synchronize_rcu().
Please give some pointers to debug this further.
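
For context on where the synchronize_rcu() comes from: atomic notifier
chains are RCU-protected, so unregistering a callback must wait for a
full grace period before the caller may assume that no CPU is still
walking the chain.  A rough sketch of the call path, paraphrased from
kernel/notifier.c and kernel/rcupdate.c (not the exact 3.10 sources):

int atomic_notifier_chain_unregister(struct atomic_notifier_head *nh,
				     struct notifier_block *n)
{
	unsigned long flags;
	int ret;

	spin_lock_irqsave(&nh->lock, flags);
	ret = notifier_chain_unregister(&nh->head, n);
	spin_unlock_irqrestore(&nh->lock, flags);
	synchronize_rcu();	/* wait out readers in notifier_call_chain() */
	return ret;
}

/*
 * synchronize_rcu() in turn ends up in wait_rcu_gp(), which posts an
 * RCU callback and sleeps on a completion that fires only when the
 * grace period ends -- the wait_for_common() frame in the backtrace:
 */
void wait_rcu_gp(call_rcu_func_t crf)
{
	struct rcu_synchronize rcu;

	init_rcu_head_on_stack(&rcu.head);
	init_completion(&rcu.completion);
	crf(&rcu.head, wakeme_after_rcu);	/* e.g. call_rcu() */
	wait_for_completion(&rcu.completion);	/* hangs if the GP never ends */
	destroy_rcu_head_on_stack(&rcu.head);
}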

Below are the enabled configs related to RCU.

CONFIG_TREE_PREEMPT_RCU=y
CONFIG_PREEMPT_RCU=y
CONFIG_RCU_STALL_COMMON=y
CONFIG_RCU_FANOUT=32
CONFIG_RCU_FANOUT_LEAF=16
CONFIG_RCU_FAST_NO_HZ=y
CONFIG_RCU_CPU_STALL_TIMEOUT=21
CONFIG_RCU_CPU_STALL_VERBOSE=y

Kernel version is 3.10.28
Architecture is ARM

Thanks,
Arun


* Re: [RCU] kernel hangs in wait_rcu_gp during suspend path
  2014-12-15 17:04 [RCU] kernel hangs in wait_rcu_gp during suspend path Arun KS
@ 2014-12-16  6:29 ` Arun KS
  2014-12-16 17:30   ` Arun KS
  2014-12-17 19:27   ` Paul E. McKenney
  2014-12-16 20:19 ` Paul E. McKenney
  1 sibling, 2 replies; 10+ messages in thread
From: Arun KS @ 2014-12-16  6:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: Paul McKenney, josh, rostedt, mathieu.desnoyers, laijs

Hello,

I dug a little deeper to understand the situation.
All the other cpus are already in the idle thread.
As per my understanding, for the grace period to end, at least one of
the following should happen on every online cpu:

1. a context switch.
2. a switch to user space.
3. a switch to the idle thread.

In this situation, since all the other cores are already in idle, none
of the above is met on the online cores.
So the grace period keeps getting extended and never finishes. Below is
the state of the runqueues when the hang happens.
--------------start------------------------------------
crash> runq
CPU 0 [OFFLINE]

CPU 1 [OFFLINE]

CPU 2 [OFFLINE]

CPU 3 [OFFLINE]

CPU 4 RUNQUEUE: c3192e40
  CURRENT: PID: 0      TASK: f0874440  COMMAND: "swapper/4"
  RT PRIO_ARRAY: c3192f20
     [no tasks queued]
  CFS RB_ROOT: c3192eb0
     [no tasks queued]

CPU 5 RUNQUEUE: c31a0e40
  CURRENT: PID: 0      TASK: f0874980  COMMAND: "swapper/5"
  RT PRIO_ARRAY: c31a0f20
     [no tasks queued]
  CFS RB_ROOT: c31a0eb0
     [no tasks queued]

CPU 6 RUNQUEUE: c31aee40
  CURRENT: PID: 0      TASK: f0874ec0  COMMAND: "swapper/6"
  RT PRIO_ARRAY: c31aef20
     [no tasks queued]
  CFS RB_ROOT: c31aeeb0
     [no tasks queued]

CPU 7 RUNQUEUE: c31bce40
  CURRENT: PID: 0      TASK: f0875400  COMMAND: "swapper/7"
  RT PRIO_ARRAY: c31bcf20
     [no tasks queued]
  CFS RB_ROOT: c31bceb0
     [no tasks queued]
--------------end------------------------------------

If my understanding is correct, the patch below should help, because it
will expedite grace periods during suspend:
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1d74d14e98a6be740a6f12456c7d9ad47be9c9c
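
As far as I can tell from the diff, the commit simply registers a PM
notifier that forces expedited grace periods for the whole
suspend/resume window.  Roughly (paraphrased, not the verbatim patch):

static int rcu_pm_notify(struct notifier_block *self,
			 unsigned long action, void *hcpu)
{
	switch (action) {
	case PM_HIBERNATION_PREPARE:
	case PM_SUSPEND_PREPARE:
		/* synchronize_rcu() now behaves like
		 * synchronize_rcu_expedited() */
		rcu_expedited = 1;
		break;
	case PM_POST_HIBERNATION:
	case PM_POST_SUSPEND:
		rcu_expedited = 0;
		break;
	default:
		break;
	}
	return NOTIFY_OK;
}

with pm_notifier(rcu_pm_notify, 0) added to rcu_init(), so the
expedited path would kick in before disable_nonboot_cpus() runs.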

But I wonder why it was not taken to stable trees. Can we take it?
Appreciate your help.

Thanks,
Arun

On Mon, Dec 15, 2014 at 10:34 PM, Arun KS <arunks.linux@gmail.com> wrote:
> Hi,
>
> Here is the backtrace of the process hanging in wait_rcu_gp,
>
> PID: 247    TASK: e16e7380  CPU: 4   COMMAND: "kworker/u16:5"
>  #0 [<c09fead0>] (__schedule) from [<c09fcab0>]
>  #1 [<c09fcab0>] (schedule_timeout) from [<c09fe050>]
>  #2 [<c09fe050>] (wait_for_common) from [<c013b2b4>]
>  #3 [<c013b2b4>] (wait_rcu_gp) from [<c0142f50>]
>  #4 [<c0142f50>] (atomic_notifier_chain_unregister) from [<c06b2ab8>]
>  #5 [<c06b2ab8>] (cpufreq_interactive_disable_sched_input) from [<c06b32a8>]
>  #6 [<c06b32a8>] (cpufreq_governor_interactive) from [<c06abbf8>]
>  #7 [<c06abbf8>] (__cpufreq_governor) from [<c06ae474>]
>  #8 [<c06ae474>] (__cpufreq_remove_dev_finish) from [<c06ae8c0>]
>  #9 [<c06ae8c0>] (cpufreq_cpu_callback) from [<c0a0185c>]
> #10 [<c0a0185c>] (notifier_call_chain) from [<c0121888>]
> #11 [<c0121888>] (__cpu_notify) from [<c0121a04>]
> #12 [<c0121a04>] (cpu_notify_nofail) from [<c09ee7f0>]
> #13 [<c09ee7f0>] (_cpu_down) from [<c0121b70>]
> #14 [<c0121b70>] (disable_nonboot_cpus) from [<c016788c>]
> #15 [<c016788c>] (suspend_devices_and_enter) from [<c0167bcc>]
> #16 [<c0167bcc>] (pm_suspend) from [<c0167d94>]
> #17 [<c0167d94>] (try_to_suspend) from [<c0138460>]
> #18 [<c0138460>] (process_one_work) from [<c0138b18>]
> #19 [<c0138b18>] (worker_thread) from [<c013dc58>]
> #20 [<c013dc58>] (kthread) from [<c01061b8>]
>
> Will this patch help here?
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1d74d14e98a6be740a6f12456c7d9ad47be9c9c
>
> I couldn't really understand why it got stuck in synchronize_rcu().
> Please give some pointers to debug this further.
>
> Below are the enabled configs related to RCU.
>
> CONFIG_TREE_PREEMPT_RCU=y
> CONFIG_PREEMPT_RCU=y
> CONFIG_RCU_STALL_COMMON=y
> CONFIG_RCU_FANOUT=32
> CONFIG_RCU_FANOUT_LEAF=16
> CONFIG_RCU_FAST_NO_HZ=y
> CONFIG_RCU_CPU_STALL_TIMEOUT=21
> CONFIG_RCU_CPU_STALL_VERBOSE=y
>
> Kernel version is 3.10.28
> Architecture is ARM
>
> Thanks,
> Arun


* Re: [RCU] kernel hangs in wait_rcu_gp during suspend path
  2014-12-16  6:29 ` Arun KS
@ 2014-12-16 17:30   ` Arun KS
  2014-12-17 19:24     ` Paul E. McKenney
  2014-12-17 19:27   ` Paul E. McKenney
  1 sibling, 1 reply; 10+ messages in thread
From: Arun KS @ 2014-12-16 17:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: Paul McKenney, josh, rostedt, Mathieu Desnoyers, laijs

Hello,

Adding some more info.

Below is the rcu_data data structure corresponding to cpu4.

struct rcu_data {
  completed = 5877,
  gpnum = 5877,
  passed_quiesce = true,
  qs_pending = false,
  beenonline = true,
  preemptible = true,
  mynode = 0xc117f340 <rcu_preempt_state>,
  grpmask = 16,
  nxtlist = 0xedaaec00,
  nxttail = {0xc54366c4, 0xe84d350c, 0xe84d350c, 0xe84d350c},
  nxtcompleted = {4294967035, 5878, 5878, 5878},
  qlen_lazy = 105,
  qlen = 415,
  qlen_last_fqs_check = 0,
  n_cbs_invoked = 86323,
  n_nocbs_invoked = 0,
  n_cbs_orphaned = 0,
  n_cbs_adopted = 139,
  n_force_qs_snap = 0,
  blimit = 10,
  dynticks = 0xc5436758,
  dynticks_snap = 7582140,
  dynticks_fqs = 41,
  offline_fqs = 0,
  n_rcu_pending = 59404,
  n_rp_qs_pending = 5,
  n_rp_report_qs = 4633,
  n_rp_cb_ready = 32,
  n_rp_cpu_needs_gp = 41088,
  n_rp_gp_completed = 2844,
  n_rp_gp_started = 1150,
  n_rp_need_nothing = 9657,
  barrier_head = {
    next = 0x0,
    func = 0x0
  },
  oom_head = {
    next = 0x0,
    func = 0x0
  },
  cpu = 4,
  rsp = 0xc117f340 <rcu_preempt_state>
}



Also pasting complete rcu_preempt_state.



rcu_preempt_state = $9 = {
  node = {{
      lock = {
        raw_lock = {
          {
            slock = 3129850509,
            tickets = {
              owner = 47757,
              next = 47757
            }
          }
        },
        magic = 3735899821,
        owner_cpu = 4294967295,
        owner = 0xffffffff
      },
      gpnum = 5877,
      completed = 5877,
      qsmask = 0,
      expmask = 0,
      qsmaskinit = 240,
      grpmask = 0,
      grplo = 0,
      grphi = 7,
      grpnum = 0 '\000',
      level = 0 '\000',
      parent = 0x0,
      blkd_tasks = {
        next = 0xc117f378 <rcu_preempt_state+56>,
        prev = 0xc117f378 <rcu_preempt_state+56>
      },
      gp_tasks = 0x0,
      exp_tasks = 0x0,
      need_future_gp = {1, 0},
      fqslock = {
        raw_lock = {
          {
            slock = 0,
            tickets = {
              owner = 0,
              next = 0
            }
          }
        },
        magic = 3735899821,
        owner_cpu = 4294967295,
        owner = 0xffffffff
      }
    }},
  level = {0xc117f340 <rcu_preempt_state>},
  levelcnt = {1, 0, 0, 0, 0},
  levelspread = "\b",
  rda = 0xc115e6b0 <rcu_preempt_data>,
  call = 0xc01975ac <call_rcu>,
  fqs_state = 0 '\000',
  boost = 0 '\000',
  gpnum = 5877,
  completed = 5877,
  gp_kthread = 0xf0c9e600,
  gp_wq = {
    lock = {
      {
        rlock = {
          raw_lock = {
            {
              slock = 2160230594,
              tickets = {
                owner = 32962,
                next = 32962
              }
            }
          },
          magic = 3735899821,
          owner_cpu = 4294967295,
          owner = 0xffffffff
        }
      }
    },
    task_list = {
      next = 0xf0cd1f20,
      prev = 0xf0cd1f20
    }
  },
  gp_flags = 1,
  orphan_lock = {
    raw_lock = {
      {
        slock = 327685,
        tickets = {
          owner = 5,
          next = 5
        }
      }
    },
    magic = 3735899821,
    owner_cpu = 4294967295,
    owner = 0xffffffff
  },
  orphan_nxtlist = 0x0,
  orphan_nxttail = 0xc117f490 <rcu_preempt_state+336>,
  orphan_donelist = 0x0,
  orphan_donetail = 0xc117f498 <rcu_preempt_state+344>,
  qlen_lazy = 0,
  qlen = 0,
  onoff_mutex = {
    count = {
      counter = 1
    },
    wait_lock = {
      {
        rlock = {
          raw_lock = {
            {
              slock = 811479134,
              tickets = {
                owner = 12382,
                next = 12382
              }
            }
          },
          magic = 3735899821,
          owner_cpu = 4294967295,
          owner = 0xffffffff
        }
      }
    },
    wait_list = {
      next = 0xc117f4bc <rcu_preempt_state+380>,
      prev = 0xc117f4bc <rcu_preempt_state+380>
    },
    owner = 0x0,
    name = 0x0,
    magic = 0xc117f4a8 <rcu_preempt_state+360>
  },
  barrier_mutex = {
    count = {
      counter = 1
    },
    wait_lock = {
      {
        rlock = {
          raw_lock = {
            {
              slock = 0,
              tickets = {
                owner = 0,
                next = 0
              }
            }
          },
          magic = 3735899821,
          owner_cpu = 4294967295,
          owner = 0xffffffff
        }
      }
    },
    wait_list = {
      next = 0xc117f4e4 <rcu_preempt_state+420>,
      prev = 0xc117f4e4 <rcu_preempt_state+420>
    },
    owner = 0x0,
    name = 0x0,
    magic = 0xc117f4d0 <rcu_preempt_state+400>
  },
  barrier_cpu_count = {
    counter = 0
  },
  barrier_completion = {
    done = 0,
    wait = {
      lock = {
        {
          rlock = {
            raw_lock = {
              {
                slock = 0,
                tickets = {
                  owner = 0,
                  next = 0
                }
              }
            },
            magic = 0,
            owner_cpu = 0,
            owner = 0x0
          }
        }
      },
      task_list = {
        next = 0x0,
        prev = 0x0
      }
    }
  },
  n_barrier_done = 0,
  expedited_start = {
    counter = 0
  },
  expedited_done = {
    counter = 0
  },
  expedited_wrap = {
    counter = 0
  },
  expedited_tryfail = {
    counter = 0
  },
  expedited_workdone1 = {
    counter = 0
  },
  expedited_workdone2 = {
    counter = 0
  },
  expedited_normal = {
    counter = 0
  },
  expedited_stoppedcpus = {
    counter = 0
  },
  expedited_done_tries = {
    counter = 0
  },
  expedited_done_lost = {
    counter = 0
  },
  expedited_done_exit = {
    counter = 0
  },
  jiffies_force_qs = 4294963917,
  n_force_qs = 4028,
  n_force_qs_lh = 0,
  n_force_qs_ngp = 0,
  gp_start = 4294963911,
  jiffies_stall = 4294966011,
  gp_max = 17,
  name = 0xc0d833ab "rcu_preempt",
  abbr = 112 'p',
  flavors = {
    next = 0xc117f2ec <rcu_bh_state+556>,
    prev = 0xc117f300 <rcu_struct_flavors>
  },
  wakeup_work = {
    flags = 3,
    llnode = {
      next = 0x0
    },
    func = 0xc0195aa8 <rsp_wakeup>
  }
}

Hope this helps.

Thanks,
Arun


On Tue, Dec 16, 2014 at 11:59 AM, Arun KS <arunks.linux@gmail.com> wrote:
> Hello,
>
> I dug a little deeper to understand the situation.
> All the other cpus are already in the idle thread.
> As per my understanding, for the grace period to end, at least one of
> the following should happen on every online cpu:
>
> 1. a context switch.
> 2. a switch to user space.
> 3. a switch to the idle thread.
>
> In this situation, since all the other cores are already in idle, none
> of the above is met on the online cores.
> So the grace period keeps getting extended and never finishes. Below is
> the state of the runqueues when the hang happens.
> --------------start------------------------------------
> crash> runq
> CPU 0 [OFFLINE]
>
> CPU 1 [OFFLINE]
>
> CPU 2 [OFFLINE]
>
> CPU 3 [OFFLINE]
>
> CPU 4 RUNQUEUE: c3192e40
>   CURRENT: PID: 0      TASK: f0874440  COMMAND: "swapper/4"
>   RT PRIO_ARRAY: c3192f20
>      [no tasks queued]
>   CFS RB_ROOT: c3192eb0
>      [no tasks queued]
>
> CPU 5 RUNQUEUE: c31a0e40
>   CURRENT: PID: 0      TASK: f0874980  COMMAND: "swapper/5"
>   RT PRIO_ARRAY: c31a0f20
>      [no tasks queued]
>   CFS RB_ROOT: c31a0eb0
>      [no tasks queued]
>
> CPU 6 RUNQUEUE: c31aee40
>   CURRENT: PID: 0      TASK: f0874ec0  COMMAND: "swapper/6"
>   RT PRIO_ARRAY: c31aef20
>      [no tasks queued]
>   CFS RB_ROOT: c31aeeb0
>      [no tasks queued]
>
> CPU 7 RUNQUEUE: c31bce40
>   CURRENT: PID: 0      TASK: f0875400  COMMAND: "swapper/7"
>   RT PRIO_ARRAY: c31bcf20
>      [no tasks queued]
>   CFS RB_ROOT: c31bceb0
>      [no tasks queued]
> --------------end------------------------------------
>
> If my understanding is correct, the patch below should help, because it
> will expedite grace periods during suspend:
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1d74d14e98a6be740a6f12456c7d9ad47be9c9c
>
> But I wonder why it was not taken to stable trees. Can we take it?
> Appreciate your help.
>
> Thanks,
> Arun
>
> On Mon, Dec 15, 2014 at 10:34 PM, Arun KS <arunks.linux@gmail.com> wrote:
>> Hi,
>>
>> Here is the backtrace of the process hanging in wait_rcu_gp,
>>
>> PID: 247    TASK: e16e7380  CPU: 4   COMMAND: "kworker/u16:5"
>>  #0 [<c09fead0>] (__schedule) from [<c09fcab0>]
>>  #1 [<c09fcab0>] (schedule_timeout) from [<c09fe050>]
>>  #2 [<c09fe050>] (wait_for_common) from [<c013b2b4>]
>>  #3 [<c013b2b4>] (wait_rcu_gp) from [<c0142f50>]
>>  #4 [<c0142f50>] (atomic_notifier_chain_unregister) from [<c06b2ab8>]
>>  #5 [<c06b2ab8>] (cpufreq_interactive_disable_sched_input) from [<c06b32a8>]
>>  #6 [<c06b32a8>] (cpufreq_governor_interactive) from [<c06abbf8>]
>>  #7 [<c06abbf8>] (__cpufreq_governor) from [<c06ae474>]
>>  #8 [<c06ae474>] (__cpufreq_remove_dev_finish) from [<c06ae8c0>]
>>  #9 [<c06ae8c0>] (cpufreq_cpu_callback) from [<c0a0185c>]
>> #10 [<c0a0185c>] (notifier_call_chain) from [<c0121888>]
>> #11 [<c0121888>] (__cpu_notify) from [<c0121a04>]
>> #12 [<c0121a04>] (cpu_notify_nofail) from [<c09ee7f0>]
>> #13 [<c09ee7f0>] (_cpu_down) from [<c0121b70>]
>> #14 [<c0121b70>] (disable_nonboot_cpus) from [<c016788c>]
>> #15 [<c016788c>] (suspend_devices_and_enter) from [<c0167bcc>]
>> #16 [<c0167bcc>] (pm_suspend) from [<c0167d94>]
>> #17 [<c0167d94>] (try_to_suspend) from [<c0138460>]
>> #18 [<c0138460>] (process_one_work) from [<c0138b18>]
>> #19 [<c0138b18>] (worker_thread) from [<c013dc58>]
>> #20 [<c013dc58>] (kthread) from [<c01061b8>]
>>
>> Will this patch help here?
>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1d74d14e98a6be740a6f12456c7d9ad47be9c9c
>>
>> I couldn't really understand why it got stuck in synchronize_rcu().
>> Please give some pointers to debug this further.
>>
>> Below are the enabled configs related to RCU.
>>
>> CONFIG_TREE_PREEMPT_RCU=y
>> CONFIG_PREEMPT_RCU=y
>> CONFIG_RCU_STALL_COMMON=y
>> CONFIG_RCU_FANOUT=32
>> CONFIG_RCU_FANOUT_LEAF=16
>> CONFIG_RCU_FAST_NO_HZ=y
>> CONFIG_RCU_CPU_STALL_TIMEOUT=21
>> CONFIG_RCU_CPU_STALL_VERBOSE=y
>>
>> Kernel version is 3.10.28
>> Architecture is ARM
>>
>> Thanks,
>> Arun


* Re: [RCU] kernel hangs in wait_rcu_gp during suspend path
  2014-12-15 17:04 [RCU] kernel hangs in wait_rcu_gp during suspend path Arun KS
  2014-12-16  6:29 ` Arun KS
@ 2014-12-16 20:19 ` Paul E. McKenney
  1 sibling, 0 replies; 10+ messages in thread
From: Paul E. McKenney @ 2014-12-16 20:19 UTC (permalink / raw)
  To: Arun KS; +Cc: linux-kernel

On Mon, Dec 15, 2014 at 10:34:58PM +0530, Arun KS wrote:
> Hi,
> 
> Here is the backtrace of the process hanging in wait_rcu_gp,
> 
> PID: 247    TASK: e16e7380  CPU: 4   COMMAND: "kworker/u16:5"
>  #0 [<c09fead0>] (__schedule) from [<c09fcab0>]
>  #1 [<c09fcab0>] (schedule_timeout) from [<c09fe050>]
>  #2 [<c09fe050>] (wait_for_common) from [<c013b2b4>]
>  #3 [<c013b2b4>] (wait_rcu_gp) from [<c0142f50>]
>  #4 [<c0142f50>] (atomic_notifier_chain_unregister) from [<c06b2ab8>]
>  #5 [<c06b2ab8>] (cpufreq_interactive_disable_sched_input) from [<c06b32a8>]
>  #6 [<c06b32a8>] (cpufreq_governor_interactive) from [<c06abbf8>]
>  #7 [<c06abbf8>] (__cpufreq_governor) from [<c06ae474>]
>  #8 [<c06ae474>] (__cpufreq_remove_dev_finish) from [<c06ae8c0>]
>  #9 [<c06ae8c0>] (cpufreq_cpu_callback) from [<c0a0185c>]
> #10 [<c0a0185c>] (notifier_call_chain) from [<c0121888>]
> #11 [<c0121888>] (__cpu_notify) from [<c0121a04>]
> #12 [<c0121a04>] (cpu_notify_nofail) from [<c09ee7f0>]
> #13 [<c09ee7f0>] (_cpu_down) from [<c0121b70>]
> #14 [<c0121b70>] (disable_nonboot_cpus) from [<c016788c>]
> #15 [<c016788c>] (suspend_devices_and_enter) from [<c0167bcc>]
> #16 [<c0167bcc>] (pm_suspend) from [<c0167d94>]
> #17 [<c0167d94>] (try_to_suspend) from [<c0138460>]
> #18 [<c0138460>] (process_one_work) from [<c0138b18>]
> #19 [<c0138b18>] (worker_thread) from [<c013dc58>]
> #20 [<c013dc58>] (kthread) from [<c01061b8>]
> 
> Will this patch help here?
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1d74d14e98a6be740a6f12456c7d9ad47be9c9c

Looks to me like it will help.  Why don't you give it a try?

> I couldn't really understand why it got stuck in synchronize_rcu().
> Please give some pointers to debug this further.

If this patch does work, I suggest looking at the related discussion on LKML.
Perhaps your system is suffering from the audio/irq bug mentioned in the
commit log.

For a list of reasons why synchronize_rcu() might get stuck, please take
a look at Documentation/RCU/stallwarn.txt, near the end, which I have added
to the end of this email.  Search for "What Causes RCU CPU Stall Warnings".

> Below are the enabled configs related to RCU.
> 
> CONFIG_TREE_PREEMPT_RCU=y
> CONFIG_PREEMPT_RCU=y
> CONFIG_RCU_STALL_COMMON=y
> CONFIG_RCU_FANOUT=32
> CONFIG_RCU_FANOUT_LEAF=16
> CONFIG_RCU_FAST_NO_HZ=y
> CONFIG_RCU_CPU_STALL_TIMEOUT=21
> CONFIG_RCU_CPU_STALL_VERBOSE=y
> 
> Kernel version is 3.10.28
> Architecture is ARM

People familiar with distros based on 3.10 might have additional information.

							Thanx, Paul

------------------------------------------------------------------------

Using RCU's CPU Stall Detector

The rcu_cpu_stall_suppress module parameter enables RCU's CPU stall
detector, which detects conditions that unduly delay RCU grace periods.
This module parameter enables CPU stall detection by default, but
may be overridden via boot-time parameter or at runtime via sysfs.
The stall detector's idea of what constitutes "unduly delayed" is
controlled by a set of kernel configuration variables and cpp macros:

CONFIG_RCU_CPU_STALL_TIMEOUT

	This kernel configuration parameter defines the period of time
	that RCU will wait from the beginning of a grace period until it
	issues an RCU CPU stall warning.  This time period is normally
	21 seconds.

	This configuration parameter may be changed at runtime via the
	/sys/module/rcupdate/parameters/rcu_cpu_stall_timeout file;
	however, this parameter is checked only at the beginning of a cycle.
	So if you are 10 seconds into a 40-second stall, setting this
	sysfs parameter to (say) five will shorten the timeout for the
	-next- stall, or the following warning for the current stall
	(assuming the stall lasts long enough).  It will not affect the
	timing of the next warning for the current stall.

	Stall-warning messages may be enabled and disabled completely via
	/sys/module/rcupdate/parameters/rcu_cpu_stall_suppress.

CONFIG_RCU_CPU_STALL_INFO

	This kernel configuration parameter causes the stall warning to
	print out additional per-CPU diagnostic information, including
	information on scheduling-clock ticks and RCU's idle-CPU tracking.

RCU_STALL_DELAY_DELTA

	Although the lockdep facility is extremely useful, it does add
	some overhead.  Therefore, under CONFIG_PROVE_RCU, the
	RCU_STALL_DELAY_DELTA macro allows five extra seconds before
	giving an RCU CPU stall warning message.  (This is a cpp
	macro, not a kernel configuration parameter.)

RCU_STALL_RAT_DELAY

	The CPU stall detector tries to make the offending CPU print its
	own warnings, as this often gives better-quality stack traces.
	However, if the offending CPU does not detect its own stall in
	the number of jiffies specified by RCU_STALL_RAT_DELAY, then
	some other CPU will complain.  This delay is normally set to
	two jiffies.  (This is a cpp macro, not a kernel configuration
	parameter.)

rcupdate.rcu_task_stall_timeout

	This boot/sysfs parameter controls the RCU-tasks stall warning
	interval.  A value of zero or less suppresses RCU-tasks stall
	warnings.  A positive value sets the stall-warning interval
	in jiffies.  An RCU-tasks stall warning starts with the line:

		INFO: rcu_tasks detected stalls on tasks:

	And continues with the output of sched_show_task() for each
	task stalling the current RCU-tasks grace period.

For non-RCU-tasks flavors of RCU, when a CPU detects that it is stalling,
it will print a message similar to the following:

INFO: rcu_sched_state detected stall on CPU 5 (t=2500 jiffies)

This message indicates that CPU 5 detected that it was causing a stall,
and that the stall was affecting RCU-sched.  This message will normally be
followed by a stack dump of the offending CPU.  On TREE_RCU kernel builds,
RCU and RCU-sched are implemented by the same underlying mechanism,
while on PREEMPT_RCU kernel builds, RCU is instead implemented
by rcu_preempt_state.

On the other hand, if the offending CPU fails to print out a stall-warning
message quickly enough, some other CPU will print a message similar to
the following:

INFO: rcu_bh_state detected stalls on CPUs/tasks: { 3 5 } (detected by 2, 2502 jiffies)

This message indicates that CPU 2 detected that CPUs 3 and 5 were both
causing stalls, and that the stall was affecting RCU-bh.  This message
will normally be followed by stack dumps for each CPU.  Please note that
PREEMPT_RCU builds can be stalled by tasks as well as by CPUs,
and that the tasks will be indicated by PID, for example, "P3421".
It is even possible for a rcu_preempt_state stall to be caused by both
CPUs -and- tasks, in which case the offending CPUs and tasks will all
be called out in the list.

Finally, if the grace period ends just as the stall warning starts
printing, there will be a spurious stall-warning message:

INFO: rcu_bh_state detected stalls on CPUs/tasks: { } (detected by 4, 2502 jiffies)

This is rare, but does happen from time to time in real life.  It is also
possible for a zero-jiffy stall to be flagged in this case, depending
on how the stall warning and the grace-period initialization happen to
interact.  Please note that it is not possible to entirely eliminate this
sort of false positive without resorting to things like stop_machine(),
which is overkill for this sort of problem.

If the CONFIG_RCU_CPU_STALL_INFO kernel configuration parameter is set,
more information is printed with the stall-warning message, for example:

	INFO: rcu_preempt detected stall on CPU
	0: (63959 ticks this GP) idle=241/3fffffffffffffff/0 softirq=82/543
	   (t=65000 jiffies)

In kernels with CONFIG_RCU_FAST_NO_HZ, even more information is
printed:

	INFO: rcu_preempt detected stall on CPU
	0: (64628 ticks this GP) idle=dd5/3fffffffffffffff/0 softirq=82/543 last_accelerate: a345/d342 nonlazy_posted: 25 .D
	   (t=65000 jiffies)

The "(64628 ticks this GP)" indicates that this CPU has taken more
than 64,000 scheduling-clock interrupts during the current stalled
grace period.  If the CPU was not yet aware of the current grace
period (for example, if it was offline), then this part of the message
indicates how many grace periods behind the CPU is.

The "idle=" portion of the message prints the dyntick-idle state.
The hex number before the first "/" is the low-order 12 bits of the
dynticks counter, which will have an even-numbered value if the CPU is
in dyntick-idle mode and an odd-numbered value otherwise.  The hex
number between the two "/"s is the value of the nesting, which will
be a small positive number if in the idle loop and a very large positive
number (as shown above) otherwise.

The "softirq=" portion of the message tracks the number of RCU softirq
handlers that the stalled CPU has executed.  The number before the "/"
is the number that had executed since boot at the time that this CPU
last noted the beginning of a grace period, which might be the current
(stalled) grace period, or it might be some earlier grace period (for
example, if the CPU might have been in dyntick-idle mode for an extended
time period).  The number after the "/" is the number that have executed
since boot until the current time.  If this latter number stays constant
across repeated stall-warning messages, it is possible that RCU's softirq
handlers are no longer able to execute on this CPU.  This can happen if
the stalled CPU is spinning with interrupts disabled, or, in -rt
kernels, if a high-priority process is starving RCU's softirq handler.

For CONFIG_RCU_FAST_NO_HZ kernels, the "last_accelerate:" prints the
low-order 16 bits (in hex) of the jiffies counter when this CPU last
invoked rcu_try_advance_all_cbs() from rcu_needs_cpu() or last invoked
rcu_accelerate_cbs() from rcu_prepare_for_idle().  The "nonlazy_posted:"
prints the number of non-lazy callbacks posted since the last call to
rcu_needs_cpu().  Finally, an "L" indicates that there are currently
no non-lazy callbacks ("." is printed otherwise, as shown above) and
"D" indicates that dyntick-idle processing is enabled ("." is printed
otherwise, for example, if disabled via the "nohz=" kernel boot parameter).


Multiple Warnings From One Stall

If a stall lasts long enough, multiple stall-warning messages will be
printed for it.  The second and subsequent messages are printed at
longer intervals, so that the time between (say) the first and second
message will be about three times the interval between the beginning
of the stall and the first message.
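
(For example, with the default 21-second timeout, the first warning
appears about 21 seconds into the stall and the second about 63 seconds
after that, roughly 84 seconds into the stall.)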


What Causes RCU CPU Stall Warnings?

So your kernel printed an RCU CPU stall warning.  The next question is
"What caused it?"  The following problems can result in RCU CPU stall
warnings:

o	A CPU looping in an RCU read-side critical section.  (A minimal
	sketch of this failure mode appears after this list.)
	
o	A CPU looping with interrupts disabled.  This condition can
	result in RCU-sched and RCU-bh stalls.

o	A CPU looping with preemption disabled.  This condition can
	result in RCU-sched stalls and, if ksoftirqd is in use, RCU-bh
	stalls.

o	A CPU looping with bottom halves disabled.  This condition can
	result in RCU-sched and RCU-bh stalls.

o	For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the
	kernel without invoking schedule().  Note that cond_resched()
	does not necessarily prevent RCU CPU stall warnings.  Therefore,
	if the looping in the kernel is really expected and desirable
	behavior, you might need to replace some of the cond_resched()
	calls with calls to cond_resched_rcu_qs().

o	Anything that prevents RCU's grace-period kthreads from running.
	This can result in the "All QSes seen" console-log message.
	This message will include information on when the kthread last
	ran and how often it should be expected to run.

o	A CPU-bound real-time task in a CONFIG_PREEMPT kernel, which might
	happen to preempt a low-priority task in the middle of an RCU
	read-side critical section.   This is especially damaging if
	that low-priority task is not permitted to run on any other CPU,
	in which case the next RCU grace period can never complete, which
	will eventually cause the system to run out of memory and hang.
	While the system is in the process of running itself out of
	memory, you might see stall-warning messages.

o	A CPU-bound real-time task in a CONFIG_PREEMPT_RT kernel that
	is running at a higher priority than the RCU softirq threads.
	This will prevent RCU callbacks from ever being invoked,
	and in a CONFIG_PREEMPT_RCU kernel will further prevent
	RCU grace periods from ever completing.  Either way, the
	system will eventually run out of memory and hang.  In the
	CONFIG_PREEMPT_RCU case, you might see stall-warning
	messages.

o	A hardware or software issue shuts off the scheduler-clock
	interrupt on a CPU that is not in dyntick-idle mode.  This
	problem really has happened, and seems to be most likely to
	result in RCU CPU stall warnings for CONFIG_NO_HZ_COMMON=n kernels.

o	A bug in the RCU implementation.

o	A hardware failure.  This is quite unlikely, but has occurred
	at least once in real life.  A CPU failed in a running system,
	becoming unresponsive, but not causing an immediate crash.
	This resulted in a series of RCU CPU stall warnings, eventually
	leading to the realization that the CPU had failed.
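
As a minimal illustration of the first and the !CONFIG_PREEMPT cases
above (hypothetical helpers, not code from any real driver):

	/* BAD: the reader never leaves its read-side critical section,
	 * so the grace period that synchronize_rcu() is waiting on can
	 * never end. */
	rcu_read_lock();
	while (!ACCESS_ONCE(done))	/* if 'done' is never set: stall */
		poll_device_status();	/* hypothetical helper */
	rcu_read_unlock();

	/* BETTER: poll outside of any read-side critical section and
	 * report a quiescent state on each pass, so that even a
	 * !CONFIG_PREEMPT kernel lets the grace period complete. */
	while (!ACCESS_ONCE(done)) {
		poll_device_status();
		cond_resched_rcu_qs();
	}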

The RCU, RCU-sched, RCU-bh, and RCU-tasks implementations have CPU stall
warnings.  Note that SRCU does -not- have CPU stall warnings.  Please note
that RCU only detects CPU stalls when there is a grace period in progress.
No grace period, no CPU stall warnings.

To diagnose the cause of the stall, inspect the stack traces.
The offending function will usually be near the top of the stack.
If you have a series of stall warnings from a single extended stall,
comparing the stack traces can often help determine where the stall
is occurring, which will usually be in the function nearest the top of
that portion of the stack which remains the same from trace to trace.
If you can reliably trigger the stall, ftrace can be quite helpful.

RCU bugs can often be debugged with the help of CONFIG_RCU_TRACE
and with RCU's event tracing.  For information on RCU's event tracing,
see include/trace/events/rcu.h.



* Re: [RCU] kernel hangs in wait_rcu_gp during suspend path
  2014-12-16 17:30   ` Arun KS
@ 2014-12-17 19:24     ` Paul E. McKenney
  2014-12-18 16:22       ` Arun KS
  0 siblings, 1 reply; 10+ messages in thread
From: Paul E. McKenney @ 2014-12-17 19:24 UTC (permalink / raw)
  To: Arun KS; +Cc: linux-kernel, josh, rostedt, Mathieu Desnoyers, laijs

On Tue, Dec 16, 2014 at 11:00:20PM +0530, Arun KS wrote:
> Hello,
> 
> Adding some more info.
> 
> Below is the rcu_data data structure corresponding to cpu4.

This shows that RCU is idle.  What was the state of the system at the
time you collected this data?

							Thanx, Paul

> struct rcu_data {
>   completed = 5877,
>   gpnum = 5877,
>   passed_quiesce = true,
>   qs_pending = false,
>   beenonline = true,
>   preemptible = true,
>   mynode = 0xc117f340 <rcu_preempt_state>,
>   grpmask = 16,
>   nxtlist = 0xedaaec00,
>   nxttail = {0xc54366c4, 0xe84d350c, 0xe84d350c, 0xe84d350c},
>   nxtcompleted = {4294967035, 5878, 5878, 5878},
>   qlen_lazy = 105,
>   qlen = 415,
>   qlen_last_fqs_check = 0,
>   n_cbs_invoked = 86323,
>   n_nocbs_invoked = 0,
>   n_cbs_orphaned = 0,
>   n_cbs_adopted = 139,
>   n_force_qs_snap = 0,
>   blimit = 10,
>   dynticks = 0xc5436758,
>   dynticks_snap = 7582140,
>   dynticks_fqs = 41,
>   offline_fqs = 0,
>   n_rcu_pending = 59404,
>   n_rp_qs_pending = 5,
>   n_rp_report_qs = 4633,
>   n_rp_cb_ready = 32,
>   n_rp_cpu_needs_gp = 41088,
>   n_rp_gp_completed = 2844,
>   n_rp_gp_started = 1150,
>   n_rp_need_nothing = 9657,
>   barrier_head = {
>     next = 0x0,
>     func = 0x0
>   },
>   oom_head = {
>     next = 0x0,
>     func = 0x0
>   },
>   cpu = 4,
>   rsp = 0xc117f340 <rcu_preempt_state>
> }
> 
> 
> 
> Also pasting complete rcu_preempt_state.
> 
> 
> 
> rcu_preempt_state = $9 = {
>   node = {{
>       lock = {
>         raw_lock = {
>           {
>             slock = 3129850509,
>             tickets = {
>               owner = 47757,
>               next = 47757
>             }
>           }
>         },
>         magic = 3735899821,
>         owner_cpu = 4294967295,
>         owner = 0xffffffff
>       },
>       gpnum = 5877,
>       completed = 5877,
>       qsmask = 0,
>       expmask = 0,
>       qsmaskinit = 240,
>       grpmask = 0,
>       grplo = 0,
>       grphi = 7,
>       grpnum = 0 '\000',
>       level = 0 '\000',
>       parent = 0x0,
>       blkd_tasks = {
>         next = 0xc117f378 <rcu_preempt_state+56>,
>         prev = 0xc117f378 <rcu_preempt_state+56>
>       },
>       gp_tasks = 0x0,
>       exp_tasks = 0x0,
>       need_future_gp = {1, 0},
>       fqslock = {
>         raw_lock = {
>           {
>             slock = 0,
>             tickets = {
>               owner = 0,
>               next = 0
>             }
>           }
>         },
>         magic = 3735899821,
>         owner_cpu = 4294967295,
>         owner = 0xffffffff
>       }
>     }},
>   level = {0xc117f340 <rcu_preempt_state>},
>   levelcnt = {1, 0, 0, 0, 0},
>   levelspread = "\b",
>   rda = 0xc115e6b0 <rcu_preempt_data>,
>   call = 0xc01975ac <call_rcu>,
>   fqs_state = 0 '\000',
>   boost = 0 '\000',
>   gpnum = 5877,
>   completed = 5877,
>   gp_kthread = 0xf0c9e600,
>   gp_wq = {
>     lock = {
>       {
>         rlock = {
>           raw_lock = {
>             {
>               slock = 2160230594,
>               tickets = {
>                 owner = 32962,
>                 next = 32962
>               }
>             }
>           },
>           magic = 3735899821,
>           owner_cpu = 4294967295,
>           owner = 0xffffffff
>         }
>       }
>     },
>     task_list = {
>       next = 0xf0cd1f20,
>       prev = 0xf0cd1f20
>     }
>   },
>   gp_flags = 1,
>   orphan_lock = {
>     raw_lock = {
>       {
>         slock = 327685,
>         tickets = {
>           owner = 5,
>           next = 5
>         }
>       }
>     },
>     magic = 3735899821,
>     owner_cpu = 4294967295,
>     owner = 0xffffffff
>   },
>   orphan_nxtlist = 0x0,
>   orphan_nxttail = 0xc117f490 <rcu_preempt_state+336>,
>   orphan_donelist = 0x0,
>   orphan_donetail = 0xc117f498 <rcu_preempt_state+344>,
>   qlen_lazy = 0,
>   qlen = 0,
>   onoff_mutex = {
>     count = {
>       counter = 1
>     },
>     wait_lock = {
>       {
>         rlock = {
>           raw_lock = {
>             {
>               slock = 811479134,
>               tickets = {
>                 owner = 12382,
>                 next = 12382
>               }
>             }
>           },
>           magic = 3735899821,
>           owner_cpu = 4294967295,
>           owner = 0xffffffff
>         }
>       }
>     },
>     wait_list = {
>       next = 0xc117f4bc <rcu_preempt_state+380>,
>       prev = 0xc117f4bc <rcu_preempt_state+380>
>     },
>     owner = 0x0,
>     name = 0x0,
>     magic = 0xc117f4a8 <rcu_preempt_state+360>
>   },
>   barrier_mutex = {
>     count = {
>       counter = 1
>     },
>     wait_lock = {
>       {
>         rlock = {
>           raw_lock = {
>             {
>               slock = 0,
>               tickets = {
>                 owner = 0,
>                 next = 0
>               }
>             }
>           },
>           magic = 3735899821,
>           owner_cpu = 4294967295,
>           owner = 0xffffffff
>         }
>       }
>     },
>     wait_list = {
>       next = 0xc117f4e4 <rcu_preempt_state+420>,
>       prev = 0xc117f4e4 <rcu_preempt_state+420>
>     },
>     owner = 0x0,
>     name = 0x0,
>     magic = 0xc117f4d0 <rcu_preempt_state+400>
>   },
>   barrier_cpu_count = {
>     counter = 0
>   },
>   barrier_completion = {
>     done = 0,
>     wait = {
>       lock = {
>         {
>           rlock = {
>             raw_lock = {
>               {
>                 slock = 0,
>                 tickets = {
>                   owner = 0,
>                   next = 0
>                 }
>               }
>             },
>             magic = 0,
>             owner_cpu = 0,
>             owner = 0x0
>           }
>         }
>       },
>       task_list = {
>         next = 0x0,
>         prev = 0x0
>       }
>     }
>   },
>   n_barrier_done = 0,
>   expedited_start = {
>     counter = 0
>   },
>   expedited_done = {
>     counter = 0
>   },
>   expedited_wrap = {
>     counter = 0
>   },
>   expedited_tryfail = {
>     counter = 0
>   },
>   expedited_workdone1 = {
>     counter = 0
>   },
>   expedited_workdone2 = {
>     counter = 0
>   },
>   expedited_normal = {
>     counter = 0
>   },
>   expedited_stoppedcpus = {
>     counter = 0
>   },
>   expedited_done_tries = {
>     counter = 0
>   },
>   expedited_done_lost = {
>     counter = 0
>   },
>   expedited_done_exit = {
>     counter = 0
>   },
>   jiffies_force_qs = 4294963917,
>   n_force_qs = 4028,
>   n_force_qs_lh = 0,
>   n_force_qs_ngp = 0,
>   gp_start = 4294963911,
>   jiffies_stall = 4294966011,
>   gp_max = 17,
>   name = 0xc0d833ab "rcu_preempt",
>   abbr = 112 'p',
>   flavors = {
>     next = 0xc117f2ec <rcu_bh_state+556>,
>     prev = 0xc117f300 <rcu_struct_flavors>
>   },
>   wakeup_work = {
>     flags = 3,
>     llnode = {
>       next = 0x0
>     },
>     func = 0xc0195aa8 <rsp_wakeup>
>   }
> }
> 
> Hope this helps.
> 
> Thanks,
> Arun
> 
> 
> On Tue, Dec 16, 2014 at 11:59 AM, Arun KS <arunks.linux@gmail.com> wrote:
> > Hello,
> >
> > I dug a little deeper to understand the situation.
> > All the other cpus are already in the idle thread.
> > As per my understanding, for the grace period to end, at least one of
> > the following should happen on every online cpu:
> >
> > 1. a context switch.
> > 2. a switch to user space.
> > 3. a switch to the idle thread.
> >
> > In this situation, since all the other cores are already in idle, none
> > of the above is met on the online cores.
> > So the grace period keeps getting extended and never finishes. Below is
> > the state of the runqueues when the hang happens.
> > --------------start------------------------------------
> > crash> runq
> > CPU 0 [OFFLINE]
> >
> > CPU 1 [OFFLINE]
> >
> > CPU 2 [OFFLINE]
> >
> > CPU 3 [OFFLINE]
> >
> > CPU 4 RUNQUEUE: c3192e40
> >   CURRENT: PID: 0      TASK: f0874440  COMMAND: "swapper/4"
> >   RT PRIO_ARRAY: c3192f20
> >      [no tasks queued]
> >   CFS RB_ROOT: c3192eb0
> >      [no tasks queued]
> >
> > CPU 5 RUNQUEUE: c31a0e40
> >   CURRENT: PID: 0      TASK: f0874980  COMMAND: "swapper/5"
> >   RT PRIO_ARRAY: c31a0f20
> >      [no tasks queued]
> >   CFS RB_ROOT: c31a0eb0
> >      [no tasks queued]
> >
> > CPU 6 RUNQUEUE: c31aee40
> >   CURRENT: PID: 0      TASK: f0874ec0  COMMAND: "swapper/6"
> >   RT PRIO_ARRAY: c31aef20
> >      [no tasks queued]
> >   CFS RB_ROOT: c31aeeb0
> >      [no tasks queued]
> >
> > CPU 7 RUNQUEUE: c31bce40
> >   CURRENT: PID: 0      TASK: f0875400  COMMAND: "swapper/7"
> >   RT PRIO_ARRAY: c31bcf20
> >      [no tasks queued]
> >   CFS RB_ROOT: c31bceb0
> >      [no tasks queued]
> > --------------end------------------------------------
> >
> > If my understanding is correct, the patch below should help, because it
> > will expedite grace periods during suspend:
> > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1d74d14e98a6be740a6f12456c7d9ad47be9c9c
> >
> > But I wonder why it was not taken to stable trees. Can we take it?
> > Appreciate your help.
> >
> > Thanks,
> > Arun
> >
> > On Mon, Dec 15, 2014 at 10:34 PM, Arun KS <arunks.linux@gmail.com> wrote:
> >> Hi,
> >>
> >> Here is the backtrace of the process hanging in wait_rcu_gp,
> >>
> >> PID: 247    TASK: e16e7380  CPU: 4   COMMAND: "kworker/u16:5"
> >>  #0 [<c09fead0>] (__schedule) from [<c09fcab0>]
> >>  #1 [<c09fcab0>] (schedule_timeout) from [<c09fe050>]
> >>  #2 [<c09fe050>] (wait_for_common) from [<c013b2b4>]
> >>  #3 [<c013b2b4>] (wait_rcu_gp) from [<c0142f50>]
> >>  #4 [<c0142f50>] (atomic_notifier_chain_unregister) from [<c06b2ab8>]
> >>  #5 [<c06b2ab8>] (cpufreq_interactive_disable_sched_input) from [<c06b32a8>]
> >>  #6 [<c06b32a8>] (cpufreq_governor_interactive) from [<c06abbf8>]
> >>  #7 [<c06abbf8>] (__cpufreq_governor) from [<c06ae474>]
> >>  #8 [<c06ae474>] (__cpufreq_remove_dev_finish) from [<c06ae8c0>]
> >>  #9 [<c06ae8c0>] (cpufreq_cpu_callback) from [<c0a0185c>]
> >> #10 [<c0a0185c>] (notifier_call_chain) from [<c0121888>]
> >> #11 [<c0121888>] (__cpu_notify) from [<c0121a04>]
> >> #12 [<c0121a04>] (cpu_notify_nofail) from [<c09ee7f0>]
> >> #13 [<c09ee7f0>] (_cpu_down) from [<c0121b70>]
> >> #14 [<c0121b70>] (disable_nonboot_cpus) from [<c016788c>]
> >> #15 [<c016788c>] (suspend_devices_and_enter) from [<c0167bcc>]
> >> #16 [<c0167bcc>] (pm_suspend) from [<c0167d94>]
> >> #17 [<c0167d94>] (try_to_suspend) from [<c0138460>]
> >> #18 [<c0138460>] (process_one_work) from [<c0138b18>]
> >> #19 [<c0138b18>] (worker_thread) from [<c013dc58>]
> >> #20 [<c013dc58>] (kthread) from [<c01061b8>]
> >>
> >> Will this patch help here?
> >> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1d74d14e98a6be740a6f12456c7d9ad47be9c9c
> >>
> >> I couldn't really understand why it got stuck in synchronize_rcu().
> >> Please give some pointers to debug this further.
> >>
> >> Below are the enabled configs related to RCU.
> >>
> >> CONFIG_TREE_PREEMPT_RCU=y
> >> CONFIG_PREEMPT_RCU=y
> >> CONFIG_RCU_STALL_COMMON=y
> >> CONFIG_RCU_FANOUT=32
> >> CONFIG_RCU_FANOUT_LEAF=16
> >> CONFIG_RCU_FAST_NO_HZ=y
> >> CONFIG_RCU_CPU_STALL_TIMEOUT=21
> >> CONFIG_RCU_CPU_STALL_VERBOSE=y
> >>
> >> Kernel version is 3.10.28
> >> Architecture is ARM
> >>
> >> Thanks,
> >> Arun
> 



* Re: [RCU] kernel hangs in wait_rcu_gp during suspend path
  2014-12-16  6:29 ` Arun KS
  2014-12-16 17:30   ` Arun KS
@ 2014-12-17 19:27   ` Paul E. McKenney
  1 sibling, 0 replies; 10+ messages in thread
From: Paul E. McKenney @ 2014-12-17 19:27 UTC (permalink / raw)
  To: Arun KS; +Cc: linux-kernel, josh, rostedt, mathieu.desnoyers, laijs

On Tue, Dec 16, 2014 at 11:59:07AM +0530, Arun KS wrote:
> Hello,
> 
> I dug a little deeper to understand the situation.
> All the other cpus are already in the idle thread.
> As per my understanding, for the grace period to end, at least one of
> the following should happen on every online cpu:
>
> 1. a context switch.
> 2. a switch to user space.
> 3. a switch to the idle thread.

This is the case for rcu_sched, and the other flavors vary a bit.

> In this situation, since all the other cores are already in idle, none
> of the above is met on the online cores.
> So the grace period keeps getting extended and never finishes. Below is
> the state of the runqueues when the hang happens.
> --------------start------------------------------------
> crash> runq
> CPU 0 [OFFLINE]
> 
> CPU 1 [OFFLINE]
> 
> CPU 2 [OFFLINE]
> 
> CPU 3 [OFFLINE]
> 
> CPU 4 RUNQUEUE: c3192e40
>   CURRENT: PID: 0      TASK: f0874440  COMMAND: "swapper/4"
>   RT PRIO_ARRAY: c3192f20
>      [no tasks queued]
>   CFS RB_ROOT: c3192eb0
>      [no tasks queued]
> 
> CPU 5 RUNQUEUE: c31a0e40
>   CURRENT: PID: 0      TASK: f0874980  COMMAND: "swapper/5"
>   RT PRIO_ARRAY: c31a0f20
>      [no tasks queued]
>   CFS RB_ROOT: c31a0eb0
>      [no tasks queued]
> 
> CPU 6 RUNQUEUE: c31aee40
>   CURRENT: PID: 0      TASK: f0874ec0  COMMAND: "swapper/6"
>   RT PRIO_ARRAY: c31aef20
>      [no tasks queued]
>   CFS RB_ROOT: c31aeeb0
>      [no tasks queued]
> 
> CPU 7 RUNQUEUE: c31bce40
>   CURRENT: PID: 0      TASK: f0875400  COMMAND: "swapper/7"
>   RT PRIO_ARRAY: c31bcf20
>      [no tasks queued]
>   CFS RB_ROOT: c31bceb0
>      [no tasks queued]
> --------------end------------------------------------
> 
> If my understanding is correct, the patch below should help, because it
> will expedite grace periods during suspend:
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1d74d14e98a6be740a6f12456c7d9ad47be9c9c

I believe that we already covered this, but I do suggest that you give
it a try.

> But I wonder why it was not taken to stable trees. Can we take it?
> Appreciate your help.

I have no objection to your taking it, but have you tried it yet?

							Thanx, Paul

> Thanks,
> Arun
> 
> On Mon, Dec 15, 2014 at 10:34 PM, Arun KS <arunks.linux@gmail.com> wrote:
> > Hi,
> >
> > Here is the backtrace of the process hanging in wait_rcu_gp,
> >
> > PID: 247    TASK: e16e7380  CPU: 4   COMMAND: "kworker/u16:5"
> >  #0 [<c09fead0>] (__schedule) from [<c09fcab0>]
> >  #1 [<c09fcab0>] (schedule_timeout) from [<c09fe050>]
> >  #2 [<c09fe050>] (wait_for_common) from [<c013b2b4>]
> >  #3 [<c013b2b4>] (wait_rcu_gp) from [<c0142f50>]
> >  #4 [<c0142f50>] (atomic_notifier_chain_unregister) from [<c06b2ab8>]
> >  #5 [<c06b2ab8>] (cpufreq_interactive_disable_sched_input) from [<c06b32a8>]
> >  #6 [<c06b32a8>] (cpufreq_governor_interactive) from [<c06abbf8>]
> >  #7 [<c06abbf8>] (__cpufreq_governor) from [<c06ae474>]
> >  #8 [<c06ae474>] (__cpufreq_remove_dev_finish) from [<c06ae8c0>]
> >  #9 [<c06ae8c0>] (cpufreq_cpu_callback) from [<c0a0185c>]
> > #10 [<c0a0185c>] (notifier_call_chain) from [<c0121888>]
> > #11 [<c0121888>] (__cpu_notify) from [<c0121a04>]
> > #12 [<c0121a04>] (cpu_notify_nofail) from [<c09ee7f0>]
> > #13 [<c09ee7f0>] (_cpu_down) from [<c0121b70>]
> > #14 [<c0121b70>] (disable_nonboot_cpus) from [<c016788c>]
> > #15 [<c016788c>] (suspend_devices_and_enter) from [<c0167bcc>]
> > #16 [<c0167bcc>] (pm_suspend) from [<c0167d94>]
> > #17 [<c0167d94>] (try_to_suspend) from [<c0138460>]
> > #18 [<c0138460>] (process_one_work) from [<c0138b18>]
> > #19 [<c0138b18>] (worker_thread) from [<c013dc58>]
> > #20 [<c013dc58>] (kthread) from [<c01061b8>]
> >
> > Will this patch help here?
> > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1d74d14e98a6be740a6f12456c7d9ad47be9c9c
> >
> > I couldn't really understand why it got stuck in synchronize_rcu().
> > Please give some pointers to debug this further.
> >
> > Below are the enabled configs related to RCU.
> >
> > CONFIG_TREE_PREEMPT_RCU=y
> > CONFIG_PREEMPT_RCU=y
> > CONFIG_RCU_STALL_COMMON=y
> > CONFIG_RCU_FANOUT=32
> > CONFIG_RCU_FANOUT_LEAF=16
> > CONFIG_RCU_FAST_NO_HZ=y
> > CONFIG_RCU_CPU_STALL_TIMEOUT=21
> > CONFIG_RCU_CPU_STALL_VERBOSE=y
> >
> > Kernel version is 3.10.28
> > Architecture is ARM
> >
> > Thanks,
> > Arun
> 



* Re: [RCU] kernel hangs in wait_rcu_gp during suspend path
  2014-12-17 19:24     ` Paul E. McKenney
@ 2014-12-18 16:22       ` Arun KS
  2014-12-18 20:05         ` Paul E. McKenney
  0 siblings, 1 reply; 10+ messages in thread
From: Arun KS @ 2014-12-18 16:22 UTC (permalink / raw)
  To: Paul McKenney
  Cc: linux-kernel, josh, rostedt, Mathieu Desnoyers, laijs, Arun KS


Hi Paul,

On Thu, Dec 18, 2014 at 12:54 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> On Tue, Dec 16, 2014 at 11:00:20PM +0530, Arun KS wrote:
>> Hello,
>>
>> Adding some more info.
>>
>> Below is the rcu_data data structure corresponding to cpu4.
>
> This shows that RCU is idle.  What was the state of the system at the
> time you collected this data?

The system initiated a suspend sequence and is currently in
disable_nonboot_cpus().  It hotplugged out cpus 0, 1 and 2
successfully, and even succeeded in hotplugging cpu3.
But while calling the CPU_POST_DEAD notifier for cpu3, another driver
tried to unregister an atomic notifier, which eventually calls
synchronize_rcu() and hangs the suspend task.

The backtrace is as follows:
PID: 202    TASK: edcd2a00  CPU: 4   COMMAND: "kworker/u16:4"
 #0 [<c0a1f8c0>] (__schedule) from [<c0a1d054>]
 #1 [<c0a1d054>] (schedule_timeout) from [<c0a1f018>]
 #2 [<c0a1f018>] (wait_for_common) from [<c013c570>]
 #3 [<c013c570>] (wait_rcu_gp) from [<c014407c>]
 #4 [<c014407c>] (atomic_notifier_chain_unregister) from [<c06be62c>]
 #5 [<c06be62c>] (cpufreq_interactive_disable_sched_input) from [<c06bee1c>]
 #6 [<c06bee1c>] (cpufreq_governor_interactive) from [<c06b7724>]
 #7 [<c06b7724>] (__cpufreq_governor) from [<c06b9f74>]
 #8 [<c06b9f74>] (__cpufreq_remove_dev_finish) from [<c06ba3a4>]
 #9 [<c06ba3a4>] (cpufreq_cpu_callback) from [<c0a22674>]
#10 [<c0a22674>] (notifier_call_chain) from [<c012284c>]
#11 [<c012284c>] (__cpu_notify) from [<c01229dc>]
#12 [<c01229dc>] (cpu_notify_nofail) from [<c0a0dd1c>]
#13 [<c0a0dd1c>] (_cpu_down) from [<c0122b48>]
#14 [<c0122b48>] (disable_nonboot_cpus) from [<c0168cd8>]
#15 [<c0168cd8>] (suspend_devices_and_enter) from [<c0169018>]
#16 [<c0169018>] (pm_suspend) from [<c01691e0>]
#17 [<c01691e0>] (try_to_suspend) from [<c01396f0>]
#18 [<c01396f0>] (process_one_work) from [<c0139db0>]
#19 [<c0139db0>] (worker_thread) from [<c013efa4>]
#20 [<c013efa4>] (kthread) from [<c01061f8>]

But the other cores, 4-7, are active. I can see them going into the
idle task and coming out of idle because of interrupts, scheduling
kworkers, etc.
So when I took the data, all the online cores (4-7) were idle, as
shown below in the runq data structures.

------start--------------
crash> runq
CPU 0 [OFFLINE]

CPU 1 [OFFLINE]

CPU 2 [OFFLINE]

CPU 3 [OFFLINE]

CPU 4 RUNQUEUE: c5439040
  CURRENT: PID: 0      TASK: f0c9d400  COMMAND: "swapper/4"
  RT PRIO_ARRAY: c5439130
     [no tasks queued]
  CFS RB_ROOT: c54390c0
     [no tasks queued]

CPU 5 RUNQUEUE: c5447040
  CURRENT: PID: 0      TASK: f0c9aa00  COMMAND: "swapper/5"
  RT PRIO_ARRAY: c5447130
     [no tasks queued]
  CFS RB_ROOT: c54470c0
     [no tasks queued]

CPU 6 RUNQUEUE: c5455040
  CURRENT: PID: 0      TASK: f0c9ce00  COMMAND: "swapper/6"
  RT PRIO_ARRAY: c5455130
     [no tasks queued]
  CFS RB_ROOT: c54550c0
     [no tasks queued]

CPU 7 RUNQUEUE: c5463040
  CURRENT: PID: 0      TASK: f0c9b000  COMMAND: "swapper/7"
  RT PRIO_ARRAY: c5463130
     [no tasks queued]
  CFS RB_ROOT: c54630c0
     [no tasks queued]
------end--------------

But one strange thing I can see is that rcu_read_lock_nesting for the
idle tasks running on cpu 5 and cpu 6 is set to 1.

PID: 0      TASK: f0c9d400  CPU: 4   COMMAND: "swapper/4"
  rcu_read_lock_nesting = 0,

PID: 0      TASK: f0c9aa00  CPU: 5   COMMAND: "swapper/5"
  rcu_read_lock_nesting = 1,

PID: 0      TASK: f0c9ce00  CPU: 6   COMMAND: "swapper/6"
  rcu_read_lock_nesting = 1,

PID: 0      TASK: f0c9b000  CPU: 7   COMMAND: "swapper/7"
  rcu_read_lock_nesting = 0,

Does this mean that the current grace period (which the suspend thread
is waiting on) is getting extended indefinitely?
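
For reference, on TREE_PREEMPT_RCU kernels rcu_read_lock() is little
more than a per-task counter; roughly (paraphrased, not the exact 3.10
source):

void __rcu_read_lock(void)
{
	current->rcu_read_lock_nesting++;
	barrier();	/* the critical section starts here */
}

/*
 * __rcu_read_unlock() decrements the counter and, on the outermost
 * unlock, reports any deferred quiescent state to RCU.  So a task
 * parked in idle with a nonzero count would indeed hold up the
 * rcu_preempt grace period.
 */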
Also attaching the per_cpu rcu_data for online and offline cores.

Thanks,
Arun

>
>                                                         Thanx, Paul
>
>> struct rcu_data {
>>   completed = 5877,
>>   gpnum = 5877,
>>   passed_quiesce = true,
>>   qs_pending = false,
>>   beenonline = true,
>>   preemptible = true,
>>   mynode = 0xc117f340 <rcu_preempt_state>,
>>   grpmask = 16,
>>   nxtlist = 0xedaaec00,
>>   nxttail = {0xc54366c4, 0xe84d350c, 0xe84d350c, 0xe84d350c},
>>   nxtcompleted = {4294967035, 5878, 5878, 5878},
>>   qlen_lazy = 105,
>>   qlen = 415,
>>   qlen_last_fqs_check = 0,
>>   n_cbs_invoked = 86323,
>>   n_nocbs_invoked = 0,
>>   n_cbs_orphaned = 0,
>>   n_cbs_adopted = 139,
>>   n_force_qs_snap = 0,
>>   blimit = 10,
>>   dynticks = 0xc5436758,
>>   dynticks_snap = 7582140,
>>   dynticks_fqs = 41,
>>   offline_fqs = 0,
>>   n_rcu_pending = 59404,
>>   n_rp_qs_pending = 5,
>>   n_rp_report_qs = 4633,
>>   n_rp_cb_ready = 32,
>>   n_rp_cpu_needs_gp = 41088,
>>   n_rp_gp_completed = 2844,
>>   n_rp_gp_started = 1150,
>>   n_rp_need_nothing = 9657,
>>   barrier_head = {
>>     next = 0x0,
>>     func = 0x0
>>   },
>>   oom_head = {
>>     next = 0x0,
>>     func = 0x0
>>   },
>>   cpu = 4,
>>   rsp = 0xc117f340 <rcu_preempt_state>
>> }
>>
>>
>>
>> Also pasting complete rcu_preempt_state.
>>
>>
>>
>> rcu_preempt_state = $9 = {
>>   node = {{
>>       lock = {
>>         raw_lock = {
>>           {
>>             slock = 3129850509,
>>             tickets = {
>>               owner = 47757,
>>               next = 47757
>>             }
>>           }
>>         },
>>         magic = 3735899821,
>>         owner_cpu = 4294967295,
>>         owner = 0xffffffff
>>       },
>>       gpnum = 5877,
>>       completed = 5877,
>>       qsmask = 0,
>>       expmask = 0,
>>       qsmaskinit = 240,
>>       grpmask = 0,
>>       grplo = 0,
>>       grphi = 7,
>>       grpnum = 0 '\000',
>>       level = 0 '\000',
>>       parent = 0x0,
>>       blkd_tasks = {
>>         next = 0xc117f378 <rcu_preempt_state+56>,
>>         prev = 0xc117f378 <rcu_preempt_state+56>
>>       },
>>       gp_tasks = 0x0,
>>       exp_tasks = 0x0,
>>       need_future_gp = {1, 0},
>>       fqslock = {
>>         raw_lock = {
>>           {
>>             slock = 0,
>>             tickets = {
>>               owner = 0,
>>               next = 0
>>             }
>>           }
>>         },
>>         magic = 3735899821,
>>         owner_cpu = 4294967295,
>>         owner = 0xffffffff
>>       }
>>     }},
>>   level = {0xc117f340 <rcu_preempt_state>},
>>   levelcnt = {1, 0, 0, 0, 0},
>>   levelspread = "\b",
>>   rda = 0xc115e6b0 <rcu_preempt_data>,
>>   call = 0xc01975ac <call_rcu>,
>>   fqs_state = 0 '\000',
>>   boost = 0 '\000',
>>   gpnum = 5877,
>>   completed = 5877,
>>   gp_kthread = 0xf0c9e600,
>>   gp_wq = {
>>     lock = {
>>       {
>>         rlock = {
>>           raw_lock = {
>>             {
>>               slock = 2160230594,
>>               tickets = {
>>                 owner = 32962,
>>                 next = 32962
>>               }
>>             }
>>           },
>>           magic = 3735899821,
>>           owner_cpu = 4294967295,
>>           owner = 0xffffffff
>>         }
>>       }
>>     },
>>     task_list = {
>>       next = 0xf0cd1f20,
>>       prev = 0xf0cd1f20
>>     }
>>   },
>>   gp_flags = 1,
>>   orphan_lock = {
>>     raw_lock = {
>>       {
>>         slock = 327685,
>>         tickets = {
>>           owner = 5,
>>           next = 5
>>         }
>>       }
>>     },
>>     magic = 3735899821,
>>     owner_cpu = 4294967295,
>>     owner = 0xffffffff
>>   },
>>   orphan_nxtlist = 0x0,
>>   orphan_nxttail = 0xc117f490 <rcu_preempt_state+336>,
>>   orphan_donelist = 0x0,
>>   orphan_donetail = 0xc117f498 <rcu_preempt_state+344>,
>>   qlen_lazy = 0,
>>   qlen = 0,
>>   onoff_mutex = {
>>     count = {
>>       counter = 1
>>     },
>>     wait_lock = {
>>       {
>>         rlock = {
>>           raw_lock = {
>>             {
>>               slock = 811479134,
>>               tickets = {
>>                 owner = 12382,
>>                 next = 12382
>>               }
>>             }
>>           },
>>           magic = 3735899821,
>>           owner_cpu = 4294967295,
>>           owner = 0xffffffff
>>         }
>>       }
>>     },
>>     wait_list = {
>>       next = 0xc117f4bc <rcu_preempt_state+380>,
>>       prev = 0xc117f4bc <rcu_preempt_state+380>
>>     },
>>     owner = 0x0,
>>     name = 0x0,
>>     magic = 0xc117f4a8 <rcu_preempt_state+360>
>>   },
>>   barrier_mutex = {
>>     count = {
>>       counter = 1
>>     },
>>     wait_lock = {
>>       {
>>         rlock = {
>>           raw_lock = {
>>             {
>>               slock = 0,
>>               tickets = {
>>                 owner = 0,
>>                 next = 0
>>               }
>>             }
>>           },
>>           magic = 3735899821,
>>           owner_cpu = 4294967295,
>>           owner = 0xffffffff
>>         }
>>       }
>>     },
>>     wait_list = {
>>       next = 0xc117f4e4 <rcu_preempt_state+420>,
>>       prev = 0xc117f4e4 <rcu_preempt_state+420>
>>     },
>>     owner = 0x0,
>>     name = 0x0,
>>     magic = 0xc117f4d0 <rcu_preempt_state+400>
>>   },
>>   barrier_cpu_count = {
>>     counter = 0
>>   },
>>   barrier_completion = {
>>     done = 0,
>>     wait = {
>>       lock = {
>>         {
>>           rlock = {
>>             raw_lock = {
>>               {
>>                 slock = 0,
>>                 tickets = {
>>                   owner = 0,
>>                   next = 0
>>                 }
>>               }
>>             },
>>             magic = 0,
>>             owner_cpu = 0,
>>             owner = 0x0
>>           }
>>         }
>>       },
>>       task_list = {
>>         next = 0x0,
>>         prev = 0x0
>>       }
>>     }
>>   },
>>   n_barrier_done = 0,
>>   expedited_start = {
>>     counter = 0
>>   },
>>   expedited_done = {
>>     counter = 0
>>   },
>>   expedited_wrap = {
>>     counter = 0
>>   },
>>   expedited_tryfail = {
>>     counter = 0
>>   },
>>   expedited_workdone1 = {
>>     counter = 0
>>   },
>>   expedited_workdone2 = {
>>     counter = 0
>>   },
>>   expedited_normal = {
>>     counter = 0
>>   },
>>   expedited_stoppedcpus = {
>>     counter = 0
>>   },
>>   expedited_done_tries = {
>>     counter = 0
>>   },
>>   expedited_done_lost = {
>>     counter = 0
>>   },
>>   expedited_done_exit = {
>>     counter = 0
>>   },
>>   jiffies_force_qs = 4294963917,
>>   n_force_qs = 4028,
>>   n_force_qs_lh = 0,
>>   n_force_qs_ngp = 0,
>>   gp_start = 4294963911,
>>   jiffies_stall = 4294966011,
>>   gp_max = 17,
>>   name = 0xc0d833ab "rcu_preempt",
>>   abbr = 112 'p',
>>   flavors = {
>>     next = 0xc117f2ec <rcu_bh_state+556>,
>>     prev = 0xc117f300 <rcu_struct_flavors>
>>   },
>>   wakeup_work = {
>>     flags = 3,
>>     llnode = {
>>       next = 0x0
>>     },
>>     func = 0xc0195aa8 <rsp_wakeup>
>>   }
>> }
>>
>> Hope this helps.
>>
>> Thanks,
>> Arun
>>
>>
>> On Tue, Dec 16, 2014 at 11:59 AM, Arun KS <arunks.linux@gmail.com> wrote:
>> > Hello,
>> >
>> > I dig little deeper to understand the situation.
>> > All other cpus are in idle thread already.
>> > As per my understanding, for the grace period to end, at-least one of
>> > the following should happen on all online cpus,
>> >
>> > 1. a context switch.
>> > 2. user space switch.
>> > 3. switch to idle thread.
>> >
>> > In this situation, since all the other cores are already in idle,  non
>> > of the above are meet on all online cores.
>> > So grace period is getting extended and never finishes. Below is the
>> > state of runqueue when the hang happens.
>> > --------------start------------------------------------
>> > crash> runq
>> > CPU 0 [OFFLINE]
>> >
>> > CPU 1 [OFFLINE]
>> >
>> > CPU 2 [OFFLINE]
>> >
>> > CPU 3 [OFFLINE]
>> >
>> > CPU 4 RUNQUEUE: c3192e40
>> >   CURRENT: PID: 0      TASK: f0874440  COMMAND: "swapper/4"
>> >   RT PRIO_ARRAY: c3192f20
>> >      [no tasks queued]
>> >   CFS RB_ROOT: c3192eb0
>> >      [no tasks queued]
>> >
>> > CPU 5 RUNQUEUE: c31a0e40
>> >   CURRENT: PID: 0      TASK: f0874980  COMMAND: "swapper/5"
>> >   RT PRIO_ARRAY: c31a0f20
>> >      [no tasks queued]
>> >   CFS RB_ROOT: c31a0eb0
>> >      [no tasks queued]
>> >
>> > CPU 6 RUNQUEUE: c31aee40
>> >   CURRENT: PID: 0      TASK: f0874ec0  COMMAND: "swapper/6"
>> >   RT PRIO_ARRAY: c31aef20
>> >      [no tasks queued]
>> >   CFS RB_ROOT: c31aeeb0
>> >      [no tasks queued]
>> >
>> > CPU 7 RUNQUEUE: c31bce40
>> >   CURRENT: PID: 0      TASK: f0875400  COMMAND: "swapper/7"
>> >   RT PRIO_ARRAY: c31bcf20
>> >      [no tasks queued]
>> >   CFS RB_ROOT: c31bceb0
>> >      [no tasks queued]
>> > --------------end------------------------------------
>> >
>> > If my understanding is correct the below patch should help, because it
>> > will expedite grace periods during suspend,
>> > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1d74d14e98a6be740a6f12456c7d9ad47be9c9c
>> >
>> > But I wonder why it was not taken to stable trees. Can we take it?
>> > Appreciate your help.
>> >
>> > Thanks,
>> > Arun
>> >
>> > On Mon, Dec 15, 2014 at 10:34 PM, Arun KS <arunks.linux@gmail.com> wrote:
>> >> Hi,
>> >>
>> >> Here is the backtrace of the process hanging in wait_rcu_gp,
>> >>
>> >> PID: 247    TASK: e16e7380  CPU: 4   COMMAND: "kworker/u16:5"
>> >>  #0 [<c09fead0>] (__schedule) from [<c09fcab0>]
>> >>  #1 [<c09fcab0>] (schedule_timeout) from [<c09fe050>]
>> >>  #2 [<c09fe050>] (wait_for_common) from [<c013b2b4>]
>> >>  #3 [<c013b2b4>] (wait_rcu_gp) from [<c0142f50>]
>> >>  #4 [<c0142f50>] (atomic_notifier_chain_unregister) from [<c06b2ab8>]
>> >>  #5 [<c06b2ab8>] (cpufreq_interactive_disable_sched_input) from [<c06b32a8>]
>> >>  #6 [<c06b32a8>] (cpufreq_governor_interactive) from [<c06abbf8>]
>> >>  #7 [<c06abbf8>] (__cpufreq_governor) from [<c06ae474>]
>> >>  #8 [<c06ae474>] (__cpufreq_remove_dev_finish) from [<c06ae8c0>]
>> >>  #9 [<c06ae8c0>] (cpufreq_cpu_callback) from [<c0a0185c>]
>> >> #10 [<c0a0185c>] (notifier_call_chain) from [<c0121888>]
>> >> #11 [<c0121888>] (__cpu_notify) from [<c0121a04>]
>> >> #12 [<c0121a04>] (cpu_notify_nofail) from [<c09ee7f0>]
>> >> #13 [<c09ee7f0>] (_cpu_down) from [<c0121b70>]
>> >> #14 [<c0121b70>] (disable_nonboot_cpus) from [<c016788c>]
>> >> #15 [<c016788c>] (suspend_devices_and_enter) from [<c0167bcc>]
>> >> #16 [<c0167bcc>] (pm_suspend) from [<c0167d94>]
>> >> #17 [<c0167d94>] (try_to_suspend) from [<c0138460>]
>> >> #18 [<c0138460>] (process_one_work) from [<c0138b18>]
>> >> #19 [<c0138b18>] (worker_thread) from [<c013dc58>]
>> >> #20 [<c013dc58>] (kthread) from [<c01061b8>]
>> >>
>> >> Will this patch helps here,
>> >> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1d74d14e98a6be740a6f12456c7d9ad47be9c9c
>> >>
>> >> I couldn't really understand why it got struck in  synchronize_rcu().
>> >> Please give some pointers to debug this further.
>> >>
>> >> Below are the configs enable related to RCU.
>> >>
>> >> CONFIG_TREE_PREEMPT_RCU=y
>> >> CONFIG_PREEMPT_RCU=y
>> >> CONFIG_RCU_STALL_COMMON=y
>> >> CONFIG_RCU_FANOUT=32
>> >> CONFIG_RCU_FANOUT_LEAF=16
>> >> CONFIG_RCU_FAST_NO_HZ=y
>> >> CONFIG_RCU_CPU_STALL_TIMEOUT=21
>> >> CONFIG_RCU_CPU_STALL_VERBOSE=y
>> >>
>> >> Kernel version is 3.10.28
>> >> Architecture is ARM
>> >>
>> >> Thanks,
>> >> Arun
>>
>

[-- Attachment #2: rcu_data_offline_cpus_0_3.txt --]
[-- Type: text/plain, Size: 4166 bytes --]

crash> struct rcu_data C54286B0
struct rcu_data {
  completed = 2833,
  gpnum = 2833,
  passed_quiesce = false,
  qs_pending = false,
  beenonline = true,
  preemptible = true,
  mynode = 0xc117f340 <rcu_preempt_state>,
  grpmask = 8,
  nxtlist = 0x0,
  nxttail = {0xc54286c4, 0xc54286c4, 0xc54286c4, 0x0},
  nxtcompleted = {0, 4294967136, 4294967137, 4294967137},
  qlen_lazy = 0,
  qlen = 0,
  qlen_last_fqs_check = 0,
  n_cbs_invoked = 609,
  n_nocbs_invoked = 0,
  n_cbs_orphaned = 13,
  n_cbs_adopted = 0,
  n_force_qs_snap = 1428,
  blimit = 10,
  dynticks = 0xc5428758,
  dynticks_snap = 13206053,
  dynticks_fqs = 16,
  offline_fqs = 0,
  n_rcu_pending = 181,
  n_rp_qs_pending = 1,
  n_rp_report_qs = 21,
  n_rp_cb_ready = 0,
  n_rp_cpu_needs_gp = 0,
  n_rp_gp_completed = 22,
  n_rp_gp_started = 8,
  n_rp_need_nothing = 130,
  barrier_head = {
    next = 0x0,
    func = 0x0
  },
  oom_head = {
    next = 0x0,
    func = 0x0
  },
  cpu = 3,
  rsp = 0xc117f340 <rcu_preempt_state>
}

crash> struct rcu_data C541A6B0
struct rcu_data {
  completed = 5877,
  gpnum = 5877,
  passed_quiesce = true,
  qs_pending = false,
  beenonline = true,
  preemptible = true,
  mynode = 0xc117f340 <rcu_preempt_state>,
  grpmask = 4,
  nxtlist = 0x0,
  nxttail = {0xc541a6c4, 0xc541a6c4, 0xc541a6c4, 0x0},
  nxtcompleted = {0, 5877, 5878, 5878},
  qlen_lazy = 0,
  qlen = 0,
  qlen_last_fqs_check = 0,
  n_cbs_invoked = 61565,
  n_nocbs_invoked = 0,
  n_cbs_orphaned = 139,
  n_cbs_adopted = 100,
  n_force_qs_snap = 0,
  blimit = 10,
  dynticks = 0xc541a758,
  dynticks_snap = 13901017,
  dynticks_fqs = 75,
  offline_fqs = 0,
  n_rcu_pending = 16546,
  n_rp_qs_pending = 3,
  n_rp_report_qs = 4539,
  n_rp_cb_ready = 69,
  n_rp_cpu_needs_gp = 782,
  n_rp_gp_completed = 4196,
  n_rp_gp_started = 1739,
  n_rp_need_nothing = 5221,
  barrier_head = {
    next = 0x0,
    func = 0x0
  },
  oom_head = {
    next = 0x0,
    func = 0x0
  },
  cpu = 2,
  rsp = 0xc117f340 <rcu_preempt_state>
}

crash> struct rcu_data C540C6B0
struct rcu_data {
  completed = 5877,
  gpnum = 5877,
  passed_quiesce = true,
  qs_pending = false,
  beenonline = true,
  preemptible = true,
  mynode = 0xc117f340 <rcu_preempt_state>,
  grpmask = 2,
  nxtlist = 0x0,
  nxttail = {0xc540c6c4, 0xc540c6c4, 0xc540c6c4, 0x0},
  nxtcompleted = {4294967030, 5878, 5878, 5878},
  qlen_lazy = 0,
  qlen = 0,
  qlen_last_fqs_check = 0,
  n_cbs_invoked = 74292,
  n_nocbs_invoked = 0,
  n_cbs_orphaned = 100,
  n_cbs_adopted = 65,
  n_force_qs_snap = 0,
  blimit = 10,
  dynticks = 0xc540c758,
  dynticks_snap = 10753433,
  dynticks_fqs = 69,
  offline_fqs = 0,
  n_rcu_pending = 18350,
  n_rp_qs_pending = 6,
  n_rp_report_qs = 5009,
  n_rp_cb_ready = 50,
  n_rp_cpu_needs_gp = 915,
  n_rp_gp_completed = 4423,
  n_rp_gp_started = 1826,
  n_rp_need_nothing = 6127,
  barrier_head = {
    next = 0x0,
    func = 0x0
  },
  oom_head = {
    next = 0x0,
    func = 0x0
  },
  cpu = 1,
  rsp = 0xc117f340 <rcu_preempt_state>
}
crash> struct rcu_data C53FE6B0
struct rcu_data {
  completed = 5877,
  gpnum = 5877,
  passed_quiesce = true,
  qs_pending = false,
  beenonline = true,
  preemptible = true,
  mynode = 0xc117f340 <rcu_preempt_state>,
  grpmask = 1,
  nxtlist = 0x0,
  nxttail = {0xc53fe6c4, 0xc53fe6c4, 0xc53fe6c4, 0x0},
  nxtcompleted = {4294966997, 5875, 5876, 5876},
  qlen_lazy = 0,
  qlen = 0,
  qlen_last_fqs_check = 0,
  n_cbs_invoked = 123175,
  n_nocbs_invoked = 0,
  n_cbs_orphaned = 52,
  n_cbs_adopted = 0,
  n_force_qs_snap = 0,
  blimit = 10,
  dynticks = 0xc53fe758,
  dynticks_snap = 6330446,
  dynticks_fqs = 46,
  offline_fqs = 0,
  n_rcu_pending = 22529,
  n_rp_qs_pending = 3,
  n_rp_report_qs = 5290,
  n_rp_cb_ready = 279,
  n_rp_cpu_needs_gp = 740,
  n_rp_gp_completed = 2707,
  n_rp_gp_started = 1208,
  n_rp_need_nothing = 12305,
  barrier_head = {
    next = 0x0,
    func = 0x0
  },
  oom_head = {
    next = 0x0,
    func = 0x0
  },
  cpu = 0,
  rsp = 0xc117f340 <rcu_preempt_state>
}


[-- Attachment #3: rcu_data_online_cpus_4_7.txt --]
[-- Type: text/plain, Size: 4215 bytes --]

crash> struct rcu_data c54366b0
struct rcu_data {
  completed = 5877,
  gpnum = 5877,
  passed_quiesce = true,
  qs_pending = false,
  beenonline = true,
  preemptible = true,
  mynode = 0xc117f340 <rcu_preempt_state>,
  grpmask = 16,
  nxtlist = 0xedaaec00,
  nxttail = {0xc54366c4, 0xe84d350c, 0xe84d350c, 0xe84d350c},
  nxtcompleted = {4294967035, 5878, 5878, 5878},
  qlen_lazy = 105,
  qlen = 415,
  qlen_last_fqs_check = 0,
  n_cbs_invoked = 86323,
  n_nocbs_invoked = 0,
  n_cbs_orphaned = 0,
  n_cbs_adopted = 139,
  n_force_qs_snap = 0,
  blimit = 10,
  dynticks = 0xc5436758,
  dynticks_snap = 7582140,
  dynticks_fqs = 41,
  offline_fqs = 0,
  n_rcu_pending = 59404,
  n_rp_qs_pending = 5,
  n_rp_report_qs = 4633,
  n_rp_cb_ready = 32,
  n_rp_cpu_needs_gp = 41088,
  n_rp_gp_completed = 2844,
  n_rp_gp_started = 1150,
  n_rp_need_nothing = 9657,
  barrier_head = {
    next = 0x0,
    func = 0x0
  },
  oom_head = {
    next = 0x0,
    func = 0x0
  },
  cpu = 4,
  rsp = 0xc117f340 <rcu_preempt_state>
}

crash> struct rcu_data c54446b0
struct rcu_data {
  completed = 5877,
  gpnum = 5877,
  passed_quiesce = true,
  qs_pending = false,
  beenonline = true,
  preemptible = true,
  mynode = 0xc117f340 <rcu_preempt_state>,
  grpmask = 32,
  nxtlist = 0xcf9e856c,
  nxttail = {0xc54446c4, 0xcfb3050c, 0xcfb3050c, 0xcfb3050c},
  nxtcompleted = {0, 5878, 5878, 5878},
  qlen_lazy = 0,
  qlen = 117,
  qlen_last_fqs_check = 0,
  n_cbs_invoked = 36951,
  n_nocbs_invoked = 0,
  n_cbs_orphaned = 0,
  n_cbs_adopted = 0,
  n_force_qs_snap = 1428,
  blimit = 10,
  dynticks = 0xc5444758,
  dynticks_snap = 86034,
  dynticks_fqs = 46,
  offline_fqs = 0,
  n_rcu_pending = 49104,
  n_rp_qs_pending = 3,
  n_rp_report_qs = 2360,
  n_rp_cb_ready = 18,
  n_rp_cpu_needs_gp = 40106,
  n_rp_gp_completed = 1334,
  n_rp_gp_started = 791,
  n_rp_need_nothing = 4495,
  barrier_head = {
    next = 0x0,
    func = 0x0
  },
  oom_head = {
    next = 0x0,
    func = 0x0
  },
  cpu = 5,
  rsp = 0xc117f340 <rcu_preempt_state>
}

crash> struct rcu_data c54526b0
struct rcu_data {
  completed = 5877,
  gpnum = 5877,
  passed_quiesce = true,
  qs_pending = false,
  beenonline = true,
  preemptible = true,
  mynode = 0xc117f340 <rcu_preempt_state>,
  grpmask = 64,
  nxtlist = 0xe613d200,
  nxttail = {0xc54526c4, 0xe6fc9d0c, 0xe6fc9d0c, 0xe6fc9d0c},
  nxtcompleted = {0, 5878, 5878, 5878},
  qlen_lazy = 2,
  qlen = 35,
  qlen_last_fqs_check = 0,
  n_cbs_invoked = 34459,
  n_nocbs_invoked = 0,
  n_cbs_orphaned = 0,
  n_cbs_adopted = 0,
  n_force_qs_snap = 1428,
  blimit = 10,
  dynticks = 0xc5452758,
  dynticks_snap = 116840,
  dynticks_fqs = 47,
  offline_fqs = 0,
  n_rcu_pending = 48486,
  n_rp_qs_pending = 3,
  n_rp_report_qs = 2223,
  n_rp_cb_ready = 24,
  n_rp_cpu_needs_gp = 40101,
  n_rp_gp_completed = 1226,
  n_rp_gp_started = 789,
  n_rp_need_nothing = 4123,
  barrier_head = {
    next = 0x0,
    func = 0x0
  },
  oom_head = {
    next = 0x0,
    func = 0x0
  },
  cpu = 6,
  rsp = 0xc117f340 <rcu_preempt_state>
}

crash> struct rcu_data c54606b0
struct rcu_data {
  completed = 5877,
  gpnum = 5877,
  passed_quiesce = true,
  qs_pending = false,
  beenonline = true,
  preemptible = true,
  mynode = 0xc117f340 <rcu_preempt_state>,
  grpmask = 128,
  nxtlist = 0xdec32a6c,
  nxttail = {0xc54606c4, 0xe6fcf10c, 0xe6fcf10c, 0xe6fcf10c},
  nxtcompleted = {0, 5878, 5878, 5878},
  qlen_lazy = 1,
  qlen = 30,
  qlen_last_fqs_check = 0,
  n_cbs_invoked = 31998,
  n_nocbs_invoked = 0,
  n_cbs_orphaned = 0,
  n_cbs_adopted = 0,
  n_force_qs_snap = 1428,
  blimit = 10,
  dynticks = 0xc5460758,
  dynticks_snap = 57846,
  dynticks_fqs = 54,
  offline_fqs = 0,
  n_rcu_pending = 47502,
  n_rp_qs_pending = 2,
  n_rp_report_qs = 2142,
  n_rp_cb_ready = 37,
  n_rp_cpu_needs_gp = 40049,
  n_rp_gp_completed = 1223,
  n_rp_gp_started = 661,
  n_rp_need_nothing = 3390,
  barrier_head = {
    next = 0x0,
    func = 0x0
  },
  oom_head = {
    next = 0x0,
    func = 0x0
  },
  cpu = 7,
  rsp = 0xc117f340 <rcu_preempt_state>
}

[-- Attachment #4: rcu_preept_state.txt --]
[-- Type: text/plain, Size: 5182 bytes --]


rcu_preempt_state = $9 = {
  node = {{
      lock = {
        raw_lock = {
          {
            slock = 3129850509,
            tickets = {
              owner = 47757,
              next = 47757
            }
          }
        },
        magic = 3735899821,
        owner_cpu = 4294967295,
        owner = 0xffffffff
      },
      gpnum = 5877,
      completed = 5877,
      qsmask = 0,
      expmask = 0,
      qsmaskinit = 240,
      grpmask = 0,
      grplo = 0,
      grphi = 7,
      grpnum = 0 '\000',
      level = 0 '\000',
      parent = 0x0,
      blkd_tasks = {
        next = 0xc117f378 <rcu_preempt_state+56>,
        prev = 0xc117f378 <rcu_preempt_state+56>
      },
      gp_tasks = 0x0,
      exp_tasks = 0x0,
      need_future_gp = {1, 0},
      fqslock = {
        raw_lock = {
          {
            slock = 0,
            tickets = {
              owner = 0,
              next = 0
            }
          }
        },
        magic = 3735899821,
        owner_cpu = 4294967295,
        owner = 0xffffffff
      }
    }},
  level = {0xc117f340 <rcu_preempt_state>},
  levelcnt = {1, 0, 0, 0, 0},
  levelspread = "\b",
  rda = 0xc115e6b0 <rcu_preempt_data>,
  call = 0xc01975ac <call_rcu>,
  fqs_state = 0 '\000',
  boost = 0 '\000',
  gpnum = 5877,
  completed = 5877,
  gp_kthread = 0xf0c9e600,
  gp_wq = {
    lock = {
      {
        rlock = {
          raw_lock = {
            {
              slock = 2160230594,
              tickets = {
                owner = 32962,
                next = 32962
              }
            }
          },
          magic = 3735899821,
          owner_cpu = 4294967295,
          owner = 0xffffffff
        }
      }
    },
    task_list = {
      next = 0xf0cd1f20,
      prev = 0xf0cd1f20
    }
  },
  gp_flags = 1,
  orphan_lock = {
    raw_lock = {
      {
        slock = 327685,
        tickets = {
          owner = 5,
          next = 5
        }
      }
    },
    magic = 3735899821,
    owner_cpu = 4294967295,
    owner = 0xffffffff
  },
  orphan_nxtlist = 0x0,
  orphan_nxttail = 0xc117f490 <rcu_preempt_state+336>,
  orphan_donelist = 0x0,
  orphan_donetail = 0xc117f498 <rcu_preempt_state+344>,
  qlen_lazy = 0,
  qlen = 0,
  onoff_mutex = {
    count = {
      counter = 1
    },
    wait_lock = {
      {
        rlock = {
          raw_lock = {
            {
              slock = 811479134,
              tickets = {
                owner = 12382,
                next = 12382
              }
            }
          },
          magic = 3735899821,
          owner_cpu = 4294967295,
          owner = 0xffffffff
        }
      }
    },
    wait_list = {
      next = 0xc117f4bc <rcu_preempt_state+380>,
      prev = 0xc117f4bc <rcu_preempt_state+380>
    },
    owner = 0x0,
    name = 0x0,
    magic = 0xc117f4a8 <rcu_preempt_state+360>
  },
  barrier_mutex = {
    count = {
      counter = 1
    },
    wait_lock = {
      {
        rlock = {
          raw_lock = {
            {
              slock = 0,
              tickets = {
                owner = 0,
                next = 0
              }
            }
          },
          magic = 3735899821,
          owner_cpu = 4294967295,
          owner = 0xffffffff
        }
      }
    },
    wait_list = {
      next = 0xc117f4e4 <rcu_preempt_state+420>,
      prev = 0xc117f4e4 <rcu_preempt_state+420>
    },
    owner = 0x0,
    name = 0x0,
    magic = 0xc117f4d0 <rcu_preempt_state+400>
  },
  barrier_cpu_count = {
    counter = 0
  },
  barrier_completion = {
    done = 0,
    wait = {
      lock = {
        {
          rlock = {
            raw_lock = {
              {
                slock = 0,
                tickets = {
                  owner = 0,
                  next = 0
                }
              }
            },
            magic = 0,
            owner_cpu = 0,
            owner = 0x0
          }
        }
      },
      task_list = {
        next = 0x0,
        prev = 0x0
      }
    }
  },
  n_barrier_done = 0,
  expedited_start = {
    counter = 0
  },
  expedited_done = {
    counter = 0
  },
  expedited_wrap = {
    counter = 0
  },
  expedited_tryfail = {
    counter = 0
  },
  expedited_workdone1 = {
    counter = 0
  },
  expedited_workdone2 = {
    counter = 0
  },
  expedited_normal = {
    counter = 0
  },
  expedited_stoppedcpus = {
    counter = 0
  },
  expedited_done_tries = {
    counter = 0
  },
  expedited_done_lost = {
    counter = 0
  },
  expedited_done_exit = {
    counter = 0
  },
  jiffies_force_qs = 4294963917,
  n_force_qs = 4028,
  n_force_qs_lh = 0,
  n_force_qs_ngp = 0,
  gp_start = 4294963911,
  jiffies_stall = 4294966011,
  gp_max = 17,
  name = 0xc0d833ab "rcu_preempt",
  abbr = 112 'p',
  flavors = {
    next = 0xc117f2ec <rcu_bh_state+556>,
    prev = 0xc117f300 <rcu_struct_flavors>
  },
  wakeup_work = {
    flags = 3,
    llnode = {
      next = 0x0
    },
    func = 0xc0195aa8 <rsp_wakeup>
  }
}



^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [RCU] kernel hangs in wait_rcu_gp during suspend path
  2014-12-18 16:22       ` Arun KS
@ 2014-12-18 20:05         ` Paul E. McKenney
  2014-12-19 18:55           ` Arun KS
  0 siblings, 1 reply; 10+ messages in thread
From: Paul E. McKenney @ 2014-12-18 20:05 UTC (permalink / raw)
  To: Arun KS; +Cc: linux-kernel, josh, rostedt, Mathieu Desnoyers, laijs, Arun KS

On Thu, Dec 18, 2014 at 09:52:28PM +0530, Arun KS wrote:
> Hi Paul,
> 
> On Thu, Dec 18, 2014 at 12:54 AM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> > On Tue, Dec 16, 2014 at 11:00:20PM +0530, Arun KS wrote:
> >> Hello,
> >>
> >> Adding some more info.
> >>
> >> Below is the rcu_data data structure corresponding to cpu4.
> >
> > This shows that RCU is idle.  What was the state of the system at the
> > time you collected this data?
> 
> The system initiated a suspend sequence and is currently in
> disable_nonboot_cpus(). It has hotplugged CPUs 0, 1 and 2 successfully,
> and even succeeded in hotplugging cpu3.
> But while the CPU_POST_DEAD notifier for cpu3 was being called, another
> driver tried to unregister an atomic notifier, which eventually calls
> synchronize_rcu() and hangs the suspend task.
> 
> bt as follows,
> PID: 202    TASK: edcd2a00  CPU: 4   COMMAND: "kworker/u16:4"
>  #0 [<c0a1f8c0>] (__schedule) from [<c0a1d054>]
>  #1 [<c0a1d054>] (schedule_timeout) from [<c0a1f018>]
>  #2 [<c0a1f018>] (wait_for_common) from [<c013c570>]
>  #3 [<c013c570>] (wait_rcu_gp) from [<c014407c>]
>  #4 [<c014407c>] (atomic_notifier_chain_unregister) from [<c06be62c>]
>  #5 [<c06be62c>] (cpufreq_interactive_disable_sched_input) from [<c06bee1c>]
>  #6 [<c06bee1c>] (cpufreq_governor_interactive) from [<c06b7724>]
>  #7 [<c06b7724>] (__cpufreq_governor) from [<c06b9f74>]
>  #8 [<c06b9f74>] (__cpufreq_remove_dev_finish) from [<c06ba3a4>]
>  #9 [<c06ba3a4>] (cpufreq_cpu_callback) from [<c0a22674>]
> #10 [<c0a22674>] (notifier_call_chain) from [<c012284c>]
> #11 [<c012284c>] (__cpu_notify) from [<c01229dc>]
> #12 [<c01229dc>] (cpu_notify_nofail) from [<c0a0dd1c>]
> #13 [<c0a0dd1c>] (_cpu_down) from [<c0122b48>]
> #14 [<c0122b48>] (disable_nonboot_cpus) from [<c0168cd8>]
> #15 [<c0168cd8>] (suspend_devices_and_enter) from [<c0169018>]
> #16 [<c0169018>] (pm_suspend) from [<c01691e0>]
> #17 [<c01691e0>] (try_to_suspend) from [<c01396f0>]
> #18 [<c01396f0>] (process_one_work) from [<c0139db0>]
> #19 [<c0139db0>] (worker_thread) from [<c013efa4>]
> #20 [<c013efa4>] (kthread) from [<c01061f8>]
> 
> But the other cores, 4-7, are active. I can see them going into the
> idle task and coming out of idle because of interrupts, scheduling
> kworkers, etc.
> So when I took the data, all the online cores (4-7) were in idle, as
> shown below in the runq data structures.
> 
> ------start--------------
> crash> runq
> CPU 0 [OFFLINE]
> 
> CPU 1 [OFFLINE]
> 
> CPU 2 [OFFLINE]
> 
> CPU 3 [OFFLINE]
> 
> CPU 4 RUNQUEUE: c5439040
>   CURRENT: PID: 0      TASK: f0c9d400  COMMAND: "swapper/4"
>   RT PRIO_ARRAY: c5439130
>      [no tasks queued]
>   CFS RB_ROOT: c54390c0
>      [no tasks queued]
> 
> CPU 5 RUNQUEUE: c5447040
>   CURRENT: PID: 0      TASK: f0c9aa00  COMMAND: "swapper/5"
>   RT PRIO_ARRAY: c5447130
>      [no tasks queued]
>   CFS RB_ROOT: c54470c0
>      [no tasks queued]
> 
> CPU 6 RUNQUEUE: c5455040
>   CURRENT: PID: 0      TASK: f0c9ce00  COMMAND: "swapper/6"
>   RT PRIO_ARRAY: c5455130
>      [no tasks queued]
>   CFS RB_ROOT: c54550c0
>      [no tasks queued]
> 
> CPU 7 RUNQUEUE: c5463040
>   CURRENT: PID: 0      TASK: f0c9b000  COMMAND: "swapper/7"
>   RT PRIO_ARRAY: c5463130
>      [no tasks queued]
>   CFS RB_ROOT: c54630c0
>      [no tasks queued]
> ------end--------------
> 
> But one strange thing I can see is that rcu_read_lock_nesting for the
> idle tasks running on cpu 5 and cpu 6 is set to 1.
> 
> PID: 0      TASK: f0c9d400  CPU: 4   COMMAND: "swapper/4"
>   rcu_read_lock_nesting = 0,
> 
> PID: 0      TASK: f0c9aa00  CPU: 5   COMMAND: "swapper/5"
>   rcu_read_lock_nesting = 1,
> 
> PID: 0      TASK: f0c9ce00  CPU: 6   COMMAND: "swapper/6"
>   rcu_read_lock_nesting = 1,
> 
> PID: 0      TASK: f0c9b000  CPU: 7   COMMAND: "swapper/7"
>   rcu_read_lock_nesting = 0,
> 
> Does this mean that the current grace period (which the suspend thread
> is waiting on) is getting extended indefinitely?

Indeed it does, good catch!  Looks like someone entered an RCU read-side
critical section, then forgot to exit it, which would prevent grace
periods from ever completing.  CONFIG_PROVE_RCU=y might be
helpful in tracking this down.
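
As a purely hypothetical illustration (the names below are made up, this
is not code from your kernel), the bug class usually looks like an early
return that skips the unlock.  With CONFIG_TREE_PREEMPT_RCU,
rcu_read_lock() boils down to incrementing
current->rcu_read_lock_nesting, which is why the leak shows up in that
field:

	/* Hypothetical example of the bug class, not from this kernel. */
	void buggy_handler(void)
	{
		rcu_read_lock();	/* rcu_read_lock_nesting: 0 -> 1 */
		if (nothing_to_do())	/* hypothetical early-exit path */
			return;		/* BUG: skips rcu_read_unlock(),
					 * nesting stays 1 and grace
					 * periods can never complete */
		do_protected_access();	/* hypothetical read-side work */
		rcu_read_unlock();	/* rcu_read_lock_nesting: 1 -> 0 */
	}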

> Also attaching the per_cpu rcu_data for online and offline cores.

But these still look like there is no grace period in progress.

Still, it would be good to try CONFIG_PROVE_RCU=y and see what it
shows you.
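
On 3.10, CONFIG_PROVE_RCU is enabled via lockdep, so a fragment along
these lines in your defconfig should do it (a sketch, the exact
dependencies may vary with your config):

CONFIG_DEBUG_KERNEL=y
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y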

Also, I am not seeing similar complaints about 3.10, so it is quite
possible that a recent change in the ARM-specific idle-loop code is
doing this to you.  It might be well worth looking through recent
changes in this area, particularly if some older version works well
for you.

							Thanx, Paul

> Thanks,
> Arun
> 
> >
> >                                                         Thanx, Paul
> >
> >> struct rcu_data {
> >>   completed = 5877,
> >>   gpnum = 5877,
> >>   passed_quiesce = true,
> >>   qs_pending = false,
> >>   beenonline = true,
> >>   preemptible = true,
> >>   mynode = 0xc117f340 <rcu_preempt_state>,
> >>   grpmask = 16,
> >>   nxtlist = 0xedaaec00,
> >>   nxttail = {0xc54366c4, 0xe84d350c, 0xe84d350c, 0xe84d350c},
> >>   nxtcompleted = {4294967035, 5878, 5878, 5878},
> >>   qlen_lazy = 105,
> >>   qlen = 415,
> >>   qlen_last_fqs_check = 0,
> >>   n_cbs_invoked = 86323,
> >>   n_nocbs_invoked = 0,
> >>   n_cbs_orphaned = 0,
> >>   n_cbs_adopted = 139,
> >>   n_force_qs_snap = 0,
> >>   blimit = 10,
> >>   dynticks = 0xc5436758,
> >>   dynticks_snap = 7582140,
> >>   dynticks_fqs = 41,
> >>   offline_fqs = 0,
> >>   n_rcu_pending = 59404,
> >>   n_rp_qs_pending = 5,
> >>   n_rp_report_qs = 4633,
> >>   n_rp_cb_ready = 32,
> >>   n_rp_cpu_needs_gp = 41088,
> >>   n_rp_gp_completed = 2844,
> >>   n_rp_gp_started = 1150,
> >>   n_rp_need_nothing = 9657,
> >>   barrier_head = {
> >>     next = 0x0,
> >>     func = 0x0
> >>   },
> >>   oom_head = {
> >>     next = 0x0,
> >>     func = 0x0
> >>   },
> >>   cpu = 4,
> >>   rsp = 0xc117f340 <rcu_preempt_state>
> >> }
> >>
> >>
> >>
> >> Also pasting complete rcu_preempt_state.
> >>
> >>
> >>
> >> rcu_preempt_state = $9 = {
> >>   node = {{
> >>       lock = {
> >>         raw_lock = {
> >>           {
> >>             slock = 3129850509,
> >>             tickets = {
> >>               owner = 47757,
> >>               next = 47757
> >>             }
> >>           }
> >>         },
> >>         magic = 3735899821,
> >>         owner_cpu = 4294967295,
> >>         owner = 0xffffffff
> >>       },
> >>       gpnum = 5877,
> >>       completed = 5877,
> >>       qsmask = 0,
> >>       expmask = 0,
> >>       qsmaskinit = 240,
> >>       grpmask = 0,
> >>       grplo = 0,
> >>       grphi = 7,
> >>       grpnum = 0 '\000',
> >>       level = 0 '\000',
> >>       parent = 0x0,
> >>       blkd_tasks = {
> >>         next = 0xc117f378 <rcu_preempt_state+56>,
> >>         prev = 0xc117f378 <rcu_preempt_state+56>
> >>       },
> >>       gp_tasks = 0x0,
> >>       exp_tasks = 0x0,
> >>       need_future_gp = {1, 0},
> >>       fqslock = {
> >>         raw_lock = {
> >>           {
> >>             slock = 0,
> >>             tickets = {
> >>               owner = 0,
> >>               next = 0
> >>             }
> >>           }
> >>         },
> >>         magic = 3735899821,
> >>         owner_cpu = 4294967295,
> >>         owner = 0xffffffff
> >>       }
> >>     }},
> >>   level = {0xc117f340 <rcu_preempt_state>},
> >>   levelcnt = {1, 0, 0, 0, 0},
> >>   levelspread = "\b",
> >>   rda = 0xc115e6b0 <rcu_preempt_data>,
> >>   call = 0xc01975ac <call_rcu>,
> >>   fqs_state = 0 '\000',
> >>   boost = 0 '\000',
> >>   gpnum = 5877,
> >>   completed = 5877,
> >>   gp_kthread = 0xf0c9e600,
> >>   gp_wq = {
> >>     lock = {
> >>       {
> >>         rlock = {
> >>           raw_lock = {
> >>             {
> >>               slock = 2160230594,
> >>               tickets = {
> >>                 owner = 32962,
> >>                 next = 32962
> >>               }
> >>             }
> >>           },
> >>           magic = 3735899821,
> >>           owner_cpu = 4294967295,
> >>           owner = 0xffffffff
> >>         }
> >>       }
> >>     },
> >>     task_list = {
> >>       next = 0xf0cd1f20,
> >>       prev = 0xf0cd1f20
> >>     }
> >>   },
> >>   gp_flags = 1,
> >>   orphan_lock = {
> >>     raw_lock = {
> >>       {
> >>         slock = 327685,
> >>         tickets = {
> >>           owner = 5,
> >>           next = 5
> >>         }
> >>       }
> >>     },
> >>     magic = 3735899821,
> >>     owner_cpu = 4294967295,
> >>     owner = 0xffffffff
> >>   },
> >>   orphan_nxtlist = 0x0,
> >>   orphan_nxttail = 0xc117f490 <rcu_preempt_state+336>,
> >>   orphan_donelist = 0x0,
> >>   orphan_donetail = 0xc117f498 <rcu_preempt_state+344>,
> >>   qlen_lazy = 0,
> >>   qlen = 0,
> >>   onoff_mutex = {
> >>     count = {
> >>       counter = 1
> >>     },
> >>     wait_lock = {
> >>       {
> >>         rlock = {
> >>           raw_lock = {
> >>             {
> >>               slock = 811479134,
> >>               tickets = {
> >>                 owner = 12382,
> >>                 next = 12382
> >>               }
> >>             }
> >>           },
> >>           magic = 3735899821,
> >>           owner_cpu = 4294967295,
> >>           owner = 0xffffffff
> >>         }
> >>       }
> >>     },
> >>     wait_list = {
> >>       next = 0xc117f4bc <rcu_preempt_state+380>,
> >>       prev = 0xc117f4bc <rcu_preempt_state+380>
> >>     },
> >>     owner = 0x0,
> >>     name = 0x0,
> >>     magic = 0xc117f4a8 <rcu_preempt_state+360>
> >>   },
> >>   barrier_mutex = {
> >>     count = {
> >>       counter = 1
> >>     },
> >>     wait_lock = {
> >>       {
> >>         rlock = {
> >>           raw_lock = {
> >>             {
> >>               slock = 0,
> >>               tickets = {
> >>                 owner = 0,
> >>                 next = 0
> >>               }
> >>             }
> >>           },
> >>           magic = 3735899821,
> >>           owner_cpu = 4294967295,
> >>           owner = 0xffffffff
> >>         }
> >>       }
> >>     },
> >>     wait_list = {
> >>       next = 0xc117f4e4 <rcu_preempt_state+420>,
> >>       prev = 0xc117f4e4 <rcu_preempt_state+420>
> >>     },
> >>     owner = 0x0,
> >>     name = 0x0,
> >>     magic = 0xc117f4d0 <rcu_preempt_state+400>
> >>   },
> >>   barrier_cpu_count = {
> >>     counter = 0
> >>   },
> >>   barrier_completion = {
> >>     done = 0,
> >>     wait = {
> >>       lock = {
> >>         {
> >>           rlock = {
> >>             raw_lock = {
> >>               {
> >>                 slock = 0,
> >>                 tickets = {
> >>                   owner = 0,
> >>                   next = 0
> >>                 }
> >>               }
> >>             },
> >>             magic = 0,
> >>             owner_cpu = 0,
> >>             owner = 0x0
> >>           }
> >>         }
> >>       },
> >>       task_list = {
> >>         next = 0x0,
> >>         prev = 0x0
> >>       }
> >>     }
> >>   },
> >>   n_barrier_done = 0,
> >>   expedited_start = {
> >>     counter = 0
> >>   },
> >>   expedited_done = {
> >>     counter = 0
> >>   },
> >>   expedited_wrap = {
> >>     counter = 0
> >>   },
> >>   expedited_tryfail = {
> >>     counter = 0
> >>   },
> >>   expedited_workdone1 = {
> >>     counter = 0
> >>   },
> >>   expedited_workdone2 = {
> >>     counter = 0
> >>   },
> >>   expedited_normal = {
> >>     counter = 0
> >>   },
> >>   expedited_stoppedcpus = {
> >>     counter = 0
> >>   },
> >>   expedited_done_tries = {
> >>     counter = 0
> >>   },
> >>   expedited_done_lost = {
> >>     counter = 0
> >>   },
> >>   expedited_done_exit = {
> >>     counter = 0
> >>   },
> >>   jiffies_force_qs = 4294963917,
> >>   n_force_qs = 4028,
> >>   n_force_qs_lh = 0,
> >>   n_force_qs_ngp = 0,
> >>   gp_start = 4294963911,
> >>   jiffies_stall = 4294966011,
> >>   gp_max = 17,
> >>   name = 0xc0d833ab "rcu_preempt",
> >>   abbr = 112 'p',
> >>   flavors = {
> >>     next = 0xc117f2ec <rcu_bh_state+556>,
> >>     prev = 0xc117f300 <rcu_struct_flavors>
> >>   },
> >>   wakeup_work = {
> >>     flags = 3,
> >>     llnode = {
> >>       next = 0x0
> >>     },
> >>     func = 0xc0195aa8 <rsp_wakeup>
> >>   }
> >> }
> >>
> >> Hope this helps.
> >>
> >> Thanks,
> >> Arun
> >>
> >>
> >> On Tue, Dec 16, 2014 at 11:59 AM, Arun KS <arunks.linux@gmail.com> wrote:
> >> > Hello,
> >> >
> >> > I dig little deeper to understand the situation.
> >> > All other cpus are in idle thread already.
> >> > As per my understanding, for the grace period to end, at-least one of
> >> > the following should happen on all online cpus,
> >> >
> >> > 1. a context switch.
> >> > 2. user space switch.
> >> > 3. switch to idle thread.
> >> >
> >> > In this situation, since all the other cores are already in idle,  non
> >> > of the above are meet on all online cores.
> >> > So grace period is getting extended and never finishes. Below is the
> >> > state of runqueue when the hang happens.
> >> > --------------start------------------------------------
> >> > crash> runq
> >> > CPU 0 [OFFLINE]
> >> >
> >> > CPU 1 [OFFLINE]
> >> >
> >> > CPU 2 [OFFLINE]
> >> >
> >> > CPU 3 [OFFLINE]
> >> >
> >> > CPU 4 RUNQUEUE: c3192e40
> >> >   CURRENT: PID: 0      TASK: f0874440  COMMAND: "swapper/4"
> >> >   RT PRIO_ARRAY: c3192f20
> >> >      [no tasks queued]
> >> >   CFS RB_ROOT: c3192eb0
> >> >      [no tasks queued]
> >> >
> >> > CPU 5 RUNQUEUE: c31a0e40
> >> >   CURRENT: PID: 0      TASK: f0874980  COMMAND: "swapper/5"
> >> >   RT PRIO_ARRAY: c31a0f20
> >> >      [no tasks queued]
> >> >   CFS RB_ROOT: c31a0eb0
> >> >      [no tasks queued]
> >> >
> >> > CPU 6 RUNQUEUE: c31aee40
> >> >   CURRENT: PID: 0      TASK: f0874ec0  COMMAND: "swapper/6"
> >> >   RT PRIO_ARRAY: c31aef20
> >> >      [no tasks queued]
> >> >   CFS RB_ROOT: c31aeeb0
> >> >      [no tasks queued]
> >> >
> >> > CPU 7 RUNQUEUE: c31bce40
> >> >   CURRENT: PID: 0      TASK: f0875400  COMMAND: "swapper/7"
> >> >   RT PRIO_ARRAY: c31bcf20
> >> >      [no tasks queued]
> >> >   CFS RB_ROOT: c31bceb0
> >> >      [no tasks queued]
> >> > --------------end------------------------------------
> >> >
> >> > If my understanding is correct the below patch should help, because it
> >> > will expedite grace periods during suspend,
> >> > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1d74d14e98a6be740a6f12456c7d9ad47be9c9c
> >> >
> >> > But I wonder why it was not taken to stable trees. Can we take it?
> >> > Appreciate your help.
> >> >
> >> > Thanks,
> >> > Arun
> >> >
> >> > On Mon, Dec 15, 2014 at 10:34 PM, Arun KS <arunks.linux@gmail.com> wrote:
> >> >> Hi,
> >> >>
> >> >> Here is the backtrace of the process hanging in wait_rcu_gp,
> >> >>
> >> >> PID: 247    TASK: e16e7380  CPU: 4   COMMAND: "kworker/u16:5"
> >> >>  #0 [<c09fead0>] (__schedule) from [<c09fcab0>]
> >> >>  #1 [<c09fcab0>] (schedule_timeout) from [<c09fe050>]
> >> >>  #2 [<c09fe050>] (wait_for_common) from [<c013b2b4>]
> >> >>  #3 [<c013b2b4>] (wait_rcu_gp) from [<c0142f50>]
> >> >>  #4 [<c0142f50>] (atomic_notifier_chain_unregister) from [<c06b2ab8>]
> >> >>  #5 [<c06b2ab8>] (cpufreq_interactive_disable_sched_input) from [<c06b32a8>]
> >> >>  #6 [<c06b32a8>] (cpufreq_governor_interactive) from [<c06abbf8>]
> >> >>  #7 [<c06abbf8>] (__cpufreq_governor) from [<c06ae474>]
> >> >>  #8 [<c06ae474>] (__cpufreq_remove_dev_finish) from [<c06ae8c0>]
> >> >>  #9 [<c06ae8c0>] (cpufreq_cpu_callback) from [<c0a0185c>]
> >> >> #10 [<c0a0185c>] (notifier_call_chain) from [<c0121888>]
> >> >> #11 [<c0121888>] (__cpu_notify) from [<c0121a04>]
> >> >> #12 [<c0121a04>] (cpu_notify_nofail) from [<c09ee7f0>]
> >> >> #13 [<c09ee7f0>] (_cpu_down) from [<c0121b70>]
> >> >> #14 [<c0121b70>] (disable_nonboot_cpus) from [<c016788c>]
> >> >> #15 [<c016788c>] (suspend_devices_and_enter) from [<c0167bcc>]
> >> >> #16 [<c0167bcc>] (pm_suspend) from [<c0167d94>]
> >> >> #17 [<c0167d94>] (try_to_suspend) from [<c0138460>]
> >> >> #18 [<c0138460>] (process_one_work) from [<c0138b18>]
> >> >> #19 [<c0138b18>] (worker_thread) from [<c013dc58>]
> >> >> #20 [<c013dc58>] (kthread) from [<c01061b8>]
> >> >>
> >> >> Will this patch helps here,
> >> >> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1d74d14e98a6be740a6f12456c7d9ad47be9c9c
> >> >>
> >> >> I couldn't really understand why it got struck in  synchronize_rcu().
> >> >> Please give some pointers to debug this further.
> >> >>
> >> >> Below are the configs enable related to RCU.
> >> >>
> >> >> CONFIG_TREE_PREEMPT_RCU=y
> >> >> CONFIG_PREEMPT_RCU=y
> >> >> CONFIG_RCU_STALL_COMMON=y
> >> >> CONFIG_RCU_FANOUT=32
> >> >> CONFIG_RCU_FANOUT_LEAF=16
> >> >> CONFIG_RCU_FAST_NO_HZ=y
> >> >> CONFIG_RCU_CPU_STALL_TIMEOUT=21
> >> >> CONFIG_RCU_CPU_STALL_VERBOSE=y
> >> >>
> >> >> Kernel version is 3.10.28
> >> >> Architecture is ARM
> >> >>
> >> >> Thanks,
> >> >> Arun
> >>
> >

> crash> struct rcu_data C54286B0
> struct rcu_data {
>   completed = 2833,
>   gpnum = 2833,
>   passed_quiesce = false,
>   qs_pending = false,
>   beenonline = true,
>   preemptible = true,
>   mynode = 0xc117f340 <rcu_preempt_state>,
>   grpmask = 8,
>   nxtlist = 0x0,
>   nxttail = {0xc54286c4, 0xc54286c4, 0xc54286c4, 0x0},
>   nxtcompleted = {0, 4294967136, 4294967137, 4294967137},
>   qlen_lazy = 0,
>   qlen = 0,
>   qlen_last_fqs_check = 0,
>   n_cbs_invoked = 609,
>   n_nocbs_invoked = 0,
>   n_cbs_orphaned = 13,
>   n_cbs_adopted = 0,
>   n_force_qs_snap = 1428,
>   blimit = 10,
>   dynticks = 0xc5428758,
>   dynticks_snap = 13206053,
>   dynticks_fqs = 16,
>   offline_fqs = 0,
>   n_rcu_pending = 181,
>   n_rp_qs_pending = 1,
>   n_rp_report_qs = 21,
>   n_rp_cb_ready = 0,
>   n_rp_cpu_needs_gp = 0,
>   n_rp_gp_completed = 22,
>   n_rp_gp_started = 8,
>   n_rp_need_nothing = 130,
>   barrier_head = {
>     next = 0x0,
>     func = 0x0
>   },
>   oom_head = {
>     next = 0x0,
>     func = 0x0
>   },
>   cpu = 3,
>   rsp = 0xc117f340 <rcu_preempt_state>
> }
> 
> crash> struct rcu_data C541A6B0
> struct rcu_data {
>   completed = 5877,
>   gpnum = 5877,
>   passed_quiesce = true,
>   qs_pending = false,
>   beenonline = true,
>   preemptible = true,
>   mynode = 0xc117f340 <rcu_preempt_state>,
>   grpmask = 4,
>   nxtlist = 0x0,
>   nxttail = {0xc541a6c4, 0xc541a6c4, 0xc541a6c4, 0x0},
>   nxtcompleted = {0, 5877, 5878, 5878},
>   qlen_lazy = 0,
>   qlen = 0,
>   qlen_last_fqs_check = 0,
>   n_cbs_invoked = 61565,
>   n_nocbs_invoked = 0,
>   n_cbs_orphaned = 139,
>   n_cbs_adopted = 100,
>   n_force_qs_snap = 0,
>   blimit = 10,
>   dynticks = 0xc541a758,
>   dynticks_snap = 13901017,
>   dynticks_fqs = 75,
>   offline_fqs = 0,
>   n_rcu_pending = 16546,
>   n_rp_qs_pending = 3,
>   n_rp_report_qs = 4539,
>   n_rp_cb_ready = 69,
>   n_rp_cpu_needs_gp = 782,
>   n_rp_gp_completed = 4196,
>   n_rp_gp_started = 1739,
>   n_rp_need_nothing = 5221,
>   barrier_head = {
>     next = 0x0,
>     func = 0x0
>   },
>   oom_head = {
>     next = 0x0,
>     func = 0x0
>   },
>   cpu = 2,
>   rsp = 0xc117f340 <rcu_preempt_state>
> }
> 
> crash> struct rcu_data C540C6B0
> struct rcu_data {
>   completed = 5877,
>   gpnum = 5877,
>   passed_quiesce = true,
>   qs_pending = false,
>   beenonline = true,
>   preemptible = true,
>   mynode = 0xc117f340 <rcu_preempt_state>,
>   grpmask = 2,
>   nxtlist = 0x0,
>   nxttail = {0xc540c6c4, 0xc540c6c4, 0xc540c6c4, 0x0},
>   nxtcompleted = {4294967030, 5878, 5878, 5878},
>   qlen_lazy = 0,
>   qlen = 0,
>   qlen_last_fqs_check = 0,
>   n_cbs_invoked = 74292,
>   n_nocbs_invoked = 0,
>   n_cbs_orphaned = 100,
>   n_cbs_adopted = 65,
>   n_force_qs_snap = 0,
>   blimit = 10,
>   dynticks = 0xc540c758,
>   dynticks_snap = 10753433,
>   dynticks_fqs = 69,
>   offline_fqs = 0,
>   n_rcu_pending = 18350,
>   n_rp_qs_pending = 6,
>   n_rp_report_qs = 5009,
>   n_rp_cb_ready = 50,
>   n_rp_cpu_needs_gp = 915,
>   n_rp_gp_completed = 4423,
>   n_rp_gp_started = 1826,
>   n_rp_need_nothing = 6127,
>   barrier_head = {
>     next = 0x0,
>     func = 0x0
>   },
>   oom_head = {
>     next = 0x0,
>     func = 0x0
>   },
>   cpu = 1,
>   rsp = 0xc117f340 <rcu_preempt_state>
> }
> crash> struct rcu_data C53FE6B0
> struct rcu_data {
>   completed = 5877,
>   gpnum = 5877,
>   passed_quiesce = true,
>   qs_pending = false,
>   beenonline = true,
>   preemptible = true,
>   mynode = 0xc117f340 <rcu_preempt_state>,
>   grpmask = 1,
>   nxtlist = 0x0,
>   nxttail = {0xc53fe6c4, 0xc53fe6c4, 0xc53fe6c4, 0x0},
>   nxtcompleted = {4294966997, 5875, 5876, 5876},
>   qlen_lazy = 0,
>   qlen = 0,
>   qlen_last_fqs_check = 0,
>   n_cbs_invoked = 123175,
>   n_nocbs_invoked = 0,
>   n_cbs_orphaned = 52,
>   n_cbs_adopted = 0,
>   n_force_qs_snap = 0,
>   blimit = 10,
>   dynticks = 0xc53fe758,
>   dynticks_snap = 6330446,
>   dynticks_fqs = 46,
>   offline_fqs = 0,
>   n_rcu_pending = 22529,
>   n_rp_qs_pending = 3,
>   n_rp_report_qs = 5290,
>   n_rp_cb_ready = 279,
>   n_rp_cpu_needs_gp = 740,
>   n_rp_gp_completed = 2707,
>   n_rp_gp_started = 1208,
>   n_rp_need_nothing = 12305,
>   barrier_head = {
>     next = 0x0,
>     func = 0x0
>   },
>   oom_head = {
>     next = 0x0,
>     func = 0x0
>   },
>   cpu = 0,
>   rsp = 0xc117f340 <rcu_preempt_state>
> }
> 

> crash> struct rcu_data c54366b0
> struct rcu_data {
>   completed = 5877,
>   gpnum = 5877,
>   passed_quiesce = true,
>   qs_pending = false,
>   beenonline = true,
>   preemptible = true,
>   mynode = 0xc117f340 <rcu_preempt_state>,
>   grpmask = 16,
>   nxtlist = 0xedaaec00,
>   nxttail = {0xc54366c4, 0xe84d350c, 0xe84d350c, 0xe84d350c},
>   nxtcompleted = {4294967035, 5878, 5878, 5878},
>   qlen_lazy = 105,
>   qlen = 415,
>   qlen_last_fqs_check = 0,
>   n_cbs_invoked = 86323,
>   n_nocbs_invoked = 0,
>   n_cbs_orphaned = 0,
>   n_cbs_adopted = 139,
>   n_force_qs_snap = 0,
>   blimit = 10,
>   dynticks = 0xc5436758,
>   dynticks_snap = 7582140,
>   dynticks_fqs = 41,
>   offline_fqs = 0,
>   n_rcu_pending = 59404,
>   n_rp_qs_pending = 5,
>   n_rp_report_qs = 4633,
>   n_rp_cb_ready = 32,
>   n_rp_cpu_needs_gp = 41088,
>   n_rp_gp_completed = 2844,
>   n_rp_gp_started = 1150,
>   n_rp_need_nothing = 9657,
>   barrier_head = {
>     next = 0x0,
>     func = 0x0
>   },
>   oom_head = {
>     next = 0x0,
>     func = 0x0
>   },
>   cpu = 4,
>   rsp = 0xc117f340 <rcu_preempt_state>
> }
> 
> crash> struct rcu_data c54446b0
> struct rcu_data {
>   completed = 5877,
>   gpnum = 5877,
>   passed_quiesce = true,
>   qs_pending = false,
>   beenonline = true,
>   preemptible = true,
>   mynode = 0xc117f340 <rcu_preempt_state>,
>   grpmask = 32,
>   nxtlist = 0xcf9e856c,
>   nxttail = {0xc54446c4, 0xcfb3050c, 0xcfb3050c, 0xcfb3050c},
>   nxtcompleted = {0, 5878, 5878, 5878},
>   qlen_lazy = 0,
>   qlen = 117,
>   qlen_last_fqs_check = 0,
>   n_cbs_invoked = 36951,
>   n_nocbs_invoked = 0,
>   n_cbs_orphaned = 0,
>   n_cbs_adopted = 0,
>   n_force_qs_snap = 1428,
>   blimit = 10,
>   dynticks = 0xc5444758,
>   dynticks_snap = 86034,
>   dynticks_fqs = 46,
>   offline_fqs = 0,
>   n_rcu_pending = 49104,
>   n_rp_qs_pending = 3,
>   n_rp_report_qs = 2360,
>   n_rp_cb_ready = 18,
>   n_rp_cpu_needs_gp = 40106,
>   n_rp_gp_completed = 1334,
>   n_rp_gp_started = 791,
>   n_rp_need_nothing = 4495,
>   barrier_head = {
>     next = 0x0,
>     func = 0x0
>   },
>   oom_head = {
>     next = 0x0,
>     func = 0x0
>   },
>   cpu = 5,
>   rsp = 0xc117f340 <rcu_preempt_state>
> }
> 
> crash> struct rcu_data c54526b0
> struct rcu_data {
>   completed = 5877,
>   gpnum = 5877,
>   passed_quiesce = true,
>   qs_pending = false,
>   beenonline = true,
>   preemptible = true,
>   mynode = 0xc117f340 <rcu_preempt_state>,
>   grpmask = 64,
>   nxtlist = 0xe613d200,
>   nxttail = {0xc54526c4, 0xe6fc9d0c, 0xe6fc9d0c, 0xe6fc9d0c},
>   nxtcompleted = {0, 5878, 5878, 5878},
>   qlen_lazy = 2,
>   qlen = 35,
>   qlen_last_fqs_check = 0,
>   n_cbs_invoked = 34459,
>   n_nocbs_invoked = 0,
>   n_cbs_orphaned = 0,
>   n_cbs_adopted = 0,
>   n_force_qs_snap = 1428,
>   blimit = 10,
>   dynticks = 0xc5452758,
>   dynticks_snap = 116840,
>   dynticks_fqs = 47,
>   offline_fqs = 0,
>   n_rcu_pending = 48486,
>   n_rp_qs_pending = 3,
>   n_rp_report_qs = 2223,
>   n_rp_cb_ready = 24,
>   n_rp_cpu_needs_gp = 40101,
>   n_rp_gp_completed = 1226,
>   n_rp_gp_started = 789,
>   n_rp_need_nothing = 4123,
>   barrier_head = {
>     next = 0x0,
>     func = 0x0
>   },
>   oom_head = {
>     next = 0x0,
>     func = 0x0
>   },
>   cpu = 6,
>   rsp = 0xc117f340 <rcu_preempt_state>
> }
> 
> crash> struct rcu_data c54606b0
> struct rcu_data {
>   completed = 5877,
>   gpnum = 5877,
>   passed_quiesce = true,
>   qs_pending = false,
>   beenonline = true,
>   preemptible = true,
>   mynode = 0xc117f340 <rcu_preempt_state>,
>   grpmask = 128,
>   nxtlist = 0xdec32a6c,
>   nxttail = {0xc54606c4, 0xe6fcf10c, 0xe6fcf10c, 0xe6fcf10c},
>   nxtcompleted = {0, 5878, 5878, 5878},
>   qlen_lazy = 1,
>   qlen = 30,
>   qlen_last_fqs_check = 0,
>   n_cbs_invoked = 31998,
>   n_nocbs_invoked = 0,
>   n_cbs_orphaned = 0,
>   n_cbs_adopted = 0,
>   n_force_qs_snap = 1428,
>   blimit = 10,
>   dynticks = 0xc5460758,
>   dynticks_snap = 57846,
>   dynticks_fqs = 54,
>   offline_fqs = 0,
>   n_rcu_pending = 47502,
>   n_rp_qs_pending = 2,
>   n_rp_report_qs = 2142,
>   n_rp_cb_ready = 37,
>   n_rp_cpu_needs_gp = 40049,
>   n_rp_gp_completed = 1223,
>   n_rp_gp_started = 661,
>   n_rp_need_nothing = 3390,
>   barrier_head = {
>     next = 0x0,
>     func = 0x0
>   },
>   oom_head = {
>     next = 0x0,
>     func = 0x0
>   },
>   cpu = 7,
>   rsp = 0xc117f340 <rcu_preempt_state>
> }

> 
> rcu_preempt_state = $9 = {
>   node = {{
>       lock = {
>         raw_lock = {
>           {
>             slock = 3129850509,
>             tickets = {
>               owner = 47757,
>               next = 47757
>             }
>           }
>         },
>         magic = 3735899821,
>         owner_cpu = 4294967295,
>         owner = 0xffffffff
>       },
>       gpnum = 5877,
>       completed = 5877,
>       qsmask = 0,
>       expmask = 0,
>       qsmaskinit = 240,
>       grpmask = 0,
>       grplo = 0,
>       grphi = 7,
>       grpnum = 0 '\000',
>       level = 0 '\000',
>       parent = 0x0,
>       blkd_tasks = {
>         next = 0xc117f378 <rcu_preempt_state+56>,
>         prev = 0xc117f378 <rcu_preempt_state+56>
>       },
>       gp_tasks = 0x0,
>       exp_tasks = 0x0,
>       need_future_gp = {1, 0},
>       fqslock = {
>         raw_lock = {
>           {
>             slock = 0,
>             tickets = {
>               owner = 0,
>               next = 0
>             }
>           }
>         },
>         magic = 3735899821,
>         owner_cpu = 4294967295,
>         owner = 0xffffffff
>       }
>     }},
>   level = {0xc117f340 <rcu_preempt_state>},
>   levelcnt = {1, 0, 0, 0, 0},
>   levelspread = "\b",
>   rda = 0xc115e6b0 <rcu_preempt_data>,
>   call = 0xc01975ac <call_rcu>,
>   fqs_state = 0 '\000',
>   boost = 0 '\000',
>   gpnum = 5877,
>   completed = 5877,
>   gp_kthread = 0xf0c9e600,
>   gp_wq = {
>     lock = {
>       {
>         rlock = {
>           raw_lock = {
>             {
>               slock = 2160230594,
>               tickets = {
>                 owner = 32962,
>                 next = 32962
>               }
>             }
>           },
>           magic = 3735899821,
>           owner_cpu = 4294967295,
>           owner = 0xffffffff
>         }
>       }
>     },
>     task_list = {
>       next = 0xf0cd1f20,
>       prev = 0xf0cd1f20
>     }
>   },
>   gp_flags = 1,
>   orphan_lock = {
>     raw_lock = {
>       {
>         slock = 327685,
>         tickets = {
>           owner = 5,
>           next = 5
>         }
>       }
>     },
>     magic = 3735899821,
>     owner_cpu = 4294967295,
>     owner = 0xffffffff
>   },
>   orphan_nxtlist = 0x0,
>   orphan_nxttail = 0xc117f490 <rcu_preempt_state+336>,
>   orphan_donelist = 0x0,
>   orphan_donetail = 0xc117f498 <rcu_preempt_state+344>,
>   qlen_lazy = 0,
>   qlen = 0,
>   onoff_mutex = {
>     count = {
>       counter = 1
>     },
>     wait_lock = {
>       {
>         rlock = {
>           raw_lock = {
>             {
>               slock = 811479134,
>               tickets = {
>                 owner = 12382,
>                 next = 12382
>               }
>             }
>           },
>           magic = 3735899821,
>           owner_cpu = 4294967295,
>           owner = 0xffffffff
>         }
>       }
>     },
>     wait_list = {
>       next = 0xc117f4bc <rcu_preempt_state+380>,
>       prev = 0xc117f4bc <rcu_preempt_state+380>
>     },
>     owner = 0x0,
>     name = 0x0,
>     magic = 0xc117f4a8 <rcu_preempt_state+360>
>   },
>   barrier_mutex = {
>     count = {
>       counter = 1
>     },
>     wait_lock = {
>       {
>         rlock = {
>           raw_lock = {
>             {
>               slock = 0,
>               tickets = {
>                 owner = 0,
>                 next = 0
>               }
>             }
>           },
>           magic = 3735899821,
>           owner_cpu = 4294967295,
>           owner = 0xffffffff
>         }
>       }
>     },
>     wait_list = {
>       next = 0xc117f4e4 <rcu_preempt_state+420>,
>       prev = 0xc117f4e4 <rcu_preempt_state+420>
>     },
>     owner = 0x0,
>     name = 0x0,
>     magic = 0xc117f4d0 <rcu_preempt_state+400>
>   },
>   barrier_cpu_count = {
>     counter = 0
>   },
>   barrier_completion = {
>     done = 0,
>     wait = {
>       lock = {
>         {
>           rlock = {
>             raw_lock = {
>               {
>                 slock = 0,
>                 tickets = {
>                   owner = 0,
>                   next = 0
>                 }
>               }
>             },
>             magic = 0,
>             owner_cpu = 0,
>             owner = 0x0
>           }
>         }
>       },
>       task_list = {
>         next = 0x0,
>         prev = 0x0
>       }
>     }
>   },
>   n_barrier_done = 0,
>   expedited_start = {
>     counter = 0
>   },
>   expedited_done = {
>     counter = 0
>   },
>   expedited_wrap = {
>     counter = 0
>   },
>   expedited_tryfail = {
>     counter = 0
>   },
>   expedited_workdone1 = {
>     counter = 0
>   },
>   expedited_workdone2 = {
>     counter = 0
>   },
>   expedited_normal = {
>     counter = 0
>   },
>   expedited_stoppedcpus = {
>     counter = 0
>   },
>   expedited_done_tries = {
>     counter = 0
>   },
>   expedited_done_lost = {
>     counter = 0
>   },
>   expedited_done_exit = {
>     counter = 0
>   },
>   jiffies_force_qs = 4294963917,
>   n_force_qs = 4028,
>   n_force_qs_lh = 0,
>   n_force_qs_ngp = 0,
>   gp_start = 4294963911,
>   jiffies_stall = 4294966011,
>   gp_max = 17,
>   name = 0xc0d833ab "rcu_preempt",
>   abbr = 112 'p',
>   flavors = {
>     next = 0xc117f2ec <rcu_bh_state+556>,
>     prev = 0xc117f300 <rcu_struct_flavors>
>   },
>   wakeup_work = {
>     flags = 3,
>     llnode = {
>       next = 0x0
>     },
>     func = 0xc0195aa8 <rsp_wakeup>
>   }
> }
> 
> 


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [RCU] kernel hangs in wait_rcu_gp during suspend path
  2014-12-18 20:05         ` Paul E. McKenney
@ 2014-12-19 18:55           ` Arun KS
  2014-12-20  0:25             ` Paul E. McKenney
  0 siblings, 1 reply; 10+ messages in thread
From: Arun KS @ 2014-12-19 18:55 UTC (permalink / raw)
  To: Paul McKenney
  Cc: linux-kernel, josh, rostedt, Mathieu Desnoyers, laijs, Arun KS

Hi Paul,

On Fri, Dec 19, 2014 at 1:35 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> On Thu, Dec 18, 2014 at 09:52:28PM +0530, Arun KS wrote:
>> Hi Paul,
>>
>> On Thu, Dec 18, 2014 at 12:54 AM, Paul E. McKenney
>> <paulmck@linux.vnet.ibm.com> wrote:
>> > On Tue, Dec 16, 2014 at 11:00:20PM +0530, Arun KS wrote:
>> >> Hello,
>> >>
>> >> Adding some more info.
>> >>
>> >> Below is the rcu_data data structure corresponding to cpu4.
>> >
>> > This shows that RCU is idle.  What was the state of the system at the
>> > time you collected this data?
>>
>> The system initiated a suspend sequence and is currently in
>> disable_nonboot_cpus(). It has hotplugged CPUs 0, 1 and 2 successfully,
>> and even succeeded in hotplugging cpu3.
>> But while the CPU_POST_DEAD notifier for cpu3 was being called, another
>> driver tried to unregister an atomic notifier, which eventually calls
>> synchronize_rcu() and hangs the suspend task.
>>
>> bt as follows,
>> PID: 202    TASK: edcd2a00  CPU: 4   COMMAND: "kworker/u16:4"
>>  #0 [<c0a1f8c0>] (__schedule) from [<c0a1d054>]
>>  #1 [<c0a1d054>] (schedule_timeout) from [<c0a1f018>]
>>  #2 [<c0a1f018>] (wait_for_common) from [<c013c570>]
>>  #3 [<c013c570>] (wait_rcu_gp) from [<c014407c>]
>>  #4 [<c014407c>] (atomic_notifier_chain_unregister) from [<c06be62c>]
>>  #5 [<c06be62c>] (cpufreq_interactive_disable_sched_input) from [<c06bee1c>]
>>  #6 [<c06bee1c>] (cpufreq_governor_interactive) from [<c06b7724>]
>>  #7 [<c06b7724>] (__cpufreq_governor) from [<c06b9f74>]
>>  #8 [<c06b9f74>] (__cpufreq_remove_dev_finish) from [<c06ba3a4>]
>>  #9 [<c06ba3a4>] (cpufreq_cpu_callback) from [<c0a22674>]
>> #10 [<c0a22674>] (notifier_call_chain) from [<c012284c>]
>> #11 [<c012284c>] (__cpu_notify) from [<c01229dc>]
>> #12 [<c01229dc>] (cpu_notify_nofail) from [<c0a0dd1c>]
>> #13 [<c0a0dd1c>] (_cpu_down) from [<c0122b48>]
>> #14 [<c0122b48>] (disable_nonboot_cpus) from [<c0168cd8>]
>> #15 [<c0168cd8>] (suspend_devices_and_enter) from [<c0169018>]
>> #16 [<c0169018>] (pm_suspend) from [<c01691e0>]
>> #17 [<c01691e0>] (try_to_suspend) from [<c01396f0>]
>> #18 [<c01396f0>] (process_one_work) from [<c0139db0>]
>> #19 [<c0139db0>] (worker_thread) from [<c013efa4>]
>> #20 [<c013efa4>] (kthread) from [<c01061f8>]
>>
>> But the other cores, 4-7, are active. I can see them going into the
>> idle task and coming out of idle because of interrupts, scheduling
>> kworkers, etc.
>> So when I took the data, all the online cores (4-7) were in idle, as
>> shown below in the runq data structures.
>>
>> ------start--------------
>> crash> runq
>> CPU 0 [OFFLINE]
>>
>> CPU 1 [OFFLINE]
>>
>> CPU 2 [OFFLINE]
>>
>> CPU 3 [OFFLINE]
>>
>> CPU 4 RUNQUEUE: c5439040
>>   CURRENT: PID: 0      TASK: f0c9d400  COMMAND: "swapper/4"
>>   RT PRIO_ARRAY: c5439130
>>      [no tasks queued]
>>   CFS RB_ROOT: c54390c0
>>      [no tasks queued]
>>
>> CPU 5 RUNQUEUE: c5447040
>>   CURRENT: PID: 0      TASK: f0c9aa00  COMMAND: "swapper/5"
>>   RT PRIO_ARRAY: c5447130
>>      [no tasks queued]
>>   CFS RB_ROOT: c54470c0
>>      [no tasks queued]
>>
>> CPU 6 RUNQUEUE: c5455040
>>   CURRENT: PID: 0      TASK: f0c9ce00  COMMAND: "swapper/6"
>>   RT PRIO_ARRAY: c5455130
>>      [no tasks queued]
>>   CFS RB_ROOT: c54550c0
>>      [no tasks queued]
>>
>> CPU 7 RUNQUEUE: c5463040
>>   CURRENT: PID: 0      TASK: f0c9b000  COMMAND: "swapper/7"
>>   RT PRIO_ARRAY: c5463130
>>      [no tasks queued]
>>   CFS RB_ROOT: c54630c0
>>      [no tasks queued]
>> ------end--------------
>>
>> But one strange thing I can see is that rcu_read_lock_nesting for the
>> idle tasks running on cpu 5 and cpu 6 is set to 1.
>>
>> PID: 0      TASK: f0c9d400  CPU: 4   COMMAND: "swapper/4"
>>   rcu_read_lock_nesting = 0,
>>
>> PID: 0      TASK: f0c9aa00  CPU: 5   COMMAND: "swapper/5"
>>   rcu_read_lock_nesting = 1,
>>
>> PID: 0      TASK: f0c9ce00  CPU: 6   COMMAND: "swapper/6"
>>   rcu_read_lock_nesting = 1,
>>
>> PID: 0      TASK: f0c9b000  CPU: 7   COMMAND: "swapper/7"
>>   rcu_read_lock_nesting = 0,
>>
>> Does this mean that the current grace period (which the suspend thread
>> is waiting on) is getting extended indefinitely?
>
> Indeed it does, good catch!  Looks like someone entered an RCU read-side
> critical section, then forgot to exit it, which would prevent grace
> periods from ever completing.  CONFIG_PROVE_RCU=y might be
> helpful in tracking this down.
>
>> Also attaching the per_cpu rcu_data for online and offline cores.
>
> But these still look like there is no grace period in progress.
>
> Still, it would be good to try CONFIG_PROVE_RCU=y and see what it
> shows you.
>
> Also, I am not seeing similar complaints about 3.10, so it is quite
> possible that a recent change in the ARM-specific idle-loop code is
> doing this to you.  It might be well worth looking through recent
> changes in this area, particularly if some older version works well
> for you.

Enabling CONFIG_PROVE_RCU also didn't help.

But we figured out the problem. Thanks for your help. Now we need your
suggestion on how to fix it.

If we dump the irq_work_list,

crash> irq_work_list
PER-CPU DATA TYPE:
  struct llist_head irq_work_list;
PER-CPU ADDRESSES:
  [0]: c53ff90c
  [1]: c540d90c
  [2]: c541b90c
  [3]: c542990c
  [4]: c543790c
  [5]: c544590c
  [6]: c545390c
  [7]: c546190c
crash>
crash> list irq_work.llnode -s irq_work.func -h c117f0b4
c117f0b4
  func = 0xc0195aa8 <rsp_wakeup>
c117f574
  func = 0xc0195aa8 <rsp_wakeup>
crash>

rsp_wakeup is pending in cpu1's irq_work_list, and cpu1 has already
been hot-plugged out.
All later irq_work_queue() calls return early because the work is
already marked pending.
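
To make the failure mode concrete, the pending check that makes the
later enqueue attempts bail out looks roughly like this (a simplified
paraphrase of the v3.10 kernel/irq_work.c claim logic, not a verbatim
copy; the flag values are the ones from the v3.10 headers):

#define IRQ_WORK_PENDING	1UL
#define IRQ_WORK_BUSY		2UL
#define IRQ_WORK_FLAGS		3UL

static bool irq_work_claim(struct irq_work *work)
{
	unsigned long flags, oflags, nflags;

	/*
	 * Atomically set PENDING|BUSY.  If PENDING was already set,
	 * the work is (supposedly) queued somewhere, so the caller
	 * must not enqueue it again.
	 */
	flags = work->flags & ~IRQ_WORK_PENDING;
	for (;;) {
		nflags = flags | IRQ_WORK_FLAGS;
		oflags = cmpxchg(&work->flags, flags, nflags);
		if (oflags == flags)
			break;
		if (oflags & IRQ_WORK_PENDING)
			return false;	/* caller's enqueue is dropped */
		flags = oflags;
		cpu_relax();
	}
	return true;
}

This matches the crash data above: the wakeup_work embedded in
rcu_preempt_state has flags = 3 (PENDING|BUSY), so every later
irq_work_queue() of rsp_wakeup returns without queueing anything, and
the grace-period kthread is never woken.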

When the issue happens, we noticed that a hotplug occurs during the
early stages of boot (due to thermal), even before irq_work registers
a cpu_notifier callback. Hence __irq_work_run() is not run as part of
the CPU_DYING notifier.
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/tree/kernel/irq_work.c?id=refs/tags/v3.10.63#n198

and the rsp_wakeup work has been pending ever since.
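
For context, the notifier that should have flushed the pending work on
the dying cpu looks roughly like the following (paraphrased from the
v3.10 kernel/irq_work.c that the URL above points at):

static int irq_work_cpu_notify(struct notifier_block *self,
			       unsigned long action, void *hcpu)
{
	switch (action) {
	case CPU_DYING:
		/* Called on the dying cpu, to flush pending work */
		__irq_work_run();
		break;
	default:
		break;
	}
	return NOTIFY_OK;
}

Because this notifier block is only registered from a device_initcall(),
an early thermal hotplug finds no CPU_DYING callback registered, and
whatever was queued on the dying cpu stays pending forever.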

In the first approach, we changed
device_initcall(irq_work_init_cpu_notifier) to early_initcall() and
the issue goes away, because this makes sure that the cpu notifier is
registered before any hotplug happens.
diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index 55fcce6..5e58767 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -198,6 +198,6 @@ static __init int irq_work_init_cpu_notifier(void)
        register_cpu_notifier(&cpu_notify);
        return 0;
 }
-device_initcall(irq_work_init_cpu_notifier);
+early_initcall(irq_work_init_cpu_notifier);

 #endif /* CONFIG_HOTPLUG_CPU */


Another approach is to add synchronize_rcu() in the cpu_down path.
This way we make sure that hotplug waits until the current grace
period completes. This also fixes the problem.
diff --git a/kernel/cpu.c b/kernel/cpu.c
index c56b958..00bdd90 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -311,6 +311,11 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
                                __func__, cpu);
                goto out_release;
        }
+#ifdef CONFIG_PREEMPT
+       synchronize_sched();
+#endif
+       synchronize_rcu();
+
        smpboot_park_threads(cpu);

        err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu));



Can you please suggest the right approach?

Thanks,
Arun





>
>                                                         Thanx, Paul
>
>> Thanks,
>> Arun
>>
>> >
>> >                                                         Thanx, Paul
>> >
>> >> struct rcu_data {
>> >>   completed = 5877,
>> >>   gpnum = 5877,
>> >>   passed_quiesce = true,
>> >>   qs_pending = false,
>> >>   beenonline = true,
>> >>   preemptible = true,
>> >>   mynode = 0xc117f340 <rcu_preempt_state>,
>> >>   grpmask = 16,
>> >>   nxtlist = 0xedaaec00,
>> >>   nxttail = {0xc54366c4, 0xe84d350c, 0xe84d350c, 0xe84d350c},
>> >>   nxtcompleted = {4294967035, 5878, 5878, 5878},
>> >>   qlen_lazy = 105,
>> >>   qlen = 415,
>> >>   qlen_last_fqs_check = 0,
>> >>   n_cbs_invoked = 86323,
>> >>   n_nocbs_invoked = 0,
>> >>   n_cbs_orphaned = 0,
>> >>   n_cbs_adopted = 139,
>> >>   n_force_qs_snap = 0,
>> >>   blimit = 10,
>> >>   dynticks = 0xc5436758,
>> >>   dynticks_snap = 7582140,
>> >>   dynticks_fqs = 41,
>> >>   offline_fqs = 0,
>> >>   n_rcu_pending = 59404,
>> >>   n_rp_qs_pending = 5,
>> >>   n_rp_report_qs = 4633,
>> >>   n_rp_cb_ready = 32,
>> >>   n_rp_cpu_needs_gp = 41088,
>> >>   n_rp_gp_completed = 2844,
>> >>   n_rp_gp_started = 1150,
>> >>   n_rp_need_nothing = 9657,
>> >>   barrier_head = {
>> >>     next = 0x0,
>> >>     func = 0x0
>> >>   },
>> >>   oom_head = {
>> >>     next = 0x0,
>> >>     func = 0x0
>> >>   },
>> >>   cpu = 4,
>> >>   rsp = 0xc117f340 <rcu_preempt_state>
>> >> }
>> >>
>> >>
>> >>
>> >> Also pasting complete rcu_preempt_state.
>> >>
>> >>
>> >>
>> >> rcu_preempt_state = $9 = {
>> >>   node = {{
>> >>       lock = {
>> >>         raw_lock = {
>> >>           {
>> >>             slock = 3129850509,
>> >>             tickets = {
>> >>               owner = 47757,
>> >>               next = 47757
>> >>             }
>> >>           }
>> >>         },
>> >>         magic = 3735899821,
>> >>         owner_cpu = 4294967295,
>> >>         owner = 0xffffffff
>> >>       },
>> >>       gpnum = 5877,
>> >>       completed = 5877,
>> >>       qsmask = 0,
>> >>       expmask = 0,
>> >>       qsmaskinit = 240,
>> >>       grpmask = 0,
>> >>       grplo = 0,
>> >>       grphi = 7,
>> >>       grpnum = 0 '\000',
>> >>       level = 0 '\000',
>> >>       parent = 0x0,
>> >>       blkd_tasks = {
>> >>         next = 0xc117f378 <rcu_preempt_state+56>,
>> >>         prev = 0xc117f378 <rcu_preempt_state+56>
>> >>       },
>> >>       gp_tasks = 0x0,
>> >>       exp_tasks = 0x0,
>> >>       need_future_gp = {1, 0},
>> >>       fqslock = {
>> >>         raw_lock = {
>> >>           {
>> >>             slock = 0,
>> >>             tickets = {
>> >>               owner = 0,
>> >>               next = 0
>> >>             }
>> >>           }
>> >>         },
>> >>         magic = 3735899821,
>> >>         owner_cpu = 4294967295,
>> >>         owner = 0xffffffff
>> >>       }
>> >>     }},
>> >>   level = {0xc117f340 <rcu_preempt_state>},
>> >>   levelcnt = {1, 0, 0, 0, 0},
>> >>   levelspread = "\b",
>> >>   rda = 0xc115e6b0 <rcu_preempt_data>,
>> >>   call = 0xc01975ac <call_rcu>,
>> >>   fqs_state = 0 '\000',
>> >>   boost = 0 '\000',
>> >>   gpnum = 5877,
>> >>   completed = 5877,
>> >>   gp_kthread = 0xf0c9e600,
>> >>   gp_wq = {
>> >>     lock = {
>> >>       {
>> >>         rlock = {
>> >>           raw_lock = {
>> >>             {
>> >>               slock = 2160230594,
>> >>               tickets = {
>> >>                 owner = 32962,
>> >>                 next = 32962
>> >>               }
>> >>             }
>> >>           },
>> >>           magic = 3735899821,
>> >>           owner_cpu = 4294967295,
>> >>           owner = 0xffffffff
>> >>         }
>> >>       }
>> >>     },
>> >>     task_list = {
>> >>       next = 0xf0cd1f20,
>> >>       prev = 0xf0cd1f20
>> >>     }
>> >>   },
>> >>   gp_flags = 1,
>> >>   orphan_lock = {
>> >>     raw_lock = {
>> >>       {
>> >>         slock = 327685,
>> >>         tickets = {
>> >>           owner = 5,
>> >>           next = 5
>> >>         }
>> >>       }
>> >>     },
>> >>     magic = 3735899821,
>> >>     owner_cpu = 4294967295,
>> >>     owner = 0xffffffff
>> >>   },
>> >>   orphan_nxtlist = 0x0,
>> >>   orphan_nxttail = 0xc117f490 <rcu_preempt_state+336>,
>> >>   orphan_donelist = 0x0,
>> >>   orphan_donetail = 0xc117f498 <rcu_preempt_state+344>,
>> >>   qlen_lazy = 0,
>> >>   qlen = 0,
>> >>   onoff_mutex = {
>> >>     count = {
>> >>       counter = 1
>> >>     },
>> >>     wait_lock = {
>> >>       {
>> >>         rlock = {
>> >>           raw_lock = {
>> >>             {
>> >>               slock = 811479134,
>> >>               tickets = {
>> >>                 owner = 12382,
>> >>                 next = 12382
>> >>               }
>> >>             }
>> >>           },
>> >>           magic = 3735899821,
>> >>           owner_cpu = 4294967295,
>> >>           owner = 0xffffffff
>> >>         }
>> >>       }
>> >>     },
>> >>     wait_list = {
>> >>       next = 0xc117f4bc <rcu_preempt_state+380>,
>> >>       prev = 0xc117f4bc <rcu_preempt_state+380>
>> >>     },
>> >>     owner = 0x0,
>> >>     name = 0x0,
>> >>     magic = 0xc117f4a8 <rcu_preempt_state+360>
>> >>   },
>> >>   barrier_mutex = {
>> >>     count = {
>> >>       counter = 1
>> >>     },
>> >>     wait_lock = {
>> >>       {
>> >>         rlock = {
>> >>           raw_lock = {
>> >>             {
>> >>               slock = 0,
>> >>               tickets = {
>> >>                 owner = 0,
>> >>                 next = 0
>> >>               }
>> >>             }
>> >>           },
>> >>           magic = 3735899821,
>> >>           owner_cpu = 4294967295,
>> >>           owner = 0xffffffff
>> >>         }
>> >>       }
>> >>     },
>> >>     wait_list = {
>> >>       next = 0xc117f4e4 <rcu_preempt_state+420>,
>> >>       prev = 0xc117f4e4 <rcu_preempt_state+420>
>> >>     },
>> >>     owner = 0x0,
>> >>     name = 0x0,
>> >>     magic = 0xc117f4d0 <rcu_preempt_state+400>
>> >>   },
>> >>   barrier_cpu_count = {
>> >>     counter = 0
>> >>   },
>> >>   barrier_completion = {
>> >>     done = 0,
>> >>     wait = {
>> >>       lock = {
>> >>         {
>> >>           rlock = {
>> >>             raw_lock = {
>> >>               {
>> >>                 slock = 0,
>> >>                 tickets = {
>> >>                   owner = 0,
>> >>                   next = 0
>> >>                 }
>> >>               }
>> >>             },
>> >>             magic = 0,
>> >>             owner_cpu = 0,
>> >>             owner = 0x0
>> >>           }
>> >>         }
>> >>       },
>> >>       task_list = {
>> >>         next = 0x0,
>> >>         prev = 0x0
>> >>       }
>> >>     }
>> >>   },
>> >>   n_barrier_done = 0,
>> >>   expedited_start = {
>> >>     counter = 0
>> >>   },
>> >>   expedited_done = {
>> >>     counter = 0
>> >>   },
>> >>   expedited_wrap = {
>> >>     counter = 0
>> >>   },
>> >>   expedited_tryfail = {
>> >>     counter = 0
>> >>   },
>> >>   expedited_workdone1 = {
>> >>     counter = 0
>> >>   },
>> >>   expedited_workdone2 = {
>> >>     counter = 0
>> >>   },
>> >>   expedited_normal = {
>> >>     counter = 0
>> >>   },
>> >>   expedited_stoppedcpus = {
>> >>     counter = 0
>> >>   },
>> >>   expedited_done_tries = {
>> >>     counter = 0
>> >>   },
>> >>   expedited_done_lost = {
>> >>     counter = 0
>> >>   },
>> >>   expedited_done_exit = {
>> >>     counter = 0
>> >>   },
>> >>   jiffies_force_qs = 4294963917,
>> >>   n_force_qs = 4028,
>> >>   n_force_qs_lh = 0,
>> >>   n_force_qs_ngp = 0,
>> >>   gp_start = 4294963911,
>> >>   jiffies_stall = 4294966011,
>> >>   gp_max = 17,
>> >>   name = 0xc0d833ab "rcu_preempt",
>> >>   abbr = 112 'p',
>> >>   flavors = {
>> >>     next = 0xc117f2ec <rcu_bh_state+556>,
>> >>     prev = 0xc117f300 <rcu_struct_flavors>
>> >>   },
>> >>   wakeup_work = {
>> >>     flags = 3,
>> >>     llnode = {
>> >>       next = 0x0
>> >>     },
>> >>     func = 0xc0195aa8 <rsp_wakeup>
>> >>   }
>> >> }
>> >>
>> >> Hope this helps.
>> >>
>> >> Thanks,
>> >> Arun
>> >>
>> >>
>> >> On Tue, Dec 16, 2014 at 11:59 AM, Arun KS <arunks.linux@gmail.com> wrote:
>> >> > Hello,
>> >> >
>> >> > I dig little deeper to understand the situation.
>> >> > All other cpus are in idle thread already.
>> >> > As per my understanding, for the grace period to end, at-least one of
>> >> > the following should happen on all online cpus,
>> >> >
>> >> > 1. a context switch.
>> >> > 2. user space switch.
>> >> > 3. switch to idle thread.
>> >> >
>> >> > In this situation, since all the other cores are already in idle, none
>> >> > of the above are met on all online cores.
>> >> > So grace period is getting extended and never finishes. Below is the
>> >> > state of runqueue when the hang happens.
>> >> > --------------start------------------------------------
>> >> > crash> runq
>> >> > CPU 0 [OFFLINE]
>> >> >
>> >> > CPU 1 [OFFLINE]
>> >> >
>> >> > CPU 2 [OFFLINE]
>> >> >
>> >> > CPU 3 [OFFLINE]
>> >> >
>> >> > CPU 4 RUNQUEUE: c3192e40
>> >> >   CURRENT: PID: 0      TASK: f0874440  COMMAND: "swapper/4"
>> >> >   RT PRIO_ARRAY: c3192f20
>> >> >      [no tasks queued]
>> >> >   CFS RB_ROOT: c3192eb0
>> >> >      [no tasks queued]
>> >> >
>> >> > CPU 5 RUNQUEUE: c31a0e40
>> >> >   CURRENT: PID: 0      TASK: f0874980  COMMAND: "swapper/5"
>> >> >   RT PRIO_ARRAY: c31a0f20
>> >> >      [no tasks queued]
>> >> >   CFS RB_ROOT: c31a0eb0
>> >> >      [no tasks queued]
>> >> >
>> >> > CPU 6 RUNQUEUE: c31aee40
>> >> >   CURRENT: PID: 0      TASK: f0874ec0  COMMAND: "swapper/6"
>> >> >   RT PRIO_ARRAY: c31aef20
>> >> >      [no tasks queued]
>> >> >   CFS RB_ROOT: c31aeeb0
>> >> >      [no tasks queued]
>> >> >
>> >> > CPU 7 RUNQUEUE: c31bce40
>> >> >   CURRENT: PID: 0      TASK: f0875400  COMMAND: "swapper/7"
>> >> >   RT PRIO_ARRAY: c31bcf20
>> >> >      [no tasks queued]
>> >> >   CFS RB_ROOT: c31bceb0
>> >> >      [no tasks queued]
>> >> > --------------end------------------------------------
>> >> >
>> >> > If my understanding is correct the below patch should help, because it
>> >> > will expedite grace periods during suspend,
>> >> > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1d74d14e98a6be740a6f12456c7d9ad47be9c9c
>> >> >
>> >> > But I wonder why it was not taken to stable trees. Can we take it?
>> >> > Appreciate your help.
>> >> >
>> >> > Thanks,
>> >> > Arun
>> >> >
>> >> > On Mon, Dec 15, 2014 at 10:34 PM, Arun KS <arunks.linux@gmail.com> wrote:
>> >> >> Hi,
>> >> >>
>> >> >> Here is the backtrace of the process hanging in wait_rcu_gp,
>> >> >>
>> >> >> PID: 247    TASK: e16e7380  CPU: 4   COMMAND: "kworker/u16:5"
>> >> >>  #0 [<c09fead0>] (__schedule) from [<c09fcab0>]
>> >> >>  #1 [<c09fcab0>] (schedule_timeout) from [<c09fe050>]
>> >> >>  #2 [<c09fe050>] (wait_for_common) from [<c013b2b4>]
>> >> >>  #3 [<c013b2b4>] (wait_rcu_gp) from [<c0142f50>]
>> >> >>  #4 [<c0142f50>] (atomic_notifier_chain_unregister) from [<c06b2ab8>]
>> >> >>  #5 [<c06b2ab8>] (cpufreq_interactive_disable_sched_input) from [<c06b32a8>]
>> >> >>  #6 [<c06b32a8>] (cpufreq_governor_interactive) from [<c06abbf8>]
>> >> >>  #7 [<c06abbf8>] (__cpufreq_governor) from [<c06ae474>]
>> >> >>  #8 [<c06ae474>] (__cpufreq_remove_dev_finish) from [<c06ae8c0>]
>> >> >>  #9 [<c06ae8c0>] (cpufreq_cpu_callback) from [<c0a0185c>]
>> >> >> #10 [<c0a0185c>] (notifier_call_chain) from [<c0121888>]
>> >> >> #11 [<c0121888>] (__cpu_notify) from [<c0121a04>]
>> >> >> #12 [<c0121a04>] (cpu_notify_nofail) from [<c09ee7f0>]
>> >> >> #13 [<c09ee7f0>] (_cpu_down) from [<c0121b70>]
>> >> >> #14 [<c0121b70>] (disable_nonboot_cpus) from [<c016788c>]
>> >> >> #15 [<c016788c>] (suspend_devices_and_enter) from [<c0167bcc>]
>> >> >> #16 [<c0167bcc>] (pm_suspend) from [<c0167d94>]
>> >> >> #17 [<c0167d94>] (try_to_suspend) from [<c0138460>]
>> >> >> #18 [<c0138460>] (process_one_work) from [<c0138b18>]
>> >> >> #19 [<c0138b18>] (worker_thread) from [<c013dc58>]
>> >> >> #20 [<c013dc58>] (kthread) from [<c01061b8>]
>> >> >>
>> >> >> Will this patch help here?
>> >> >> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1d74d14e98a6be740a6f12456c7d9ad47be9c9c
>> >> >>
>> >> >> I couldn't really understand why it got stuck in synchronize_rcu().
>> >> >> Please give some pointers to debug this further.
>> >> >>
>> >> >> Below are the configs enable related to RCU.
>> >> >>
>> >> >> CONFIG_TREE_PREEMPT_RCU=y
>> >> >> CONFIG_PREEMPT_RCU=y
>> >> >> CONFIG_RCU_STALL_COMMON=y
>> >> >> CONFIG_RCU_FANOUT=32
>> >> >> CONFIG_RCU_FANOUT_LEAF=16
>> >> >> CONFIG_RCU_FAST_NO_HZ=y
>> >> >> CONFIG_RCU_CPU_STALL_TIMEOUT=21
>> >> >> CONFIG_RCU_CPU_STALL_VERBOSE=y
>> >> >>
>> >> >> Kernel version is 3.10.28
>> >> >> Architecture is ARM
>> >> >>
>> >> >> Thanks,
>> >> >> Arun
>> >>
>> >
>
>> crash> struct rcu_data C54286B0
>> struct rcu_data {
>>   completed = 2833,
>>   gpnum = 2833,
>>   passed_quiesce = false,
>>   qs_pending = false,
>>   beenonline = true,
>>   preemptible = true,
>>   mynode = 0xc117f340 <rcu_preempt_state>,
>>   grpmask = 8,
>>   nxtlist = 0x0,
>>   nxttail = {0xc54286c4, 0xc54286c4, 0xc54286c4, 0x0},
>>   nxtcompleted = {0, 4294967136, 4294967137, 4294967137},
>>   qlen_lazy = 0,
>>   qlen = 0,
>>   qlen_last_fqs_check = 0,
>>   n_cbs_invoked = 609,
>>   n_nocbs_invoked = 0,
>>   n_cbs_orphaned = 13,
>>   n_cbs_adopted = 0,
>>   n_force_qs_snap = 1428,
>>   blimit = 10,
>>   dynticks = 0xc5428758,
>>   dynticks_snap = 13206053,
>>   dynticks_fqs = 16,
>>   offline_fqs = 0,
>>   n_rcu_pending = 181,
>>   n_rp_qs_pending = 1,
>>   n_rp_report_qs = 21,
>>   n_rp_cb_ready = 0,
>>   n_rp_cpu_needs_gp = 0,
>>   n_rp_gp_completed = 22,
>>   n_rp_gp_started = 8,
>>   n_rp_need_nothing = 130,
>>   barrier_head = {
>>     next = 0x0,
>>     func = 0x0
>>   },
>>   oom_head = {
>>     next = 0x0,
>>     func = 0x0
>>   },
>>   cpu = 3,
>>   rsp = 0xc117f340 <rcu_preempt_state>
>> }
>>
>> crash> struct rcu_data C541A6B0
>> struct rcu_data {
>>   completed = 5877,
>>   gpnum = 5877,
>>   passed_quiesce = true,
>>   qs_pending = false,
>>   beenonline = true,
>>   preemptible = true,
>>   mynode = 0xc117f340 <rcu_preempt_state>,
>>   grpmask = 4,
>>   nxtlist = 0x0,
>>   nxttail = {0xc541a6c4, 0xc541a6c4, 0xc541a6c4, 0x0},
>>   nxtcompleted = {0, 5877, 5878, 5878},
>>   qlen_lazy = 0,
>>   qlen = 0,
>>   qlen_last_fqs_check = 0,
>>   n_cbs_invoked = 61565,
>>   n_nocbs_invoked = 0,
>>   n_cbs_orphaned = 139,
>>   n_cbs_adopted = 100,
>>   n_force_qs_snap = 0,
>>   blimit = 10,
>>   dynticks = 0xc541a758,
>>   dynticks_snap = 13901017,
>>   dynticks_fqs = 75,
>>   offline_fqs = 0,
>>   n_rcu_pending = 16546,
>>   n_rp_qs_pending = 3,
>>   n_rp_report_qs = 4539,
>>   n_rp_cb_ready = 69,
>>   n_rp_cpu_needs_gp = 782,
>>   n_rp_gp_completed = 4196,
>>   n_rp_gp_started = 1739,
>>   n_rp_need_nothing = 5221,
>>   barrier_head = {
>>     next = 0x0,
>>     func = 0x0
>>   },
>>   oom_head = {
>>     next = 0x0,
>>     func = 0x0
>>   },
>>   cpu = 2,
>>   rsp = 0xc117f340 <rcu_preempt_state>
>> }
>>
>> crash> struct rcu_data C540C6B0
>> struct rcu_data {
>>   completed = 5877,
>>   gpnum = 5877,
>>   passed_quiesce = true,
>>   qs_pending = false,
>>   beenonline = true,
>>   preemptible = true,
>>   mynode = 0xc117f340 <rcu_preempt_state>,
>>   grpmask = 2,
>>   nxtlist = 0x0,
>>   nxttail = {0xc540c6c4, 0xc540c6c4, 0xc540c6c4, 0x0},
>>   nxtcompleted = {4294967030, 5878, 5878, 5878},
>>   qlen_lazy = 0,
>>   qlen = 0,
>>   qlen_last_fqs_check = 0,
>>   n_cbs_invoked = 74292,
>>   n_nocbs_invoked = 0,
>>   n_cbs_orphaned = 100,
>>   n_cbs_adopted = 65,
>>   n_force_qs_snap = 0,
>>   blimit = 10,
>>   dynticks = 0xc540c758,
>>   dynticks_snap = 10753433,
>>   dynticks_fqs = 69,
>>   offline_fqs = 0,
>>   n_rcu_pending = 18350,
>>   n_rp_qs_pending = 6,
>>   n_rp_report_qs = 5009,
>>   n_rp_cb_ready = 50,
>>   n_rp_cpu_needs_gp = 915,
>>   n_rp_gp_completed = 4423,
>>   n_rp_gp_started = 1826,
>>   n_rp_need_nothing = 6127,
>>   barrier_head = {
>>     next = 0x0,
>>     func = 0x0
>>   },
>>   oom_head = {
>>     next = 0x0,
>>     func = 0x0
>>   },
>>   cpu = 1,
>>   rsp = 0xc117f340 <rcu_preempt_state>
>> }
>> crash> struct rcu_data C53FE6B0
>> struct rcu_data {
>>   completed = 5877,
>>   gpnum = 5877,
>>   passed_quiesce = true,
>>   qs_pending = false,
>>   beenonline = true,
>>   preemptible = true,
>>   mynode = 0xc117f340 <rcu_preempt_state>,
>>   grpmask = 1,
>>   nxtlist = 0x0,
>>   nxttail = {0xc53fe6c4, 0xc53fe6c4, 0xc53fe6c4, 0x0},
>>   nxtcompleted = {4294966997, 5875, 5876, 5876},
>>   qlen_lazy = 0,
>>   qlen = 0,
>>   qlen_last_fqs_check = 0,
>>   n_cbs_invoked = 123175,
>>   n_nocbs_invoked = 0,
>>   n_cbs_orphaned = 52,
>>   n_cbs_adopted = 0,
>>   n_force_qs_snap = 0,
>>   blimit = 10,
>>   dynticks = 0xc53fe758,
>>   dynticks_snap = 6330446,
>>   dynticks_fqs = 46,
>>   offline_fqs = 0,
>>   n_rcu_pending = 22529,
>>   n_rp_qs_pending = 3,
>>   n_rp_report_qs = 5290,
>>   n_rp_cb_ready = 279,
>>   n_rp_cpu_needs_gp = 740,
>>   n_rp_gp_completed = 2707,
>>   n_rp_gp_started = 1208,
>>   n_rp_need_nothing = 12305,
>>   barrier_head = {
>>     next = 0x0,
>>     func = 0x0
>>   },
>>   oom_head = {
>>     next = 0x0,
>>     func = 0x0
>>   },
>>   cpu = 0,
>>   rsp = 0xc117f340 <rcu_preempt_state>
>> }
>>
>
>> crash> struct rcu_data c54366b0
>> struct rcu_data {
>>   completed = 5877,
>>   gpnum = 5877,
>>   passed_quiesce = true,
>>   qs_pending = false,
>>   beenonline = true,
>>   preemptible = true,
>>   mynode = 0xc117f340 <rcu_preempt_state>,
>>   grpmask = 16,
>>   nxtlist = 0xedaaec00,
>>   nxttail = {0xc54366c4, 0xe84d350c, 0xe84d350c, 0xe84d350c},
>>   nxtcompleted = {4294967035, 5878, 5878, 5878},
>>   qlen_lazy = 105,
>>   qlen = 415,
>>   qlen_last_fqs_check = 0,
>>   n_cbs_invoked = 86323,
>>   n_nocbs_invoked = 0,
>>   n_cbs_orphaned = 0,
>>   n_cbs_adopted = 139,
>>   n_force_qs_snap = 0,
>>   blimit = 10,
>>   dynticks = 0xc5436758,
>>   dynticks_snap = 7582140,
>>   dynticks_fqs = 41,
>>   offline_fqs = 0,
>>   n_rcu_pending = 59404,
>>   n_rp_qs_pending = 5,
>>   n_rp_report_qs = 4633,
>>   n_rp_cb_ready = 32,
>>   n_rp_cpu_needs_gp = 41088,
>>   n_rp_gp_completed = 2844,
>>   n_rp_gp_started = 1150,
>>   n_rp_need_nothing = 9657,
>>   barrier_head = {
>>     next = 0x0,
>>     func = 0x0
>>   },
>>   oom_head = {
>>     next = 0x0,
>>     func = 0x0
>>   },
>>   cpu = 4,
>>   rsp = 0xc117f340 <rcu_preempt_state>
>> }
>>
>> crash> struct rcu_data c54446b0
>> struct rcu_data {
>>   completed = 5877,
>>   gpnum = 5877,
>>   passed_quiesce = true,
>>   qs_pending = false,
>>   beenonline = true,
>>   preemptible = true,
>>   mynode = 0xc117f340 <rcu_preempt_state>,
>>   grpmask = 32,
>>   nxtlist = 0xcf9e856c,
>>   nxttail = {0xc54446c4, 0xcfb3050c, 0xcfb3050c, 0xcfb3050c},
>>   nxtcompleted = {0, 5878, 5878, 5878},
>>   qlen_lazy = 0,
>>   qlen = 117,
>>   qlen_last_fqs_check = 0,
>>   n_cbs_invoked = 36951,
>>   n_nocbs_invoked = 0,
>>   n_cbs_orphaned = 0,
>>   n_cbs_adopted = 0,
>>   n_force_qs_snap = 1428,
>>   blimit = 10,
>>   dynticks = 0xc5444758,
>>   dynticks_snap = 86034,
>>   dynticks_fqs = 46,
>>   offline_fqs = 0,
>>   n_rcu_pending = 49104,
>>   n_rp_qs_pending = 3,
>>   n_rp_report_qs = 2360,
>>   n_rp_cb_ready = 18,
>>   n_rp_cpu_needs_gp = 40106,
>>   n_rp_gp_completed = 1334,
>>   n_rp_gp_started = 791,
>>   n_rp_need_nothing = 4495,
>>   barrier_head = {
>>     next = 0x0,
>>     func = 0x0
>>   },
>>   oom_head = {
>>     next = 0x0,
>>     func = 0x0
>>   },
>>   cpu = 5,
>>   rsp = 0xc117f340 <rcu_preempt_state>
>> }
>>
>> crash> struct rcu_data c54526b0
>> struct rcu_data {
>>   completed = 5877,
>>   gpnum = 5877,
>>   passed_quiesce = true,
>>   qs_pending = false,
>>   beenonline = true,
>>   preemptible = true,
>>   mynode = 0xc117f340 <rcu_preempt_state>,
>>   grpmask = 64,
>>   nxtlist = 0xe613d200,
>>   nxttail = {0xc54526c4, 0xe6fc9d0c, 0xe6fc9d0c, 0xe6fc9d0c},
>>   nxtcompleted = {0, 5878, 5878, 5878},
>>   qlen_lazy = 2,
>>   qlen = 35,
>>   qlen_last_fqs_check = 0,
>>   n_cbs_invoked = 34459,
>>   n_nocbs_invoked = 0,
>>   n_cbs_orphaned = 0,
>>   n_cbs_adopted = 0,
>>   n_force_qs_snap = 1428,
>>   blimit = 10,
>>   dynticks = 0xc5452758,
>>   dynticks_snap = 116840,
>>   dynticks_fqs = 47,
>>   offline_fqs = 0,
>>   n_rcu_pending = 48486,
>>   n_rp_qs_pending = 3,
>>   n_rp_report_qs = 2223,
>>   n_rp_cb_ready = 24,
>>   n_rp_cpu_needs_gp = 40101,
>>   n_rp_gp_completed = 1226,
>>   n_rp_gp_started = 789,
>>   n_rp_need_nothing = 4123,
>>   barrier_head = {
>>     next = 0x0,
>>     func = 0x0
>>   },
>>   oom_head = {
>>     next = 0x0,
>>     func = 0x0
>>   },
>>   cpu = 6,
>>   rsp = 0xc117f340 <rcu_preempt_state>
>> }
>>
>> crash> struct rcu_data c54606b0
>> struct rcu_data {
>>   completed = 5877,
>>   gpnum = 5877,
>>   passed_quiesce = true,
>>   qs_pending = false,
>>   beenonline = true,
>>   preemptible = true,
>>   mynode = 0xc117f340 <rcu_preempt_state>,
>>   grpmask = 128,
>>   nxtlist = 0xdec32a6c,
>>   nxttail = {0xc54606c4, 0xe6fcf10c, 0xe6fcf10c, 0xe6fcf10c},
>>   nxtcompleted = {0, 5878, 5878, 5878},
>>   qlen_lazy = 1,
>>   qlen = 30,
>>   qlen_last_fqs_check = 0,
>>   n_cbs_invoked = 31998,
>>   n_nocbs_invoked = 0,
>>   n_cbs_orphaned = 0,
>>   n_cbs_adopted = 0,
>>   n_force_qs_snap = 1428,
>>   blimit = 10,
>>   dynticks = 0xc5460758,
>>   dynticks_snap = 57846,
>>   dynticks_fqs = 54,
>>   offline_fqs = 0,
>>   n_rcu_pending = 47502,
>>   n_rp_qs_pending = 2,
>>   n_rp_report_qs = 2142,
>>   n_rp_cb_ready = 37,
>>   n_rp_cpu_needs_gp = 40049,
>>   n_rp_gp_completed = 1223,
>>   n_rp_gp_started = 661,
>>   n_rp_need_nothing = 3390,
>>   barrier_head = {
>>     next = 0x0,
>>     func = 0x0
>>   },
>>   oom_head = {
>>     next = 0x0,
>>     func = 0x0
>>   },
>>   cpu = 7,
>>   rsp = 0xc117f340 <rcu_preempt_state>
>> }
>
>>
>> rcu_preempt_state = $9 = {
>>   node = {{
>>       lock = {
>>         raw_lock = {
>>           {
>>             slock = 3129850509,
>>             tickets = {
>>               owner = 47757,
>>               next = 47757
>>             }
>>           }
>>         },
>>         magic = 3735899821,
>>         owner_cpu = 4294967295,
>>         owner = 0xffffffff
>>       },
>>       gpnum = 5877,
>>       completed = 5877,
>>       qsmask = 0,
>>       expmask = 0,
>>       qsmaskinit = 240,
>>       grpmask = 0,
>>       grplo = 0,
>>       grphi = 7,
>>       grpnum = 0 '\000',
>>       level = 0 '\000',
>>       parent = 0x0,
>>       blkd_tasks = {
>>         next = 0xc117f378 <rcu_preempt_state+56>,
>>         prev = 0xc117f378 <rcu_preempt_state+56>
>>       },
>>       gp_tasks = 0x0,
>>       exp_tasks = 0x0,
>>       need_future_gp = {1, 0},
>>       fqslock = {
>>         raw_lock = {
>>           {
>>             slock = 0,
>>             tickets = {
>>               owner = 0,
>>               next = 0
>>             }
>>           }
>>         },
>>         magic = 3735899821,
>>         owner_cpu = 4294967295,
>>         owner = 0xffffffff
>>       }
>>     }},
>>   level = {0xc117f340 <rcu_preempt_state>},
>>   levelcnt = {1, 0, 0, 0, 0},
>>   levelspread = "\b",
>>   rda = 0xc115e6b0 <rcu_preempt_data>,
>>   call = 0xc01975ac <call_rcu>,
>>   fqs_state = 0 '\000',
>>   boost = 0 '\000',
>>   gpnum = 5877,
>>   completed = 5877,
>>   gp_kthread = 0xf0c9e600,
>>   gp_wq = {
>>     lock = {
>>       {
>>         rlock = {
>>           raw_lock = {
>>             {
>>               slock = 2160230594,
>>               tickets = {
>>                 owner = 32962,
>>                 next = 32962
>>               }
>>             }
>>           },
>>           magic = 3735899821,
>>           owner_cpu = 4294967295,
>>           owner = 0xffffffff
>>         }
>>       }
>>     },
>>     task_list = {
>>       next = 0xf0cd1f20,
>>       prev = 0xf0cd1f20
>>     }
>>   },
>>   gp_flags = 1,
>>   orphan_lock = {
>>     raw_lock = {
>>       {
>>         slock = 327685,
>>         tickets = {
>>           owner = 5,
>>           next = 5
>>         }
>>       }
>>     },
>>     magic = 3735899821,
>>     owner_cpu = 4294967295,
>>     owner = 0xffffffff
>>   },
>>   orphan_nxtlist = 0x0,
>>   orphan_nxttail = 0xc117f490 <rcu_preempt_state+336>,
>>   orphan_donelist = 0x0,
>>   orphan_donetail = 0xc117f498 <rcu_preempt_state+344>,
>>   qlen_lazy = 0,
>>   qlen = 0,
>>   onoff_mutex = {
>>     count = {
>>       counter = 1
>>     },
>>     wait_lock = {
>>       {
>>         rlock = {
>>           raw_lock = {
>>             {
>>               slock = 811479134,
>>               tickets = {
>>                 owner = 12382,
>>                 next = 12382
>>               }
>>             }
>>           },
>>           magic = 3735899821,
>>           owner_cpu = 4294967295,
>>           owner = 0xffffffff
>>         }
>>       }
>>     },
>>     wait_list = {
>>       next = 0xc117f4bc <rcu_preempt_state+380>,
>>       prev = 0xc117f4bc <rcu_preempt_state+380>
>>     },
>>     owner = 0x0,
>>     name = 0x0,
>>     magic = 0xc117f4a8 <rcu_preempt_state+360>
>>   },
>>   barrier_mutex = {
>>     count = {
>>       counter = 1
>>     },
>>     wait_lock = {
>>       {
>>         rlock = {
>>           raw_lock = {
>>             {
>>               slock = 0,
>>               tickets = {
>>                 owner = 0,
>>                 next = 0
>>               }
>>             }
>>           },
>>           magic = 3735899821,
>>           owner_cpu = 4294967295,
>>           owner = 0xffffffff
>>         }
>>       }
>>     },
>>     wait_list = {
>>       next = 0xc117f4e4 <rcu_preempt_state+420>,
>>       prev = 0xc117f4e4 <rcu_preempt_state+420>
>>     },
>>     owner = 0x0,
>>     name = 0x0,
>>     magic = 0xc117f4d0 <rcu_preempt_state+400>
>>   },
>>   barrier_cpu_count = {
>>     counter = 0
>>   },
>>   barrier_completion = {
>>     done = 0,
>>     wait = {
>>       lock = {
>>         {
>>           rlock = {
>>             raw_lock = {
>>               {
>>                 slock = 0,
>>                 tickets = {
>>                   owner = 0,
>>                   next = 0
>>                 }
>>               }
>>             },
>>             magic = 0,
>>             owner_cpu = 0,
>>             owner = 0x0
>>           }
>>         }
>>       },
>>       task_list = {
>>         next = 0x0,
>>         prev = 0x0
>>       }
>>     }
>>   },
>>   n_barrier_done = 0,
>>   expedited_start = {
>>     counter = 0
>>   },
>>   expedited_done = {
>>     counter = 0
>>   },
>>   expedited_wrap = {
>>     counter = 0
>>   },
>>   expedited_tryfail = {
>>     counter = 0
>>   },
>>   expedited_workdone1 = {
>>     counter = 0
>>   },
>>   expedited_workdone2 = {
>>     counter = 0
>>   },
>>   expedited_normal = {
>>     counter = 0
>>   },
>>   expedited_stoppedcpus = {
>>     counter = 0
>>   },
>>   expedited_done_tries = {
>>     counter = 0
>>   },
>>   expedited_done_lost = {
>>     counter = 0
>>   },
>>   expedited_done_exit = {
>>     counter = 0
>>   },
>>   jiffies_force_qs = 4294963917,
>>   n_force_qs = 4028,
>>   n_force_qs_lh = 0,
>>   n_force_qs_ngp = 0,
>>   gp_start = 4294963911,
>>   jiffies_stall = 4294966011,
>>   gp_max = 17,
>>   name = 0xc0d833ab "rcu_preempt",
>>   abbr = 112 'p',
>>   flavors = {
>>     next = 0xc117f2ec <rcu_bh_state+556>,
>>     prev = 0xc117f300 <rcu_struct_flavors>
>>   },
>>   wakeup_work = {
>>     flags = 3,
>>     llnode = {
>>       next = 0x0
>>     },
>>     func = 0xc0195aa8 <rsp_wakeup>
>>   }
>>
>>
>

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [RCU] kernel hangs in wait_rcu_gp during suspend path
  2014-12-19 18:55           ` Arun KS
@ 2014-12-20  0:25             ` Paul E. McKenney
  0 siblings, 0 replies; 10+ messages in thread
From: Paul E. McKenney @ 2014-12-20  0:25 UTC (permalink / raw)
  To: Arun KS; +Cc: linux-kernel, josh, rostedt, Mathieu Desnoyers, laijs, Arun KS

On Sat, Dec 20, 2014 at 12:25:57AM +0530, Arun KS wrote:
> Hi Paul,
> 
> On Fri, Dec 19, 2014 at 1:35 AM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> > On Thu, Dec 18, 2014 at 09:52:28PM +0530, Arun KS wrote:
> >> Hi Paul,
> >>
> >> On Thu, Dec 18, 2014 at 12:54 AM, Paul E. McKenney
> >> <paulmck@linux.vnet.ibm.com> wrote:
> >> > On Tue, Dec 16, 2014 at 11:00:20PM +0530, Arun KS wrote:
> >> >> Hello,
> >> >>
> >> >> Adding some more info.
> >> >>
> >> >> Below is the rcu_data data structure corresponding to cpu4.
> >> >
> >> > This shows that RCU is idle.  What was the state of the system at the
> >> > time you collected this data?
> >>
> >> The system initiated a suspend sequence and is currently in
> >> disable_nonboot_cpus(). It has hotplugged out cpus 0, 1 and 2
> >> successfully, and even succeeded in unplugging cpu3.
> >> But while calling the CPU_POST_DEAD notifier for cpu3, another driver
> >> tried to unregister an atomic notifier, which eventually calls
> >> synchronize_rcu() and hangs the suspend task.
> >>
> >> bt as follows,
> >> PID: 202    TASK: edcd2a00  CPU: 4   COMMAND: "kworker/u16:4"
> >>  #0 [<c0a1f8c0>] (__schedule) from [<c0a1d054>]
> >>  #1 [<c0a1d054>] (schedule_timeout) from [<c0a1f018>]
> >>  #2 [<c0a1f018>] (wait_for_common) from [<c013c570>]
> >>  #3 [<c013c570>] (wait_rcu_gp) from [<c014407c>]
> >>  #4 [<c014407c>] (atomic_notifier_chain_unregister) from [<c06be62c>]
> >>  #5 [<c06be62c>] (cpufreq_interactive_disable_sched_input) from [<c06bee1c>]
> >>  #6 [<c06bee1c>] (cpufreq_governor_interactive) from [<c06b7724>]
> >>  #7 [<c06b7724>] (__cpufreq_governor) from [<c06b9f74>]
> >>  #8 [<c06b9f74>] (__cpufreq_remove_dev_finish) from [<c06ba3a4>]
> >>  #9 [<c06ba3a4>] (cpufreq_cpu_callback) from [<c0a22674>]
> >> #10 [<c0a22674>] (notifier_call_chain) from [<c012284c>]
> >> #11 [<c012284c>] (__cpu_notify) from [<c01229dc>]
> >> #12 [<c01229dc>] (cpu_notify_nofail) from [<c0a0dd1c>]
> >> #13 [<c0a0dd1c>] (_cpu_down) from [<c0122b48>]
> >> #14 [<c0122b48>] (disable_nonboot_cpus) from [<c0168cd8>]
> >> #15 [<c0168cd8>] (suspend_devices_and_enter) from [<c0169018>]
> >> #16 [<c0169018>] (pm_suspend) from [<c01691e0>]
> >> #17 [<c01691e0>] (try_to_suspend) from [<c01396f0>]
> >> #18 [<c01396f0>] (process_one_work) from [<c0139db0>]
> >> #19 [<c0139db0>] (worker_thread) from [<c013efa4>]
> >> #20 [<c013efa4>] (kthread) from [<c01061f8>]
> >>
> >> But the other cores, 4-7, are active. I can see them going into the
> >> idle task and coming out of idle because of interrupts, scheduling
> >> kworkers, etc.
> >> So when I took the data, all the online cores(4-7) were in idle as
> >> shown below from runq data structures.
> >>
> >> ------start--------------
> >> crash> runq
> >> CPU 0 [OFFLINE]
> >>
> >> CPU 1 [OFFLINE]
> >>
> >> CPU 2 [OFFLINE]
> >>
> >> CPU 3 [OFFLINE]
> >>
> >> CPU 4 RUNQUEUE: c5439040
> >>   CURRENT: PID: 0      TASK: f0c9d400  COMMAND: "swapper/4"
> >>   RT PRIO_ARRAY: c5439130
> >>      [no tasks queued]
> >>   CFS RB_ROOT: c54390c0
> >>      [no tasks queued]
> >>
> >> CPU 5 RUNQUEUE: c5447040
> >>   CURRENT: PID: 0      TASK: f0c9aa00  COMMAND: "swapper/5"
> >>   RT PRIO_ARRAY: c5447130
> >>      [no tasks queued]
> >>   CFS RB_ROOT: c54470c0
> >>      [no tasks queued]
> >>
> >> CPU 6 RUNQUEUE: c5455040
> >>   CURRENT: PID: 0      TASK: f0c9ce00  COMMAND: "swapper/6"
> >>   RT PRIO_ARRAY: c5455130
> >>      [no tasks queued]
> >>   CFS RB_ROOT: c54550c0
> >>      [no tasks queued]
> >>
> >> CPU 7 RUNQUEUE: c5463040
> >>   CURRENT: PID: 0      TASK: f0c9b000  COMMAND: "swapper/7"
> >>   RT PRIO_ARRAY: c5463130
> >>      [no tasks queued]
> >>   CFS RB_ROOT: c54630c0
> >>      [no tasks queued]
> >> ------end--------------
> >>
> >> But one strange thing I can see is that rcu_read_lock_nesting for the
> >> idle tasks running on cpu 5 and cpu 6 is set to 1.
> >>
> >> PID: 0      TASK: f0c9d400  CPU: 4   COMMAND: "swapper/4"
> >>   rcu_read_lock_nesting = 0,
> >>
> >> PID: 0      TASK: f0c9aa00  CPU: 5   COMMAND: "swapper/5"
> >>   rcu_read_lock_nesting = 1,
> >>
> >> PID: 0      TASK: f0c9ce00  CPU: 6   COMMAND: "swapper/6"
> >>   rcu_read_lock_nesting = 1,
> >>
> >> PID: 0      TASK: f0c9b000  CPU: 7   COMMAND: "swapper/7"
> >>   rcu_read_lock_nesting = 0,
> >>
> >> Does this mean that the current grace period (the one the suspend
> >> thread is waiting on) is getting extended indefinitely?
> >
> > Indeed it does, good catch!  Looks like someone entered an RCU read-side
> > critical section, then forgot to exit it, which would prevent grace
> > periods from ever completing.  CONFIG_PROVE_RCU=y might be
> > helpful in tracking this down.
> >
> >> Also attaching the per_cpu rcu_data for online and offline cores.
> >
> > But these still look like there is no grace period in progress.
> >
> > Still, it would be good to try CONFIG_PROVE_RCU=y and see what it
> > shows you.
> >
> > Also, I am not seeing similar complaints about 3.10, so it is quite
> > possible that a recent change in the ARM-specific idle-loop code is
> > doing this to you.  It might be well worth looking through recent
> > changes in this area, particularly if some older version works well
> > for you.
> 
> Enabling CONFIG_PROVE_RCU also didn't help.
> 
> But we figured out the problem. Thanks for your help. We need your
> suggestion on the fix.
> 
> If we dump the irq_work_list,
> 
> crash> irq_work_list
> PER-CPU DATA TYPE:
>   struct llist_head irq_work_list;
> PER-CPU ADDRESSES:
>   [0]: c53ff90c
>   [1]: c540d90c
>   [2]: c541b90c
>   [3]: c542990c
>   [4]: c543790c
>   [5]: c544590c
>   [6]: c545390c
>   [7]: c546190c
> crash>
> crash> list irq_work.llnode -s irq_work.func -h c117f0b4
> c117f0b4
>   func = 0xc0195aa8 <rsp_wakeup>
> c117f574
>   func = 0xc0195aa8 <rsp_wakeup>
> crash>
> 
> rsp_wakeup is pending in cpu1's irq_work_list, and cpu1 has already
> been hot-plugged out.
> All later irq_work_queue() calls return early because the work is
> already marked pending.
> 
> When the issue happens, we noticed that a hotplug occurs during the
> early stages of boot (due to thermal), even before irq_work registers
> a cpu_notifier callback. Hence __irq_work_run() is not run as part of
> the CPU_DYING notifier.
> https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/tree/kernel/irq_work.c?id=refs/tags/v3.10.63#n198
> 
> and the rsp_wakeup work has been pending ever since.
> 
> In the first approach, we changed
> device_initcall(irq_work_init_cpu_notifier) to early_initcall() and
> the issue goes away, because this makes sure that the cpu notifier is
> registered before any hotplug happens.
> diff --git a/kernel/irq_work.c b/kernel/irq_work.c
> index 55fcce6..5e58767 100644
> --- a/kernel/irq_work.c
> +++ b/kernel/irq_work.c
> @@ -198,6 +198,6 @@ static __init int irq_work_init_cpu_notifier(void)
>         register_cpu_notifier(&cpu_notify);
>         return 0;
>  }
> -device_initcall(irq_work_init_cpu_notifier);
> +early_initcall(irq_work_init_cpu_notifier);

I prefer this approach.  Another alternative is to keep the device_initcall(),
but to do something more or less like:

	get_online_cpus();
	for_each_possible_cpu(cpu)
		if (cpu_online(cpu))
			your_notifier(&cpu_notify, CPU_ONLINE, cpu);
		else
			your_notifier(&cpu_notify, CPU_DEAD, cpu);
	put_online_cpus();

The details will depend on how your notifier is structured.

If feasible, moving to early_initcall() is simpler.  You could of course
move some of the code to an early_initcall() while leaving the rest at
device_initcall() time.
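
As an illustration only (not tested, and with my_cpu_notify/my_cpu_nb
as made-up names standing in for whatever notifier the subsystem ends
up using), that replay could look like:

static __init int my_subsys_init(void)
{
	int cpu;

	register_cpu_notifier(&my_cpu_nb);

	/* Replay hotplug events that fired before we registered. */
	get_online_cpus();
	for_each_possible_cpu(cpu) {
		void *hcpu = (void *)(long)cpu;

		if (cpu_online(cpu))
			my_cpu_notify(&my_cpu_nb, CPU_ONLINE, hcpu);
		else
			my_cpu_notify(&my_cpu_nb, CPU_DEAD, hcpu);
	}
	put_online_cpus();
	return 0;
}
device_initcall(my_subsys_init);

One caveat for irq_work specifically: its CPU_DYING handler relies on
running on the dying cpu, so a replayed CPU_DEAD would instead have to
flush or splice the dead cpu's leftover list from elsewhere.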

>  #endif /* CONFIG_HOTPLUG_CPU */
> 
> 
> Another approach is to add synchronize_rcu() in the cpu_down path.
> This way we make sure that hotplug waits until the current grace
> period completes. This also fixes the problem.
> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index c56b958..00bdd90 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
> @@ -311,6 +311,11 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
>                                 __func__, cpu);
>                 goto out_release;
>         }
> +#ifdef CONFIG_PREEMPT
> +       synchronize_sched();
> +#endif
> +       synchronize_rcu();
> +

This would seriously slow down CPU hotplug for everyone, which is not
warranted just to fix a problem on a single architecture.

So the earlier move to early_initcall() is much better.

							Thanx, Paul

>         smpboot_park_threads(cpu);
> 
>         err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu));
> 
> 
> 
> Can you please suggest the right approach?
> 
> Thanks,
> Arun
> 
> 
> 
> 
> 
> >
> >                                                         Thanx, Paul
> >
> >> Thanks,
> >> Arun
> >>
> >> >
> >> >                                                         Thanx, Paul
> >> >
> >> >> struct rcu_data {
> >> >>   completed = 5877,
> >> >>   gpnum = 5877,
> >> >>   passed_quiesce = true,
> >> >>   qs_pending = false,
> >> >>   beenonline = true,
> >> >>   preemptible = true,
> >> >>   mynode = 0xc117f340 <rcu_preempt_state>,
> >> >>   grpmask = 16,
> >> >>   nxtlist = 0xedaaec00,
> >> >>   nxttail = {0xc54366c4, 0xe84d350c, 0xe84d350c, 0xe84d350c},
> >> >>   nxtcompleted = {4294967035, 5878, 5878, 5878},
> >> >>   qlen_lazy = 105,
> >> >>   qlen = 415,
> >> >>   qlen_last_fqs_check = 0,
> >> >>   n_cbs_invoked = 86323,
> >> >>   n_nocbs_invoked = 0,
> >> >>   n_cbs_orphaned = 0,
> >> >>   n_cbs_adopted = 139,
> >> >>   n_force_qs_snap = 0,
> >> >>   blimit = 10,
> >> >>   dynticks = 0xc5436758,
> >> >>   dynticks_snap = 7582140,
> >> >>   dynticks_fqs = 41,
> >> >>   offline_fqs = 0,
> >> >>   n_rcu_pending = 59404,
> >> >>   n_rp_qs_pending = 5,
> >> >>   n_rp_report_qs = 4633,
> >> >>   n_rp_cb_ready = 32,
> >> >>   n_rp_cpu_needs_gp = 41088,
> >> >>   n_rp_gp_completed = 2844,
> >> >>   n_rp_gp_started = 1150,
> >> >>   n_rp_need_nothing = 9657,
> >> >>   barrier_head = {
> >> >>     next = 0x0,
> >> >>     func = 0x0
> >> >>   },
> >> >>   oom_head = {
> >> >>     next = 0x0,
> >> >>     func = 0x0
> >> >>   },
> >> >>   cpu = 4,
> >> >>   rsp = 0xc117f340 <rcu_preempt_state>
> >> >> }
> >> >>
> >> >>
> >> >>
> >> >> Also pasting complete rcu_preempt_state.
> >> >>
> >> >>
> >> >>
> >> >> rcu_preempt_state = $9 = {
> >> >>   node = {{
> >> >>       lock = {
> >> >>         raw_lock = {
> >> >>           {
> >> >>             slock = 3129850509,
> >> >>             tickets = {
> >> >>               owner = 47757,
> >> >>               next = 47757
> >> >>             }
> >> >>           }
> >> >>         },
> >> >>         magic = 3735899821,
> >> >>         owner_cpu = 4294967295,
> >> >>         owner = 0xffffffff
> >> >>       },
> >> >>       gpnum = 5877,
> >> >>       completed = 5877,
> >> >>       qsmask = 0,
> >> >>       expmask = 0,
> >> >>       qsmaskinit = 240,
> >> >>       grpmask = 0,
> >> >>       grplo = 0,
> >> >>       grphi = 7,
> >> >>       grpnum = 0 '\000',
> >> >>       level = 0 '\000',
> >> >>       parent = 0x0,
> >> >>       blkd_tasks = {
> >> >>         next = 0xc117f378 <rcu_preempt_state+56>,
> >> >>         prev = 0xc117f378 <rcu_preempt_state+56>
> >> >>       },
> >> >>       gp_tasks = 0x0,
> >> >>       exp_tasks = 0x0,
> >> >>       need_future_gp = {1, 0},
> >> >>       fqslock = {
> >> >>         raw_lock = {
> >> >>           {
> >> >>             slock = 0,
> >> >>             tickets = {
> >> >>               owner = 0,
> >> >>               next = 0
> >> >>             }
> >> >>           }
> >> >>         },
> >> >>         magic = 3735899821,
> >> >>         owner_cpu = 4294967295,
> >> >>         owner = 0xffffffff
> >> >>       }
> >> >>     }},
> >> >>   level = {0xc117f340 <rcu_preempt_state>},
> >> >>   levelcnt = {1, 0, 0, 0, 0},
> >> >>   levelspread = "\b",
> >> >>   rda = 0xc115e6b0 <rcu_preempt_data>,
> >> >>   call = 0xc01975ac <call_rcu>,
> >> >>   fqs_state = 0 '\000',
> >> >>   boost = 0 '\000',
> >> >>   gpnum = 5877,
> >> >>   completed = 5877,
> >> >>   gp_kthread = 0xf0c9e600,
> >> >>   gp_wq = {
> >> >>     lock = {
> >> >>       {
> >> >>         rlock = {
> >> >>           raw_lock = {
> >> >>             {
> >> >>               slock = 2160230594,
> >> >>               tickets = {
> >> >>                 owner = 32962,
> >> >>                 next = 32962
> >> >>               }
> >> >>             }
> >> >>           },
> >> >>           magic = 3735899821,
> >> >>           owner_cpu = 4294967295,
> >> >>           owner = 0xffffffff
> >> >>         }
> >> >>       }
> >> >>     },
> >> >>     task_list = {
> >> >>       next = 0xf0cd1f20,
> >> >>       prev = 0xf0cd1f20
> >> >>     }
> >> >>   },
> >> >>   gp_flags = 1,
> >> >>   orphan_lock = {
> >> >>     raw_lock = {
> >> >>       {
> >> >>         slock = 327685,
> >> >>         tickets = {
> >> >>           owner = 5,
> >> >>           next = 5
> >> >>         }
> >> >>       }
> >> >>     },
> >> >>     magic = 3735899821,
> >> >>     owner_cpu = 4294967295,
> >> >>     owner = 0xffffffff
> >> >>   },
> >> >>   orphan_nxtlist = 0x0,
> >> >>   orphan_nxttail = 0xc117f490 <rcu_preempt_state+336>,
> >> >>   orphan_donelist = 0x0,
> >> >>   orphan_donetail = 0xc117f498 <rcu_preempt_state+344>,
> >> >>   qlen_lazy = 0,
> >> >>   qlen = 0,
> >> >>   onoff_mutex = {
> >> >>     count = {
> >> >>       counter = 1
> >> >>     },
> >> >>     wait_lock = {
> >> >>       {
> >> >>         rlock = {
> >> >>           raw_lock = {
> >> >>             {
> >> >>               slock = 811479134,
> >> >>               tickets = {
> >> >>                 owner = 12382,
> >> >>                 next = 12382
> >> >>               }
> >> >>             }
> >> >>           },
> >> >>           magic = 3735899821,
> >> >>           owner_cpu = 4294967295,
> >> >>           owner = 0xffffffff
> >> >>         }
> >> >>       }
> >> >>     },
> >> >>     wait_list = {
> >> >>       next = 0xc117f4bc <rcu_preempt_state+380>,
> >> >>       prev = 0xc117f4bc <rcu_preempt_state+380>
> >> >>     },
> >> >>     owner = 0x0,
> >> >>     name = 0x0,
> >> >>     magic = 0xc117f4a8 <rcu_preempt_state+360>
> >> >>   },
> >> >>   barrier_mutex = {
> >> >>     count = {
> >> >>       counter = 1
> >> >>     },
> >> >>     wait_lock = {
> >> >>       {
> >> >>         rlock = {
> >> >>           raw_lock = {
> >> >>             {
> >> >>               slock = 0,
> >> >>               tickets = {
> >> >>                 owner = 0,
> >> >>                 next = 0
> >> >>               }
> >> >>             }
> >> >>           },
> >> >>           magic = 3735899821,
> >> >>           owner_cpu = 4294967295,
> >> >>           owner = 0xffffffff
> >> >>         }
> >> >>       }
> >> >>     },
> >> >>     wait_list = {
> >> >>       next = 0xc117f4e4 <rcu_preempt_state+420>,
> >> >>       prev = 0xc117f4e4 <rcu_preempt_state+420>
> >> >>     },
> >> >>     owner = 0x0,
> >> >>     name = 0x0,
> >> >>     magic = 0xc117f4d0 <rcu_preempt_state+400>
> >> >>   },
> >> >>   barrier_cpu_count = {
> >> >>     counter = 0
> >> >>   },
> >> >>   barrier_completion = {
> >> >>     done = 0,
> >> >>     wait = {
> >> >>       lock = {
> >> >>         {
> >> >>           rlock = {
> >> >>             raw_lock = {
> >> >>               {
> >> >>                 slock = 0,
> >> >>                 tickets = {
> >> >>                   owner = 0,
> >> >>                   next = 0
> >> >>                 }
> >> >>               }
> >> >>             },
> >> >>             magic = 0,
> >> >>             owner_cpu = 0,
> >> >>             owner = 0x0
> >> >>           }
> >> >>         }
> >> >>       },
> >> >>       task_list = {
> >> >>         next = 0x0,
> >> >>         prev = 0x0
> >> >>       }
> >> >>     }
> >> >>   },
> >> >>   n_barrier_done = 0,
> >> >>   expedited_start = {
> >> >>     counter = 0
> >> >>   },
> >> >>   expedited_done = {
> >> >>     counter = 0
> >> >>   },
> >> >>   expedited_wrap = {
> >> >>     counter = 0
> >> >>   },
> >> >>   expedited_tryfail = {
> >> >>     counter = 0
> >> >>   },
> >> >>   expedited_workdone1 = {
> >> >>     counter = 0
> >> >>   },
> >> >>   expedited_workdone2 = {
> >> >>     counter = 0
> >> >>   },
> >> >>   expedited_normal = {
> >> >>     counter = 0
> >> >>   },
> >> >>   expedited_stoppedcpus = {
> >> >>     counter = 0
> >> >>   },
> >> >>   expedited_done_tries = {
> >> >>     counter = 0
> >> >>   },
> >> >>   expedited_done_lost = {
> >> >>     counter = 0
> >> >>   },
> >> >>   expedited_done_exit = {
> >> >>     counter = 0
> >> >>   },
> >> >>   jiffies_force_qs = 4294963917,
> >> >>   n_force_qs = 4028,
> >> >>   n_force_qs_lh = 0,
> >> >>   n_force_qs_ngp = 0,
> >> >>   gp_start = 4294963911,
> >> >>   jiffies_stall = 4294966011,
> >> >>   gp_max = 17,
> >> >>   name = 0xc0d833ab "rcu_preempt",
> >> >>   abbr = 112 'p',
> >> >>   flavors = {
> >> >>     next = 0xc117f2ec <rcu_bh_state+556>,
> >> >>     prev = 0xc117f300 <rcu_struct_flavors>
> >> >>   },
> >> >>   wakeup_work = {
> >> >>     flags = 3,
> >> >>     llnode = {
> >> >>       next = 0x0
> >> >>     },
> >> >>     func = 0xc0195aa8 <rsp_wakeup>
> >> >>   }
> >> >> }
> >> >>
> >> >> Hope this helps.
> >> >>
> >> >> Thanks,
> >> >> Arun
> >> >>
> >> >>
> >> >> On Tue, Dec 16, 2014 at 11:59 AM, Arun KS <arunks.linux@gmail.com> wrote:
> >> >> > Hello,
> >> >> >
> >> >> > I dig little deeper to understand the situation.
> >> >> > All other cpus are in idle thread already.
> >> >> > As per my understanding, for the grace period to end, at-least one of
> >> >> > the following should happen on all online cpus,
> >> >> >
> >> >> > 1. a context switch.
> >> >> > 2. user space switch.
> >> >> > 3. switch to idle thread.
> >> >> >
> >> >> > In this situation, since all the other cores are already in idle, none
> >> >> > of the above are met on all online cores.
> >> >> > So grace period is getting extended and never finishes. Below is the
> >> >> > state of runqueue when the hang happens.
> >> >> > --------------start------------------------------------
> >> >> > crash> runq
> >> >> > CPU 0 [OFFLINE]
> >> >> >
> >> >> > CPU 1 [OFFLINE]
> >> >> >
> >> >> > CPU 2 [OFFLINE]
> >> >> >
> >> >> > CPU 3 [OFFLINE]
> >> >> >
> >> >> > CPU 4 RUNQUEUE: c3192e40
> >> >> >   CURRENT: PID: 0      TASK: f0874440  COMMAND: "swapper/4"
> >> >> >   RT PRIO_ARRAY: c3192f20
> >> >> >      [no tasks queued]
> >> >> >   CFS RB_ROOT: c3192eb0
> >> >> >      [no tasks queued]
> >> >> >
> >> >> > CPU 5 RUNQUEUE: c31a0e40
> >> >> >   CURRENT: PID: 0      TASK: f0874980  COMMAND: "swapper/5"
> >> >> >   RT PRIO_ARRAY: c31a0f20
> >> >> >      [no tasks queued]
> >> >> >   CFS RB_ROOT: c31a0eb0
> >> >> >      [no tasks queued]
> >> >> >
> >> >> > CPU 6 RUNQUEUE: c31aee40
> >> >> >   CURRENT: PID: 0      TASK: f0874ec0  COMMAND: "swapper/6"
> >> >> >   RT PRIO_ARRAY: c31aef20
> >> >> >      [no tasks queued]
> >> >> >   CFS RB_ROOT: c31aeeb0
> >> >> >      [no tasks queued]
> >> >> >
> >> >> > CPU 7 RUNQUEUE: c31bce40
> >> >> >   CURRENT: PID: 0      TASK: f0875400  COMMAND: "swapper/7"
> >> >> >   RT PRIO_ARRAY: c31bcf20
> >> >> >      [no tasks queued]
> >> >> >   CFS RB_ROOT: c31bceb0
> >> >> >      [no tasks queued]
> >> >> > --------------end------------------------------------
> >> >> >
> >> >> > If my understanding is correct the below patch should help, because it
> >> >> > will expedite grace periods during suspend,
> >> >> > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1d74d14e98a6be740a6f12456c7d9ad47be9c9c
> >> >> >
> >> >> > But I wonder why it was not taken to stable trees. Can we take it?
> >> >> > Appreciate your help.
> >> >> >
> >> >> > Thanks,
> >> >> > Arun
> >> >> >
> >> >> > On Mon, Dec 15, 2014 at 10:34 PM, Arun KS <arunks.linux@gmail.com> wrote:
> >> >> >> Hi,
> >> >> >>
> >> >> >> Here is the backtrace of the process hanging in wait_rcu_gp,
> >> >> >>
> >> >> >> PID: 247    TASK: e16e7380  CPU: 4   COMMAND: "kworker/u16:5"
> >> >> >>  #0 [<c09fead0>] (__schedule) from [<c09fcab0>]
> >> >> >>  #1 [<c09fcab0>] (schedule_timeout) from [<c09fe050>]
> >> >> >>  #2 [<c09fe050>] (wait_for_common) from [<c013b2b4>]
> >> >> >>  #3 [<c013b2b4>] (wait_rcu_gp) from [<c0142f50>]
> >> >> >>  #4 [<c0142f50>] (atomic_notifier_chain_unregister) from [<c06b2ab8>]
> >> >> >>  #5 [<c06b2ab8>] (cpufreq_interactive_disable_sched_input) from [<c06b32a8>]
> >> >> >>  #6 [<c06b32a8>] (cpufreq_governor_interactive) from [<c06abbf8>]
> >> >> >>  #7 [<c06abbf8>] (__cpufreq_governor) from [<c06ae474>]
> >> >> >>  #8 [<c06ae474>] (__cpufreq_remove_dev_finish) from [<c06ae8c0>]
> >> >> >>  #9 [<c06ae8c0>] (cpufreq_cpu_callback) from [<c0a0185c>]
> >> >> >> #10 [<c0a0185c>] (notifier_call_chain) from [<c0121888>]
> >> >> >> #11 [<c0121888>] (__cpu_notify) from [<c0121a04>]
> >> >> >> #12 [<c0121a04>] (cpu_notify_nofail) from [<c09ee7f0>]
> >> >> >> #13 [<c09ee7f0>] (_cpu_down) from [<c0121b70>]
> >> >> >> #14 [<c0121b70>] (disable_nonboot_cpus) from [<c016788c>]
> >> >> >> #15 [<c016788c>] (suspend_devices_and_enter) from [<c0167bcc>]
> >> >> >> #16 [<c0167bcc>] (pm_suspend) from [<c0167d94>]
> >> >> >> #17 [<c0167d94>] (try_to_suspend) from [<c0138460>]
> >> >> >> #18 [<c0138460>] (process_one_work) from [<c0138b18>]
> >> >> >> #19 [<c0138b18>] (worker_thread) from [<c013dc58>]
> >> >> >> #20 [<c013dc58>] (kthread) from [<c01061b8>]
> >> >> >>
> >> >> >> Will this patch help here?
> >> >> >> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1d74d14e98a6be740a6f12456c7d9ad47be9c9c
> >> >> >>
> >> >> >> I couldn't really understand why it got stuck in synchronize_rcu().
> >> >> >> Please give some pointers to debug this further.
> >> >> >>
> >> >> >> Below are the configs enable related to RCU.
> >> >> >>
> >> >> >> CONFIG_TREE_PREEMPT_RCU=y
> >> >> >> CONFIG_PREEMPT_RCU=y
> >> >> >> CONFIG_RCU_STALL_COMMON=y
> >> >> >> CONFIG_RCU_FANOUT=32
> >> >> >> CONFIG_RCU_FANOUT_LEAF=16
> >> >> >> CONFIG_RCU_FAST_NO_HZ=y
> >> >> >> CONFIG_RCU_CPU_STALL_TIMEOUT=21
> >> >> >> CONFIG_RCU_CPU_STALL_VERBOSE=y
> >> >> >>
> >> >> >> Kernel version is 3.10.28
> >> >> >> Architecture is ARM
> >> >> >>
> >> >> >> Thanks,
> >> >> >> Arun
> >> >>
> >> >
> >
> >> crash> struct rcu_data C54286B0
> >> struct rcu_data {
> >>   completed = 2833,
> >>   gpnum = 2833,
> >>   passed_quiesce = false,
> >>   qs_pending = false,
> >>   beenonline = true,
> >>   preemptible = true,
> >>   mynode = 0xc117f340 <rcu_preempt_state>,
> >>   grpmask = 8,
> >>   nxtlist = 0x0,
> >>   nxttail = {0xc54286c4, 0xc54286c4, 0xc54286c4, 0x0},
> >>   nxtcompleted = {0, 4294967136, 4294967137, 4294967137},
> >>   qlen_lazy = 0,
> >>   qlen = 0,
> >>   qlen_last_fqs_check = 0,
> >>   n_cbs_invoked = 609,
> >>   n_nocbs_invoked = 0,
> >>   n_cbs_orphaned = 13,
> >>   n_cbs_adopted = 0,
> >>   n_force_qs_snap = 1428,
> >>   blimit = 10,
> >>   dynticks = 0xc5428758,
> >>   dynticks_snap = 13206053,
> >>   dynticks_fqs = 16,
> >>   offline_fqs = 0,
> >>   n_rcu_pending = 181,
> >>   n_rp_qs_pending = 1,
> >>   n_rp_report_qs = 21,
> >>   n_rp_cb_ready = 0,
> >>   n_rp_cpu_needs_gp = 0,
> >>   n_rp_gp_completed = 22,
> >>   n_rp_gp_started = 8,
> >>   n_rp_need_nothing = 130,
> >>   barrier_head = {
> >>     next = 0x0,
> >>     func = 0x0
> >>   },
> >>   oom_head = {
> >>     next = 0x0,
> >>     func = 0x0
> >>   },
> >>   cpu = 3,
> >>   rsp = 0xc117f340 <rcu_preempt_state>
> >> }
> >>
> >> crash> struct rcu_data C541A6B0
> >> struct rcu_data {
> >>   completed = 5877,
> >>   gpnum = 5877,
> >>   passed_quiesce = true,
> >>   qs_pending = false,
> >>   beenonline = true,
> >>   preemptible = true,
> >>   mynode = 0xc117f340 <rcu_preempt_state>,
> >>   grpmask = 4,
> >>   nxtlist = 0x0,
> >>   nxttail = {0xc541a6c4, 0xc541a6c4, 0xc541a6c4, 0x0},
> >>   nxtcompleted = {0, 5877, 5878, 5878},
> >>   qlen_lazy = 0,
> >>   qlen = 0,
> >>   qlen_last_fqs_check = 0,
> >>   n_cbs_invoked = 61565,
> >>   n_nocbs_invoked = 0,
> >>   n_cbs_orphaned = 139,
> >>   n_cbs_adopted = 100,
> >>   n_force_qs_snap = 0,
> >>   blimit = 10,
> >>   dynticks = 0xc541a758,
> >>   dynticks_snap = 13901017,
> >>   dynticks_fqs = 75,
> >>   offline_fqs = 0,
> >>   n_rcu_pending = 16546,
> >>   n_rp_qs_pending = 3,
> >>   n_rp_report_qs = 4539,
> >>   n_rp_cb_ready = 69,
> >>   n_rp_cpu_needs_gp = 782,
> >>   n_rp_gp_completed = 4196,
> >>   n_rp_gp_started = 1739,
> >>   n_rp_need_nothing = 5221,
> >>   barrier_head = {
> >>     next = 0x0,
> >>     func = 0x0
> >>   },
> >>   oom_head = {
> >>     next = 0x0,
> >>     func = 0x0
> >>   },
> >>   cpu = 2,
> >>   rsp = 0xc117f340 <rcu_preempt_state>
> >> }
> >>
> >> crash> struct rcu_data C540C6B0
> >> struct rcu_data {
> >>   completed = 5877,
> >>   gpnum = 5877,
> >>   passed_quiesce = true,
> >>   qs_pending = false,
> >>   beenonline = true,
> >>   preemptible = true,
> >>   mynode = 0xc117f340 <rcu_preempt_state>,
> >>   grpmask = 2,
> >>   nxtlist = 0x0,
> >>   nxttail = {0xc540c6c4, 0xc540c6c4, 0xc540c6c4, 0x0},
> >>   nxtcompleted = {4294967030, 5878, 5878, 5878},
> >>   qlen_lazy = 0,
> >>   qlen = 0,
> >>   qlen_last_fqs_check = 0,
> >>   n_cbs_invoked = 74292,
> >>   n_nocbs_invoked = 0,
> >>   n_cbs_orphaned = 100,
> >>   n_cbs_adopted = 65,
> >>   n_force_qs_snap = 0,
> >>   blimit = 10,
> >>   dynticks = 0xc540c758,
> >>   dynticks_snap = 10753433,
> >>   dynticks_fqs = 69,
> >>   offline_fqs = 0,
> >>   n_rcu_pending = 18350,
> >>   n_rp_qs_pending = 6,
> >>   n_rp_report_qs = 5009,
> >>   n_rp_cb_ready = 50,
> >>   n_rp_cpu_needs_gp = 915,
> >>   n_rp_gp_completed = 4423,
> >>   n_rp_gp_started = 1826,
> >>   n_rp_need_nothing = 6127,
> >>   barrier_head = {
> >>     next = 0x0,
> >>     func = 0x0
> >>   },
> >>   oom_head = {
> >>     next = 0x0,
> >>     func = 0x0
> >>   },
> >>   cpu = 1,
> >>   rsp = 0xc117f340 <rcu_preempt_state>
> >> }
> >> crash> struct rcu_data C53FE6B0
> >> struct rcu_data {
> >>   completed = 5877,
> >>   gpnum = 5877,
> >>   passed_quiesce = true,
> >>   qs_pending = false,
> >>   beenonline = true,
> >>   preemptible = true,
> >>   mynode = 0xc117f340 <rcu_preempt_state>,
> >>   grpmask = 1,
> >>   nxtlist = 0x0,
> >>   nxttail = {0xc53fe6c4, 0xc53fe6c4, 0xc53fe6c4, 0x0},
> >>   nxtcompleted = {4294966997, 5875, 5876, 5876},
> >>   qlen_lazy = 0,
> >>   qlen = 0,
> >>   qlen_last_fqs_check = 0,
> >>   n_cbs_invoked = 123175,
> >>   n_nocbs_invoked = 0,
> >>   n_cbs_orphaned = 52,
> >>   n_cbs_adopted = 0,
> >>   n_force_qs_snap = 0,
> >>   blimit = 10,
> >>   dynticks = 0xc53fe758,
> >>   dynticks_snap = 6330446,
> >>   dynticks_fqs = 46,
> >>   offline_fqs = 0,
> >>   n_rcu_pending = 22529,
> >>   n_rp_qs_pending = 3,
> >>   n_rp_report_qs = 5290,
> >>   n_rp_cb_ready = 279,
> >>   n_rp_cpu_needs_gp = 740,
> >>   n_rp_gp_completed = 2707,
> >>   n_rp_gp_started = 1208,
> >>   n_rp_need_nothing = 12305,
> >>   barrier_head = {
> >>     next = 0x0,
> >>     func = 0x0
> >>   },
> >>   oom_head = {
> >>     next = 0x0,
> >>     func = 0x0
> >>   },
> >>   cpu = 0,
> >>   rsp = 0xc117f340 <rcu_preempt_state>
> >> }
> >>
> >
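
CPUs 0-3 above all show empty callback lists (nxtlist = 0x0, qlen = 0) and owe
no quiescent state (qs_pending = false, passed_quiesce = true), so none of
them is holding up a grace period. For reading the n_rp_* counters in these
dumps: they record which branch of the per-CPU rcu_pending() check fired. A
minimal sketch of the 3.10-era __rcu_pending() (paraphrased, not the verbatim
kernel/rcutree.c) maps each counter to its condition:

	static int __rcu_pending(struct rcu_state *rsp, struct rcu_data *rdp)
	{
		/* Quiescent state owed for the current GP but not yet passed? */
		if (rdp->qs_pending && !rdp->passed_quiesce) {
			rdp->n_rp_qs_pending++;
		} else if (rdp->qs_pending && rdp->passed_quiesce) {
			rdp->n_rp_report_qs++;	/* QS ready to report upward. */
			return 1;
		}
		/* Callbacks whose grace period has already completed? */
		if (cpu_has_callbacks_ready_to_invoke(rdp)) {
			rdp->n_rp_cb_ready++;
			return 1;
		}
		/* Queued callbacks that still need a grace period to start? */
		if (cpu_needs_another_gp(rsp, rdp)) {
			rdp->n_rp_cpu_needs_gp++;
			return 1;
		}
		/* A grace period ended or started since this CPU last looked? */
		if (ACCESS_ONCE(rsp->completed) != rdp->completed) {
			rdp->n_rp_gp_completed++;
			return 1;
		}
		if (ACCESS_ONCE(rsp->gpnum) != rdp->gpnum) {
			rdp->n_rp_gp_started++;
			return 1;
		}
		rdp->n_rp_need_nothing++;	/* Nothing for RCU to do here. */
		return 0;
	}
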
> >> crash> struct rcu_data c54366b0
> >> struct rcu_data {
> >>   completed = 5877,
> >>   gpnum = 5877,
> >>   passed_quiesce = true,
> >>   qs_pending = false,
> >>   beenonline = true,
> >>   preemptible = true,
> >>   mynode = 0xc117f340 <rcu_preempt_state>,
> >>   grpmask = 16,
> >>   nxtlist = 0xedaaec00,
> >>   nxttail = {0xc54366c4, 0xe84d350c, 0xe84d350c, 0xe84d350c},
> >>   nxtcompleted = {4294967035, 5878, 5878, 5878},
> >>   qlen_lazy = 105,
> >>   qlen = 415,
> >>   qlen_last_fqs_check = 0,
> >>   n_cbs_invoked = 86323,
> >>   n_nocbs_invoked = 0,
> >>   n_cbs_orphaned = 0,
> >>   n_cbs_adopted = 139,
> >>   n_force_qs_snap = 0,
> >>   blimit = 10,
> >>   dynticks = 0xc5436758,
> >>   dynticks_snap = 7582140,
> >>   dynticks_fqs = 41,
> >>   offline_fqs = 0,
> >>   n_rcu_pending = 59404,
> >>   n_rp_qs_pending = 5,
> >>   n_rp_report_qs = 4633,
> >>   n_rp_cb_ready = 32,
> >>   n_rp_cpu_needs_gp = 41088,
> >>   n_rp_gp_completed = 2844,
> >>   n_rp_gp_started = 1150,
> >>   n_rp_need_nothing = 9657,
> >>   barrier_head = {
> >>     next = 0x0,
> >>     func = 0x0
> >>   },
> >>   oom_head = {
> >>     next = 0x0,
> >>     func = 0x0
> >>   },
> >>   cpu = 4,
> >>   rsp = 0xc117f340 <rcu_preempt_state>
> >> }
> >>
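
For reading nxttail/nxtcompleted in these dumps: the callback list is
segmented by four tail pointers (3.10 naming, sketched from memory):

	/*
	 * nxttail[RCU_DONE_TAIL]        callbacks whose GP already completed
	 * nxttail[RCU_WAIT_TAIL]        callbacks waiting on the current GP
	 * nxttail[RCU_NEXT_READY_TAIL]  callbacks assigned to the next GP
	 * nxttail[RCU_NEXT_TAIL]        newly queued, not yet assigned
	 *
	 * nxtcompleted[i] is the rsp->completed value that must be reached
	 * before segment i may be invoked.  For CPU 4 above, nxttail[0]
	 * still points at &nxtlist (the done segment is empty) while the
	 * other three tails coincide and nxtcompleted = {..., 5878, 5878,
	 * 5878}: all 415 queued callbacks need grace period 5878 to end.
	 */
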
> >> crash> struct rcu_data c54446b0
> >> struct rcu_data {
> >>   completed = 5877,
> >>   gpnum = 5877,
> >>   passed_quiesce = true,
> >>   qs_pending = false,
> >>   beenonline = true,
> >>   preemptible = true,
> >>   mynode = 0xc117f340 <rcu_preempt_state>,
> >>   grpmask = 32,
> >>   nxtlist = 0xcf9e856c,
> >>   nxttail = {0xc54446c4, 0xcfb3050c, 0xcfb3050c, 0xcfb3050c},
> >>   nxtcompleted = {0, 5878, 5878, 5878},
> >>   qlen_lazy = 0,
> >>   qlen = 117,
> >>   qlen_last_fqs_check = 0,
> >>   n_cbs_invoked = 36951,
> >>   n_nocbs_invoked = 0,
> >>   n_cbs_orphaned = 0,
> >>   n_cbs_adopted = 0,
> >>   n_force_qs_snap = 1428,
> >>   blimit = 10,
> >>   dynticks = 0xc5444758,
> >>   dynticks_snap = 86034,
> >>   dynticks_fqs = 46,
> >>   offline_fqs = 0,
> >>   n_rcu_pending = 49104,
> >>   n_rp_qs_pending = 3,
> >>   n_rp_report_qs = 2360,
> >>   n_rp_cb_ready = 18,
> >>   n_rp_cpu_needs_gp = 40106,
> >>   n_rp_gp_completed = 1334,
> >>   n_rp_gp_started = 791,
> >>   n_rp_need_nothing = 4495,
> >>   barrier_head = {
> >>     next = 0x0,
> >>     func = 0x0
> >>   },
> >>   oom_head = {
> >>     next = 0x0,
> >>     func = 0x0
> >>   },
> >>   cpu = 5,
> >>   rsp = 0xc117f340 <rcu_preempt_state>
> >> }
> >>
> >> crash> struct rcu_data c54526b0
> >> struct rcu_data {
> >>   completed = 5877,
> >>   gpnum = 5877,
> >>   passed_quiesce = true,
> >>   qs_pending = false,
> >>   beenonline = true,
> >>   preemptible = true,
> >>   mynode = 0xc117f340 <rcu_preempt_state>,
> >>   grpmask = 64,
> >>   nxtlist = 0xe613d200,
> >>   nxttail = {0xc54526c4, 0xe6fc9d0c, 0xe6fc9d0c, 0xe6fc9d0c},
> >>   nxtcompleted = {0, 5878, 5878, 5878},
> >>   qlen_lazy = 2,
> >>   qlen = 35,
> >>   qlen_last_fqs_check = 0,
> >>   n_cbs_invoked = 34459,
> >>   n_nocbs_invoked = 0,
> >>   n_cbs_orphaned = 0,
> >>   n_cbs_adopted = 0,
> >>   n_force_qs_snap = 1428,
> >>   blimit = 10,
> >>   dynticks = 0xc5452758,
> >>   dynticks_snap = 116840,
> >>   dynticks_fqs = 47,
> >>   offline_fqs = 0,
> >>   n_rcu_pending = 48486,
> >>   n_rp_qs_pending = 3,
> >>   n_rp_report_qs = 2223,
> >>   n_rp_cb_ready = 24,
> >>   n_rp_cpu_needs_gp = 40101,
> >>   n_rp_gp_completed = 1226,
> >>   n_rp_gp_started = 789,
> >>   n_rp_need_nothing = 4123,
> >>   barrier_head = {
> >>     next = 0x0,
> >>     func = 0x0
> >>   },
> >>   oom_head = {
> >>     next = 0x0,
> >>     func = 0x0
> >>   },
> >>   cpu = 6,
> >>   rsp = 0xc117f340 <rcu_preempt_state>
> >> }
> >>
> >> crash> struct rcu_data c54606b0
> >> struct rcu_data {
> >>   completed = 5877,
> >>   gpnum = 5877,
> >>   passed_quiesce = true,
> >>   qs_pending = false,
> >>   beenonline = true,
> >>   preemptible = true,
> >>   mynode = 0xc117f340 <rcu_preempt_state>,
> >>   grpmask = 128,
> >>   nxtlist = 0xdec32a6c,
> >>   nxttail = {0xc54606c4, 0xe6fcf10c, 0xe6fcf10c, 0xe6fcf10c},
> >>   nxtcompleted = {0, 5878, 5878, 5878},
> >>   qlen_lazy = 1,
> >>   qlen = 30,
> >>   qlen_last_fqs_check = 0,
> >>   n_cbs_invoked = 31998,
> >>   n_nocbs_invoked = 0,
> >>   n_cbs_orphaned = 0,
> >>   n_cbs_adopted = 0,
> >>   n_force_qs_snap = 1428,
> >>   blimit = 10,
> >>   dynticks = 0xc5460758,
> >>   dynticks_snap = 57846,
> >>   dynticks_fqs = 54,
> >>   offline_fqs = 0,
> >>   n_rcu_pending = 47502,
> >>   n_rp_qs_pending = 2,
> >>   n_rp_report_qs = 2142,
> >>   n_rp_cb_ready = 37,
> >>   n_rp_cpu_needs_gp = 40049,
> >>   n_rp_gp_completed = 1223,
> >>   n_rp_gp_started = 661,
> >>   n_rp_need_nothing = 3390,
> >>   barrier_head = {
> >>     next = 0x0,
> >>     func = 0x0
> >>   },
> >>   oom_head = {
> >>     next = 0x0,
> >>     func = 0x0
> >>   },
> >>   cpu = 7,
> >>   rsp = 0xc117f340 <rcu_preempt_state>
> >> }
> >
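
The picture on CPUs 4-7 is different: each still holds callbacks
(qlen = 415, 117, 35 and 30) tagged to wait for grace period 5878
(nxtcompleted = {..., 5878, 5878, 5878}), while gpnum == completed == 5877
everywhere -- no grace period is in progress, and a new one is needed but
never starts. That is also why n_rp_cpu_needs_gp has climbed to ~40000 on
each of them. A sketch of the 3.10-era cpu_needs_another_gp() (paraphrased;
the no-CBs handling is omitted) shows the check that keeps firing:

	static int cpu_needs_another_gp(struct rcu_state *rsp,
					struct rcu_data *rdp)
	{
		int i;

		if (rcu_gp_in_progress(rsp))
			return 0;	/* A GP is already running. */
		if (!rdp->nxttail[RCU_NEXT_TAIL])
			return 0;	/* Offline (or no-CBs) CPU. */
		if (*rdp->nxttail[RCU_NEXT_READY_TAIL])
			return 1;	/* Newly registered callbacks. */
		for (i = RCU_WAIT_TAIL; i < RCU_NEXT_TAIL; i++)
			if (rdp->nxttail[i - 1] != rdp->nxttail[i] &&
			    ULONG_CMP_LT(ACCESS_ONCE(rsp->completed),
					 rdp->nxtcompleted[i]))
				return 1;	/* CBs need a future GP. */
		return 0;
	}

With rsp->completed == 5877 and segments tagged 5878, the loop returns 1 on
every pass, yet nothing ever gets the grace-period kthread to start GP 5878.
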
> >>
> >> rcu_preempt_state = $9 = {
> >>   node = {{
> >>       lock = {
> >>         raw_lock = {
> >>           {
> >>             slock = 3129850509,
> >>             tickets = {
> >>               owner = 47757,
> >>               next = 47757
> >>             }
> >>           }
> >>         },
> >>         magic = 3735899821,
> >>         owner_cpu = 4294967295,
> >>         owner = 0xffffffff
> >>       },
> >>       gpnum = 5877,
> >>       completed = 5877,
> >>       qsmask = 0,
> >>       expmask = 0,
> >>       qsmaskinit = 240,
> >>       grpmask = 0,
> >>       grplo = 0,
> >>       grphi = 7,
> >>       grpnum = 0 '\000',
> >>       level = 0 '\000',
> >>       parent = 0x0,
> >>       blkd_tasks = {
> >>         next = 0xc117f378 <rcu_preempt_state+56>,
> >>         prev = 0xc117f378 <rcu_preempt_state+56>
> >>       },
> >>       gp_tasks = 0x0,
> >>       exp_tasks = 0x0,
> >>       need_future_gp = {1, 0},
> >>       fqslock = {
> >>         raw_lock = {
> >>           {
> >>             slock = 0,
> >>             tickets = {
> >>               owner = 0,
> >>               next = 0
> >>             }
> >>           }
> >>         },
> >>         magic = 3735899821,
> >>         owner_cpu = 4294967295,
> >>         owner = 0xffffffff
> >>       }
> >>     }},
> >>   level = {0xc117f340 <rcu_preempt_state>},
> >>   levelcnt = {1, 0, 0, 0, 0},
> >>   levelspread = "\b",
> >>   rda = 0xc115e6b0 <rcu_preempt_data>,
> >>   call = 0xc01975ac <call_rcu>,
> >>   fqs_state = 0 '\000',
> >>   boost = 0 '\000',
> >>   gpnum = 5877,
> >>   completed = 5877,
> >>   gp_kthread = 0xf0c9e600,
> >>   gp_wq = {
> >>     lock = {
> >>       {
> >>         rlock = {
> >>           raw_lock = {
> >>             {
> >>               slock = 2160230594,
> >>               tickets = {
> >>                 owner = 32962,
> >>                 next = 32962
> >>               }
> >>             }
> >>           },
> >>           magic = 3735899821,
> >>           owner_cpu = 4294967295,
> >>           owner = 0xffffffff
> >>         }
> >>       }
> >>     },
> >>     task_list = {
> >>       next = 0xf0cd1f20,
> >>       prev = 0xf0cd1f20
> >>     }
> >>   },
> >>   gp_flags = 1,
> >>   orphan_lock = {
> >>     raw_lock = {
> >>       {
> >>         slock = 327685,
> >>         tickets = {
> >>           owner = 5,
> >>           next = 5
> >>         }
> >>       }
> >>     },
> >>     magic = 3735899821,
> >>     owner_cpu = 4294967295,
> >>     owner = 0xffffffff
> >>   },
> >>   orphan_nxtlist = 0x0,
> >>   orphan_nxttail = 0xc117f490 <rcu_preempt_state+336>,
> >>   orphan_donelist = 0x0,
> >>   orphan_donetail = 0xc117f498 <rcu_preempt_state+344>,
> >>   qlen_lazy = 0,
> >>   qlen = 0,
> >>   onoff_mutex = {
> >>     count = {
> >>       counter = 1
> >>     },
> >>     wait_lock = {
> >>       {
> >>         rlock = {
> >>           raw_lock = {
> >>             {
> >>               slock = 811479134,
> >>               tickets = {
> >>                 owner = 12382,
> >>                 next = 12382
> >>               }
> >>             }
> >>           },
> >>           magic = 3735899821,
> >>           owner_cpu = 4294967295,
> >>           owner = 0xffffffff
> >>         }
> >>       }
> >>     },
> >>     wait_list = {
> >>       next = 0xc117f4bc <rcu_preempt_state+380>,
> >>       prev = 0xc117f4bc <rcu_preempt_state+380>
> >>     },
> >>     owner = 0x0,
> >>     name = 0x0,
> >>     magic = 0xc117f4a8 <rcu_preempt_state+360>
> >>   },
> >>   barrier_mutex = {
> >>     count = {
> >>       counter = 1
> >>     },
> >>     wait_lock = {
> >>       {
> >>         rlock = {
> >>           raw_lock = {
> >>             {
> >>               slock = 0,
> >>               tickets = {
> >>                 owner = 0,
> >>                 next = 0
> >>               }
> >>             }
> >>           },
> >>           magic = 3735899821,
> >>           owner_cpu = 4294967295,
> >>           owner = 0xffffffff
> >>         }
> >>       }
> >>     },
> >>     wait_list = {
> >>       next = 0xc117f4e4 <rcu_preempt_state+420>,
> >>       prev = 0xc117f4e4 <rcu_preempt_state+420>
> >>     },
> >>     owner = 0x0,
> >>     name = 0x0,
> >>     magic = 0xc117f4d0 <rcu_preempt_state+400>
> >>   },
> >>   barrier_cpu_count = {
> >>     counter = 0
> >>   },
> >>   barrier_completion = {
> >>     done = 0,
> >>     wait = {
> >>       lock = {
> >>         {
> >>           rlock = {
> >>             raw_lock = {
> >>               {
> >>                 slock = 0,
> >>                 tickets = {
> >>                   owner = 0,
> >>                   next = 0
> >>                 }
> >>               }
> >>             },
> >>             magic = 0,
> >>             owner_cpu = 0,
> >>             owner = 0x0
> >>           }
> >>         }
> >>       },
> >>       task_list = {
> >>         next = 0x0,
> >>         prev = 0x0
> >>       }
> >>     }
> >>   },
> >>   n_barrier_done = 0,
> >>   expedited_start = {
> >>     counter = 0
> >>   },
> >>   expedited_done = {
> >>     counter = 0
> >>   },
> >>   expedited_wrap = {
> >>     counter = 0
> >>   },
> >>   expedited_tryfail = {
> >>     counter = 0
> >>   },
> >>   expedited_workdone1 = {
> >>     counter = 0
> >>   },
> >>   expedited_workdone2 = {
> >>     counter = 0
> >>   },
> >>   expedited_normal = {
> >>     counter = 0
> >>   },
> >>   expedited_stoppedcpus = {
> >>     counter = 0
> >>   },
> >>   expedited_done_tries = {
> >>     counter = 0
> >>   },
> >>   expedited_done_lost = {
> >>     counter = 0
> >>   },
> >>   expedited_done_exit = {
> >>     counter = 0
> >>   },
> >>   jiffies_force_qs = 4294963917,
> >>   n_force_qs = 4028,
> >>   n_force_qs_lh = 0,
> >>   n_force_qs_ngp = 0,
> >>   gp_start = 4294963911,
> >>   jiffies_stall = 4294966011,
> >>   gp_max = 17,
> >>   name = 0xc0d833ab "rcu_preempt",
> >>   abbr = 112 'p',
> >>   flavors = {
> >>     next = 0xc117f2ec <rcu_bh_state+556>,
> >>     prev = 0xc117f300 <rcu_struct_flavors>
> >>   },
> >>   wakeup_work = {
> >>     flags = 3,
> >>     llnode = {
> >>       next = 0x0
> >>     },
> >>     func = 0xc0195aa8 <rsp_wakeup>
> >>   }
> >> }
> >>
> >
> 
> 
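
Putting the rcu_preempt_state dump together: qsmaskinit = 240 (0xf0, i.e.
CPUs 4-7), qsmask = 0 and gpnum == completed == 5877, so grace period 5877
finished cleanly; gp_flags = 1 (RCU_GP_FLAG_INIT) and need_future_gp = {1, 0}
in the root node say that GP 5878 has been requested, and gp_wq has a waiter
queued -- yet the grace-period kthread at 0xf0c9e600 never started it, which
is exactly what leaves synchronize_rcu() stuck in wait_rcu_gp(). Note also
wakeup_work.flags = 3 (IRQ_WORK_PENDING | IRQ_WORK_BUSY, if I read the 3.10
irq_work flags right): the rsp_wakeup() irq_work looks claimed but not yet
executed, so the wakeup of the kthread may be sitting in a queue that no
longer gets serviced once the CPUs go down. A hypothetical follow-up in the
same crash session (exact commands depend on your crash version) would be to
look at the kthread directly:

crash> task -R state,flags 0xf0c9e600
crash> bt 0xf0c9e600

If it is blocked in rcu_gp_kthread()'s wait_event_interruptible() with
gp_flags already set, that confirms a lost wakeup of the grace-period kthread
rather than a missing quiescent state.
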


^ permalink raw reply	[flat|nested] 10+ messages in thread

