From: jing xia <jing.xia.mail@gmail.com>
To: Mikulas Patocka <mpatocka@redhat.com>
Cc: Mike Snitzer <snitzer@redhat.com>,
	agk@redhat.com, dm-devel@redhat.com,
	linux-kernel@vger.kernel.org
Subject: Re: dm bufio: Reduce dm_bufio_lock contention
Date: Thu, 14 Jun 2018 15:18:58 +0800	[thread overview]
Message-ID: <CAN=25QMQiJ7wvfvYvmZnEnrkeb-SA7_hPj+N2RnO8y-aVO8wOQ@mail.gmail.com> (raw)
In-Reply-To: <alpine.LRH.2.02.1806131001250.15845@file01.intranet.prod.int.rdu2.redhat.com>

Thanks for your comment.

On Wed, Jun 13, 2018 at 10:02 PM, Mikulas Patocka <mpatocka@redhat.com> wrote:
>
>
> On Tue, 12 Jun 2018, Mike Snitzer wrote:
>
>> On Tue, Jun 12 2018 at  4:03am -0400,
>> Jing Xia <jing.xia.mail@gmail.com> wrote:
>>
>> > Performance testing on Android reports that the phone sometimes hangs
>> > and shows a black screen for several minutes. The sysdump shows:
>> > 1. kswapd and other tasks who enter the direct-reclaim path are waiting
>> > on the dm_bufio_lock;
>>
>> Do you have an understanding of where they are waiting?  Is it in
>> dm_bufio_shrink_scan()?
>>
>> > 2. the task that holds the dm_bufio_lock is stalled waiting for I/O
>> > completions; the relevant stack trace is:
>> >
>> > PID: 22920  TASK: ffffffc0120f1a00  CPU: 1   COMMAND: "kworker/u8:2"
>> >  #0 [ffffffc0282af3d0] __switch_to at ffffff8008085e48
>> >  #1 [ffffffc0282af3f0] __schedule at ffffff8008850cc8
>> >  #2 [ffffffc0282af450] schedule at ffffff8008850f4c
>> >  #3 [ffffffc0282af470] schedule_timeout at ffffff8008853a0c
>> >  #4 [ffffffc0282af520] schedule_timeout_uninterruptible at ffffff8008853aa8
>> >  #5 [ffffffc0282af530] wait_iff_congested at ffffff8008181b40
>> >  #6 [ffffffc0282af5b0] shrink_inactive_list at ffffff8008177c80
>> >  #7 [ffffffc0282af680] shrink_lruvec at ffffff8008178510
>> >  #8 [ffffffc0282af790] mem_cgroup_shrink_node_zone at ffffff80081793bc
>> >  #9 [ffffffc0282af840] mem_cgroup_soft_limit_reclaim at ffffff80081b6040
>
> Please send the full stacktrace of this task.
>
> Then we can see why it is waiting here.
>
Please refer to:

PID: 22920  TASK: ffffffc0120f1a00  CPU: 1   COMMAND: "kworker/u8:2"
 #0 [ffffffc0282af3d0] __switch_to at ffffff8008085e48
 #1 [ffffffc0282af3f0] __schedule at ffffff8008850cc8
 #2 [ffffffc0282af450] schedule at ffffff8008850f4c
 #3 [ffffffc0282af470] schedule_timeout at ffffff8008853a0c
 #4 [ffffffc0282af520] schedule_timeout_uninterruptible at ffffff8008853aa8
 #5 [ffffffc0282af530] wait_iff_congested at ffffff8008181b40
 #6 [ffffffc0282af5b0] shrink_inactive_list at ffffff8008177c80
 #7 [ffffffc0282af680] shrink_lruvec at ffffff8008178510
 #8 [ffffffc0282af790] mem_cgroup_shrink_node_zone at ffffff80081793bc
 #9 [ffffffc0282af840] mem_cgroup_soft_limit_reclaim at ffffff80081b6040
#10 [ffffffc0282af8f0] do_try_to_free_pages at ffffff8008178b6c
#11 [ffffffc0282af990] try_to_free_pages at ffffff8008178f3c
#12 [ffffffc0282afa30] __perform_reclaim at ffffff8008169130
#13 [ffffffc0282afab0] __alloc_pages_nodemask at ffffff800816c9b8
#14 [ffffffc0282afbd0] __get_free_pages at ffffff800816cd6c
#15 [ffffffc0282afbe0] alloc_buffer at ffffff8008591a94
#16 [ffffffc0282afc20] __bufio_new at ffffff8008592e94
#17 [ffffffc0282afc70] dm_bufio_prefetch at ffffff8008593198
#18 [ffffffc0282afd20] verity_prefetch_io at ffffff8008598384
#19 [ffffffc0282afd70] process_one_work at ffffff80080b5b3c
#20 [ffffffc0282afdc0] worker_thread at ffffff80080b64fc
#21 [ffffffc0282afe20] kthread at ffffff80080bae34
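
Frame #15 shows alloc_buffer() performing a page allocation that enters direct
reclaim (and ultimately wait_iff_congested()) while dm_bufio_lock is held. As a
hedged sketch only, not the actual drivers/md/dm-bufio.c code, one way to keep
such an allocation from waiting on in-flight I/O during reclaim is a NOIO scope
using memalloc_noio_save()/memalloc_noio_restore() from linux/sched/mm.h (the
wrapper function name below is hypothetical):

```c
#include <linux/sched/mm.h>
#include <linux/slab.h>

/* Hypothetical helper: allocate buffer data without letting direct
 * reclaim wait for I/O, so a task holding dm_bufio_lock cannot stall
 * on I/O completions inside the allocator. */
static void *alloc_buffer_data_noio(size_t size, gfp_t gfp_mask)
{
	unsigned int noio_flags;
	void *ptr;

	/* Within this scope the page allocator implicitly drops
	 * __GFP_IO, so reclaim will not block on in-flight I/O. */
	noio_flags = memalloc_noio_save();
	ptr = kmalloc(size, gfp_mask);
	memalloc_noio_restore(noio_flags);

	return ptr;
}
```

Whether this is appropriate for dm-bufio is exactly what is under discussion in
this thread; the sketch only illustrates the scope API.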

> Mikulas


Thread overview: 44+ messages
2018-06-12  8:03 [PATCH] dm bufio: Reduce dm_bufio_lock contention Jing Xia
2018-06-12 21:20 ` Mike Snitzer
2018-06-13 14:02   ` Mikulas Patocka
2018-06-14  7:18     ` jing xia [this message]
2018-06-14  7:31       ` Michal Hocko
2018-06-14 18:34         ` Mikulas Patocka
2018-06-15  7:32           ` Michal Hocko
2018-06-15 11:35             ` Mikulas Patocka
2018-06-15 11:55               ` Michal Hocko
2018-06-15 12:47                 ` Mikulas Patocka
2018-06-15 13:09                   ` Michal Hocko
2018-06-18 22:11                     ` Mikulas Patocka
2018-06-19 10:43                       ` Michal Hocko
2018-06-22  1:17                         ` Mikulas Patocka
2018-06-22  9:01                           ` Michal Hocko
2018-06-22  9:09                             ` Michal Hocko
2018-06-22 12:52                               ` Mikulas Patocka
2018-06-22 13:05                                 ` Michal Hocko
2018-06-22 18:57                                   ` Mikulas Patocka
2018-06-25  9:09                                     ` Michal Hocko
2018-06-25 13:53                                       ` Mikulas Patocka
2018-06-25 14:14                                         ` Michal Hocko
2018-06-25 14:42                                           ` Mikulas Patocka
2018-06-25 14:57                                             ` Michal Hocko
2018-06-29  2:43                                               ` Mikulas Patocka
2018-06-29  8:29                                                 ` Michal Hocko
2018-06-22 12:44                             ` Mikulas Patocka
2018-06-22 13:10                               ` Michal Hocko
2018-06-22 18:46                                 ` Mikulas Patocka
2018-08-01  2:48         ` jing xia
2018-08-01  7:03           ` Michal Hocko
2018-09-03 22:23           ` Mikulas Patocka
2018-09-04  7:08             ` Michal Hocko
2018-09-04 15:18               ` Mike Snitzer
2018-09-04 16:08                 ` Michal Hocko
2018-09-04 17:30                   ` Mikulas Patocka
2018-09-04 17:45                     ` Michal Hocko
2018-06-14  7:16   ` jing xia
