From: Mikulas Patocka <mpatocka@redhat.com>
To: jing xia <jing.xia.mail@gmail.com>
Cc: Michal Hocko <mhocko@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	agk@redhat.com, dm-devel@redhat.com,
	linux-kernel@vger.kernel.org
Subject: Re: dm bufio: Reduce dm_bufio_lock contention
Date: Mon, 3 Sep 2018 18:23:17 -0400 (EDT)
Message-ID: <alpine.LRH.2.02.1809031816120.27063@file01.intranet.prod.int.rdu2.redhat.com>
In-Reply-To: <CAN=25QNrENVFbb9UAAAJJqKsEeMY+qNs5cQA4EGwgB2dMYTZug@mail.gmail.com>



On Wed, 1 Aug 2018, jing xia wrote:

> We reproduced this issue again and found the root cause.
> dm_bufio_prefetch(), entered with dm_bufio_lock held, goes into direct
> reclaim and spends a long time in soft limit reclaim, because the memcg
> is hugely over its soft limit.
> Then every task that runs shrink_slab() waits on dm_bufio_lock.
> 
> Any suggestions for this? Thanks.

There's hardly any solution because Michal Hocko refuses to change 
__GFP_NORETRY behavior.

The patches 41c73a49df31151f4ff868f28fe4f129f113fa2c and 
d12067f428c037b4575aaeb2be00847fc214c24a could reduce contention on the 
dm-bufio lock. They don't fix the high CPU consumption inside the memory 
allocation itself, but the rest of the kernel should wait less on the 
bufio lock.
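
A minimal sketch of the idea behind those patches - illustrative names
only (example_client and example_free_buffers are made up; this is not
the exact dm-bufio code): have the shrinker give up on a contended lock
instead of sleeping on it.

#include <linux/kernel.h>
#include <linux/mutex.h>
#include <linux/shrinker.h>

struct example_client {
	struct mutex lock;
	struct shrinker shrinker;
	/* ... buffer lists, counters ... */
};

/*
 * Sketch only: a scan_objects callback that refuses to sleep on a
 * contended lock.  Returning SHRINK_STOP tells the reclaim core to
 * skip this shrinker instead of queueing behind the mutex.
 */
static unsigned long example_shrink_scan(struct shrinker *shrink,
					 struct shrink_control *sc)
{
	struct example_client *c =
		container_of(shrink, struct example_client, shrinker);
	unsigned long freed;

	if (!mutex_trylock(&c->lock))
		return SHRINK_STOP;	/* lock busy: skip, don't wait */

	freed = example_free_buffers(c, sc->nr_to_scan); /* hypothetical helper */
	mutex_unlock(&c->lock);
	return freed;
}

With trylock in the scan path, concurrent reclaimers stop serializing on
the lock; the time spent inside the allocation itself is unchanged, which
is the caveat above.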

Mikulas


> On Thu, Jun 14, 2018 at 3:31 PM, Michal Hocko <mhocko@kernel.org> wrote:
> > On Thu 14-06-18 15:18:58, jing xia wrote:
> > [...]
> >> PID: 22920  TASK: ffffffc0120f1a00  CPU: 1   COMMAND: "kworker/u8:2"
> >>  #0 [ffffffc0282af3d0] __switch_to at ffffff8008085e48
> >>  #1 [ffffffc0282af3f0] __schedule at ffffff8008850cc8
> >>  #2 [ffffffc0282af450] schedule at ffffff8008850f4c
> >>  #3 [ffffffc0282af470] schedule_timeout at ffffff8008853a0c
> >>  #4 [ffffffc0282af520] schedule_timeout_uninterruptible at ffffff8008853aa8
> >>  #5 [ffffffc0282af530] wait_iff_congested at ffffff8008181b40
> >
> > This trace doesn't provide the full picture unfortunately. Waiting in
> > the direct reclaim means that the underlying bdi is congested. The real
> > question is why it doesn't flush IO in time.
> >
> >>  #6 [ffffffc0282af5b0] shrink_inactive_list at ffffff8008177c80
> >>  #7 [ffffffc0282af680] shrink_lruvec at ffffff8008178510
> >>  #8 [ffffffc0282af790] mem_cgroup_shrink_node_zone at ffffff80081793bc
> >>  #9 [ffffffc0282af840] mem_cgroup_soft_limit_reclaim at ffffff80081b6040
> >> #10 [ffffffc0282af8f0] do_try_to_free_pages at ffffff8008178b6c
> >> #11 [ffffffc0282af990] try_to_free_pages at ffffff8008178f3c
> >> #12 [ffffffc0282afa30] __perform_reclaim at ffffff8008169130
> >> #13 [ffffffc0282afab0] __alloc_pages_nodemask at ffffff800816c9b8
> >> #14 [ffffffc0282afbd0] __get_free_pages at ffffff800816cd6c
> >> #15 [ffffffc0282afbe0] alloc_buffer at ffffff8008591a94
> >> #16 [ffffffc0282afc20] __bufio_new at ffffff8008592e94
> >> #17 [ffffffc0282afc70] dm_bufio_prefetch at ffffff8008593198
> >> #18 [ffffffc0282afd20] verity_prefetch_io at ffffff8008598384
> >> #19 [ffffffc0282afd70] process_one_work at ffffff80080b5b3c
> >> #20 [ffffffc0282afdc0] worker_thread at ffffff80080b64fc
> >> #21 [ffffffc0282afe20] kthread at ffffff80080bae34
> >>
> >
> > --
> > Michal Hocko
> > SUSE Labs
> 
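
The allocation side of the problem in the trace above can be sketched
the same way - again with made-up names (example_get_buffer and
example_alloc, reusing struct example_client from the sketch earlier),
not a quote of dm-bufio: make an opportunistic no-reclaim attempt under
the lock, and drop the lock before any allocation that may enter direct
reclaim.

#include <linux/gfp.h>
#include <linux/mutex.h>

/*
 * Sketch only: avoid sleeping in direct reclaim while a contended
 * lock is held.  example_alloc() is a hypothetical stand-in for
 * whatever allocates the buffer payload; c->lock is held on entry.
 */
static void *example_get_buffer(struct example_client *c)
{
	void *b;

	/*
	 * Fast path: GFP_NOWAIT lacks __GFP_DIRECT_RECLAIM, so this
	 * can never enter reclaim and a failure here is cheap.
	 */
	b = example_alloc(c, GFP_NOWAIT | __GFP_NOWARN);
	if (b)
		return b;

	/*
	 * Slow path: drop the lock first.  Even __GFP_NORETRY does
	 * one round of direct reclaim - the very thing the trace
	 * above is stuck in - so it must not run under the lock.
	 */
	mutex_unlock(&c->lock);
	b = example_alloc(c, GFP_NOIO | __GFP_NORETRY | __GFP_NOWARN);
	mutex_lock(&c->lock);

	return b;
}

The slow path can still spend a long time in
mem_cgroup_soft_limit_reclaim, but no longer while other tasks are
queued on the lock.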

Thread overview: 44+ messages
2018-06-12  8:03 [PATCH] dm bufio: Reduce dm_bufio_lock contention Jing Xia
2018-06-12 21:20 ` Mike Snitzer
2018-06-13 14:02   ` Mikulas Patocka
2018-06-14  7:18     ` jing xia
2018-06-14  7:31       ` Michal Hocko
2018-06-14 18:34         ` Mikulas Patocka
2018-06-15  7:32           ` Michal Hocko
2018-06-15 11:35             ` Mikulas Patocka
2018-06-15 11:55               ` Michal Hocko
2018-06-15 12:47                 ` Mikulas Patocka
2018-06-15 13:09                   ` Michal Hocko
2018-06-18 22:11                     ` Mikulas Patocka
2018-06-18 22:11                       ` Mikulas Patocka
2018-06-19 10:43                       ` Michal Hocko
2018-06-22  1:17                         ` Mikulas Patocka
2018-06-22  9:01                           ` Michal Hocko
2018-06-22  9:09                             ` Michal Hocko
2018-06-22 12:52                               ` Mikulas Patocka
2018-06-22 13:05                                 ` Michal Hocko
2018-06-22 18:57                                   ` Mikulas Patocka
2018-06-25  9:09                                     ` Michal Hocko
2018-06-25 13:53                                       ` Mikulas Patocka
2018-06-25 13:53                                         ` Mikulas Patocka
2018-06-25 14:14                                         ` Michal Hocko
2018-06-25 14:42                                           ` Mikulas Patocka
2018-06-25 14:42                                             ` Mikulas Patocka
2018-06-25 14:57                                             ` Michal Hocko
2018-06-29  2:43                                               ` Mikulas Patocka
2018-06-29  2:43                                                 ` Mikulas Patocka
2018-06-29  8:29                                                 ` Michal Hocko
2018-06-22 12:44                             ` Mikulas Patocka
2018-06-22 13:10                               ` Michal Hocko
2018-06-22 18:46                                 ` Mikulas Patocka
2018-08-01  2:48         ` jing xia
2018-08-01  7:03           ` Michal Hocko
2018-09-03 22:23           ` Mikulas Patocka [this message]
2018-09-04  7:08             ` Michal Hocko
2018-09-04 15:18               ` Mike Snitzer
2018-09-04 16:08                 ` Michal Hocko
2018-09-04 17:30                   ` Mikulas Patocka
2018-09-04 17:30                     ` Mikulas Patocka
2018-09-04 17:45                     ` Michal Hocko
2018-09-04 17:45                       ` Michal Hocko
2018-06-14  7:16   ` jing xia
