From: Michal Hocko <mhocko@kernel.org>
To: NeilBrown <neilb@suse.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>,
LKML <linux-kernel@vger.kernel.org>,
linux-mm@kvack.org, dm-devel@redhat.com,
Mikulas Patocka <mpatocka@redhat.com>,
Mel Gorman <mgorman@suse.de>,
David Rientjes <rientjes@google.com>,
Ondrej Kozina <okozina@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [dm-devel] [RFC PATCH 2/2] mm, mempool: do not throttle PF_LESS_THROTTLE tasks
Date: Thu, 28 Jul 2016 09:17:12 +0200
Message-ID: <20160728071711.GB31860@dhcp22.suse.cz>
In-Reply-To: <87eg6e4vhc.fsf@notabene.neil.brown.name>
On Thu 28-07-16 07:33:19, NeilBrown wrote:
> On Thu, Jul 28 2016, Michal Hocko wrote:
>
> > On Wed 27-07-16 13:43:35, NeilBrown wrote:
> >> On Mon, Jul 25 2016, Michal Hocko wrote:
> >>
> >> > On Sat 23-07-16 10:12:24, NeilBrown wrote:
> > [...]
> >> So should there be a limit on dirty
> >> pages in the swap cache, just like there is for dirty pages in any
> >> filesystem (the max_dirty_ratio thing)? Maybe there is?
> >
> > There is no limit AFAIK. We rely on the reclaim being throttled
> > when necessary.
>
> Is that a bit indirect?
Yes it is. Dunno how much of a problem that is, though.
> It is hard to tell without a clear big-picture.
> Something to keep in mind anyway.
>
> >
> >> I think we'd end up with cleaner code if we removed the cute hacks. And
> >> we'd be able to use 6 more GFP flags!! (though I do wonder if we really
> >> need all those 26).
> >
> > Well, maybe we are able to remove those hacks; I definitely wouldn't
> > be opposed. But right now I am not even convinced that a
> > mempool-specific gfp flag is the right way to go.
>
> I'm not suggesting a mempool-specific gfp flag. I'm suggesting a
> transient-allocation gfp flag, which would be quite useful for mempool.
>
> Can you give more details on why using a gfp flag isn't your first choice
> for guiding what happens when the system is trying to get a free page
> :-?
If we get rid of throttle_vm_writeout then I guess it might turn out to
be unnecessary. There are other places which will still throttle, but I
believe those should be kept regardless of who is doing the allocation,
because they help keep the LRU scanning sane. I might be wrong here,
and bailing out of the reclaim rather than waiting might turn out
better for some users, but I would like to see whether the first
approach works reasonably well.
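
For reference, a simplified sketch of the loop in question (paraphrased
from mm/page-writeback.c around 4.7, not the verbatim source), with the
kind of PF_LESS_THROTTLE bail-out the RFC is proposing marked as such:

/*
 * Direct reclaimers end up here and are put to sleep whenever the
 * number of pages under writeback exceeds the dirty threshold (plus
 * some slack) - this is the indirect throttling discussed above.
 */
void throttle_vm_writeout(gfp_t gfp_mask)
{
	unsigned long background_thresh;
	unsigned long dirty_thresh;

	/*
	 * The kind of bail-out the RFC proposes: less throttled tasks
	 * (e.g. those doing the writeback themselves) skip the wait
	 * entirely rather than deadlock waiting for their own IO.
	 */
	if (current->flags & PF_LESS_THROTTLE)
		return;

	for ( ; ; ) {
		global_dirty_limits(&background_thresh, &dirty_thresh);

		/*
		 * Boost the allowable dirty threshold a bit for page
		 * allocators so they don't get DoS'ed by heavy writers.
		 */
		dirty_thresh += dirty_thresh / 10;

		if (global_page_state(NR_UNSTABLE_NFS) +
		    global_page_state(NR_WRITEBACK) <= dirty_thresh)
			break;
		congestion_wait(BLK_RW_ASYNC, HZ/10);

		/*
		 * The caller might hold locks which can prevent IO
		 * completion of some pages.
		 */
		if ((gfp_mask & (__GFP_FS|__GFP_IO)) != (__GFP_FS|__GFP_IO))
			break;
	}
}

Removing the function altogether, as suggested above, would of course
make any such per-task special casing moot.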
--
Michal Hocko
SUSE Labs
Thread overview: 54+ messages
2016-07-18 8:39 [RFC PATCH 0/2] mempool vs. page allocator interaction Michal Hocko
2016-07-18 8:41 ` [RFC PATCH 1/2] mempool: do not consume memory reserves from the reclaim path Michal Hocko
2016-07-18 8:41 ` [RFC PATCH 2/2] mm, mempool: do not throttle PF_LESS_THROTTLE tasks Michal Hocko
2016-07-19 21:50 ` Mikulas Patocka
2016-07-22 8:46 ` NeilBrown
2016-07-22 9:04 ` NeilBrown
2016-07-22 9:15 ` Michal Hocko
2016-07-23 0:12 ` NeilBrown
2016-07-25 8:32 ` Michal Hocko
2016-07-25 19:23 ` Michal Hocko
2016-07-26 7:07 ` Michal Hocko
2016-07-27 3:43 ` [dm-devel] " NeilBrown
2016-07-27 18:24 ` Michal Hocko
2016-07-27 21:33 ` NeilBrown
2016-07-28 7:17 ` Michal Hocko [this message]
2016-08-03 12:53 ` Mikulas Patocka
2016-08-03 14:34 ` Michal Hocko
2016-08-04 18:49 ` Mikulas Patocka
2016-08-12 12:32 ` Michal Hocko
2016-08-13 17:34 ` Mikulas Patocka
2016-08-14 10:34 ` Michal Hocko
2016-08-15 16:15 ` Mikulas Patocka
2016-11-23 21:11 ` Mikulas Patocka
2016-11-24 13:29 ` Michal Hocko
2016-11-24 17:10 ` Mikulas Patocka
2016-11-28 14:06 ` Michal Hocko
2016-07-25 21:52 ` Mikulas Patocka
2016-07-26 7:25 ` Michal Hocko
2016-07-27 4:02 ` [dm-devel] " NeilBrown
2016-07-27 14:28 ` Mikulas Patocka
2016-07-27 18:40 ` Michal Hocko
2016-08-03 13:59 ` Mikulas Patocka
2016-08-03 14:42 ` Michal Hocko
2016-08-04 18:46 ` Mikulas Patocka
2016-07-27 21:36 ` NeilBrown
2016-07-19 2:00 ` [RFC PATCH 1/2] mempool: do not consume memory reserves from the reclaim path David Rientjes
2016-07-19 7:49 ` Michal Hocko
2016-07-19 13:54 ` Johannes Weiner
2016-07-19 14:19 ` Michal Hocko
2016-07-19 22:01 ` Mikulas Patocka
2016-07-19 20:45 ` David Rientjes
2016-07-20 8:15 ` Michal Hocko
2016-07-20 21:06 ` David Rientjes
2016-07-21 8:52 ` Michal Hocko
2016-07-21 12:13 ` Johannes Weiner
2016-07-21 14:53 ` Michal Hocko
2016-07-21 15:26 ` Johannes Weiner
2016-07-22 1:41 ` NeilBrown
2016-07-22 6:37 ` Michal Hocko
2016-07-22 12:26 ` Vlastimil Babka
2016-07-22 19:44 ` Andrew Morton
2016-07-23 18:52 ` Vlastimil Babka
2016-07-19 21:50 ` Mikulas Patocka
2016-07-20 6:44 ` Michal Hocko