From: Michal Hocko <mhocko@kernel.org>
To: Qian Cai <cai@lca.pw>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>,
Eric Dumazet <eric.dumazet@gmail.com>,
davem@davemloft.net, netdev@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, Petr Mladek <pmladek@suse.com>,
Sergey Senozhatsky <sergey.senozhatsky@gmail.com>,
Steven Rostedt <rostedt@goodmis.org>
Subject: Re: [PATCH] net/skbuff: silence warnings under memory pressure
Date: Wed, 4 Sep 2019 14:07:07 +0200
Message-ID: <20190904120707.GU3838@dhcp22.suse.cz>
In-Reply-To: <1567598357.5576.70.camel@lca.pw>

On Wed 04-09-19 07:59:17, Qian Cai wrote:
> On Wed, 2019-09-04 at 10:25 +0200, Michal Hocko wrote:
> > On Wed 04-09-19 16:00:42, Sergey Senozhatsky wrote:
> > > On (09/04/19 15:41), Sergey Senozhatsky wrote:
> > > > But things are different for dump_stack() + show_mem() + other
> > > > output, because now we ratelimit not a single printk() line but
> > > > hundreds of them. The ratelimit effectively becomes 10 * $$$ lines
> > > > per 5 seconds (IOW, we are now talking about thousands of lines).
> > >
> > > And on devices with slow serial consoles this can be somewhat close to
> > > "no ratelimit". *Suppose* that warn_alloc() adds 700 lines each time.
> > > Within 5 seconds we can call warn_alloc() 10 times, which will add 7000
> > > lines to the logbuf. If printk() can evict only 6000 lines in 5 seconds
> > > then we have a growing number of pending logbuf messages.
> >
> > Yes, ratelimiting is problematic when the ratelimited operation is
> > slow. I guess that is a well-known problem and we would need to rework
> > both the api and the implementation to make it work in those cases as
> > well. Essentially we need to make the ratelimit act as a gatekeeper to
> > an operation section - something like a critical section, except that
> > more than one execution can be tolerated, just not too many. So
> > effectively
> >
> > start_throttle(rate, number);
> > /* here goes your operation */
> > end_throttle();
> >
> > one operation is not considered done until the whole section ends.
> > Or something along those lines.
> >
> > In this particular case we can of course increase the rate limit
> > parameters, but I think that in the long term we need a better api.
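
To make the idea concrete, here is a minimal userspace sketch of such a
section-gating ratelimit (all names are made up for the example, this is
not an existing kernel api, and locking is omitted):

	#include <stdbool.h>
	#include <time.h>

	/*
	 * Sketch only: a ratelimit that gates a whole operation section
	 * rather than a single event.  A section does not count as done
	 * until end_throttle(), so a new interval cannot open while a
	 * slow section (e.g. a long dump to a serial console) is still
	 * in flight.
	 */
	struct throttle_state {
		time_t window_start;	/* start of the current interval */
		int interval;		/* interval length in seconds */
		int burst;		/* max sections per interval */
		int started;		/* sections started this interval */
		int in_flight;		/* started but not yet ended */
	};

	static bool start_throttle(struct throttle_state *ts)
	{
		time_t now = time(NULL);

		/* Open a new interval only once all sections have ended. */
		if (now - ts->window_start >= ts->interval && !ts->in_flight) {
			ts->window_start = now;
			ts->started = 0;
		}
		if (ts->started >= ts->burst)
			return false;	/* throttled, skip the operation */
		ts->started++;
		ts->in_flight++;
		return true;
	}

	static void end_throttle(struct throttle_state *ts)
	{
		ts->in_flight--;
	}

warn_alloc() would then wrap the whole dump_stack() + show_mem() block
in start_throttle()/end_throttle(), so the next caller is refused while
the previous dump is still draining to a slow console.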
>
> The problem is that when a system is under heavy memory pressure,
> everything becomes slower, so I don't know how to come up with a sane
> default for the rate limit parameters as a generic solution that would
> work for every machine out there. Sure, it is possible to set the limit
> low enough to work for the majority of systems, except that people may
> then complain they are missing important warnings, but using
> __GFP_NOWARN in this code would work for all systems. You could even
> argue there is a separate benefit: it would reduce the overall noise
> level from those build_skb() allocation failures, since the code has a
> fallback mechanism anyway.
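
For reference, the proposed __GFP_NOWARN change boils down to the
following pattern (a sketch of the idea only, not the actual
net/core/skbuff.c hunk; the helper name is made up):

	#include <linux/slab.h>

	/*
	 * Sketch only: __GFP_NOWARN suppresses the warn_alloc() dump
	 * when the atomic allocation fails, on the grounds that the
	 * caller has a fallback path anyway.
	 */
	static void *quiet_atomic_alloc(size_t size)
	{
		return kmalloc(size, GFP_ATOMIC | __GFP_NOWARN);
	}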
As Vlastimil already pointed out, __GFP_NOWARN would hide that reserves
might be configured too low.
--
Michal Hocko
SUSE Labs