From: ebiederm@xmission.com (Eric W. Biederman)
To: Heinrich Schuchardt <xypron.glpk@gmx.de>
Cc: Michal Hocko <mhocko@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: threads-max observe limits
Date: Sun, 22 Sep 2019 16:40:26 -0500 [thread overview]
Message-ID: <875zlk2enp.fsf@x220.int.ebiederm.org> (raw)
In-Reply-To: <f1b89360-a70c-0a30-6a7b-9bafe74701ed@gmx.de> (Heinrich Schuchardt's message of "Sun, 22 Sep 2019 17:31:16 +0200")
Heinrich Schuchardt <xypron.glpk@gmx.de> writes:
> Did this patch when applied to the customer's kernel solve any problem?
>
> WebSphere MQ is a messaging application. If it hits the current limits
> of threads-max, there is a bug in the software or in the way that it has
> been set up at the customer. Instead of messing around with the kernel
> the application should be fixed.
While it is true that almost every workload will be buggy if it uses
1/8 of memory for just the kernel data structures of its threads, that
is not necessarily true of every application. I can easily imagine
cases up around 1/2 of memory where things could work reasonably.
Further, we can exhaust all of memory much more simply in a default
configuration by malloc'ing more memory than is physically present
and zeroing it all.
Heinrich, you were the one who messed with the kernel, by breaking a
reasonable kernel tunable. AKA you caused a regression. That violates
the no-regression rule.
As much as possible we fix regressions so software that used to work
continues to work. Removing footguns is not a reason to introduce a
regression.
I do agree that Michal's customer's problem sounds like it is something
else, but if the kernel did not have a regression we could focus on the
real problem instead of being sidetracked by the regression.
> With this patch you allow administrators to set values that will crash
> their system. And they will not even have a way to find out the limits
> which he should adhere to. So expect a lot of systems to be downed
> this way.
Nope. A system administrator merely setting a higher value won't crash
their system. Only actually using that many resources would crash the system.
Nor is a sysctl like this meant for discovering the physical limits of a
machine, a purpose the current value is vastly inappropriate for anyway,
as the physical limits of many machines are much higher than 1/8 of memory.
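For reference, the tunable in question is read and raised like this (the value in the write is purely illustrative, not a recommendation):

```shell
# Read the current limit; the default is computed at boot so that
# thread structures can use at most 1/8 of memory.
cat /proc/sys/kernel/threads-max

# Raise it past the memory-based default (needs root).
# sysctl -w kernel.threads-max=4194304
```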
Eric
Thread overview: 15+ messages
2019-09-17 10:03 threads-max observe limits Michal Hocko
2019-09-17 15:28 ` Heinrich Schuchardt
2019-09-17 15:38 ` Michal Hocko
2019-09-17 17:26 ` Eric W. Biederman
2019-09-18 7:15 ` Michal Hocko
2019-09-19 7:59 ` Michal Hocko
2019-09-19 19:38 ` Andrew Morton
2019-09-19 19:33 ` Eric W. Biederman
2019-09-22 6:58 ` Michal Hocko
2019-09-22 15:31 ` Heinrich Schuchardt
2019-09-22 21:40 ` Eric W. Biederman [this message]
2019-09-22 21:24 ` Eric W. Biederman
2019-09-23 8:08 ` Michal Hocko
2019-09-23 21:23 ` Eric W. Biederman
2019-09-24 8:48 ` Michal Hocko