* Resource limits
@ 2002-10-24 12:13 Frank Cornelis
  2002-10-24 16:46 ` Randolph Bentson
  0 siblings, 1 reply; 24+ messages in thread
From: Frank Cornelis @ 2002-10-24 12:13 UTC (permalink / raw)
  To: linux-kernel; +Cc: Frank.Cornelis

Hi,

Wouldn't it make sense to copy the resource limits somewhere in
kernel/fork.c:dup_task_struct as follows?

	int i;

	/* Give the child soft and hard limits equal to the
	 * parent's current soft limits. */
	for (i = 0; i < RLIM_NLIMITS; i++)
		tsk->rlim[i].rlim_cur =
			tsk->rlim[i].rlim_max =
				orig->rlim[i].rlim_cur;
This way a parent process would be able to temporarily drop some of its
limits in order to create a restricted child process, and restore its
own resource limits afterwards. Currently it is not possible to create
a child process with smaller resource limits than the parent without
the parent losing its (hard) max limits (as far as I know; correct me
if I'm wrong). I could very much use this to control core dumping of
child processes in a better way. Of course, I don't know to what extent
this would break things. POSIX? I couldn't find anything on it.
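
For illustration, here is a sketch of the usage this would enable,
assuming the dup_task_struct change above were applied; the limit and
the program exec'd are illustrative:

#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	struct rlimit saved, tmp;

	if (getrlimit(RLIMIT_CORE, &saved) < 0)
		return 1;

	tmp = saved;
	tmp.rlim_cur = 0;		/* drop the soft limit only */
	setrlimit(RLIMIT_CORE, &tmp);

	if (fork() == 0) {
		/* Child: with the proposed copy-on-fork semantics its
		 * hard limit would now be 0 too, so it could never
		 * raise the limit back up. */
		execl("/bin/true", "true", (char *)NULL);
		_exit(1);
	}

	setrlimit(RLIMIT_CORE, &saved);	/* the parent restores itself */
	wait(NULL);
	return 0;
}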

Please CC me.

Frank.



* Re: Resource limits
  2002-10-24 12:13 Resource limits Frank Cornelis
@ 2002-10-24 16:46 ` Randolph Bentson
  0 siblings, 0 replies; 24+ messages in thread
From: Randolph Bentson @ 2002-10-24 16:46 UTC (permalink / raw)
  To: Frank Cornelis; +Cc: linux-kernel, Frank.Cornelis

On Thu, Oct 24, 2002 at 02:13:01PM +0200, Frank Cornelis wrote:
> This way a parent process would be able to temporarily drop some
> of its limits in order to create a restricted child process, and
> restore its own resource limits afterwards. Currently it is not
> possible to create a child process with smaller resource limits
> than the parent without the parent losing its (hard) max limits
> (as far as I know; correct me if I'm wrong).

Hmm, this statement suggests the author misunderstands the conventional
Unix use of the separate fork/exec calls.  After the fork call, the
child process is still running code common to the parent, but typically
(by convention) a different leg of an if-then-else statement.  The code
in this leg can reduce resource limits before making an exec call to
start a new program.  The parent's limits are not affected.  There's no
need to change the kernel.
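
For concreteness, a minimal sketch of that convention; the program
started by exec and the limit values are illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	pid_t pid = fork();

	if (pid < 0) {
		perror("fork");
		exit(1);
	}
	if (pid == 0) {
		/* Child leg of the if-then-else: clamp the soft and
		 * hard core-file limits to zero, then start the new
		 * program. */
		struct rlimit rl = { .rlim_cur = 0, .rlim_max = 0 };

		if (setrlimit(RLIMIT_CORE, &rl) < 0) {
			perror("setrlimit");
			_exit(1);
		}
		execl("/bin/true", "true", (char *)NULL);
		perror("execl");
		_exit(1);
	}
	/* Parent leg: its own RLIMIT_CORE settings are untouched. */
	waitpid(pid, NULL, 0);
	return 0;
}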

-- 
Randolph Bentson
bentson@holmsjoen.com


* Re: Resource limits
  2005-09-27  5:08               ` Al Boldi
  2005-09-27 12:08                 ` Neil Horman
@ 2005-09-27 21:35                 ` Chandra Seetharaman
  1 sibling, 0 replies; 24+ messages in thread
From: Chandra Seetharaman @ 2005-09-27 21:35 UTC (permalink / raw)
  To: Al Boldi; +Cc: Neil Horman, Matthew Helsley, linux-kernel

On Tue, 2005-09-27 at 08:08 +0300, Al Boldi wrote:
<snip>
> Consider this dilemma:
> Runaway procs hit the limit.
> Try to kill some and you are denied due to the resource limit.
> Use some previously running app like top, hope it hasn't been killed by some
> OOM situation, try killing some procs and another one takes its place
> because of the runaway situation.
> Raise the limit, and it gets filled by the runaways.
> You are pretty much stuck.

CKRM can solve this problem nicely. You can define classes (for
example, a class attached to a user) and associate resources with a
class. Limits will be applied only to that class (user), failures will
be seen only by that class (user), and the rest of the system will be
free to operate without getting into the situation stated above.

> You may get around the problem by a user-space solution, but this will always 
> run the risks associated with user-space.
> 
> > > The issue here is a general lack of proper kernel support for resource
> > > limits.  The fork problem is just an example.
> >
> > That's not really true.  As Mr. Helsley pointed out, CKRM is available
> 
> Matthew Helsley wrote:
> > 	Have you looked at Class-Based Kernel Resource Management (CKRM)
> > (http://ckrm.sf.net) to see if it fits your needs? My initial thought is
> > that the CKRM numtasks controller may help limit forks in the way you
> > describe.
> 
> Thanks for the link!  CKRM is great!

Thank you!! :)
> 
> Is there a CKRM-lite version?  This would make it easier to include in
> the mainline: something that concentrates on the pressing issues, like
> lock-out prevention, and leaves all the management features as an option.

We are currently working on reducing the code size and complexity of
CKRM; the result will be a lot thinner and less complex than what was in
the -mm tree a while ago. The development is underway, and you can follow
the progress of the f-series on the ckrm-tech mailing list.

You are welcome to join the mailing list and provide feedback on how the
f-series shapes up.

Thanks,

chandra 
> Thanks!
> 
> --
> Al
> 
-- 

----------------------------------------------------------------------
    Chandra Seetharaman               | Be careful what you choose....
              - sekharan@us.ibm.com   |      .......you may get it.
----------------------------------------------------------------------




* Re: Resource limits
  2005-09-27 15:50                       ` Al Boldi
@ 2005-09-27 17:25                         ` Neil Horman
  0 siblings, 0 replies; 24+ messages in thread
From: Neil Horman @ 2005-09-27 17:25 UTC (permalink / raw)
  To: Al Boldi; +Cc: Matthew Helsley, linux-kernel

On Tue, Sep 27, 2005 at 06:50:01PM +0300, Al Boldi wrote:
> Neil Horman wrote:
> > On Tue, Sep 27, 2005 at 04:42:07PM +0300, Al Boldi wrote:
> > > Neil Horman wrote:
> > > > On Tue, Sep 27, 2005 at 08:08:21AM +0300, Al Boldi wrote:
> > > > > Neil Horman wrote:
> > > > > > On Mon, Sep 26, 2005 at 11:26:10PM +0300, Al Boldi wrote:
> > > > > > > Neil Horman wrote:
> > > > > > > > On Mon, Sep 26, 2005 at 08:32:14PM +0300, Al Boldi wrote:
> > > > > > > > > Neil Horman wrote:
> > > > > > > > > > On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > > > > > > > > > > Rik van Riel wrote:
> > > > > > > > > > > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > > > > > > > > > > Too many process forks and your system may crash.
> > > > > > > > > > > > > This can be capped with threads-max, but may lead
> > > > > > > > > > > > > you into a lock-out.
> > > > > > > > > > > > >
> > > > > > > > > > > > > What is needed is a soft, hard, and a special
> > > > > > > > > > > > > emergency limit that would allow you to use the
> > > > > > > > > > > > > resource for a limited time to circumvent a
> > > > > > > > > > > > > lock-out.
> > > > > > > > > > > >
> > > > > > > > > > > > How would you reclaim the resource after that limited
> > > > > > > > > > > > time is over ?  Kill processes?
> > > > > > > > > > >
> > > > > > > > > > > That's one way,  but really, the issue needs some deep
> > > > > > > > > > > thought. Leaving Linux exposed to a lock-out is rather
> > > > > > > > > > > frightening.
> > > > > > > > > >
> > > > > > > > > > What exactly is it that you're worried about here?
> > > > > > > > >
> > > > > > > > > Think about a DoS attack.
> > > > > > > >
> > > > > > > > Be more specific.  Are you talking about a fork bomb, an ICMP
> > > > > > > > flood, what?
> > > > >
> > > > > Consider this dilemma:
> > > > > Runaway procs hit the limit.
> > > > > Try to kill some and you are denied due to the resource limit.
> > > > > Use some previously running app like top, hope it hasn't been killed
> > > > > by some OOM situation, try killing some procs and another one takes
> > > > > its place because of the runaway situation.
> > > > > Raise the limit, and it gets filled by the runaways.
> > > > > You are pretty much stuck.
> > > >
> > > > Not really; this is the sort of thing ulimit is meant for: keeping
> > > > processes from any one user from running away.  It lets you limit the
> > > > damage they can do until such time as you can regain control and fix
> > > > the runaway application.
> > >
> > > threads-max = 1024
> > > ulimit = 100 forks
> > > 11 runaway procs hitting the threads-max limit
> >
> > This is incorrect.  If you ulimit a user to 100 forks, and 11 processes
> > running with that uid
> 
> Different uid.
> 
Then yes, if you set a system-wide limit that is less than the sum of the
limits imposed on each accountable part of the system, you can have lock-out.
But that's your fault for misconfiguring the system.  Don't do that.

> > If you have a user process that for some reason legitimately needs to try
> > to use every process resource available in the system, then yes, you are
> > prone to a lock-out condition
> 
> Couldn't this be easily fixed in kernel-space?
> 
You're not getting it.  The resource limits applied by ulimit (and CKRM, as
far as I know) _are_ enforced in kernel space.  The ulimit library call and
its corresponding setrlimit system call set resource limitations in the rlim
array that's part of each task struct.  These limits are queried whenever an
instance of the corresponding resource is requested by a user-space process;
if the requesting process is over its limit, the request is denied.
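
For reference, a minimal sketch of that interface from userspace,
assuming RLIMIT_NPROC as the resource and a new soft limit within the
existing hard limit:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
	struct rlimit rl;

	/* Read the current soft/hard process-count limits... */
	if (getrlimit(RLIMIT_NPROC, &rl) < 0) {
		perror("getrlimit");
		return 1;
	}
	printf("nproc: soft=%lu hard=%lu\n",
	       (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);

	/* ...and lower the soft limit; the kernel consults the task's
	 * rlim values on each subsequent fork(). */
	rl.rlim_cur = 100;
	if (setrlimit(RLIMIT_NPROC, &rl) < 0) {
		perror("setrlimit");
		return 1;
	}
	return 0;
}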

Regards
Neil

> Thanks!
> 
> --
> Al
> 

-- 
/***************************************************
 *Neil Horman
 *Software Engineer
 *gpg keyid: 1024D / 0x92A74FA1 - http://pgp.mit.edu
 ***************************************************/


* Re: Resource limits
  2005-09-27 14:36                     ` Neil Horman
@ 2005-09-27 15:50                       ` Al Boldi
  2005-09-27 17:25                         ` Neil Horman
  0 siblings, 1 reply; 24+ messages in thread
From: Al Boldi @ 2005-09-27 15:50 UTC (permalink / raw)
  To: Neil Horman; +Cc: Matthew Helsley, linux-kernel

Neil Horman wrote:
> On Tue, Sep 27, 2005 at 04:42:07PM +0300, Al Boldi wrote:
> > Neil Horman wrote:
> > > On Tue, Sep 27, 2005 at 08:08:21AM +0300, Al Boldi wrote:
> > > > Neil Horman wrote:
> > > > > On Mon, Sep 26, 2005 at 11:26:10PM +0300, Al Boldi wrote:
> > > > > > Neil Horman wrote:
> > > > > > > On Mon, Sep 26, 2005 at 08:32:14PM +0300, Al Boldi wrote:
> > > > > > > > Neil Horman wrote:
> > > > > > > > > On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > > > > > > > > > Rik van Riel wrote:
> > > > > > > > > > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > > > > > > > > > Too many process forks and your system may crash.
> > > > > > > > > > > > This can be capped with threads-max, but may lead
> > > > > > > > > > > > you into a lock-out.
> > > > > > > > > > > >
> > > > > > > > > > > > What is needed is a soft, hard, and a special
> > > > > > > > > > > > emergency limit that would allow you to use the
> > > > > > > > > > > > resource for a limited time to circumvent a
> > > > > > > > > > > > lock-out.
> > > > > > > > > > >
> > > > > > > > > > > How would you reclaim the resource after that limited
> > > > > > > > > > > time is over ?  Kill processes?
> > > > > > > > > >
> > > > > > > > > > That's one way,  but really, the issue needs some deep
> > > > > > > > > > thought. Leaving Linux exposed to a lock-out is rather
> > > > > > > > > > frightening.
> > > > > > > > >
> > > > > > > > > What exactly is it that you're worried about here?
> > > > > > > >
> > > > > > > > Think about a DoS attack.
> > > > > > >
> > > > > > > Be more specific.  Are you talking about a fork bomb, an ICMP
> > > > > > > flood, what?
> > > >
> > > > Consider this dilemma:
> > > > Runaway procs hit the limit.
> > > > Try to kill some and you are denied due to the resource limit.
> > > > Use some previously running app like top, hope it hasn't been killed
> > > > by some OOM situation, try killing some procs and another one takes
> > > > its place because of the runaway situation.
> > > > Raise the limit, and it gets filled by the runaways.
> > > > You are pretty much stuck.
> > >
> > > Not really; this is the sort of thing ulimit is meant for: keeping
> > > processes from any one user from running away.  It lets you limit the
> > > damage they can do until such time as you can regain control and fix
> > > the runaway application.
> >
> > threads-max = 1024
> > ulimit = 100 forks
> > 11 runaway procs hitting the threads-max limit
>
> This is incorrect.  If you ulimit a user to 100 forks, and 11 processes
> running with that uid

Different uid.

> If you have a user process that for some reason legitimately needs to try
> to use every process resource available in the system, then yes, you are
> prone to a lock-out condition

Couldn't this be easily fixed in kernel-space?

Thanks!

--
Al



* Re: Resource limits
  2005-09-27 13:42                   ` Al Boldi
@ 2005-09-27 14:36                     ` Neil Horman
  2005-09-27 15:50                       ` Al Boldi
  0 siblings, 1 reply; 24+ messages in thread
From: Neil Horman @ 2005-09-27 14:36 UTC (permalink / raw)
  To: Al Boldi; +Cc: Matthew Helsley, linux-kernel

On Tue, Sep 27, 2005 at 04:42:07PM +0300, Al Boldi wrote:
> Neil Horman wrote:
> > On Tue, Sep 27, 2005 at 08:08:21AM +0300, Al Boldi wrote:
> > > Neil Horman wrote:
> > > > On Mon, Sep 26, 2005 at 11:26:10PM +0300, Al Boldi wrote:
> > > > > Neil Horman wrote:
> > > > > > On Mon, Sep 26, 2005 at 08:32:14PM +0300, Al Boldi wrote:
> > > > > > > Neil Horman wrote:
> > > > > > > > On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > > > > > > > > Rik van Riel wrote:
> > > > > > > > > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > > > > > > > > Too many process forks and your system may crash.
> > > > > > > > > > > This can be capped with threads-max, but may lead you
> > > > > > > > > > > into a lock-out.
> > > > > > > > > > >
> > > > > > > > > > > What is needed is a soft, hard, and a special emergency
> > > > > > > > > > > limit that would allow you to use the resource for a
> > > > > > > > > > > limited time to circumvent a lock-out.
> > > > > > > > > >
> > > > > > > > > > How would you reclaim the resource after that limited time
> > > > > > > > > > is over ?  Kill processes?
> > > > > > > > >
> > > > > > > > > That's one way,  but really, the issue needs some deep
> > > > > > > > > thought. Leaving Linux exposed to a lock-out is rather
> > > > > > > > > frightening.
> > > > > > > >
> > > > > > > > What exactly is it that you're worried about here?
> > > > > > >
> > > > > > > Think about a DoS attack.
> > > > > >
> > > > > > Be more specific.  Are you talking about a fork bomb, an ICMP
> > > > > > flood, what?
> > >
> > > Consider this dilemma:
> > > Runaway procs hit the limit.
> > > Try to kill some and you are denied due to the resource limit.
> > > Use some previously running app like top, hope it hasn't been killed by
> > > some OOM situation, try killing some procs and another one takes its
> > > place because of the runaway situation.
> > > Raise the limit, and it gets filled by the runaways.
> > > You are pretty much stuck.
> >
> > Not really; this is the sort of thing ulimit is meant for: keeping
> > processes from any one user from running away.  It lets you limit the
> > damage they can do until such time as you can regain control and fix
> > the runaway application.
> 
> threads-max = 1024
> ulimit = 100 forks
> 11 runaway procs hitting the threads-max limit
> 
This is incorrect.  If you ulimit a user to 100 forks, and 11 processes running
with that uid start to fork repeatedly, they will get fork failures after they
have, in aggregate, called fork 89 times.  That user can have no more than 100
processes running in the system at any given time.  Another user (or root) can
fork another process to kill one of the runaways.
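
For illustration, a small sketch of that enforcement, assuming the
100-fork ulimit maps to RLIMIT_NPROC; run it only as a throwaway user,
since it parks its children:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
	struct rlimit rl = { .rlim_cur = 100, .rlim_max = 100 };
	int forks = 0;

	if (setrlimit(RLIMIT_NPROC, &rl) < 0) {
		perror("setrlimit");
		return 1;
	}
	for (;;) {
		pid_t pid = fork();

		if (pid == 0)
			for (;;)
				pause();	/* children just park */
		if (pid < 0) {
			/* The user's aggregate process count hit 100. */
			printf("fork #%d failed: %s\n",
			       forks + 1, strerror(errno));
			break;
		}
		forks++;
	}
	return 0;
}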

If you have a user process that for some reason legitimately needs to try to
use every process resource available in the system, then yes, you are prone to
a lock-out condition; and if you have no way of killing those processes from a
controlling terminal, then yes, you are prone to lock-out.  In those conditions
I would set my ulimit on processes for the user running this process to
something less than threads-max, so that I could have some wiggle room to get
out of that situation.  I would of course also file a bug report with the
application author, but that's another discussion :).
Regards
Neil

> This example is extreme, but it's possible, and there should be a safe and 
> easy way out.
> 
> What do you think?
> 
> Thanks!
> --
> Al

-- 
/***************************************************
 *Neil Horman
 *Software Engineer
 *gpg keyid: 1024D / 0x92A74FA1 - http://pgp.mit.edu
 ***************************************************/


* Re: Resource limits
  2005-09-27 12:08                 ` Neil Horman
@ 2005-09-27 13:42                   ` Al Boldi
  2005-09-27 14:36                     ` Neil Horman
  0 siblings, 1 reply; 24+ messages in thread
From: Al Boldi @ 2005-09-27 13:42 UTC (permalink / raw)
  To: Neil Horman; +Cc: Matthew Helsley, linux-kernel

Neil Horman wrote:
> On Tue, Sep 27, 2005 at 08:08:21AM +0300, Al Boldi wrote:
> > Neil Horman wrote:
> > > On Mon, Sep 26, 2005 at 11:26:10PM +0300, Al Boldi wrote:
> > > > Neil Horman wrote:
> > > > > On Mon, Sep 26, 2005 at 08:32:14PM +0300, Al Boldi wrote:
> > > > > > Neil Horman wrote:
> > > > > > > On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > > > > > > > Rik van Riel wrote:
> > > > > > > > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > > > > > > > Too many process forks and your system may crash.
> > > > > > > > > > This can be capped with threads-max, but may lead you
> > > > > > > > > > into a lock-out.
> > > > > > > > > >
> > > > > > > > > > What is needed is a soft, hard, and a special emergency
> > > > > > > > > > limit that would allow you to use the resource for a
> > > > > > > > > > limited time to circumvent a lock-out.
> > > > > > > > >
> > > > > > > > > How would you reclaim the resource after that limited time
> > > > > > > > > is over ?  Kill processes?
> > > > > > > >
> > > > > > > > That's one way,  but really, the issue needs some deep
> > > > > > > > thought. Leaving Linux exposed to a lock-out is rather
> > > > > > > > frightening.
> > > > > > >
> > > > > > > What exactly is it that you're worried about here?
> > > > > >
> > > > > > Think about a DoS attack.
> > > > >
> > > > > Be more specific.  Are you talking about a fork bomb, an ICMP
> > > > > flood, what?
> >
> > Consider this dilemma:
> > Runaway procs hit the limit.
> > Try to kill some and you are denied due to the resource limit.
> > Use some previously running app like top, hope it hasn't been killed by
> > some OOM situation, try killing some procs and another one takes its
> > place because of the runaway situation.
> > Raise the limit, and it gets filled by the runaways.
> > You are pretty much stuck.
>
> Not really; this is the sort of thing ulimit is meant for: keeping
> processes from any one user from running away.  It lets you limit the
> damage they can do until such time as you can regain control and fix
> the runaway application.

threads-max = 1024
ulimit = 100 forks
11 runaway procs hitting the threads-max limit

This example is extreme, but it's possible, and there should be a safe and 
easy way out.

What do you think?

Thanks!
--
Al



* Re: Resource limits
  2005-09-27  5:08               ` Al Boldi
@ 2005-09-27 12:08                 ` Neil Horman
  2005-09-27 13:42                   ` Al Boldi
  2005-09-27 21:35                 ` Chandra Seetharaman
  1 sibling, 1 reply; 24+ messages in thread
From: Neil Horman @ 2005-09-27 12:08 UTC (permalink / raw)
  To: Al Boldi; +Cc: Matthew Helsley, linux-kernel

On Tue, Sep 27, 2005 at 08:08:21AM +0300, Al Boldi wrote:
> Neil Horman wrote:
> > On Mon, Sep 26, 2005 at 11:26:10PM +0300, Al Boldi wrote:
> > > Neil Horman wrote:
> > > > On Mon, Sep 26, 2005 at 08:32:14PM +0300, Al Boldi wrote:
> > > > > Neil Horman wrote:
> > > > > > On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > > > > > > Rik van Riel wrote:
> > > > > > > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > > > > > > Too many process forks and your system may crash.
> > > > > > > > > This can be capped with threads-max, but may lead you into a
> > > > > > > > > lock-out.
> > > > > > > > >
> > > > > > > > > What is needed is a soft, hard, and a special emergency
> > > > > > > > > limit that would allow you to use the resource for a limited
> > > > > > > > > time to circumvent a lock-out.
> > > > > > > >
> > > > > > > > How would you reclaim the resource after that limited time is
> > > > > > > > over ?  Kill processes?
> > > > > > >
> > > > > > > That's one way,  but really, the issue needs some deep thought.
> > > > > > > Leaving Linux exposed to a lock-out is rather frightening.
> > > > > >
> > > > > > What exactly is it that you're worried about here?
> > > > >
> > > > > Think about a DoS attack.
> > > >
> > > > Be more specific.  Are you talking about a fork bomb, an ICMP flood,
> > > > what?
> > >
> > > How would you deal with a situation where the system hit the threads-max
> > > ceiling?
> >
> > Nominally I would log the inability to successfully create a new
> > process/thread, attempt to free some of my applications resources, and try
> > again.
> 
> Consider this dilemma:
> Runaway procs hit the limit.
> Try to kill some and you are denied due to the resource limit.
> Use some previously running app like top, hope it hasn't been killed by some
> OOM situation, try killing some procs and another one takes its place
> because of the runaway situation.
> Raise the limit, and it gets filled by the runaways.
> You are pretty much stuck.
> 
Not really; this is the sort of thing ulimit is meant for: keeping processes
from any one user from running away.  It lets you limit the damage they can do
until such time as you can regain control and fix the runaway application.

> You may get around the problem by a user-space solution, but this will always 
> run the risks associated with user-space.
> 
Ulimit isn't a user-space solution; it's a user-_based_ restriction mechanism
for resources.  It allows you to prevent user X (or group X, IIRC) from
creating more than A MB of files, or B processes, or allocating C KB of
memory, etc.  See man 3 ulimit.
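
For reference, the same kind of per-user restriction is commonly
configured through pam_limits; a sketch, with an assumed user name and
illustrative values:

# /etc/security/limits.conf -- cap user 'batch' at 100 processes;
# 'hard' sets a ceiling the user cannot raise himself.
batch    hard    nproc    100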


> > > The issue here is a general lack of proper kernel support for resource
> > > limits.  The fork problem is just an example.
> >
> > That's not really true.  As Mr. Helsley pointed out, CKRM is available
> 
> Matthew Helsley wrote:
> > 	Have you looked at Class-Based Kernel Resource Management (CKRM)
> > (http://ckrm.sf.net) to see if it fits your needs? My initial thought is
> > that the CKRM numtasks controller may help limit forks in the way you
> > describe.
> 
> Thanks for the link!  CKRM is great!
> 
> Is there a CKRM-lite version?  This would make it easier to include in
> the mainline: something that concentrates on the pressing issues, like
> lock-out prevention, and leaves all the management features as an option.
> 
> Thanks!
> 
> --
> Al

-- 
/***************************************************
 *Neil Horman
 *Software Engineer
 *gpg keyid: 1024D / 0x92A74FA1 - http://pgp.mit.edu
 ***************************************************/


* Re: Resource limits
  2005-09-27  1:05             ` Neil Horman
@ 2005-09-27  5:08               ` Al Boldi
  2005-09-27 12:08                 ` Neil Horman
  2005-09-27 21:35                 ` Chandra Seetharaman
  0 siblings, 2 replies; 24+ messages in thread
From: Al Boldi @ 2005-09-27  5:08 UTC (permalink / raw)
  To: Neil Horman, Matthew Helsley; +Cc: linux-kernel

Neil Horman wrote:
> On Mon, Sep 26, 2005 at 11:26:10PM +0300, Al Boldi wrote:
> > Neil Horman wrote:
> > > On Mon, Sep 26, 2005 at 08:32:14PM +0300, Al Boldi wrote:
> > > > Neil Horman wrote:
> > > > > On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > > > > > Rik van Riel wrote:
> > > > > > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > > > > > Too many process forks and your system may crash.
> > > > > > > > This can be capped with threads-max, but may lead you into a
> > > > > > > > lock-out.
> > > > > > > >
> > > > > > > > What is needed is a soft, hard, and a special emergency
> > > > > > > > limit that would allow you to use the resource for a limited
> > > > > > > > time to circumvent a lock-out.
> > > > > > >
> > > > > > > How would you reclaim the resource after that limited time is
> > > > > > > over ?  Kill processes?
> > > > > >
> > > > > > That's one way,  but really, the issue needs some deep thought.
> > > > > > Leaving Linux exposed to a lock-out is rather frightening.
> > > > >
> > > > > What exactly is it that you're worried about here?
> > > >
> > > > Think about a DoS attack.
> > >
> > > Be more specific.  Are you talking about a fork bomb, an ICMP flood,
> > > what?
> >
> > How would you deal with a situation where the system hit the threads-max
> > ceiling?
>
> Nominally I would log the inability to successfully create a new
> process/thread, attempt to free some of my application's resources, and try
> again.

Consider this dilemma:
Runaway procs hit the limit.
Try to kill some and you are denied due to the resource limit.
Use some previously running app like top, hope it hasn't been killed by some
OOM situation, try killing some procs and another one takes its place
because of the runaway situation.
Raise the limit, and it gets filled by the runaways.
You are pretty much stuck.

You may get around the problem by a user-space solution, but this will always 
run the risks associated with user-space.

> > The issue here is a general lack of proper kernel support for resource
> > limits.  The fork problem is just an example.
>
> That's not really true.  As Mr. Helsley pointed out, CKRM is available

Matthew Helsley wrote:
> 	Have you looked at Class-Based Kernel Resource Management (CKRM)
> (http://ckrm.sf.net) to see if it fits your needs? My initial thought is
> that the CKRM numtasks controller may help limit forks in the way you
> describe.

Thanks for the link!  CKRM is great!

Is there a CKRM-lite version?  This would make it easier to include in
the mainline: something that concentrates on the pressing issues, like
lock-out prevention, and leaves all the management features as an option.

Thanks!

--
Al



* Re: Resource limits
  2005-09-26 14:44 ` Roger Heflin
  2005-09-26 17:11   ` Alan Cox
@ 2005-09-27  3:50   ` Coywolf Qi Hunt
  1 sibling, 0 replies; 24+ messages in thread
From: Coywolf Qi Hunt @ 2005-09-27  3:50 UTC (permalink / raw)
  To: Roger Heflin; +Cc: Al Boldi, linux-kernel

On 9/26/05, Roger Heflin <rheflin@atipa.com> wrote:
>
> While talking about limits, one of my customers reports that if
> they set "ulimit -d" to, say, 8GB, and then a program goes and
> attempts to allocate 16GB (in one shot), the process will
> hang on the 16GB allocation, as the machine does not have enough
> memory+swap to handle this; the process is at this point unkillable.
> The customer's method to kill the process is to send the process
> a kill signal, and then create enough swap to be able to meet
> the request; after the request is filled, the process terminates.
>
> It would seem that the best thing to do would be to abort on
> allocates that will by themselves exceed the limit.
>
> This was a custom version of an earlier version of the 2.6 kernel;
> I would bet that this has not changed in quite a while.
>
>                         Roger

It's simple. Set /proc/sys/vm/overcommit_memory to 2 (IIRC) to get
around this `bug'.
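
For reference, a sketch of that knob; mode 2 selects strict accounting,
and vm.overcommit_ratio then bounds what can be committed:

# Enable strict overcommit accounting so oversized allocations fail
# up front instead of hanging later.
echo 2 > /proc/sys/vm/overcommit_memory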
--
Coywolf Qi Hunt
http://sosdg.org/~coywolf/


* Re: Resource limits
  2005-09-26 20:26           ` Al Boldi
@ 2005-09-27  1:05             ` Neil Horman
  2005-09-27  5:08               ` Al Boldi
  0 siblings, 1 reply; 24+ messages in thread
From: Neil Horman @ 2005-09-27  1:05 UTC (permalink / raw)
  To: Al Boldi; +Cc: linux-kernel

On Mon, Sep 26, 2005 at 11:26:10PM +0300, Al Boldi wrote:
> Neil Horman wrote:
> > On Mon, Sep 26, 2005 at 08:32:14PM +0300, Al Boldi wrote:
> > > Neil Horman wrote:
> > > > On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > > > > Rik van Riel wrote:
> > > > > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > > > > Too many process forks and your system may crash.
> > > > > > > This can be capped with threads-max, but may lead you into a
> > > > > > > lock-out.
> > > > > > >
> > > > > > > What is needed is a soft, hard, and a special emergency limit
> > > > > > > that would allow you to use the resource for a limited time to
> > > > > > > circumvent a lock-out.
> > > > > >
> > > > > > How would you reclaim the resource after that limited time is
> > > > > > over ?  Kill processes?
> > > > >
> > > > > That's one way,  but really, the issue needs some deep thought.
> > > > > Leaving Linux exposed to a lock-out is rather frightening.
> > > >
> > > > What exactly is it that you're worried about here?
> > >
> > > Think about a DoS attack.
> >
> > Be more specific.  Are you talking about a fork bomb, an ICMP flood, what?
> 
> How would you deal with a situation where the system hit the threads-max 
> ceiling?
> 
Nominally I would log the inability to successfully create a new process/thread,
attempt to free some of my application's resources, and try again.

> > Preventing resource starvation/exhaustion is often handled in a way that's
> > dovetailed to the semantics of how that resource is allocated (i.e. you
> > prevent syn-flood attacks differently than you manage excessive disk
> > usage).
> 
> The issue here is a general lack of proper kernel support for resource 
> limits.  The fork problem is just an example.
> 
That's not really true.  As Mr. Helsley pointed out, CKRM is available to
provide a level of class-based resource management if you need it.  By default
you can also create a level of resource limitation with ulimits, as I
mentioned.  But no matter what you do, the only way you can guarantee that a
system will be able to provide the resources your workload needs is to limit
the number of resources your workload asks for, and in the event it asks for
too much, make sure it can handle the denial of the resource gracefully.
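
A minimal sketch of that graceful handling; the retry count and delay
are illustrative assumptions:

#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

static pid_t fork_with_retry(int attempts)
{
	while (attempts-- > 0) {
		pid_t pid = fork();

		if (pid >= 0)
			return pid;	/* 0 in the child, >0 in the parent */
		if (errno != EAGAIN)
			break;		/* not a resource denial: give up */
		/* Over a limit (RLIMIT_NPROC, threads-max): log, back
		 * off briefly, and try again. */
		fprintf(stderr, "fork: resource limit hit, retrying\n");
		sleep(1);
	}
	return -1;
}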

Thanks and regards
Neil

> Thanks!
> 
> --
> Al
> 

-- 
/***************************************************
 *Neil Horman
 *Software Engineer
 *gpg keyid: 1024D / 0x92A74FA1 - http://pgp.mit.edu
 ***************************************************/


* RE: Resource limits
  2005-09-26 17:11   ` Alan Cox
  2005-09-26 17:32     ` Al Boldi
@ 2005-09-26 21:21     ` Roger Heflin
  1 sibling, 0 replies; 24+ messages in thread
From: Roger Heflin @ 2005-09-26 21:21 UTC (permalink / raw)
  To: 'Alan Cox'; +Cc: 'Al Boldi', linux-kernel

 

> On Llu, 2005-09-26 at 09:44 -0500, Roger Heflin wrote:
> > While talking about limits, one of my customers reports that if they
> > set "ulimit -d" to, say, 8GB, and then a program goes and
> 
> The kernel doesn't yet support rlimit64() - glibc does, but it
> emulates it best-effort. That's a good intro project for someone.
> 
> > It would seem that the best thing to do would be to abort on
> > allocates that will by themselves exceed the limit.
> 
> 2.6 supports "no overcommit" modes.
> 
> Alan
> 

Ah.

So any limit over 4GB is emulated through glibc, which means the
fix would need to be in the emulation, outside of the kernel.

And I think they were setting the limit to more like 32 or 48GB,
and having single allocations go over that.  Some of the machines
in question have 32GB of ram, others have 64GB of ram, both with
fair amounts of swap, and when the event happens they need to create
enough swap to process the request.

The overcommit thing may do what they want.

                              Thanks.
                              Roger



* Re: Resource limits
  2005-09-26 17:51         ` Neil Horman
@ 2005-09-26 20:26           ` Al Boldi
  2005-09-27  1:05             ` Neil Horman
  0 siblings, 1 reply; 24+ messages in thread
From: Al Boldi @ 2005-09-26 20:26 UTC (permalink / raw)
  To: Neil Horman; +Cc: linux-kernel

Neil Horman wrote:
> On Mon, Sep 26, 2005 at 08:32:14PM +0300, Al Boldi wrote:
> > Neil Horman wrote:
> > > On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > > > Rik van Riel wrote:
> > > > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > > > Too many process forks and your system may crash.
> > > > > > This can be capped with threads-max, but may lead you into a
> > > > > > lock-out.
> > > > > >
> > > > > > What is needed is a soft, hard, and a special emergency limit
> > > > > > that would allow you to use the resource for a limited time to
> > > > > > circumvent a lock-out.
> > > > >
> > > > > How would you reclaim the resource after that limited time is
> > > > > over ?  Kill processes?
> > > >
> > > > That's one way,  but really, the issue needs some deep thought.
> > > > Leaving Linux exposed to a lock-out is rather frightening.
> > >
> > > What exactly is it that you're worried about here?
> >
> > Think about a DoS attack.
>
> Be more specific.  Are you talking about a fork bomb, an ICMP flood, what?

How would you deal with a situation where the system hit the threads-max 
ceiling?

> Preventing resource starvation/exhaustion is often handled in a way that's
> dovetailed to the semantics of how that resource is allocated (i.e. you
> prevent syn-flood attacks differently than you manage excessive disk
> usage).

The issue here is a general lack of proper kernel support for resource 
limits.  The fork problem is just an example.

Thanks!

--
Al



* Re: Resource limits
  2005-09-25 14:12 Al Boldi
                   ` (2 preceding siblings ...)
  2005-09-26 14:44 ` Roger Heflin
@ 2005-09-26 19:07 ` Matthew Helsley
  3 siblings, 0 replies; 24+ messages in thread
From: Matthew Helsley @ 2005-09-26 19:07 UTC (permalink / raw)
  To: Al Boldi; +Cc: LKML, Chandra S. Seetharaman

On Sun, 2005-09-25 at 17:12 +0300, Al Boldi wrote:
> Resource limits in Linux, when available, are currently very limited.
> 
> i.e.:
> Too many process forks and your system may crash.
> This can be capped with threads-max, but may lead you into a lock-out.
> 
> What is needed is a soft, hard, and a special emergency limit that would 
> allow you to use the resource for a limited time to circumvent a lock-out.
> 
> Would this be difficult to implement?
> 
> Thanks!
> 
> --
> Al

	Have you looked at Class-Based Kernel Resource Management (CKRM)
(http://ckrm.sf.net) to see if it fits your needs? My initial thought is
that the CKRM numtasks controller may help limit forks in the way you
describe.

	If you have any questions about it please join the CKRM-Tech mailing
list (ckrm-tech@lists.sourceforge.net) or chat with folks on the OFTC
IRC #ckrm channel.

Cheers,
	-Matt Helsley



* Re: Resource limits
  2005-09-26 17:32       ` Al Boldi
@ 2005-09-26 17:51         ` Neil Horman
  2005-09-26 20:26           ` Al Boldi
  0 siblings, 1 reply; 24+ messages in thread
From: Neil Horman @ 2005-09-26 17:51 UTC (permalink / raw)
  To: Al Boldi; +Cc: Rik van Riel, linux-kernel

On Mon, Sep 26, 2005 at 08:32:14PM +0300, Al Boldi wrote:
> Neil Horman wrote:
> > On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > > Rik van Riel wrote:
> > > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > > Resource limits in Linux, when available, are currently very
> > > > > limited.
> > > > >
> > > > > i.e.:
> > > > > Too many process forks and your system may crash.
> > > > > This can be capped with threads-max, but may lead you into a
> > > > > lock-out.
> > > > >
> > > > > What is needed is a soft, hard, and a special emergency limit that
> > > > > would allow you to use the resource for a limited time to circumvent
> > > > > a lock-out.
> > > > >
> > > > > Would this be difficult to implement?
> > > >
> > > > How would you reclaim the resource after that limited time is
> > > > over ?  Kill processes?
> > >
> > > That's one way,  but really, the issue needs some deep thought.
> > > Leaving Linux exposed to a lock-out is rather frightening.
> >
> > What exactly is it that you're worried about here?  Do you have a
> > particular concern that a process won't be able to fork or create a
> > thread?  Resources that can be allocated to user space processes always
> > run the risk that their allocation will not succeed.  It's up to the
> > application to deal with that.
> 
> Think about a DoS attack.
> 
> Thanks!
> 
Be more specific.  Are you talking about a fork bomb, an ICMP flood, what?
Preventing resource starvation/exhaustion is often handled in a way that's
dovetailed to the semantics of how that resource is allocated (i.e. you
prevent syn-flood attacks differently than you manage excessive disk usage).

Regards
Neil

> --
> Al
> 

-- 
/***************************************************
 *Neil Horman
 *Software Engineer
 *gpg keyid: 1024D / 0x92A74FA1 - http://pgp.mit.edu
 ***************************************************/


* Re: Resource limits
  2005-09-26 15:56     ` Neil Horman
@ 2005-09-26 17:32       ` Al Boldi
  2005-09-26 17:51         ` Neil Horman
  0 siblings, 1 reply; 24+ messages in thread
From: Al Boldi @ 2005-09-26 17:32 UTC (permalink / raw)
  To: Neil Horman; +Cc: Rik van Riel, linux-kernel

Neil Horman wrote:
> On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > Rik van Riel wrote:
> > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > Resource limits in Linux, when available, are currently very
> > > > limited.
> > > >
> > > > i.e.:
> > > > Too many process forks and your system may crash.
> > > > This can be capped with threads-max, but may lead you into a
> > > > lock-out.
> > > >
> > > > What is needed is a soft, hard, and a special emergency limit that
> > > > would allow you to use the resource for a limited time to circumvent
> > > > a lock-out.
> > > >
> > > > Would this be difficult to implement?
> > >
> > > How would you reclaim the resource after that limited time is
> > > over ?  Kill processes?
> >
> > That's one way,  but really, the issue needs some deep thought.
> > Leaving Linux exposed to a lock-out is rather frightening.
>
> What exactly is it that you're worried about here?  Do you have a
> particular concern that a process won't be able to fork or create a
> thread?  Resources that can be allocated to user space processes always
> run the risk that their allocation will not succeed.  It's up to the
> application to deal with that.

Think about a DoS attack.

Thanks!

--
Al



* Re: Resource limits
  2005-09-26 17:11   ` Alan Cox
@ 2005-09-26 17:32     ` Al Boldi
  2005-09-26 21:21     ` Roger Heflin
  1 sibling, 0 replies; 24+ messages in thread
From: Al Boldi @ 2005-09-26 17:32 UTC (permalink / raw)
  To: Alan Cox; +Cc: linux-kernel

Alan Cox wrote:
> On Llu, 2005-09-26 at 09:44 -0500, Roger Heflin wrote:
> > While talking about limits, one of my customers reports that if
> > they set "ulimit -d" to, say, 8GB, and then a program goes and
>
> The kernel doesn't yet support rlimit64() - glibc does, but it emulates
> it best-effort. That's a good intro project for someone.
>
> > It would seem that the best thing to do would be to abort on
> > allocates that will by themselves exceed the limit.
>
> 2.6 supports "no overcommit" modes.

By name only.  See the "Kswapd flaw" thread.

Thanks!

--
Al



* RE: Resource limits
  2005-09-26 14:44 ` Roger Heflin
@ 2005-09-26 17:11   ` Alan Cox
  2005-09-26 17:32     ` Al Boldi
  2005-09-26 21:21     ` Roger Heflin
  2005-09-27  3:50   ` Coywolf Qi Hunt
  1 sibling, 2 replies; 24+ messages in thread
From: Alan Cox @ 2005-09-26 17:11 UTC (permalink / raw)
  To: Roger Heflin; +Cc: 'Al Boldi', linux-kernel

On Llu, 2005-09-26 at 09:44 -0500, Roger Heflin wrote:
> While talking about limits, one of my customers reports that if
> they set "ulimit -d" to, say, 8GB, and then a program goes and

The kernel doesn't yet support rlimit64() - glibc does, but it emulates
it best-effort. That's a good intro project for someone.

> It would seem that the best thing to do would be to abort on
> allocates that will by themselves exceed the limit.

2.6 supports "no overcommit" modes.

Alan



* Re: Resource limits
  2005-09-26 14:18   ` Al Boldi
@ 2005-09-26 15:56     ` Neil Horman
  2005-09-26 17:32       ` Al Boldi
  0 siblings, 1 reply; 24+ messages in thread
From: Neil Horman @ 2005-09-26 15:56 UTC (permalink / raw)
  To: Al Boldi; +Cc: Rik van Riel, linux-kernel

On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> Rik van Riel wrote:
> > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > Resource limits in Linux, when available, are currently very limited.
> > >
> > > i.e.:
> > > Too many process forks and your system may crash.
> > > This can be capped with threads-max, but may lead you into a lock-out.
> > >
> > > What is needed is a soft, hard, and a special emergency limit that would
> > > allow you to use the resource for a limited time to circumvent a
> > > lock-out.
> > >
> > > Would this be difficult to implement?
> >
> > How would you reclaim the resource after that limited time is
> > over ?  Kill processes?
> 
> That's one way,  but really, the issue needs some deep thought.
> Leaving Linux exposed to a lock-out is rather frightening.
> 
What exactly is it that you're worried about here?  Do you have a particular
concern that a process won't be able to fork or create a thread?  Resources that
can be allocated to user space processes always run the risk that their
allocation will not succede.  Its up to the application to deal with that.

> Neil Horman wrote:
> > Whats insufficient about the per-user limits that can be imposed by the
> > ulimit syscall?
> 
> Are they system-wide or per-user?
> 
ulimits are per-user.
Neil

> --
> Al
> 

-- 
/***************************************************
 *Neil Horman
 *Software Engineer
 *gpg keyid: 1024D / 0x92A74FA1 - http://pgp.mit.edu
 ***************************************************/


* RE: Resource limits
  2005-09-25 14:12 Al Boldi
  2005-09-26  3:36 ` Rik van Riel
  2005-09-26 12:28 ` Neil Horman
@ 2005-09-26 14:44 ` Roger Heflin
  2005-09-26 17:11   ` Alan Cox
  2005-09-27  3:50   ` Coywolf Qi Hunt
  2005-09-26 19:07 ` Matthew Helsley
  3 siblings, 2 replies; 24+ messages in thread
From: Roger Heflin @ 2005-09-26 14:44 UTC (permalink / raw)
  To: 'Al Boldi', linux-kernel


While talking about limits, one of my customers reports that if
they set "ulimit -d" to, say, 8GB, and then a program goes and
attempts to allocate 16GB (in one shot), the process will
hang on the 16GB allocation, as the machine does not have enough
memory+swap to handle this; the process is at this point unkillable.
The customer's method to kill the process is to send the process
a kill signal, and then create enough swap to be able to meet
the request; after the request is filled, the process terminates.

It would seem that the best thing to do would be to abort on
allocates that will by themselves exceed the limit.

This was a custom version of an earlier version of the 2.6 kernel;
I would bet that this has not changed in quite a while.
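
For illustration, a sketch of the fail-fast behaviour being asked for;
it uses RLIMIT_AS rather than RLIMIT_DATA, on the assumption that large
glibc allocations are mmap-backed, and the sizes are illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
	/* Cap the address space at 64 MB... */
	struct rlimit rl = { .rlim_cur = 64UL << 20, .rlim_max = 64UL << 20 };

	if (setrlimit(RLIMIT_AS, &rl) < 0) {
		perror("setrlimit");
		return 1;
	}

	/* ...then make a single 128 MB request: with the limit in
	 * place, it fails immediately instead of hanging. */
	void *p = malloc(128UL << 20);
	if (!p)
		fprintf(stderr, "malloc: over RLIMIT_AS, failed up front\n");
	else
		free(p);
	return 0;
}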

                        Roger


> -----Original Message-----
> From: linux-kernel-owner@vger.kernel.org 
> [mailto:linux-kernel-owner@vger.kernel.org] On Behalf Of Al Boldi
> Sent: Sunday, September 25, 2005 9:13 AM
> To: linux-kernel@vger.kernel.org
> Subject: Resource limits
> 
> 
> Resource limits in Linux, when available, are currently very limited.
> 
> i.e.:
> Too many process forks and your system may crash.
> This can be capped with threads-max, but may lead you into a lock-out.
> 
> What is needed is a soft, hard, and a special emergency limit 
> that would allow you to use the resource for a limited time 
> to circumvent a lock-out.
> 
> Would this be difficult to implement?
> 
> Thanks!
> 
> --
> Al
> 
> 



* Re: Resource limits
  2005-09-26  3:36 ` Rik van Riel
@ 2005-09-26 14:18   ` Al Boldi
  2005-09-26 15:56     ` Neil Horman
  0 siblings, 1 reply; 24+ messages in thread
From: Al Boldi @ 2005-09-26 14:18 UTC (permalink / raw)
  To: Rik van Riel, Neil Horman; +Cc: linux-kernel

Rik van Riel wrote:
> On Sun, 25 Sep 2005, Al Boldi wrote:
> > Resource limits in Linux, when available, are currently very limited.
> >
> > i.e.:
> > Too many process forks and your system may crash.
> > This can be capped with threads-max, but may lead you into a lock-out.
> >
> > What is needed is a soft, hard, and a special emergency limit that would
> > allow you to use the resource for a limited time to circumvent a
> > lock-out.
> >
> > Would this be difficult to implement?
>
> How would you reclaim the resource after that limited time is
> over ?  Kill processes?

That's one way,  but really, the issue needs some deep thought.
Leaving Linux exposed to a lock-out is rather frightening.

Neil Horman wrote:
> What's insufficient about the per-user limits that can be imposed by the
> ulimit syscall?

Are they system-wide or per-user?

--
Al



* Re: Resource limits
  2005-09-25 14:12 Al Boldi
  2005-09-26  3:36 ` Rik van Riel
@ 2005-09-26 12:28 ` Neil Horman
  2005-09-26 14:44 ` Roger Heflin
  2005-09-26 19:07 ` Matthew Helsley
  3 siblings, 0 replies; 24+ messages in thread
From: Neil Horman @ 2005-09-26 12:28 UTC (permalink / raw)
  To: Al Boldi; +Cc: linux-kernel

On Sun, Sep 25, 2005 at 05:12:42PM +0300, Al Boldi wrote:
> 
> Resource limits in Linux, when available, are currently very limited.
> 
> i.e.:
> Too many process forks and your system may crash.
> This can be capped with threads-max, but may lead you into a lock-out.
> 
> What is needed is a soft, hard, and a special emergency limit that would 
> allow you to use the resource for a limited time to circumvent a lock-out.
> 
What's insufficient about the per-user limits that can be imposed by the ulimit
syscall?


> Would this be difficult to implement?
> 
> Thanks!
> 
> --
> Al
> 

-- 
/***************************************************
 *Neil Horman
 *Software Engineer
 *gpg keyid: 1024D / 0x92A74FA1 - http://pgp.mit.edu
 ***************************************************/


* Re: Resource limits
  2005-09-25 14:12 Al Boldi
@ 2005-09-26  3:36 ` Rik van Riel
  2005-09-26 14:18   ` Al Boldi
  2005-09-26 12:28 ` Neil Horman
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 24+ messages in thread
From: Rik van Riel @ 2005-09-26  3:36 UTC (permalink / raw)
  To: Al Boldi; +Cc: linux-kernel

On Sun, 25 Sep 2005, Al Boldi wrote:

> Resource limits in Linux, when available, are currently very limited.
> 
> i.e.:
> Too many process forks and your system may crash.
> This can be capped with threads-max, but may lead you into a lock-out.
> 
> What is needed is a soft, hard, and a special emergency limit that would 
> allow you to use the resource for a limited time to circumvent a lock-out.
> 
> Would this be difficult to implement?

How would you reclaim the resource after that limited time is
over ?  Kill processes?

-- 
All Rights Reversed


* Resource limits
@ 2005-09-25 14:12 Al Boldi
  2005-09-26  3:36 ` Rik van Riel
                   ` (3 more replies)
  0 siblings, 4 replies; 24+ messages in thread
From: Al Boldi @ 2005-09-25 14:12 UTC (permalink / raw)
  To: linux-kernel


Resource limits in Linux, when available, are currently very limited.

i.e.:
Too many process forks and your system may crash.
This can be capped with threads-max, but may lead you into a lock-out.

What is needed is a soft, hard, and a special emergency limit that would 
allow you to use the resource for a limited time to circumvent a lock-out.

Would this be difficult to implement?

Thanks!

--
Al



Thread overview: 24+ messages
2002-10-24 12:13 Resource limits Frank Cornelis
2002-10-24 16:46 ` Randolph Bentson
2005-09-25 14:12 Al Boldi
2005-09-26  3:36 ` Rik van Riel
2005-09-26 14:18   ` Al Boldi
2005-09-26 15:56     ` Neil Horman
2005-09-26 17:32       ` Al Boldi
2005-09-26 17:51         ` Neil Horman
2005-09-26 20:26           ` Al Boldi
2005-09-27  1:05             ` Neil Horman
2005-09-27  5:08               ` Al Boldi
2005-09-27 12:08                 ` Neil Horman
2005-09-27 13:42                   ` Al Boldi
2005-09-27 14:36                     ` Neil Horman
2005-09-27 15:50                       ` Al Boldi
2005-09-27 17:25                         ` Neil Horman
2005-09-27 21:35                 ` Chandra Seetharaman
2005-09-26 12:28 ` Neil Horman
2005-09-26 14:44 ` Roger Heflin
2005-09-26 17:11   ` Alan Cox
2005-09-26 17:32     ` Al Boldi
2005-09-26 21:21     ` Roger Heflin
2005-09-27  3:50   ` Coywolf Qi Hunt
2005-09-26 19:07 ` Matthew Helsley
