linux-kernel.vger.kernel.org archive mirror
From: Zhang Qiao <zhangqiao22@huawei.com>
To: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: lkml <linux-kernel@vger.kernel.org>, <keescook@chromium.org>,
	<tglx@linutronix.de>, Peter Zijlstra <peterz@infradead.org>,
	<elver@google.com>, <legion@kernel.org>, <oleg@redhat.com>,
	<brauner@kernel.org>
Subject: Re: Question about kill a process group
Date: Thu, 28 Apr 2022 10:05:09 +0800	[thread overview]
Message-ID: <b36af592-ec3a-7a7e-3caa-0c5b15aee7fe@huawei.com> (raw)
In-Reply-To: <874k2mtny7.fsf@email.froward.int.ebiederm.org>


Hi,

On 2022/4/22 0:12, Eric W. Biederman wrote:
> Zhang Qiao <zhangqiao22@huawei.com> writes:
> 
>> On 2022/4/13 23:47, Eric W. Biederman wrote:
>>> To do something about this is going to take a deep and fundamental
>>> redesign of how we maintain process lists to handle a parent
>>> with millions of children well.
>>>
>>> Is there any real world reason to care about this case?  Without
>>> real world motivation I am inclined to just note that this is
>>
>> I just found it while I was running the LTP test.
> 
> So I looked and fork12 has been around since 2002 in largely its
> current form.  So I am puzzled why you have run into problems
> and other people have not.
> 
> Did you perhaps have lock debugging enabled?
> 
> Did you run on a very large machine where a ridiculous number of processes
> could be created?
> 
> Did you happen to run fork12 on a machine where locks are much more
> expensive than on most machines?


I don't think so; I reproduced this problem on two servers with different configurations.
One server's info is as follows:
CPU: Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz, 64 CPUs
RAM: 377G

Do you need any other information?

> 
> 
>>> Is there a real world use case that connects to this?
>>>
>>> How many children are being created in this test?  Several million?
>>
>>   There are 300,000+ processes.
> 
> Not as many as I was guessing, but still enough to cause a huge
> wait on locks.
> 
>>> I would like to blame this on the old issue that tasklist_lock being
>>> a global lock.  Given the number of child processes (as many as can be
>>> created) I don't think we are hurt much by using a global lock.  The
>>> problem for solvability is that we have a lock.
>>>
>>> Fundamentally there must be a lock taken to maintain the parent's
>>> list of children.
>>>
>>> I only see SIGQUIT being called once in the parent process so that
>>> should not be an issue.
>>
>>
>>   In fork12, every child will call kill(0, SIGQUIT) in cleanup().
>> There are a lot of kill(0, SIGQUIT) calls.
> 
> I had missed that.  I can see that stressing out a lot.
> 
> At the same time, as I read fork12.c, that is very much a bug.  The
> children in fork12.c should call _exit() instead of exit().  Which
> would suppress calling the atexit() handlers and let fork12.c
> test what it is trying to test.
> 
> That doesn't mean there isn't a mystery here, but more that if
> we really want to test lots of processes sending the same
> signal at the same time, it should be a test that means to do that.
> 
> 
>>> There is a minor issue in fork12 that it calls exit(0) instead of
>>> _exit(0) in the children.  Not the problem you are dealing with
>>> but it does look like it can be a distraction.
>>>
>>> I suspect the issue really is the thundering herd of a million+
>>> processes synchronizing on a single lock.
>>>
>>> I don't think this is a hard lockup, just a global slow down.
>>> I expect everything will eventually exit.
>>>
>>
>>  But according to the vmcore, this is a hardlockup issue, and I think
>> there may be the following scenarios:
> 
> Let me rewind a second.  I just realized that I don't have a clue what
> a hard lockup is (outside of the linux hard lockup detector).
> 
> The two kinds of lockups that I understand with a technical meaning are
> deadlock (such as taking two locks in opposite orders, which can never be
> escaped), and livelock (where things are so busy no progress is made for
> an extended period of time).
> 
> I meant to say this is not a deadlock situation.  This looks like a
> livelock, but I think given enough time the code would make progress and
> get out of it.
> 
> I do agree over 1 second for holding a spin lock is ridiculous and a
> denial of service attack.
> 
> 
> 
> What I unfortunately do not see is a real world scenario where this will
> happen.  Without a real world scenario it is hard to find motivation to
> spend the year or so it would take to rework all of the data structures.
> The closest I can imagine to a real world scenario is that this
> situation can be used as a denial of service attack.
> 
> The hardest part of the problem is that signals sent to a group need to
> be sent to the group atomically.  That is, the signal needs to be sent to
> every member of the group.
> 
> Anyway I am very curious why you are the only one seeing a problem with
> fork12.  That we can definitely investigate as tracking down what is
> different about your setup versus other people who have run ltp seems
> much easier than redesigning all of the signal processing data
> structures from scratch.

The test steps are as follows:

1. git clone https://github.com/linux-test-project/ltp.git --depth=1
2. cd ltp/
3. make autotools
4. ./configure
5. cd testcases/kernel/syscalls/
6. make -j64
7. find ./ -type f -executable > newlist
8. while read line;do ./$line -I 30;done < newlist
9. After ten hours, I triggered Ctrl+C repeatedly.

> 
> Eric
> .
> 

Thread overview: 12+ messages
2022-03-29  8:07 Question about kill a process group Zhang Qiao
2022-04-02  2:22 ` Zhang Qiao
2022-04-13  1:56   ` Zhang Qiao
2022-04-13 15:47     ` Eric W. Biederman
2022-04-14 11:40       ` Zhang Qiao
2022-04-21 16:12         ` Eric W. Biederman
2022-04-28  2:05           ` Zhang Qiao [this message]
2022-04-28 12:33           ` Thomas Gleixner
2022-05-11 18:33             ` Eric W. Biederman
2022-05-11 22:53               ` Thomas Gleixner
2022-05-12 18:23                 ` Eric W. Biederman
2022-09-26  7:32                   ` Zhang Qiao
