From: Manfred Spraul <manfred@colorfullife.com>
To: Vineet Gupta <Vineet.Gupta1@synopsys.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
Davidlohr Bueso <dave.bueso@gmail.com>,
Sedat Dilek <sedat.dilek@gmail.com>,
Davidlohr Bueso <davidlohr.bueso@hp.com>,
linux-next <linux-next@vger.kernel.org>,
LKML <linux-kernel@vger.kernel.org>,
Stephen Rothwell <sfr@canb.auug.org.au>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm <linux-mm@kvack.org>, Andi Kleen <andi@firstfloor.org>,
Rik van Riel <riel@redhat.com>,
Jonathan Gonzalez <jgonzalez@linets.cl>
Subject: Re: ipc-msg broken again on 3.11-rc7?
Date: Tue, 03 Sep 2013 11:23:25 +0200
Message-ID: <5225AA8D.6080403@colorfullife.com>
In-Reply-To: <C2D7FE5348E1B147BCA15975FBA2307514165E@IN01WEMBXA.internal.synopsys.com>
On 09/03/2013 11:16 AM, Vineet Gupta wrote:
> On 09/03/2013 02:27 PM, Manfred Spraul wrote:
>> On 09/03/2013 10:44 AM, Vineet Gupta wrote:
>>>> b) Could you check that it is not just a performance regression?
>>>> Does ./msgctl08 1000 16 hang, too?
>>> Nope, that doesn't hang. The minimal configuration that hangs reliably is
>>> ./msgctl08 50000 2
>>>
>>> With this config there are 3 processes.
>>> ...
>>> 555 554 root S 1208 0.4 0 0.0 ./msgctl08 50000 2
>>> 554 551 root S 1208 0.4 0 0.0 ./msgctl08 50000 2
>>> 551 496 root S 1208 0.4 0 0.0 ./msgctl08 50000 2
>>> ...
>>>
>>> [ARCLinux]$ cat /proc/551/stack
>>> [<80aec3c6>] do_wait+0xa02/0xc94
>>> [<80aecad2>] SyS_wait4+0x52/0xa4
>>> [<80ae24fc>] ret_from_system_call+0x0/0x4
>>>
>>> [ARCLinux]$ cat /proc/555/stack
>>> [<80c2950e>] SyS_msgrcv+0x252/0x420
>>> [<80ae24fc>] ret_from_system_call+0x0/0x4
>>>
>>> [ARCLinux]$ cat /proc/554/stack
>>> [<80c28c82>] do_msgsnd+0x116/0x35c
>>> [<80ae24fc>] ret_from_system_call+0x0/0x4
>>>
>>> Is this a case of a lost wakeup or some such? I'm running with some more
>>> diagnostics and will report soon ...
>> What is the output of ipcs -q? Is the queue full or empty when it hangs?
>> I.e. do we forget to wake up a receiver or forget to wake up a sender?
> / # ipcs -q
>
> ------ Message Queues --------
> key msqid owner perms used-bytes messages
> 0x72d83160 163841 root 600 0 0
>
>
Ok, a sender is sleeping - even though there are no messages in the queue.
Perhaps it is the race that I mentioned in a previous mail:
> for (;;) {
> struct msg_sender s;
>
> err = -EACCES;
> if (ipcperms(ns, &msq->q_perm, S_IWUGO))
> goto out_unlock1;
>
> err = security_msg_queue_msgsnd(msq, msg, msgflg);
> if (err)
> goto out_unlock1;
>
> if (msgsz + msq->q_cbytes <= msq->q_qbytes &&
> 1 + msq->q_qnum <= msq->q_qbytes) {
> break;
> }
>
[snip]
> if (!pipelined_send(msq, msg)) {
> /* no one is waiting for this message, enqueue it */
> list_add_tail(&msg->m_list, &msq->q_messages);
> msq->q_cbytes += msgsz;
> msq->q_qnum++;
> atomic_add(msgsz, &ns->msg_bytes);
The access to msq->q_cbytes is not protected: the lock can be dropped
between the free-space test and the enqueue, so another task can fill the
queue (or a wakeup can be lost) in between.
Vineet, could you try moving the test for free space to after ipc_lock()?
I.e. the lock must not be dropped between testing for free space and
enqueueing the message.
--
Manfred