All of lore.kernel.org
 help / color / mirror / Atom feed
* sound: another deadlock in snd_seq_pool_done
@ 2017-03-04 16:31 Dmitry Vyukov
  2017-03-10  9:19 ` Dmitry Vyukov
  2017-03-21 12:23   ` Takashi Iwai
  0 siblings, 2 replies; 12+ messages in thread
From: Dmitry Vyukov @ 2017-03-04 16:31 UTC (permalink / raw)
  To: Jaroslav Kysela, Takashi Iwai, alsa-devel, LKML; +Cc: syzkaller

Hello,

The following program creates processes deadlocked in snd_seq_pool_done:

https://gist.githubusercontent.com/dvyukov/97efc9cb6d63b1b2c7b737b82cc8b0d6/raw/3546b133ae0b2d3e1190ae7c1f4e240ce7ce132e/gistfile1.txt

After a few seconds I get:

# ps afxu | grep a.out
root      8660  2.0  0.0      0     0 pts/0    Zl   16:27   0:00
[a.out] <defunct>

# kill -9 8660

# cat /proc/8660/status
Name: a.out
State: Z (zombie)
Tgid: 8660
Ngid: 0
Pid: 8660
PPid: 1
TracerPid: 0
Uid: 0 0 0 0
Gid: 0 0 0 0
FDSize: 0
Groups: 0
NStgid: 8660
NSpid: 8660
NSpgid: 8660
NSsid: 2971
Threads: 2
SigQ: 1/3304
SigPnd: 0000000000000000
ShdPnd: 0000000000000100
SigBlk: 0000000000000000
SigIgn: 0000000180000000
SigCgt: 0000000000000440
CapInh: 0000000000000000
CapPrm: 0000003fffffffff
CapEff: 0000003fffffffff
CapBnd: 0000003fffffffff
CapAmb: 0000000000000000
NoNewPrivs: 0
Seccomp: 0
Cpus_allowed: f
Cpus_allowed_list: 0-3
Mems_allowed: 00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 12
nonvoluntary_ctxt_switches: 0

# cat /proc/8660/task/*/stack
[<ffffffff835406db>] snd_seq_pool_done+0x31b/0x620
sound/core/seq/seq_memory.c:436
[<ffffffff8353a11e>] snd_seq_ioctl_set_client_pool+0x1ae/0x600
sound/core/seq/seq_clientmgr.c:1836
[<ffffffff835382ba>] snd_seq_ioctl+0x2da/0x4d0
sound/core/seq/seq_clientmgr.c:2130
[<ffffffff81aced2f>] vfs_ioctl fs/ioctl.c:45 [inline]
[<ffffffff81aced2f>] do_vfs_ioctl+0x1bf/0x1790 fs/ioctl.c:685
[<ffffffff81ad038f>] SYSC_ioctl fs/ioctl.c:700 [inline]
[<ffffffff81ad038f>] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
[<ffffffff8457dc41>] entry_SYSCALL_64_fastpath+0x1f/0xc2

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: sound: another deadlock in snd_seq_pool_done
  2017-03-04 16:31 sound: another deadlock in snd_seq_pool_done Dmitry Vyukov
@ 2017-03-10  9:19 ` Dmitry Vyukov
  2017-03-20  7:17     ` Takashi Iwai
  2017-03-21 12:23   ` Takashi Iwai
  1 sibling, 1 reply; 12+ messages in thread
From: Dmitry Vyukov @ 2017-03-10  9:19 UTC (permalink / raw)
  To: Jaroslav Kysela, Takashi Iwai, alsa-devel, LKML; +Cc: syzkaller

On Sat, Mar 4, 2017 at 5:31 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
> Hello,
>
> The following program creates processes deadlocked in snd_seq_pool_done:
>
> https://gist.githubusercontent.com/dvyukov/97efc9cb6d63b1b2c7b737b82cc8b0d6/raw/3546b133ae0b2d3e1190ae7c1f4e240ce7ce132e/gistfile1.txt
>
> After few seconds I get:
>
> # ps afxu | grep a.out
> root      8660  2.0  0.0      0     0 pts/0    Zl   16:27   0:00
> [a.out] <defunct>
>
> # kill -9 8660
>
> # cat /proc/8660/status
> Name: a.out
> State: Z (zombie)
> Tgid: 8660
> Ngid: 0
> Pid: 8660
> PPid: 1
> TracerPid: 0
> Uid: 0 0 0 0
> Gid: 0 0 0 0
> FDSize: 0
> Groups: 0
> NStgid: 8660
> NSpid: 8660
> NSpgid: 8660
> NSsid: 2971
> Threads: 2
> SigQ: 1/3304
> SigPnd: 0000000000000000
> ShdPnd: 0000000000000100
> SigBlk: 0000000000000000
> SigIgn: 0000000180000000
> SigCgt: 0000000000000440
> CapInh: 0000000000000000
> CapPrm: 0000003fffffffff
> CapEff: 0000003fffffffff
> CapBnd: 0000003fffffffff
> CapAmb: 0000000000000000
> NoNewPrivs: 0
> Seccomp: 0
> Cpus_allowed: f
> Cpus_allowed_list: 0-3
> Mems_allowed: 00000000,00000001
> Mems_allowed_list: 0
> voluntary_ctxt_switches: 12
> nonvoluntary_ctxt_switches: 0
>
> # cat /proc/8660/task/*/stack
> [<ffffffff835406db>] snd_seq_pool_done+0x31b/0x620
> sound/core/seq/seq_memory.c:436
> [<ffffffff8353a11e>] snd_seq_ioctl_set_client_pool+0x1ae/0x600
> sound/core/seq/seq_clientmgr.c:1836
> [<ffffffff835382ba>] snd_seq_ioctl+0x2da/0x4d0
> sound/core/seq/seq_clientmgr.c:2130
> [<ffffffff81aced2f>] vfs_ioctl fs/ioctl.c:45 [inline]
> [<ffffffff81aced2f>] do_vfs_ioctl+0x1bf/0x1790 fs/ioctl.c:685
> [<ffffffff81ad038f>] SYSC_ioctl fs/ioctl.c:700 [inline]
> [<ffffffff81ad038f>] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
> [<ffffffff8457dc41>] entry_SYSCALL_64_fastpath+0x1f/0xc2


A friendly ping.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: sound: another deadlock in snd_seq_pool_done
  2017-03-10  9:19 ` Dmitry Vyukov
@ 2017-03-20  7:17     ` Takashi Iwai
  0 siblings, 0 replies; 12+ messages in thread
From: Takashi Iwai @ 2017-03-20  7:17 UTC (permalink / raw)
  To: Dmitry Vyukov; +Cc: alsa-devel, Jaroslav Kysela, LKML, syzkaller

On Fri, 10 Mar 2017 10:19:13 +0100,
Dmitry Vyukov wrote:
> 
> On Sat, Mar 4, 2017 at 5:31 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
> > Hello,
> >
> > The following program creates processes deadlocked in snd_seq_pool_done:
> >
> > https://gist.githubusercontent.com/dvyukov/97efc9cb6d63b1b2c7b737b82cc8b0d6/raw/3546b133ae0b2d3e1190ae7c1f4e240ce7ce132e/gistfile1.txt
> >
> > After few seconds I get:
> >
> > # ps afxu | grep a.out
> > root      8660  2.0  0.0      0     0 pts/0    Zl   16:27   0:00
> > [a.out] <defunct>
> >
> > # kill -9 8660
> >
> > # cat /proc/8660/status
> > Name: a.out
> > State: Z (zombie)
> > Tgid: 8660
> > Ngid: 0
> > Pid: 8660
> > PPid: 1
> > TracerPid: 0
> > Uid: 0 0 0 0
> > Gid: 0 0 0 0
> > FDSize: 0
> > Groups: 0
> > NStgid: 8660
> > NSpid: 8660
> > NSpgid: 8660
> > NSsid: 2971
> > Threads: 2
> > SigQ: 1/3304
> > SigPnd: 0000000000000000
> > ShdPnd: 0000000000000100
> > SigBlk: 0000000000000000
> > SigIgn: 0000000180000000
> > SigCgt: 0000000000000440
> > CapInh: 0000000000000000
> > CapPrm: 0000003fffffffff
> > CapEff: 0000003fffffffff
> > CapBnd: 0000003fffffffff
> > CapAmb: 0000000000000000
> > NoNewPrivs: 0
> > Seccomp: 0
> > Cpus_allowed: f
> > Cpus_allowed_list: 0-3
> > Mems_allowed: 00000000,00000001
> > Mems_allowed_list: 0
> > voluntary_ctxt_switches: 12
> > nonvoluntary_ctxt_switches: 0
> >
> > # cat /proc/8660/task/*/stack
> > [<ffffffff835406db>] snd_seq_pool_done+0x31b/0x620
> > sound/core/seq/seq_memory.c:436
> > [<ffffffff8353a11e>] snd_seq_ioctl_set_client_pool+0x1ae/0x600
> > sound/core/seq/seq_clientmgr.c:1836
> > [<ffffffff835382ba>] snd_seq_ioctl+0x2da/0x4d0
> > sound/core/seq/seq_clientmgr.c:2130
> > [<ffffffff81aced2f>] vfs_ioctl fs/ioctl.c:45 [inline]
> > [<ffffffff81aced2f>] do_vfs_ioctl+0x1bf/0x1790 fs/ioctl.c:685
> > [<ffffffff81ad038f>] SYSC_ioctl fs/ioctl.c:700 [inline]
> > [<ffffffff81ad038f>] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
> > [<ffffffff8457dc41>] entry_SYSCALL_64_fastpath+0x1f/0xc2
> 
> 
> A friendly ping.

Sorry for the delay, I've been on vacation for the last two weeks.
(And I have a deja vu feeling: it seems I so often receive your
syzkaller bug reports during or just before my vacations... :)

Will take a look at this soon after digesting the pending mails.


thanks,

Takashi

^ permalink raw reply	[flat|nested] 12+ messages in thread


* Re: sound: another deadlock in snd_seq_pool_done
  2017-03-20  7:17     ` Takashi Iwai
@ 2017-03-20  9:01       ` Dmitry Vyukov
  -1 siblings, 0 replies; 12+ messages in thread
From: Dmitry Vyukov @ 2017-03-20  9:01 UTC (permalink / raw)
  To: Takashi Iwai; +Cc: alsa-devel, Jaroslav Kysela, LKML, syzkaller

On Mon, Mar 20, 2017 at 8:17 AM, Takashi Iwai <tiwai@suse.de> wrote:
> On Fri, 10 Mar 2017 10:19:13 +0100,
> Dmitry Vyukov wrote:
>>
>> On Sat, Mar 4, 2017 at 5:31 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
>> > Hello,
>> >
>> > The following program creates processes deadlocked in snd_seq_pool_done:
>> >
>> > https://gist.githubusercontent.com/dvyukov/97efc9cb6d63b1b2c7b737b82cc8b0d6/raw/3546b133ae0b2d3e1190ae7c1f4e240ce7ce132e/gistfile1.txt
>> >
>> > After few seconds I get:
>> >
>> > # ps afxu | grep a.out
>> > root      8660  2.0  0.0      0     0 pts/0    Zl   16:27   0:00
>> > [a.out] <defunct>
>> >
>> > # kill -9 8660
>> >
>> > # cat /proc/8660/status
>> > Name: a.out
>> > State: Z (zombie)
>> > Tgid: 8660
>> > Ngid: 0
>> > Pid: 8660
>> > PPid: 1
>> > TracerPid: 0
>> > Uid: 0 0 0 0
>> > Gid: 0 0 0 0
>> > FDSize: 0
>> > Groups: 0
>> > NStgid: 8660
>> > NSpid: 8660
>> > NSpgid: 8660
>> > NSsid: 2971
>> > Threads: 2
>> > SigQ: 1/3304
>> > SigPnd: 0000000000000000
>> > ShdPnd: 0000000000000100
>> > SigBlk: 0000000000000000
>> > SigIgn: 0000000180000000
>> > SigCgt: 0000000000000440
>> > CapInh: 0000000000000000
>> > CapPrm: 0000003fffffffff
>> > CapEff: 0000003fffffffff
>> > CapBnd: 0000003fffffffff
>> > CapAmb: 0000000000000000
>> > NoNewPrivs: 0
>> > Seccomp: 0
>> > Cpus_allowed: f
>> > Cpus_allowed_list: 0-3
>> > Mems_allowed: 00000000,00000001
>> > Mems_allowed_list: 0
>> > voluntary_ctxt_switches: 12
>> > nonvoluntary_ctxt_switches: 0
>> >
>> > # cat /proc/8660/task/*/stack
>> > [<ffffffff835406db>] snd_seq_pool_done+0x31b/0x620
>> > sound/core/seq/seq_memory.c:436
>> > [<ffffffff8353a11e>] snd_seq_ioctl_set_client_pool+0x1ae/0x600
>> > sound/core/seq/seq_clientmgr.c:1836
>> > [<ffffffff835382ba>] snd_seq_ioctl+0x2da/0x4d0
>> > sound/core/seq/seq_clientmgr.c:2130
>> > [<ffffffff81aced2f>] vfs_ioctl fs/ioctl.c:45 [inline]
>> > [<ffffffff81aced2f>] do_vfs_ioctl+0x1bf/0x1790 fs/ioctl.c:685
>> > [<ffffffff81ad038f>] SYSC_ioctl fs/ioctl.c:700 [inline]
>> > [<ffffffff81ad038f>] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
>> > [<ffffffff8457dc41>] entry_SYSCALL_64_fastpath+0x1f/0xc2
>>
>>
>> A friendly ping.
>
> Sorry for the delay, I've been on vacation for over the last two
> weeks.  (And I have a deja vu feeing, as if I so often receive your
> syzkaller bug reports during or just before my vacation in the past,
> too... :)
>
> Will take a look at this soon after digesting the pending mails.

I swear I don't stack them up to send right before your vacation :)

^ permalink raw reply	[flat|nested] 12+ messages in thread


* Re: sound: another deadlock in snd_seq_pool_done
  2017-03-04 16:31 sound: another deadlock in snd_seq_pool_done Dmitry Vyukov
@ 2017-03-21 12:23   ` Takashi Iwai
  2017-03-21 12:23   ` Takashi Iwai
  1 sibling, 0 replies; 12+ messages in thread
From: Takashi Iwai @ 2017-03-21 12:23 UTC (permalink / raw)
  To: Dmitry Vyukov; +Cc: alsa-devel, Jaroslav Kysela, LKML, syzkaller

On Sat, 04 Mar 2017 17:31:21 +0100,
Dmitry Vyukov wrote:
> 
> Hello,
> 
> The following program creates processes deadlocked in snd_seq_pool_done:
> 
> https://gist.githubusercontent.com/dvyukov/97efc9cb6d63b1b2c7b737b82cc8b0d6/raw/3546b133ae0b2d3e1190ae7c1f4e240ce7ce132e/gistfile1.txt
> 
> After few seconds I get:
> 
> # ps afxu | grep a.out
> root      8660  2.0  0.0      0     0 pts/0    Zl   16:27   0:00
> [a.out] <defunct>
> 
> # kill -9 8660
> 
> # cat /proc/8660/status
> Name: a.out
> State: Z (zombie)
> Tgid: 8660
> Ngid: 0
> Pid: 8660
> PPid: 1
> TracerPid: 0
> Uid: 0 0 0 0
> Gid: 0 0 0 0
> FDSize: 0
> Groups: 0
> NStgid: 8660
> NSpid: 8660
> NSpgid: 8660
> NSsid: 2971
> Threads: 2
> SigQ: 1/3304
> SigPnd: 0000000000000000
> ShdPnd: 0000000000000100
> SigBlk: 0000000000000000
> SigIgn: 0000000180000000
> SigCgt: 0000000000000440
> CapInh: 0000000000000000
> CapPrm: 0000003fffffffff
> CapEff: 0000003fffffffff
> CapBnd: 0000003fffffffff
> CapAmb: 0000000000000000
> NoNewPrivs: 0
> Seccomp: 0
> Cpus_allowed: f
> Cpus_allowed_list: 0-3
> Mems_allowed: 00000000,00000001
> Mems_allowed_list: 0
> voluntary_ctxt_switches: 12
> nonvoluntary_ctxt_switches: 0
> 
> # cat /proc/8660/task/*/stack
> [<ffffffff835406db>] snd_seq_pool_done+0x31b/0x620
> sound/core/seq/seq_memory.c:436
> [<ffffffff8353a11e>] snd_seq_ioctl_set_client_pool+0x1ae/0x600
> sound/core/seq/seq_clientmgr.c:1836
> [<ffffffff835382ba>] snd_seq_ioctl+0x2da/0x4d0
> sound/core/seq/seq_clientmgr.c:2130
> [<ffffffff81aced2f>] vfs_ioctl fs/ioctl.c:45 [inline]
> [<ffffffff81aced2f>] do_vfs_ioctl+0x1bf/0x1790 fs/ioctl.c:685
> [<ffffffff81ad038f>] SYSC_ioctl fs/ioctl.c:700 [inline]
> [<ffffffff81ad038f>] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
> [<ffffffff8457dc41>] entry_SYSCALL_64_fastpath+0x1f/0xc2

Now I've been trying to reproduce the hang, but alas it didn't happen
on my VM for some reason.

In any case, below is an untested fix for possible races between
snd_seq_pool_done() and cell insertions.  Could you check whether it
cures your issue?


thanks,

Takashi

---
diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
index 4c935202ce23..f3b1d7f50b81 100644
--- a/sound/core/seq/seq_clientmgr.c
+++ b/sound/core/seq/seq_clientmgr.c
@@ -1832,6 +1832,7 @@ static int snd_seq_ioctl_set_client_pool(struct snd_seq_client *client,
 	     info->output_pool != client->pool->size)) {
 		if (snd_seq_write_pool_allocated(client)) {
 			/* remove all existing cells */
+			snd_seq_pool_mark_closing(client->pool);
 			snd_seq_queue_client_leave_cells(client->number);
 			snd_seq_pool_done(client->pool);
 		}
diff --git a/sound/core/seq/seq_fifo.c b/sound/core/seq/seq_fifo.c
index 448efd4e980e..33980d1c8037 100644
--- a/sound/core/seq/seq_fifo.c
+++ b/sound/core/seq/seq_fifo.c
@@ -72,6 +72,9 @@ void snd_seq_fifo_delete(struct snd_seq_fifo **fifo)
 		return;
 	*fifo = NULL;
 
+	if (f->pool)
+		snd_seq_pool_mark_closing(f->pool);
+
 	snd_seq_fifo_clear(f);
 
 	/* wake up clients if any */
diff --git a/sound/core/seq/seq_memory.c b/sound/core/seq/seq_memory.c
index 1a1acf3ddda4..d4c61ec9be13 100644
--- a/sound/core/seq/seq_memory.c
+++ b/sound/core/seq/seq_memory.c
@@ -415,6 +415,18 @@ int snd_seq_pool_init(struct snd_seq_pool *pool)
 	return 0;
 }
 
+/* refuse the further insertion to the pool */
+void snd_seq_pool_mark_closing(struct snd_seq_pool *pool)
+{
+	unsigned long flags;
+
+	if (snd_BUG_ON(!pool))
+		return;
+	spin_lock_irqsave(&pool->lock, flags);
+	pool->closing = 1;
+	spin_unlock_irqrestore(&pool->lock, flags);
+}
+
 /* remove events */
 int snd_seq_pool_done(struct snd_seq_pool *pool)
 {
@@ -425,10 +437,6 @@ int snd_seq_pool_done(struct snd_seq_pool *pool)
 		return -EINVAL;
 
 	/* wait for closing all threads */
-	spin_lock_irqsave(&pool->lock, flags);
-	pool->closing = 1;
-	spin_unlock_irqrestore(&pool->lock, flags);
-
 	if (waitqueue_active(&pool->output_sleep))
 		wake_up(&pool->output_sleep);
 
@@ -485,6 +493,7 @@ int snd_seq_pool_delete(struct snd_seq_pool **ppool)
 	*ppool = NULL;
 	if (pool == NULL)
 		return 0;
+	snd_seq_pool_mark_closing(pool);
 	snd_seq_pool_done(pool);
 	kfree(pool);
 	return 0;
diff --git a/sound/core/seq/seq_memory.h b/sound/core/seq/seq_memory.h
index 4a2ec779b8a7..32f959c17786 100644
--- a/sound/core/seq/seq_memory.h
+++ b/sound/core/seq/seq_memory.h
@@ -84,6 +84,7 @@ static inline int snd_seq_total_cells(struct snd_seq_pool *pool)
 int snd_seq_pool_init(struct snd_seq_pool *pool);
 
 /* done pool - free events */
+void snd_seq_pool_mark_closing(struct snd_seq_pool *pool);
 int snd_seq_pool_done(struct snd_seq_pool *pool);
 
 /* create pool */

^ permalink raw reply related	[flat|nested] 12+ messages in thread


* Re: sound: another deadlock in snd_seq_pool_done
  2017-03-21 12:23   ` Takashi Iwai
@ 2017-03-21 12:48     ` Dmitry Vyukov
  -1 siblings, 0 replies; 12+ messages in thread
From: Dmitry Vyukov @ 2017-03-21 12:48 UTC (permalink / raw)
  To: Takashi Iwai; +Cc: alsa-devel, Jaroslav Kysela, LKML, syzkaller

On Tue, Mar 21, 2017 at 1:23 PM, Takashi Iwai <tiwai@suse.de> wrote:
> On Sat, 04 Mar 2017 17:31:21 +0100,
> Dmitry Vyukov wrote:
>>
>> Hello,
>>
>> The following program creates processes deadlocked in snd_seq_pool_done:
>>
>> https://gist.githubusercontent.com/dvyukov/97efc9cb6d63b1b2c7b737b82cc8b0d6/raw/3546b133ae0b2d3e1190ae7c1f4e240ce7ce132e/gistfile1.txt
>>
>> After few seconds I get:
>>
>> # ps afxu | grep a.out
>> root      8660  2.0  0.0      0     0 pts/0    Zl   16:27   0:00
>> [a.out] <defunct>
>>
>> # kill -9 8660
>>
>> # cat /proc/8660/status
>> Name: a.out
>> State: Z (zombie)
>> Tgid: 8660
>> Ngid: 0
>> Pid: 8660
>> PPid: 1
>> TracerPid: 0
>> Uid: 0 0 0 0
>> Gid: 0 0 0 0
>> FDSize: 0
>> Groups: 0
>> NStgid: 8660
>> NSpid: 8660
>> NSpgid: 8660
>> NSsid: 2971
>> Threads: 2
>> SigQ: 1/3304
>> SigPnd: 0000000000000000
>> ShdPnd: 0000000000000100
>> SigBlk: 0000000000000000
>> SigIgn: 0000000180000000
>> SigCgt: 0000000000000440
>> CapInh: 0000000000000000
>> CapPrm: 0000003fffffffff
>> CapEff: 0000003fffffffff
>> CapBnd: 0000003fffffffff
>> CapAmb: 0000000000000000
>> NoNewPrivs: 0
>> Seccomp: 0
>> Cpus_allowed: f
>> Cpus_allowed_list: 0-3
>> Mems_allowed: 00000000,00000001
>> Mems_allowed_list: 0
>> voluntary_ctxt_switches: 12
>> nonvoluntary_ctxt_switches: 0
>>
>> # cat /proc/8660/task/*/stack
>> [<ffffffff835406db>] snd_seq_pool_done+0x31b/0x620
>> sound/core/seq/seq_memory.c:436
>> [<ffffffff8353a11e>] snd_seq_ioctl_set_client_pool+0x1ae/0x600
>> sound/core/seq/seq_clientmgr.c:1836
>> [<ffffffff835382ba>] snd_seq_ioctl+0x2da/0x4d0
>> sound/core/seq/seq_clientmgr.c:2130
>> [<ffffffff81aced2f>] vfs_ioctl fs/ioctl.c:45 [inline]
>> [<ffffffff81aced2f>] do_vfs_ioctl+0x1bf/0x1790 fs/ioctl.c:685
>> [<ffffffff81ad038f>] SYSC_ioctl fs/ioctl.c:700 [inline]
>> [<ffffffff81ad038f>] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
>> [<ffffffff8457dc41>] entry_SYSCALL_64_fastpath+0x1f/0xc2
>
> Now I've been trying to reproduce the hang, alas it didn't happen on
> my VM by some reason.
>
> In anyway, below is an untested fix for possible races against
> snd_seq_pool_done() and cell insertions.  Could you check whether it
> cures your issue?


I reproduced it again within 10 seconds. Then I restarted the kernel
with this patch, and now it has been running for 10 minutes without
any hung processes.

Tested-by: Dmitry Vyukov <dvyukov@google.com>

Thanks!


> ---
> diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
> index 4c935202ce23..f3b1d7f50b81 100644
> --- a/sound/core/seq/seq_clientmgr.c
> +++ b/sound/core/seq/seq_clientmgr.c
> @@ -1832,6 +1832,7 @@ static int snd_seq_ioctl_set_client_pool(struct snd_seq_client *client,
>              info->output_pool != client->pool->size)) {
>                 if (snd_seq_write_pool_allocated(client)) {
>                         /* remove all existing cells */
> +                       snd_seq_pool_mark_closing(client->pool);
>                         snd_seq_queue_client_leave_cells(client->number);
>                         snd_seq_pool_done(client->pool);
>                 }
> diff --git a/sound/core/seq/seq_fifo.c b/sound/core/seq/seq_fifo.c
> index 448efd4e980e..33980d1c8037 100644
> --- a/sound/core/seq/seq_fifo.c
> +++ b/sound/core/seq/seq_fifo.c
> @@ -72,6 +72,9 @@ void snd_seq_fifo_delete(struct snd_seq_fifo **fifo)
>                 return;
>         *fifo = NULL;
>
> +       if (f->pool)
> +               snd_seq_pool_mark_closing(f->pool);
> +
>         snd_seq_fifo_clear(f);
>
>         /* wake up clients if any */
> diff --git a/sound/core/seq/seq_memory.c b/sound/core/seq/seq_memory.c
> index 1a1acf3ddda4..d4c61ec9be13 100644
> --- a/sound/core/seq/seq_memory.c
> +++ b/sound/core/seq/seq_memory.c
> @@ -415,6 +415,18 @@ int snd_seq_pool_init(struct snd_seq_pool *pool)
>         return 0;
>  }
>
> +/* refuse the further insertion to the pool */
> +void snd_seq_pool_mark_closing(struct snd_seq_pool *pool)
> +{
> +       unsigned long flags;
> +
> +       if (snd_BUG_ON(!pool))
> +               return;
> +       spin_lock_irqsave(&pool->lock, flags);
> +       pool->closing = 1;
> +       spin_unlock_irqrestore(&pool->lock, flags);
> +}
> +
>  /* remove events */
>  int snd_seq_pool_done(struct snd_seq_pool *pool)
>  {
> @@ -425,10 +437,6 @@ int snd_seq_pool_done(struct snd_seq_pool *pool)
>                 return -EINVAL;
>
>         /* wait for closing all threads */
> -       spin_lock_irqsave(&pool->lock, flags);
> -       pool->closing = 1;
> -       spin_unlock_irqrestore(&pool->lock, flags);
> -
>         if (waitqueue_active(&pool->output_sleep))
>                 wake_up(&pool->output_sleep);
>
> @@ -485,6 +493,7 @@ int snd_seq_pool_delete(struct snd_seq_pool **ppool)
>         *ppool = NULL;
>         if (pool == NULL)
>                 return 0;
> +       snd_seq_pool_mark_closing(pool);
>         snd_seq_pool_done(pool);
>         kfree(pool);
>         return 0;
> diff --git a/sound/core/seq/seq_memory.h b/sound/core/seq/seq_memory.h
> index 4a2ec779b8a7..32f959c17786 100644
> --- a/sound/core/seq/seq_memory.h
> +++ b/sound/core/seq/seq_memory.h
> @@ -84,6 +84,7 @@ static inline int snd_seq_total_cells(struct snd_seq_pool *pool)
>  int snd_seq_pool_init(struct snd_seq_pool *pool);
>
>  /* done pool - free events */
> +void snd_seq_pool_mark_closing(struct snd_seq_pool *pool);
>  int snd_seq_pool_done(struct snd_seq_pool *pool);
>
>  /* create pool */

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: sound: another deadlock in snd_seq_pool_done
@ 2017-03-21 12:48     ` Dmitry Vyukov
  0 siblings, 0 replies; 12+ messages in thread
From: Dmitry Vyukov @ 2017-03-21 12:48 UTC (permalink / raw)
  To: Takashi Iwai; +Cc: alsa-devel, syzkaller, LKML

On Tue, Mar 21, 2017 at 1:23 PM, Takashi Iwai <tiwai@suse.de> wrote:
> On Sat, 04 Mar 2017 17:31:21 +0100,
> Dmitry Vyukov wrote:
>>
>> Hello,
>>
>> The following program creates processes deadlocked in snd_seq_pool_done:
>>
>> https://gist.githubusercontent.com/dvyukov/97efc9cb6d63b1b2c7b737b82cc8b0d6/raw/3546b133ae0b2d3e1190ae7c1f4e240ce7ce132e/gistfile1.txt
>>
>> After few seconds I get:
>>
>> # ps afxu | grep a.out
>> root      8660  2.0  0.0      0     0 pts/0    Zl   16:27   0:00
>> [a.out] <defunct>
>>
>> # kill -9 8660
>>
>> # cat /proc/8660/status
>> Name: a.out
>> State: Z (zombie)
>> Tgid: 8660
>> Ngid: 0
>> Pid: 8660
>> PPid: 1
>> TracerPid: 0
>> Uid: 0 0 0 0
>> Gid: 0 0 0 0
>> FDSize: 0
>> Groups: 0
>> NStgid: 8660
>> NSpid: 8660
>> NSpgid: 8660
>> NSsid: 2971
>> Threads: 2
>> SigQ: 1/3304
>> SigPnd: 0000000000000000
>> ShdPnd: 0000000000000100
>> SigBlk: 0000000000000000
>> SigIgn: 0000000180000000
>> SigCgt: 0000000000000440
>> CapInh: 0000000000000000
>> CapPrm: 0000003fffffffff
>> CapEff: 0000003fffffffff
>> CapBnd: 0000003fffffffff
>> CapAmb: 0000000000000000
>> NoNewPrivs: 0
>> Seccomp: 0
>> Cpus_allowed: f
>> Cpus_allowed_list: 0-3
>> Mems_allowed: 00000000,00000001
>> Mems_allowed_list: 0
>> voluntary_ctxt_switches: 12
>> nonvoluntary_ctxt_switches: 0
>>
>> # cat /proc/8660/task/*/stack
>> [<ffffffff835406db>] snd_seq_pool_done+0x31b/0x620
>> sound/core/seq/seq_memory.c:436
>> [<ffffffff8353a11e>] snd_seq_ioctl_set_client_pool+0x1ae/0x600
>> sound/core/seq/seq_clientmgr.c:1836
>> [<ffffffff835382ba>] snd_seq_ioctl+0x2da/0x4d0
>> sound/core/seq/seq_clientmgr.c:2130
>> [<ffffffff81aced2f>] vfs_ioctl fs/ioctl.c:45 [inline]
>> [<ffffffff81aced2f>] do_vfs_ioctl+0x1bf/0x1790 fs/ioctl.c:685
>> [<ffffffff81ad038f>] SYSC_ioctl fs/ioctl.c:700 [inline]
>> [<ffffffff81ad038f>] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
>> [<ffffffff8457dc41>] entry_SYSCALL_64_fastpath+0x1f/0xc2
>
> I've now been trying to reproduce the hang, but alas it didn't happen
> on my VM for some reason.
>
> Anyway, below is an untested fix for possible races between
> snd_seq_pool_done() and cell insertions.  Could you check whether it
> cures your issue?


I reproduced it again within 10 seconds. Then I restarted the kernel
with this patch applied, and it has now run for 10 minutes without any
hung processes.

Tested-by: Dmitry Vyukov <dvyukov@google.com>

Thanks!


> ---
> diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
> index 4c935202ce23..f3b1d7f50b81 100644
> --- a/sound/core/seq/seq_clientmgr.c
> +++ b/sound/core/seq/seq_clientmgr.c
> @@ -1832,6 +1832,7 @@ static int snd_seq_ioctl_set_client_pool(struct snd_seq_client *client,
>              info->output_pool != client->pool->size)) {
>                 if (snd_seq_write_pool_allocated(client)) {
>                         /* remove all existing cells */
> +                       snd_seq_pool_mark_closing(client->pool);
>                         snd_seq_queue_client_leave_cells(client->number);
>                         snd_seq_pool_done(client->pool);
>                 }
> diff --git a/sound/core/seq/seq_fifo.c b/sound/core/seq/seq_fifo.c
> index 448efd4e980e..33980d1c8037 100644
> --- a/sound/core/seq/seq_fifo.c
> +++ b/sound/core/seq/seq_fifo.c
> @@ -72,6 +72,9 @@ void snd_seq_fifo_delete(struct snd_seq_fifo **fifo)
>                 return;
>         *fifo = NULL;
>
> +       if (f->pool)
> +               snd_seq_pool_mark_closing(f->pool);
> +
>         snd_seq_fifo_clear(f);
>
>         /* wake up clients if any */
> diff --git a/sound/core/seq/seq_memory.c b/sound/core/seq/seq_memory.c
> index 1a1acf3ddda4..d4c61ec9be13 100644
> --- a/sound/core/seq/seq_memory.c
> +++ b/sound/core/seq/seq_memory.c
> @@ -415,6 +415,18 @@ int snd_seq_pool_init(struct snd_seq_pool *pool)
>         return 0;
>  }
>
> +/* refuse the further insertion to the pool */
> +void snd_seq_pool_mark_closing(struct snd_seq_pool *pool)
> +{
> +       unsigned long flags;
> +
> +       if (snd_BUG_ON(!pool))
> +               return;
> +       spin_lock_irqsave(&pool->lock, flags);
> +       pool->closing = 1;
> +       spin_unlock_irqrestore(&pool->lock, flags);
> +}
> +
>  /* remove events */
>  int snd_seq_pool_done(struct snd_seq_pool *pool)
>  {
> @@ -425,10 +437,6 @@ int snd_seq_pool_done(struct snd_seq_pool *pool)
>                 return -EINVAL;
>
>         /* wait for closing all threads */
> -       spin_lock_irqsave(&pool->lock, flags);
> -       pool->closing = 1;
> -       spin_unlock_irqrestore(&pool->lock, flags);
> -
>         if (waitqueue_active(&pool->output_sleep))
>                 wake_up(&pool->output_sleep);
>
> @@ -485,6 +493,7 @@ int snd_seq_pool_delete(struct snd_seq_pool **ppool)
>         *ppool = NULL;
>         if (pool == NULL)
>                 return 0;
> +       snd_seq_pool_mark_closing(pool);
>         snd_seq_pool_done(pool);
>         kfree(pool);
>         return 0;
> diff --git a/sound/core/seq/seq_memory.h b/sound/core/seq/seq_memory.h
> index 4a2ec779b8a7..32f959c17786 100644
> --- a/sound/core/seq/seq_memory.h
> +++ b/sound/core/seq/seq_memory.h
> @@ -84,6 +84,7 @@ static inline int snd_seq_total_cells(struct snd_seq_pool *pool)
>  int snd_seq_pool_init(struct snd_seq_pool *pool);
>
>  /* done pool - free events */
> +void snd_seq_pool_mark_closing(struct snd_seq_pool *pool);
>  int snd_seq_pool_done(struct snd_seq_pool *pool);
>
>  /* create pool */

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: sound: another deadlock in snd_seq_pool_done
  2017-03-21 12:48     ` Dmitry Vyukov
@ 2017-03-21 13:02       ` Takashi Iwai
  -1 siblings, 0 replies; 12+ messages in thread
From: Takashi Iwai @ 2017-03-21 13:02 UTC (permalink / raw)
  To: Dmitry Vyukov; +Cc: alsa-devel, Jaroslav Kysela, LKML, syzkaller

On Tue, 21 Mar 2017 13:48:27 +0100,
Dmitry Vyukov wrote:
> 
> On Tue, Mar 21, 2017 at 1:23 PM, Takashi Iwai <tiwai@suse.de> wrote:
> > On Sat, 04 Mar 2017 17:31:21 +0100,
> > Dmitry Vyukov wrote:
> >>
> >> Hello,
> >>
> >> The following program creates processes deadlocked in snd_seq_pool_done:
> >>
> >> https://gist.githubusercontent.com/dvyukov/97efc9cb6d63b1b2c7b737b82cc8b0d6/raw/3546b133ae0b2d3e1190ae7c1f4e240ce7ce132e/gistfile1.txt
> >>
> >> After few seconds I get:
> >>
> >> # ps afxu | grep a.out
> >> root      8660  2.0  0.0      0     0 pts/0    Zl   16:27   0:00
> >> [a.out] <defunct>
> >>
> >> # kill -9 8660
> >>
> >> # cat /proc/8660/status
> >> Name: a.out
> >> State: Z (zombie)
> >> Tgid: 8660
> >> Ngid: 0
> >> Pid: 8660
> >> PPid: 1
> >> TracerPid: 0
> >> Uid: 0 0 0 0
> >> Gid: 0 0 0 0
> >> FDSize: 0
> >> Groups: 0
> >> NStgid: 8660
> >> NSpid: 8660
> >> NSpgid: 8660
> >> NSsid: 2971
> >> Threads: 2
> >> SigQ: 1/3304
> >> SigPnd: 0000000000000000
> >> ShdPnd: 0000000000000100
> >> SigBlk: 0000000000000000
> >> SigIgn: 0000000180000000
> >> SigCgt: 0000000000000440
> >> CapInh: 0000000000000000
> >> CapPrm: 0000003fffffffff
> >> CapEff: 0000003fffffffff
> >> CapBnd: 0000003fffffffff
> >> CapAmb: 0000000000000000
> >> NoNewPrivs: 0
> >> Seccomp: 0
> >> Cpus_allowed: f
> >> Cpus_allowed_list: 0-3
> >> Mems_allowed: 00000000,00000001
> >> Mems_allowed_list: 0
> >> voluntary_ctxt_switches: 12
> >> nonvoluntary_ctxt_switches: 0
> >>
> >> # cat /proc/8660/task/*/stack
> >> [<ffffffff835406db>] snd_seq_pool_done+0x31b/0x620
> >> sound/core/seq/seq_memory.c:436
> >> [<ffffffff8353a11e>] snd_seq_ioctl_set_client_pool+0x1ae/0x600
> >> sound/core/seq/seq_clientmgr.c:1836
> >> [<ffffffff835382ba>] snd_seq_ioctl+0x2da/0x4d0
> >> sound/core/seq/seq_clientmgr.c:2130
> >> [<ffffffff81aced2f>] vfs_ioctl fs/ioctl.c:45 [inline]
> >> [<ffffffff81aced2f>] do_vfs_ioctl+0x1bf/0x1790 fs/ioctl.c:685
> >> [<ffffffff81ad038f>] SYSC_ioctl fs/ioctl.c:700 [inline]
> >> [<ffffffff81ad038f>] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
> >> [<ffffffff8457dc41>] entry_SYSCALL_64_fastpath+0x1f/0xc2
> >
> > I've now been trying to reproduce the hang, but alas it didn't happen
> > on my VM for some reason.
> >
> > Anyway, below is an untested fix for possible races between
> > snd_seq_pool_done() and cell insertions.  Could you check whether it
> > cures your issue?
> 
> 
> I reproduced it again within 10 seconds. Then I restarted the kernel
> with this patch applied, and it has now run for 10 minutes without any
> hung processes.
> 
> Tested-by: Dmitry Vyukov <dvyukov@google.com>
> 
> Thanks!

Good to hear!  FWIW, below is the formal patch I'm going to queue.


thanks,

Takashi

-- 8< --
From: Takashi Iwai <tiwai@suse.de>
Subject: [PATCH] ALSA: seq: Fix racy cell insertions during
 snd_seq_pool_done()

When snd_seq_pool_done() is called, it sets the closing flag to
refuse further cell insertions.  But snd_seq_pool_done() itself
doesn't clear the cells; it just waits until all cells are cleared by
the caller side.  That is, it's racy, and this leads to the endless
stall that syzkaller spotted.

This patch addresses the race by splitting the setting of the
pool->closing flag out of snd_seq_pool_done(), and calling it
properly before snd_seq_pool_done().

BugLink: http://lkml.kernel.org/r/CACT4Y+aqqy8bZA1fFieifNxR2fAfFQQABcBHj801+u5ePV0URw@mail.gmail.com
Reported-and-tested-by: Dmitry Vyukov <dvyukov@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
---
 sound/core/seq/seq_clientmgr.c |  1 +
 sound/core/seq/seq_fifo.c      |  3 +++
 sound/core/seq/seq_memory.c    | 17 +++++++++++++----
 sound/core/seq/seq_memory.h    |  1 +
 4 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
index 4c935202ce23..f3b1d7f50b81 100644
--- a/sound/core/seq/seq_clientmgr.c
+++ b/sound/core/seq/seq_clientmgr.c
@@ -1832,6 +1832,7 @@ static int snd_seq_ioctl_set_client_pool(struct snd_seq_client *client,
 	     info->output_pool != client->pool->size)) {
 		if (snd_seq_write_pool_allocated(client)) {
 			/* remove all existing cells */
+			snd_seq_pool_mark_closing(client->pool);
 			snd_seq_queue_client_leave_cells(client->number);
 			snd_seq_pool_done(client->pool);
 		}
diff --git a/sound/core/seq/seq_fifo.c b/sound/core/seq/seq_fifo.c
index 448efd4e980e..33980d1c8037 100644
--- a/sound/core/seq/seq_fifo.c
+++ b/sound/core/seq/seq_fifo.c
@@ -72,6 +72,9 @@ void snd_seq_fifo_delete(struct snd_seq_fifo **fifo)
 		return;
 	*fifo = NULL;
 
+	if (f->pool)
+		snd_seq_pool_mark_closing(f->pool);
+
 	snd_seq_fifo_clear(f);
 
 	/* wake up clients if any */
diff --git a/sound/core/seq/seq_memory.c b/sound/core/seq/seq_memory.c
index 1a1acf3ddda4..d4c61ec9be13 100644
--- a/sound/core/seq/seq_memory.c
+++ b/sound/core/seq/seq_memory.c
@@ -415,6 +415,18 @@ int snd_seq_pool_init(struct snd_seq_pool *pool)
 	return 0;
 }
 
+/* refuse the further insertion to the pool */
+void snd_seq_pool_mark_closing(struct snd_seq_pool *pool)
+{
+	unsigned long flags;
+
+	if (snd_BUG_ON(!pool))
+		return;
+	spin_lock_irqsave(&pool->lock, flags);
+	pool->closing = 1;
+	spin_unlock_irqrestore(&pool->lock, flags);
+}
+
 /* remove events */
 int snd_seq_pool_done(struct snd_seq_pool *pool)
 {
@@ -425,10 +437,6 @@ int snd_seq_pool_done(struct snd_seq_pool *pool)
 		return -EINVAL;
 
 	/* wait for closing all threads */
-	spin_lock_irqsave(&pool->lock, flags);
-	pool->closing = 1;
-	spin_unlock_irqrestore(&pool->lock, flags);
-
 	if (waitqueue_active(&pool->output_sleep))
 		wake_up(&pool->output_sleep);
 
@@ -485,6 +493,7 @@ int snd_seq_pool_delete(struct snd_seq_pool **ppool)
 	*ppool = NULL;
 	if (pool == NULL)
 		return 0;
+	snd_seq_pool_mark_closing(pool);
 	snd_seq_pool_done(pool);
 	kfree(pool);
 	return 0;
diff --git a/sound/core/seq/seq_memory.h b/sound/core/seq/seq_memory.h
index 4a2ec779b8a7..32f959c17786 100644
--- a/sound/core/seq/seq_memory.h
+++ b/sound/core/seq/seq_memory.h
@@ -84,6 +84,7 @@ static inline int snd_seq_total_cells(struct snd_seq_pool *pool)
 int snd_seq_pool_init(struct snd_seq_pool *pool);
 
 /* done pool - free events */
+void snd_seq_pool_mark_closing(struct snd_seq_pool *pool);
 int snd_seq_pool_done(struct snd_seq_pool *pool);
 
 /* create pool */
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 12+ messages in thread


end of thread, other threads:[~2017-03-21 13:03 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-03-04 16:31 sound: another deadlock in snd_seq_pool_done Dmitry Vyukov
2017-03-10  9:19 ` Dmitry Vyukov
2017-03-20  7:17   ` Takashi Iwai
2017-03-20  7:17     ` Takashi Iwai
2017-03-20  9:01     ` Dmitry Vyukov
2017-03-20  9:01       ` Dmitry Vyukov
2017-03-21 12:23 ` Takashi Iwai
2017-03-21 12:23   ` Takashi Iwai
2017-03-21 12:48   ` Dmitry Vyukov
2017-03-21 12:48     ` Dmitry Vyukov
2017-03-21 13:02     ` Takashi Iwai
2017-03-21 13:02       ` Takashi Iwai
