* [PATCH 0/3] ALSA: seq: Yet a few more fixes / cleanups
@ 2019-04-12 12:29 Takashi Iwai
  2019-04-12 12:29 ` [PATCH 1/3] ALSA: seq: Cover unsubscribe_port() in list_mutex Takashi Iwai
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Takashi Iwai @ 2019-04-12 12:29 UTC
  To: alsa-devel

Hi,

this is a series of fixes / cleanups for the ALSA sequencer code,
especially the OSS sequencer layer.  One fix is yet another attempt to
harden against races at port deletion, while the others address racy
OSS sequencer accesses.


Takashi

===

Takashi Iwai (3):
  ALSA: seq: Cover unsubscribe_port() in list_mutex
  ALSA: seq: Simplify snd_seq_kernel_client_enqueue() helper
  ALSA: seq: Protect racy pool manipulation from OSS sequencer

 include/sound/seq_kernel.h          |  3 +-
 sound/core/seq/oss/seq_oss_device.h | 10 ++++-
 sound/core/seq/oss/seq_oss_rw.c     | 11 ++----
 sound/core/seq/oss/seq_oss_writeq.c |  2 +-
 sound/core/seq/seq_clientmgr.c      | 75 +++++++++++++++++++++----------------
 sound/core/seq/seq_clientmgr.h      |  8 ++--
 sound/core/seq/seq_ports.c          |  2 +-
 7 files changed, 63 insertions(+), 48 deletions(-)

-- 
2.16.4


* [PATCH 1/3] ALSA: seq: Cover unsubscribe_port() in list_mutex
  2019-04-12 12:29 [PATCH 0/3] ALSA: seq: Yet a few more fixes / cleanups Takashi Iwai
@ 2019-04-12 12:29 ` Takashi Iwai
  2019-04-12 12:29 ` [PATCH 2/3] ALSA: seq: Simplify snd_seq_kernel_client_enqueue() helper Takashi Iwai
  2019-04-12 12:29 ` [PATCH 3/3] ALSA: seq: Protect racy pool manipulation from OSS sequencer Takashi Iwai
  2 siblings, 0 replies; 8+ messages in thread
From: Takashi Iwai @ 2019-04-12 12:29 UTC
  To: alsa-devel

The call of unsubscribe_port(), which manages the group count and
module refcount, from delete_and_unsubscribe_port() looks racy: it's
not covered by the group list lock, so it's a likely cause of the
reported refcount unbalance at port deletion.  Let's move the call
inside the group list_mutex to plug the hole.
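
In short, unsubscribe_port() is now called while list_mutex is still
held; a condensed view of the result (matching the diff below):

	write_unlock_irq(&grp->list_lock);

	if (!empty)
		unsubscribe_port(client, port, grp, &subs->info, ack);
	up_write(&grp->list_mutex);	/* dropped only after unsubscribing */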

Reported-by: syzbot+e4c8abb920efa77bace9@syzkaller.appspotmail.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
---
 sound/core/seq/seq_ports.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sound/core/seq/seq_ports.c b/sound/core/seq/seq_ports.c
index d964d728681e..ac7556ab531c 100644
--- a/sound/core/seq/seq_ports.c
+++ b/sound/core/seq/seq_ports.c
@@ -547,10 +547,10 @@ static void delete_and_unsubscribe_port(struct snd_seq_client *client,
 		list_del_init(list);
 	grp->exclusive = 0;
 	write_unlock_irq(&grp->list_lock);
-	up_write(&grp->list_mutex);
 
 	if (!empty)
 		unsubscribe_port(client, port, grp, &subs->info, ack);
+	up_write(&grp->list_mutex);
 }
 
 /* connect two ports */
-- 
2.16.4


* [PATCH 2/3] ALSA: seq: Simplify snd_seq_kernel_client_enqueue() helper
  2019-04-12 12:29 [PATCH 0/3] ALSA: seq: Yet a few more fixes / cleanups Takashi Iwai
  2019-04-12 12:29 ` [PATCH 1/3] ALSA: seq: Cover unsubscribe_port() in list_mutex Takashi Iwai
@ 2019-04-12 12:29 ` Takashi Iwai
  2019-04-12 12:29 ` [PATCH 3/3] ALSA: seq: Protect racy pool manipulation from OSS sequencer Takashi Iwai
  2 siblings, 0 replies; 8+ messages in thread
From: Takashi Iwai @ 2019-04-12 12:29 UTC
  To: alsa-devel

We have two helpers for queuing a sequencer event from a kernel
client, and both are used only from the OSS sequencer layer, always
with atomic and hop set to zero.  Let's simplify by unifying the two
helpers into one.

No functional change, just a change of the call pattern.
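
For illustration, the call pattern in the OSS layer changes like this
(condensed from the seq_oss_rw.c hunk below):

	/* before: two helpers, atomic and hop always zero */
	rc = snd_seq_kernel_client_enqueue(dp->cseq, &event, 0, 0);
	rc = snd_seq_kernel_client_enqueue_blocking(dp->cseq, &event, opt, 0, 0);

	/* after: one helper with an explicit blocking flag */
	rc = snd_seq_kernel_client_enqueue(dp->cseq, &event, opt,
					   !is_nonblock_mode(dp->file_mode));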

Signed-off-by: Takashi Iwai <tiwai@suse.de>
---
 include/sound/seq_kernel.h          |  3 ++-
 sound/core/seq/oss/seq_oss_rw.c     | 11 ++++-------
 sound/core/seq/oss/seq_oss_writeq.c |  2 +-
 sound/core/seq/seq_clientmgr.c      | 37 +++++++------------------------------
 sound/core/seq/seq_clientmgr.h      |  4 ----
 5 files changed, 14 insertions(+), 43 deletions(-)

diff --git a/include/sound/seq_kernel.h b/include/sound/seq_kernel.h
index 4b9ee3009aa0..c7a5433e109a 100644
--- a/include/sound/seq_kernel.h
+++ b/include/sound/seq_kernel.h
@@ -73,7 +73,8 @@ __printf(3, 4)
 int snd_seq_create_kernel_client(struct snd_card *card, int client_index,
 				 const char *name_fmt, ...);
 int snd_seq_delete_kernel_client(int client);
-int snd_seq_kernel_client_enqueue(int client, struct snd_seq_event *ev, int atomic, int hop);
+int snd_seq_kernel_client_enqueue(int client, struct snd_seq_event *ev,
+				  struct file *file, bool blocking);
 int snd_seq_kernel_client_dispatch(int client, struct snd_seq_event *ev, int atomic, int hop);
 int snd_seq_kernel_client_ctl(int client, unsigned int cmd, void *arg);
 
diff --git a/sound/core/seq/oss/seq_oss_rw.c b/sound/core/seq/oss/seq_oss_rw.c
index 30886f5fb100..eb1ef12181f3 100644
--- a/sound/core/seq/oss/seq_oss_rw.c
+++ b/sound/core/seq/oss/seq_oss_rw.c
@@ -180,14 +180,11 @@ insert_queue(struct seq_oss_devinfo *dp, union evrec *rec, struct file *opt)
 		return 0; /* invalid event - no need to insert queue */
 
 	event.time.tick = snd_seq_oss_timer_cur_tick(dp->timer);
-	if (dp->timer->realtime || !dp->timer->running) {
+	if (dp->timer->realtime || !dp->timer->running)
 		snd_seq_oss_dispatch(dp, &event, 0, 0);
-	} else {
-		if (is_nonblock_mode(dp->file_mode))
-			rc = snd_seq_kernel_client_enqueue(dp->cseq, &event, 0, 0);
-		else
-			rc = snd_seq_kernel_client_enqueue_blocking(dp->cseq, &event, opt, 0, 0);
-	}
+	else
+		rc = snd_seq_kernel_client_enqueue(dp->cseq, &event, opt,
+						   !is_nonblock_mode(dp->file_mode));
 	return rc;
 }
 		
diff --git a/sound/core/seq/oss/seq_oss_writeq.c b/sound/core/seq/oss/seq_oss_writeq.c
index 5e04f4df10e4..b2f69617591f 100644
--- a/sound/core/seq/oss/seq_oss_writeq.c
+++ b/sound/core/seq/oss/seq_oss_writeq.c
@@ -116,7 +116,7 @@ snd_seq_oss_writeq_sync(struct seq_oss_writeq *q)
 		rec->t.code = SEQ_SYNCTIMER;
 		rec->t.time = time;
 		q->sync_event_put = 1;
-		snd_seq_kernel_client_enqueue_blocking(dp->cseq, &ev, NULL, 0, 0);
+		snd_seq_kernel_client_enqueue(dp->cseq, &ev, NULL, true);
 	}
 
 	wait_event_interruptible_timeout(q->sync_sleep, ! q->sync_event_put, HZ);
diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
index de320b1b90de..0af5b1440b33 100644
--- a/sound/core/seq/seq_clientmgr.c
+++ b/sound/core/seq/seq_clientmgr.c
@@ -2218,12 +2218,13 @@ int snd_seq_delete_kernel_client(int client)
 }
 EXPORT_SYMBOL(snd_seq_delete_kernel_client);
 
-/* skeleton to enqueue event, called from snd_seq_kernel_client_enqueue
- * and snd_seq_kernel_client_enqueue_blocking
+/*
+ * exported, called by kernel clients to enqueue events (w/o blocking)
+ *
+ * RETURN VALUE: zero if succeed, negative if error
  */
-static int kernel_client_enqueue(int client, struct snd_seq_event *ev,
-				 struct file *file, int blocking,
-				 int atomic, int hop)
+int snd_seq_kernel_client_enqueue(int client, struct snd_seq_event *ev,
+				  struct file *file, bool blocking)
 {
 	struct snd_seq_client *cptr;
 	int result;
@@ -2250,37 +2251,13 @@ static int kernel_client_enqueue(int client, struct snd_seq_event *ev,
 		result = -EPERM;
 	else /* send it */
 		result = snd_seq_client_enqueue_event(cptr, ev, file, blocking,
-						      atomic, hop, NULL);
+						      false, 0, NULL);
 
 	snd_seq_client_unlock(cptr);
 	return result;
 }
-
-/*
- * exported, called by kernel clients to enqueue events (w/o blocking)
- *
- * RETURN VALUE: zero if succeed, negative if error
- */
-int snd_seq_kernel_client_enqueue(int client, struct snd_seq_event * ev,
-				  int atomic, int hop)
-{
-	return kernel_client_enqueue(client, ev, NULL, 0, atomic, hop);
-}
 EXPORT_SYMBOL(snd_seq_kernel_client_enqueue);
 
-/*
- * exported, called by kernel clients to enqueue events (with blocking)
- *
- * RETURN VALUE: zero if succeed, negative if error
- */
-int snd_seq_kernel_client_enqueue_blocking(int client, struct snd_seq_event * ev,
-					   struct file *file,
-					   int atomic, int hop)
-{
-	return kernel_client_enqueue(client, ev, file, 1, atomic, hop);
-}
-EXPORT_SYMBOL(snd_seq_kernel_client_enqueue_blocking);
-
 /* 
  * exported, called by kernel clients to dispatch events directly to other
  * clients, bypassing the queues.  Event time-stamp will be updated.
diff --git a/sound/core/seq/seq_clientmgr.h b/sound/core/seq/seq_clientmgr.h
index 0611e1e0ed5b..66ad3d547916 100644
--- a/sound/core/seq/seq_clientmgr.h
+++ b/sound/core/seq/seq_clientmgr.h
@@ -93,10 +93,6 @@ struct snd_seq_client *snd_seq_client_use_ptr(int clientid);
 /* dispatch event to client(s) */
 int snd_seq_dispatch_event(struct snd_seq_event_cell *cell, int atomic, int hop);
 
-/* exported to other modules */
-int snd_seq_kernel_client_enqueue(int client, struct snd_seq_event *ev, int atomic, int hop);
-int snd_seq_kernel_client_enqueue_blocking(int client, struct snd_seq_event * ev,
-					   struct file *file, int atomic, int hop);
 int snd_seq_kernel_client_write_poll(int clientid, struct file *file, poll_table *wait);
 int snd_seq_client_notify_subscription(int client, int port,
 				       struct snd_seq_port_subscribe *info, int evtype);
-- 
2.16.4


* [PATCH 3/3] ALSA: seq: Protect racy pool manipulation from OSS sequencer
  2019-04-12 12:29 [PATCH 0/3] ALSA: seq: Yet a few more fixes / cleanups Takashi Iwai
  2019-04-12 12:29 ` [PATCH 1/3] ALSA: seq: Cover unsubscribe_port() in list_mutex Takashi Iwai
  2019-04-12 12:29 ` [PATCH 2/3] ALSA: seq: Simplify snd_seq_kernel_client_enqueue() helper Takashi Iwai
@ 2019-04-12 12:29 ` Takashi Iwai
  2019-04-15  6:37   ` Kai Vehmanen
  2 siblings, 1 reply; 8+ messages in thread
From: Takashi Iwai @ 2019-04-12 12:29 UTC
  To: alsa-devel

The OSS sequencer emulation still allows the events that manipulate
the client pool to be queued and issued concurrently in a racy way.
This patch serializes the access, like the normal sequencer write /
ioctl paths, by taking the client ioctl_mutex.  Since the sequencer
client is accessed indirectly via a client id number, new helpers to
take/release the mutex are introduced.
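
For illustration, the OSS control path now wraps the kernel ioctl call
with the new helpers (condensed from the seq_oss_device.h hunk below):

	snd_seq_client_ioctl_lock(dp->cseq);
	err = snd_seq_kernel_client_ctl(dp->cseq, type, arg);
	snd_seq_client_ioctl_unlock(dp->cseq);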

Signed-off-by: Takashi Iwai <tiwai@suse.de>
---
 sound/core/seq/oss/seq_oss_device.h | 10 ++++++++--
 sound/core/seq/seq_clientmgr.c      | 40 ++++++++++++++++++++++++++++++++++---
 sound/core/seq/seq_clientmgr.h      |  4 ++++
 3 files changed, 49 insertions(+), 5 deletions(-)

diff --git a/sound/core/seq/oss/seq_oss_device.h b/sound/core/seq/oss/seq_oss_device.h
index 2d0e9eaf13aa..77eb1fe1155c 100644
--- a/sound/core/seq/oss/seq_oss_device.h
+++ b/sound/core/seq/oss/seq_oss_device.h
@@ -30,6 +30,7 @@
 #include <sound/rawmidi.h>
 #include <sound/seq_kernel.h>
 #include <sound/info.h>
+#include "../seq_clientmgr.h"
 
 /* max. applications */
 #define SNDRV_SEQ_OSS_MAX_CLIENTS	16
@@ -150,11 +151,16 @@ snd_seq_oss_dispatch(struct seq_oss_devinfo *dp, struct snd_seq_event *ev, int a
 	return snd_seq_kernel_client_dispatch(dp->cseq, ev, atomic, hop);
 }
 
-/* ioctl */
+/* ioctl for writeq */
 static inline int
 snd_seq_oss_control(struct seq_oss_devinfo *dp, unsigned int type, void *arg)
 {
-	return snd_seq_kernel_client_ctl(dp->cseq, type, arg);
+	int err;
+
+	snd_seq_client_ioctl_lock(dp->cseq);
+	err = snd_seq_kernel_client_ctl(dp->cseq, type, arg);
+	snd_seq_client_ioctl_unlock(dp->cseq);
+	return err;
 }
 
 /* fill the addresses in header */
diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
index 0af5b1440b33..a5c9d59eb5b8 100644
--- a/sound/core/seq/seq_clientmgr.c
+++ b/sound/core/seq/seq_clientmgr.c
@@ -179,6 +179,36 @@ struct snd_seq_client *snd_seq_client_use_ptr(int clientid)
 	return client;
 }
 
+/* Take refcount and perform ioctl_mutex lock on the given client;
+ * used only for OSS sequencer
+ * Unlock via snd_seq_client_ioctl_unlock() below
+ */
+bool snd_seq_client_ioctl_lock(int clientid)
+{
+	struct snd_seq_client *client;
+
+	client = snd_seq_client_use_ptr(clientid);
+	if (!client)
+		return false;
+	mutex_lock(&client->ioctl_mutex);
+	return true;
+}
+EXPORT_SYMBOL_GPL(snd_seq_client_ioctl_lock);
+
+/* Unlock and unref the given client; for OSS sequencer use only */
+void snd_seq_client_ioctl_unlock(int clientid)
+{
+	struct snd_seq_client *client;
+
+	client = snd_seq_client_use_ptr(clientid);
+	if (WARN_ON(!client))
+		return;
+	mutex_unlock(&client->ioctl_mutex);
+	snd_use_lock_free(&client->use_lock);
+	snd_seq_client_unlock(client);
+}
+EXPORT_SYMBOL_GPL(snd_seq_client_ioctl_unlock);
+
 static void usage_alloc(struct snd_seq_usage *res, int num)
 {
 	res->cur += num;
@@ -2247,11 +2277,15 @@ int snd_seq_kernel_client_enqueue(int client, struct snd_seq_event *ev,
 	if (cptr == NULL)
 		return -EINVAL;
 	
-	if (! cptr->accept_output)
+	if (!cptr->accept_output) {
 		result = -EPERM;
-	else /* send it */
+	} else { /* send it */
+		mutex_lock(&cptr->ioctl_mutex);
 		result = snd_seq_client_enqueue_event(cptr, ev, file, blocking,
-						      false, 0, NULL);
+						      false, 0,
+						      &cptr->ioctl_mutex);
+		mutex_unlock(&cptr->ioctl_mutex);
+	}
 
 	snd_seq_client_unlock(cptr);
 	return result;
diff --git a/sound/core/seq/seq_clientmgr.h b/sound/core/seq/seq_clientmgr.h
index 66ad3d547916..28a51dcc0190 100644
--- a/sound/core/seq/seq_clientmgr.h
+++ b/sound/core/seq/seq_clientmgr.h
@@ -97,6 +97,10 @@ int snd_seq_kernel_client_write_poll(int clientid, struct file *file, poll_table
 int snd_seq_client_notify_subscription(int client, int port,
 				       struct snd_seq_port_subscribe *info, int evtype);
 
+/* only for OSS sequencer */
+bool snd_seq_client_ioctl_lock(int clientid);
+void snd_seq_client_ioctl_unlock(int clientid);
+
 extern int seq_client_load[15];
 
 #endif
-- 
2.16.4


* Re: [PATCH 3/3] ALSA: seq: Protect racy pool manipulation from OSS sequencer
  2019-04-12 12:29 ` [PATCH 3/3] ALSA: seq: Protect racy pool manipulation from OSS sequencer Takashi Iwai
@ 2019-04-15  6:37   ` Kai Vehmanen
  2019-04-15  6:46     ` Takashi Iwai
  0 siblings, 1 reply; 8+ messages in thread
From: Kai Vehmanen @ 2019-04-15  6:37 UTC
  To: Takashi Iwai; +Cc: alsa-devel

Hi Takashi,

On Fri, 12 Apr 2019, Takashi Iwai wrote:

> +/* Unlock and unref the given client; for OSS sequencer use only */
> +void snd_seq_client_ioctl_unlock(int clientid)
> +{
> +	struct snd_seq_client *client;
> +
> +	client = snd_seq_client_use_ptr(clientid);
> +	if (WARN_ON(!client))
> +		return;
> +	mutex_unlock(&client->ioctl_mutex);
> +	snd_use_lock_free(&client->use_lock);
> +	snd_seq_client_unlock(client);

is that double-unlock intentional? snd_seq_client_unlock() seems to call
the same snd_use_lock_free for the same handle as on the previous line.

Br, Kai


* Re: [PATCH 3/3] ALSA: seq: Protect racy pool manipulation from OSS sequencer
  2019-04-15  6:37   ` Kai Vehmanen
@ 2019-04-15  6:46     ` Takashi Iwai
  2019-04-15  8:52       ` Kai Vehmanen
  0 siblings, 1 reply; 8+ messages in thread
From: Takashi Iwai @ 2019-04-15  6:46 UTC
  To: Kai Vehmanen; +Cc: alsa-devel

On Mon, 15 Apr 2019 08:37:18 +0200,
Kai Vehmanen wrote:
> 
> Hi Takashi,
> 
> On Fri, 12 Apr 2019, Takashi Iwai wrote:
> 
> > +/* Unlock and unref the given client; for OSS sequencer use only */
> > +void snd_seq_client_ioctl_unlock(int clientid)
> > +{
> > +	struct snd_seq_client *client;
> > +
> > +	client = snd_seq_client_use_ptr(clientid);
> > +	if (WARN_ON(!client))
> > +		return;
> > +	mutex_unlock(&client->ioctl_mutex);
> > +	snd_use_lock_free(&client->use_lock);
> > +	snd_seq_client_unlock(client);
> 
> is that double-unlock intentional? snd_seq_client_unlock() seems to call
> the same snd_use_lock_free for the same handle as on the previous line.

Yes, it's intentional.  The trick is that snd_seq_client_ioctl_lock()
doesn't call snd_seq_client_unlock(), i.e. it keeps the refcount up.
That reference is, in turn, released in snd_seq_client_ioctl_unlock().
Since snd_seq_client_ioctl_unlock() takes its own refcount at
use_ptr(), it has to release twice.  Maybe it's worth adding more
comments.
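
To illustrate, the refcount flow as an annotated sketch of the code
above (counts assume no other concurrent user of the client):

	snd_seq_client_ioctl_lock(id);
		client = snd_seq_client_use_ptr(id);	/* refcount 0 -> 1 */
		mutex_lock(&client->ioctl_mutex);
		/* returns without unlocking: the reference stays held */

	snd_seq_client_ioctl_unlock(id);
		client = snd_seq_client_use_ptr(id);	/* refcount 1 -> 2 */
		mutex_unlock(&client->ioctl_mutex);
		snd_use_lock_free(&client->use_lock);	/* refcount 2 -> 1 */
		snd_seq_client_unlock(client);		/* refcount 1 -> 0 */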


thanks,

Takashi


* Re: [PATCH 3/3] ALSA: seq: Protect racy pool manipulation from OSS sequencer
  2019-04-15  6:46     ` Takashi Iwai
@ 2019-04-15  8:52       ` Kai Vehmanen
  2019-04-15  8:54         ` Takashi Iwai
  0 siblings, 1 reply; 8+ messages in thread
From: Kai Vehmanen @ 2019-04-15  8:52 UTC
  To: Takashi Iwai; +Cc: alsa-devel

Hi,

On Mon, 15 Apr 2019, Takashi Iwai wrote:

> > > +void snd_seq_client_ioctl_unlock(int clientid)
> > > +{
> > > +	struct snd_seq_client *client;
> > > +
> > > +	client = snd_seq_client_use_ptr(clientid);
> > > +	if (WARN_ON(!client))
> > > +		return;
> > > +	mutex_unlock(&client->ioctl_mutex);
> > > +	snd_use_lock_free(&client->use_lock);
> > > +	snd_seq_client_unlock(client);
> > 
> > is that double-unlock intentional? snd_seq_client_unlock() seems to call
> > the same snd_use_lock_free for the same handle as on the previous line.
> 
> Yes, it's intentional.  The trick is snd_seq_client_ioctl_lock()
> doesn't call snd_seq_client_unlock(), i.e. keeps refcount up.  This
> is, in turn, released in snd_seq_client_ioctl_unlock().
> snd_seq_client_ioctl_unlock() gets its refcount at use_ptr(), hence
> this releases twice.  Maybe it's worth for more comments.

ah yes, ok, thanks, just checking. :) I was misled by the asymmetric
names.  As you take the refcount with two snd_seq_client_use_ptr()
calls, one in lock and one in unlock, would it be more logical to have
two snd_seq_client_unlock() calls in unlock?

Otherwise patchset looks good!


* Re: [PATCH 3/3] ALSA: seq: Protect racy pool manipulation from OSS sequencer
  2019-04-15  8:52       ` Kai Vehmanen
@ 2019-04-15  8:54         ` Takashi Iwai
  0 siblings, 0 replies; 8+ messages in thread
From: Takashi Iwai @ 2019-04-15  8:54 UTC
  To: Kai Vehmanen; +Cc: alsa-devel

On Mon, 15 Apr 2019 10:52:01 +0200,
Kai Vehmanen wrote:
> 
> Hi,
> 
> On Mon, 15 Apr 2019, Takashi Iwai wrote:
> 
> > > > +void snd_seq_client_ioctl_unlock(int clientid)
> > > > +{
> > > > +	struct snd_seq_client *client;
> > > > +
> > > > +	client = snd_seq_client_use_ptr(clientid);
> > > > +	if (WARN_ON(!client))
> > > > +		return;
> > > > +	mutex_unlock(&client->ioctl_mutex);
> > > > +	snd_use_lock_free(&client->use_lock);
> > > > +	snd_seq_client_unlock(client);
> > > 
> > > is that double-unlock intentional? snd_seq_client_unlock() seems to call
> > > the same snd_use_lock_free for the same handle as on the previous line.
> > 
> > Yes, it's intentional.  The trick is that snd_seq_client_ioctl_lock()
> > doesn't call snd_seq_client_unlock(), i.e. it keeps the refcount up.
> > That reference is, in turn, released in snd_seq_client_ioctl_unlock().
> > Since snd_seq_client_ioctl_unlock() takes its own refcount at
> > use_ptr(), it has to release twice.  Maybe it's worth adding more
> > comments.
> 
> ah yes, ok, thanks, just checking. :) I was misled by the asymmetric
> names.  As you take the refcount with two snd_seq_client_use_ptr()
> calls, one in lock and one in unlock, would it be more logical to have
> two snd_seq_client_unlock() calls in unlock?

Well, I thought calling it twice might be misleading, but yeah, the
current form is rather confusing.  I'll change it to two sequential
unlock calls with more comments.
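
Perhaps something like this (an untested sketch of the planned
change):

	void snd_seq_client_ioctl_unlock(int clientid)
	{
		struct snd_seq_client *client;

		client = snd_seq_client_use_ptr(clientid);
		if (WARN_ON(!client))
			return;
		mutex_unlock(&client->ioctl_mutex);
		/* The two unlocks below are intentional: the first drops the
		 * reference kept since snd_seq_client_ioctl_lock(), and the
		 * second drops the reference taken by use_ptr() above.
		 */
		snd_seq_client_unlock(client);
		snd_seq_client_unlock(client);
	}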


thanks,

Takashi


