From: Ben Hutchings <ben@decadent.org.uk>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: akpm@linux-foundation.org, Takashi Iwai <tiwai@suse.de>
Date: Thu, 07 Jun 2018 15:05:21 +0100
Subject: [PATCH 3.16 343/410] ALSA: seq: Clear client entry before deleting else at closing

3.16.57-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Takashi Iwai <tiwai@suse.de>

commit a2ff19f7b70118ced291a28d5313469914de451b upstream.

When releasing a client, we need to clear the clienttab[] entry first,
then call snd_seq_queue_client_leave().  Otherwise, an in-flight cell
in the queue might be picked up by the timer interrupt via
snd_seq_check_queue() before snd_seq_queue_client_leave() is called,
and be delivered to another queue while the client is clearing its
queues.  This may eventually leave an uncleared cell in a queue, and
the later snd_seq_pool_delete() may then have to wait for a long time
until the event gets really processed.

By moving the clienttab[] clearance to the beginning of the release,
any event delivery of a cell belonging to this client will fail at a
later point, since snd_seq_client_ptr() returns NULL.  Thus a cell
that was picked up by the timer interrupt will be returned immediately
without further delivery, and the long stall in snd_seq_pool_delete()
is avoided, too.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 sound/core/seq/seq_clientmgr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/sound/core/seq/seq_clientmgr.c
+++ b/sound/core/seq/seq_clientmgr.c
@@ -270,12 +270,12 @@ static int seq_free_client1(struct snd_s
 	if (!client)
 		return 0;
-	snd_seq_delete_all_ports(client);
-	snd_seq_queue_client_leave(client->number);
 	spin_lock_irqsave(&clients_lock, flags);
 	clienttablock[client->number] = 1;
 	clienttab[client->number] = NULL;
 	spin_unlock_irqrestore(&clients_lock, flags);
+	snd_seq_delete_all_ports(client);
+	snd_seq_queue_client_leave(client->number);
 	snd_use_lock_sync(&client->use_lock);
 	snd_seq_queue_client_termination(client->number);
 	if (client->pool)
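
The reordering above is the whole fix: unpublish the client from the
global lookup table before tearing its queues down.  For readers less
familiar with the pattern, here is a minimal userspace sketch of the
same idea; the names (toy_client, client_table, table_lock,
toy_client_ptr) are hypothetical illustrations, not the real ALSA
sequencer internals:

/*
 * Sketch of the "unpublish before teardown" ordering, assuming a
 * global lookup table guarded by a mutex.  Concurrent work (in the
 * kernel, the timer interrupt path) re-resolves objects through the
 * table, so NULLing the entry first makes late lookups fail fast.
 */
#include <pthread.h>
#include <stdlib.h>

#define MAX_CLIENTS 192

struct toy_client {
	int number;
	/* ... queues, pools, etc. ... */
};

static struct toy_client *client_table[MAX_CLIENTS];
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

/* Concurrent paths look clients up like this. */
struct toy_client *toy_client_ptr(int number)
{
	struct toy_client *c;

	pthread_mutex_lock(&table_lock);
	c = client_table[number];	/* NULL once teardown has begun */
	pthread_mutex_unlock(&table_lock);
	return c;
}

void toy_client_free(struct toy_client *client)
{
	/* Step 1: unpublish first, mirroring the patch's reordering. */
	pthread_mutex_lock(&table_lock);
	client_table[client->number] = NULL;
	pthread_mutex_unlock(&table_lock);

	/*
	 * Step 2: now tear down.  Any in-flight work that re-resolves
	 * the client via toy_client_ptr() sees NULL and bails out
	 * instead of delivering into queues that are being emptied.
	 */
	/* ... delete ports, leave queues, free pools ... */
	free(client);
}

With this ordering, a late timer-style callback drops its work instead
of re-queueing a cell into a half-torn-down client, which is exactly
why the long wait in snd_seq_pool_delete() disappears.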