From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755064AbcGYVfV (ORCPT );
	Mon, 25 Jul 2016 17:35:21 -0400
Received: from mail.linuxfoundation.org ([140.211.169.12]:47111 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1755030AbcGYVbt (ORCPT );
	Mon, 25 Jul 2016 17:31:49 -0400
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Junichi Nomura,
	Corey Minyard, Ye Feng
Subject: [PATCH 4.6 095/203] ipmi: Remove smi_msg from waiting_rcv_msgs list before handle_one_recv_msg()
Date: Mon, 25 Jul 2016 13:55:10 -0700
Message-Id: <20160725203433.225639364@linuxfoundation.org>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20160725203429.221747288@linuxfoundation.org>
References: <20160725203429.221747288@linuxfoundation.org>
User-Agent: quilt/0.64
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

4.6-stable review patch. If anyone has any objections, please let me know.

------------------

From: Junichi Nomura

commit ae4ea9a2460c7fee2ae8feeb4dfe96f5f6c3e562 upstream.

Commit 7ea0ed2b5be8 ("ipmi: Make the message handler easier to use for
SMI interfaces") changed handle_new_recv_msgs() to call
handle_one_recv_msg() for an smi_msg while the smi_msg is still
connected to the waiting_rcv_msgs list.

That could lead to the following list corruption problems:

1) A low-level function treats the smi_msg as not connected to a list.

   handle_one_recv_msg() could end up calling smi_send(), which assumes
   the msg is not connected to a list. For example, the following
   sequence could corrupt the list by doing list_add_tail() for an
   entry that is still connected to another list:

     handle_new_recv_msgs()
       msg = list_entry(waiting_rcv_msgs)
       handle_one_recv_msg(msg)
         handle_ipmb_get_msg_cmd(msg)
           smi_send(msg)
             spin_lock(xmit_msgs_lock)
             list_add_tail(msg)
             spin_unlock(xmit_msgs_lock)

2) A race between multiple handle_new_recv_msgs() instances.

   handle_new_recv_msgs() releases waiting_rcv_msgs_lock before calling
   handle_one_recv_msg(), then retakes the lock and does list_del() on
   the entry. If another caller enters handle_new_recv_msgs() during
   the window shown below, list_del() will be done twice for the same
   smi_msg:

     handle_new_recv_msgs()
       spin_lock(waiting_rcv_msgs_lock)
       msg = list_entry(waiting_rcv_msgs)
       spin_unlock(waiting_rcv_msgs_lock)
            |
            |  <-- race window
            |
       handle_one_recv_msg(msg)
       spin_lock(waiting_rcv_msgs_lock)
       list_del(msg)
       spin_unlock(waiting_rcv_msgs_lock)

Fixes: 7ea0ed2b5be8 ("ipmi: Make the message handler easier to use for SMI interfaces")
Signed-off-by: Jun'ichi Nomura
[Added a comment to describe why this works.]
Signed-off-by: Corey Minyard
Tested-by: Ye Feng
Signed-off-by: Greg Kroah-Hartman
---
 drivers/char/ipmi/ipmi_msghandler.c |    8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

--- a/drivers/char/ipmi/ipmi_msghandler.c
+++ b/drivers/char/ipmi/ipmi_msghandler.c
@@ -3820,6 +3820,7 @@ static void handle_new_recv_msgs(ipmi_sm
 	while (!list_empty(&intf->waiting_rcv_msgs)) {
 		smi_msg = list_entry(intf->waiting_rcv_msgs.next,
 				     struct ipmi_smi_msg, link);
+		list_del(&smi_msg->link);
 		if (!run_to_completion)
 			spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock,
 					       flags);
@@ -3829,11 +3830,14 @@ static void handle_new_recv_msgs(ipmi_sm
 		if (rv > 0) {
 			/*
 			 * To preserve message order, quit if we
-			 * can't handle a message.
+			 * can't handle a message.  Add the message
+			 * back at the head, this is safe because this
+			 * tasklet is the only thing that pulls the
+			 * messages.
 			 */
+			list_add(&smi_msg->link, &intf->waiting_rcv_msgs);
 			break;
 		} else {
-			list_del(&smi_msg->link);
 			if (rv == 0)
 				/* Message handled */
 				ipmi_free_smi_msg(smi_msg);
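
The pattern the fix applies is more general than IPMI: pop the entry
from the shared list while still holding the lock, process it unlocked,
and push it back at the head on failure so message order is preserved.
Below is a minimal userspace sketch of that pattern, with a pthread
mutex standing in for the kernel spinlock; the names (struct msg,
drain_queue(), handle_one()) are invented for illustration and do not
appear in the kernel code.

  /* Build with: cc -pthread sketch.c */
  #include <pthread.h>
  #include <stdio.h>
  #include <stdlib.h>

  struct msg {
          struct msg *next;
          int payload;
  };

  static struct msg *queue_head;
  static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

  /* Stand-in for handle_one_recv_msg(): 0 = handled, >0 = retry later. */
  static int handle_one(struct msg *m)
  {
          printf("handled %d\n", m->payload);
          return 0;
  }

  static void drain_queue(void)
  {
          pthread_mutex_lock(&queue_lock);
          while (queue_head) {
                  /* Unlink BEFORE dropping the lock, as the fix does, so
                   * no other path ever sees a linked entry being
                   * processed. */
                  struct msg *m = queue_head;
                  queue_head = m->next;

                  pthread_mutex_unlock(&queue_lock);
                  int rv = handle_one(m); /* may requeue m elsewhere */
                  pthread_mutex_lock(&queue_lock);

                  if (rv > 0) {
                          /* Could not handle it: put it back at the head
                           * to preserve order, then stop.  Safe only
                           * because this function is the sole consumer
                           * of the queue. */
                          m->next = queue_head;
                          queue_head = m;
                          break;
                  }
                  free(m); /* message handled, release it */
          }
          pthread_mutex_unlock(&queue_lock);
  }

  int main(void)
  {
          for (int i = 3; i >= 1; i--) {
                  struct msg *m = malloc(sizeof(*m));
                  m->payload = i;
                  m->next = queue_head;
                  queue_head = m;
          }
          drain_queue();
          return 0;
  }

The re-add at the head is only safe with a single consumer pulling
messages, which is exactly the invariant the comment added by the patch
calls out.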