From mboxrd@z Thu Jan  1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Narayan Ayalasomayajula,
    Anil Mishra, Keith Busch, Sagi Grimberg, Christoph Hellwig
Subject: [PATCH 5.10 054/104] nvme-tcp: fix possible use-after-completion
Date: Mon, 24 May 2021 17:25:49 +0200
Message-Id: <20210524152334.635527739@linuxfoundation.org>
In-Reply-To: <20210524152332.844251980@linuxfoundation.org>
References: <20210524152332.844251980@linuxfoundation.org>

From: Sagi Grimberg

commit 825619b09ad351894d2c6fb6705f5b3711d145c7 upstream.

Commit db5ad6b7f8cd ("nvme-tcp: try to send request in queue_rq
context") added a second context that may perform a network send.
This means that RX and TX are no longer serialized in
nvme_tcp_io_work and can run concurrently.

While there is correct mutual exclusion in the TX path (where
send_mutex protects the queue socket send activity), RX activity,
and more specifically request completion, may run concurrently.

This means we must guarantee that any mutation of the request state
related to its lifetime (such as bytes sent) is not performed after
a completion may have arrived back and been processed.

The race may trigger when a request completion arrives, is processed
_and_ the request is reused as a fresh new request, exactly in the
(relatively short) window between the last data payload being sent
and the request iov_iter being advanced.

Consider the following race (a condensed sketch of the racy send
loop follows the list):
1. A 16K write request is queued.
2. The nvme command and the data are sent to the controller
   (in-capsule or solicited by r2t).
3. After the last payload is sent but before req.iter is advanced,
   the controller sends back a completion.
4. The completion is processed, the request is completed and reused
   to transfer a new request (write or read).
5. The new request is queued, and the driver resets the request
   parameters (nvme_tcp_setup_cmd_pdu).
6. The context in (2) now resumes execution and advances req.iter
   ==> use-after-completion, as this is already a new request.
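To make the window concrete, here is a condensed, simplified sketch
of the pre-fix send loop in nvme_tcp_try_send_data() (digest updates,
send flags, and last-fragment handling are elided; the helper names
are taken from drivers/nvme/host/tcp.c, but the body is illustrative,
not the verbatim 5.10 code):

static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
{
	struct nvme_tcp_queue *queue = req->queue;

	while (true) {
		struct page *page = nvme_tcp_req_cur_page(req);
		size_t offset = nvme_tcp_req_cur_offset(req);
		size_t len = nvme_tcp_req_cur_length(req);
		int ret;

		ret = kernel_sendpage(queue->sock, page, offset, len,
				MSG_DONTWAIT);
		if (ret <= 0)
			return ret;

		/*
		 * Window (3)-(6) above: if that send pushed the last
		 * payload byte to the controller, the RX context may
		 * already have processed a completion and reused the
		 * tag for a new request, so dereferencing req here is
		 * a use-after-completion.
		 */
		nvme_tcp_advance_req(req, ret);

		/* ... digest update / "return 1" on last fragment ... */
	}
}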
Fix this by making sure the request is not advanced after the last
data payload send, knowing that a completion may have arrived
already.

An alternative solution would have been to delay the request
completion or state change by waiting for reference counting on the
TX path, but besides adding atomic operations to the hot path, it
may present challenges in multi-stage R2T scenarios where an r2t
handler needs to be deferred to an async execution (see the
hypothetical refcount sketch after the diff).

Reported-by: Narayan Ayalasomayajula
Tested-by: Anil Mishra
Reviewed-by: Keith Busch
Cc: stable@vger.kernel.org # v5.8+
Signed-off-by: Sagi Grimberg
Signed-off-by: Christoph Hellwig
Signed-off-by: Greg Kroah-Hartman
---
 drivers/nvme/host/tcp.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -940,7 +940,6 @@ static int nvme_tcp_try_send_data(struct
 		if (ret <= 0)
 			return ret;
 
-		nvme_tcp_advance_req(req, ret);
 		if (queue->data_digest)
 			nvme_tcp_ddgst_update(queue->snd_hash, page,
 					offset, ret);
@@ -957,6 +956,7 @@ static int nvme_tcp_try_send_data(struct
 			}
 			return 1;
 		}
+		nvme_tcp_advance_req(req, ret);
 	}
 	return -EAGAIN;
 }
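For contrast, the alternative the changelog rejects would pin the
request with a reference count across the TX path and defer the
completion until the sender drops its reference.  A minimal
hypothetical sketch using the kernel's refcount_t API follows; every
name below except refcount_t and its helpers is invented for
illustration, and none of this is code from the driver:

#include <linux/refcount.h>

/* Hypothetical per-request pin, embedded in the request. */
struct nvme_tcp_req_pin {
	refcount_t ref;
};

/* TX path: pin before the send so the request cannot be reused. */
static inline void nvme_tcp_req_pin_get(struct nvme_tcp_req_pin *p)
{
	refcount_inc(&p->ref);		/* atomic op in the hot path */
}

/*
 * RX path: the completion may only finish the request once the TX
 * side has dropped its pin, i.e. the completion is delayed -- exactly
 * the hot-path and multi-stage R2T cost the changelog wants to avoid.
 */
static inline bool nvme_tcp_req_pin_put(struct nvme_tcp_req_pin *p)
{
	return refcount_dec_and_test(&p->ref);
}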