From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Samuel Jones, Keith Busch,
 Sagi Grimberg, Christoph Hellwig
Subject: [PATCH 5.10 034/122] nvme-tcp: fix io_work priority inversion
Date: Mon, 20 Sep 2021 18:43:26 +0200
Message-Id: <20210920163916.915224696@linuxfoundation.org>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210920163915.757887582@linuxfoundation.org>
References: <20210920163915.757887582@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Keith Busch

commit 70f437fb4395ad4d1d16fab9a1ad9fbc9fc0579b upstream.

Dispatching requests inline with the .queue_rq() call may block while
holding the send_mutex. If the tcp io_work also happens to schedule, it
may see the req_list is non-empty, leaving "pending" true and remaining
in TASK_RUNNING. Since io_work is of higher scheduling priority, the
.queue_rq task may not get a chance to run, blocking forward progress
and leading to io timeouts.

Instead of checking for pending requests within io_work, let the
queueing restart io_work outside the send_mutex lock if there is more
work to be done.
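For illustration only, and not part of the patch itself: a minimal
userspace C analogue of the pattern the fix adopts. The nvme-tcp
details are stripped away; names such as work_queue, queue_more() and
kick_worker() are hypothetical stand-ins for queue->send_list/req_list,
nvme_tcp_queue_more() and queue_work_on(). The point is that the worker
is kicked only after the send lock has been dropped, and only if work
is actually left over.

/*
 * Minimal userspace sketch (NOT the kernel code): the producer no longer
 * wakes the worker while still holding send_lock; it re-checks for
 * pending work after unlocking and only then kicks the worker.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct work_queue {
	pthread_mutex_t send_lock;
	int pending;			/* stand-in for req_list/send_list */
	bool more_requests;
};

/* analogous to nvme_tcp_queue_more(): is anything left to push? */
static bool queue_more(struct work_queue *q)
{
	return q->pending > 0 || q->more_requests;
}

static void kick_worker(struct work_queue *q)
{
	/* stand-in for queue_work_on(queue->io_cpu, ...) */
	printf("worker kicked, %d request(s) pending\n", q->pending);
}

static void queue_request(struct work_queue *q, bool sync, bool last)
{
	q->pending++;

	if (sync && pthread_mutex_trylock(&q->send_lock) == 0) {
		/* drain inline in the caller's context (cf. nvme_tcp_send_all()) */
		q->pending = 0;
		pthread_mutex_unlock(&q->send_lock);
	}

	/*
	 * Decide about waking the worker only after the lock is dropped,
	 * and only if anything is actually left to send.
	 */
	if (last && queue_more(q))
		kick_worker(q);
}

int main(void)
{
	struct work_queue q = { PTHREAD_MUTEX_INITIALIZER, 0, false };

	queue_request(&q, false, false);	/* batched, no kick yet */
	queue_request(&q, false, true);		/* last of batch: kick worker */
	return 0;
}

Built with "cc -pthread", the batched first call does not wake the
worker; the final call does, mirroring the "last &&
nvme_tcp_queue_more(queue)" condition in the hunk below.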
Fixes: a0fdd1418007f ("nvme-tcp: rerun io_work if req_list is not empty")
Reported-by: Samuel Jones
Signed-off-by: Keith Busch
Reviewed-by: Sagi Grimberg
Signed-off-by: Christoph Hellwig
Signed-off-by: Greg Kroah-Hartman
---
 drivers/nvme/host/tcp.c |   20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -273,6 +273,12 @@ static inline void nvme_tcp_send_all(str
 	} while (ret > 0);
 }
 
+static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
+{
+	return !list_empty(&queue->send_list) ||
+		!llist_empty(&queue->req_list) || queue->more_requests;
+}
+
 static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
 		bool sync, bool last)
 {
@@ -293,9 +299,10 @@ static inline void nvme_tcp_queue_reques
 		nvme_tcp_send_all(queue);
 		queue->more_requests = false;
 		mutex_unlock(&queue->send_mutex);
-	} else if (last) {
-		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
 	}
+
+	if (last && nvme_tcp_queue_more(queue))
+		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
 }
 
 static void nvme_tcp_process_req_list(struct nvme_tcp_queue *queue)
@@ -890,12 +897,6 @@ done:
 	read_unlock_bh(&sk->sk_callback_lock);
 }
 
-static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
-{
-	return !list_empty(&queue->send_list) ||
-		!llist_empty(&queue->req_list) || queue->more_requests;
-}
-
 static inline void nvme_tcp_done_send_req(struct nvme_tcp_queue *queue)
 {
 	queue->request = NULL;
@@ -1132,8 +1133,7 @@ static void nvme_tcp_io_work(struct work
 				pending = true;
 			else if (unlikely(result < 0))
 				break;
-		} else
-			pending = !llist_empty(&queue->req_list);
+		}
 
 		result = nvme_tcp_try_recv(queue);
 		if (result > 0)