From: Christoph Hellwig
To: Keith Busch, Sagi Grimberg, Chaitanya Kulkarni
Cc: Kanchan Joshi, linux-nvme@lists.infradead.org
Subject: [PATCH 4/6] nvmet: don't defer passthrough commands with trivial effects to the workqueue
Date: Fri, 23 Dec 2022 08:18:12 +0100
Message-Id: <20221223071814.43564-5-hch@lst.de>
In-Reply-To: <20221223071814.43564-1-hch@lst.de>
References: <20221223071814.43564-1-hch@lst.de>

Mask out the "Command Supported" and "Logical Block Content Change" bits
and only defer execution of commands with non-trivial effects to the
workqueue for synchronous execution.  This allows admin commands to be
executed asynchronously on controllers that provide a Commands Supported
and Effects log page, and will keep allowing Write commands to be
executed asynchronously once command effects for I/O commands are taken
into account.
Fixes: c1fef73f793b ("nvmet: add passthru code to process commands")
Reviewed-by: Kanchan Joshi
Signed-off-by: Christoph Hellwig
---
 drivers/nvme/target/passthru.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 79af5140af8bfe..adc0958755d66f 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -334,14 +334,13 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
 	}
 
 	/*
-	 * If there are effects for the command we are about to execute, or
-	 * an end_req function we need to use nvme_execute_passthru_rq()
-	 * synchronously in a work item seeing the end_req function and
-	 * nvme_passthru_end() can't be called in the request done callback
-	 * which is typically in interrupt context.
+	 * If a command needs post-execution fixups, or there are any
+	 * non-trivial effects, make sure to execute the command synchronously
+	 * in a workqueue so that nvme_passthru_end gets called.
 	 */
 	effects = nvme_command_effects(ctrl, ns, req->cmd->common.opcode);
-	if (req->p.use_workqueue || effects) {
+	if (req->p.use_workqueue ||
+	    (effects & ~(NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC))) {
 		INIT_WORK(&req->p.work, nvmet_passthru_execute_cmd_work);
 		req->p.rq = rq;
 		queue_work(nvmet_wq, &req->p.work);
-- 
2.35.1
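
For context (not part of the patch), a minimal standalone C sketch of the
new effects-mask check.  The bit values mirror NVME_CMD_EFFECTS_CSUPP and
NVME_CMD_EFFECTS_LBCC from include/linux/nvme.h; the helper name and the
example effect sets are made up purely for illustration:

/*
 * Standalone userspace sketch, assuming only the CSUPP/LBCC/NCC bit
 * positions from the NVMe Commands Supported and Effects log page.
 */
#include <stdbool.h>
#include <stdio.h>

#define EFFECTS_CSUPP	(1u << 0)	/* Command Supported */
#define EFFECTS_LBCC	(1u << 1)	/* Logical Block Content Change */
#define EFFECTS_NCC	(1u << 2)	/* Namespace Capability Change */

/* Effects considered trivial: they never force synchronous execution. */
#define EFFECTS_TRIVIAL	(EFFECTS_CSUPP | EFFECTS_LBCC)

static bool defer_to_workqueue(unsigned int effects, bool use_workqueue)
{
	/* Only effect bits beyond CSUPP and LBCC defer the command. */
	return use_workqueue || (effects & ~EFFECTS_TRIVIAL);
}

int main(void)
{
	/* A Write reporting CSUPP | LBCC stays on the asynchronous path. */
	printf("write  -> %d\n",
	       defer_to_workqueue(EFFECTS_CSUPP | EFFECTS_LBCC, false));
	/* A command that also changes namespace capabilities is deferred. */
	printf("format -> %d\n",
	       defer_to_workqueue(EFFECTS_CSUPP | EFFECTS_LBCC | EFFECTS_NCC,
				  false));
	return 0;
}

The point of the mask is visible in the two calls: a plain Write (CSUPP |
LBCC only) no longer ends up on the workqueue, while commands with any
further effect bits still run synchronously so nvme_passthru_end gets
called.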