From: Klaus Jensen
To: qemu-devel@nongnu.org
Cc: Fam Zheng, Kevin Wolf, qemu-block@nongnu.org, Klaus Jensen,
 Gollu Appalanaidu, Max Reitz, Klaus Jensen, Stefan Hajnoczi,
 Keith Busch
Subject: [PATCH RFC v3 06/12] hw/block/nvme: refactor nvme_dma
Date: Mon, 15 Feb 2021 00:02:34 +0100
Message-Id: <20210214230240.301275-7-its@irrelevant.dk>
In-Reply-To: <20210214230240.301275-1-its@irrelevant.dk>
References: <20210214230240.301275-1-its@irrelevant.dk>
X-Mailer: git-send-email 2.30.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Klaus Jensen

The nvme_dma function doesn't just do DMA (QEMUSGList-based) memory
transfers; it also handles QEMUIOVector copies.

Introduce the NvmeTxDirection enum and rename to nvme_tx. Remove the
mapping of PRPs/SGLs from nvme_tx and instead assert that they have
been mapped previously. This allows more fine-grained use in
subsequent patches.

Add new (better named) helpers, nvme_{c2h,h2c}, that do both PRP/SGL
mapping and transfer.
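As an illustrative sketch for reviewers (not part of the diff below;
nvme_example_log is a hypothetical handler, using only the helpers
this patch introduces), a typical controller-to-host call site now
reduces to a single call:

    static uint16_t nvme_example_log(NvmeCtrl *n, NvmeRequest *req)
    {
        /* controller-to-host: nvme_c2h() maps the command's PRPs/SGLs
         * via nvme_map_dptr() and then transfers with
         * NVME_TX_DIRECTION_FROM_DEVICE */
        uint8_t log[64] = { 0 };

        return nvme_c2h(n, log, sizeof(log), req);
    }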
Signed-off-by: Klaus Jensen
---
 hw/block/nvme.c | 139 ++++++++++++++++++++++++++----------------------
 1 file changed, 76 insertions(+), 63 deletions(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index b0f163374cf7..261b1484d4f0 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -858,45 +858,71 @@ static uint16_t nvme_map_dptr(NvmeCtrl *n, NvmeSg *sg, size_t len,
     }
 }
 
-static uint16_t nvme_dma(NvmeCtrl *n, uint8_t *ptr, uint32_t len,
-                         DMADirection dir, NvmeRequest *req)
+typedef enum NvmeTxDirection {
+    NVME_TX_DIRECTION_TO_DEVICE   = 0,
+    NVME_TX_DIRECTION_FROM_DEVICE = 1,
+} NvmeTxDirection;
+
+static uint16_t nvme_tx(NvmeCtrl *n, NvmeSg *sg, uint8_t *ptr, uint32_t len,
+                        NvmeTxDirection dir)
 {
-    uint16_t status = NVME_SUCCESS;
+    assert(sg->flags & NVME_SG_ALLOC);
+
+    if (sg->flags & NVME_SG_DMA) {
+        uint64_t residual;
+
+        if (dir == NVME_TX_DIRECTION_TO_DEVICE) {
+            residual = dma_buf_write(ptr, len, &sg->qsg);
+        } else {
+            residual = dma_buf_read(ptr, len, &sg->qsg);
+        }
+
+        if (unlikely(residual)) {
+            trace_pci_nvme_err_invalid_dma();
+            return NVME_INVALID_FIELD | NVME_DNR;
+        }
+    } else {
+        size_t bytes;
+
+        if (dir == NVME_TX_DIRECTION_TO_DEVICE) {
+            bytes = qemu_iovec_to_buf(&sg->iov, 0, ptr, len);
+        } else {
+            bytes = qemu_iovec_from_buf(&sg->iov, 0, ptr, len);
+        }
+
+        if (unlikely(bytes != len)) {
+            trace_pci_nvme_err_invalid_dma();
+            return NVME_INVALID_FIELD | NVME_DNR;
+        }
+    }
+
+    return NVME_SUCCESS;
+}
+
+static inline uint16_t nvme_c2h(NvmeCtrl *n, uint8_t *ptr, uint32_t len,
+                                NvmeRequest *req)
+{
+    uint16_t status;
 
     status = nvme_map_dptr(n, &req->sg, len, &req->cmd);
     if (status) {
         return status;
     }
 
-    if (req->sg.flags & NVME_SG_DMA) {
-        uint64_t residual;
+    return nvme_tx(n, &req->sg, ptr, len, NVME_TX_DIRECTION_FROM_DEVICE);
+}
 
-        if (dir == DMA_DIRECTION_TO_DEVICE) {
-            residual = dma_buf_write(ptr, len, &req->sg.qsg);
-        } else {
-            residual = dma_buf_read(ptr, len, &req->sg.qsg);
-        }
+static inline uint16_t nvme_h2c(NvmeCtrl *n, uint8_t *ptr, uint32_t len,
+                                NvmeRequest *req)
+{
+    uint16_t status;
 
-        if (unlikely(residual)) {
-            trace_pci_nvme_err_invalid_dma();
-            status = NVME_INVALID_FIELD | NVME_DNR;
-        }
-    } else {
-        size_t bytes;
-
-        if (dir == DMA_DIRECTION_TO_DEVICE) {
-            bytes = qemu_iovec_to_buf(&req->sg.iov, 0, ptr, len);
-        } else {
-            bytes = qemu_iovec_from_buf(&req->sg.iov, 0, ptr, len);
-        }
-
-        if (unlikely(bytes != len)) {
-            trace_pci_nvme_err_invalid_dma();
-            status = NVME_INVALID_FIELD | NVME_DNR;
-        }
+    status = nvme_map_dptr(n, &req->sg, len, &req->cmd);
+    if (status) {
+        return status;
     }
 
-    return status;
+    return nvme_tx(n, &req->sg, ptr, len, NVME_TX_DIRECTION_TO_DEVICE);
 }
 
 static inline void nvme_blk_read(BlockBackend *blk, int64_t offset,
@@ -1735,8 +1761,7 @@ static void nvme_compare_cb(void *opaque, int ret)
 
     buf = g_malloc(ctx->iov.size);
 
-    status = nvme_dma(nvme_ctrl(req), buf, ctx->iov.size,
-                      DMA_DIRECTION_TO_DEVICE, req);
+    status = nvme_h2c(nvme_ctrl(req), buf, ctx->iov.size, req);
     if (status) {
         req->status = status;
         goto out;
@@ -1772,8 +1797,7 @@ static uint16_t nvme_dsm(NvmeCtrl *n, NvmeRequest *req)
         NvmeDsmRange range[nr];
         uintptr_t *discards = (uintptr_t *)&req->opaque;
 
-        status = nvme_dma(n, (uint8_t *)range, sizeof(range),
-                          DMA_DIRECTION_TO_DEVICE, req);
+        status = nvme_h2c(n, (uint8_t *)range, sizeof(range), req);
         if (status) {
             return status;
         }
@@ -1855,8 +1879,8 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req)
 
     range = g_new(NvmeCopySourceRange, nr);
 
-    status = nvme_dma(n, (uint8_t *)range, nr * sizeof(NvmeCopySourceRange),
-                      DMA_DIRECTION_TO_DEVICE, req);
+    status = nvme_h2c(n, (uint8_t *)range, nr * sizeof(NvmeCopySourceRange),
+                      req);
     if (status) {
         return status;
     }
@@ -2511,8 +2535,7 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req)
             return NVME_INVALID_FIELD | NVME_DNR;
         }
         zd_ext = nvme_get_zd_extension(ns, zone_idx);
-        status = nvme_dma(n, zd_ext, ns->params.zd_extension_size,
-                          DMA_DIRECTION_TO_DEVICE, req);
+        status = nvme_h2c(n, zd_ext, ns->params.zd_extension_size, req);
        if (status) {
             trace_pci_nvme_err_zd_extension_map_error(zone_idx);
             return status;
@@ -2667,8 +2690,7 @@ static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, NvmeRequest *req)
         }
     }
 
-    status = nvme_dma(n, (uint8_t *)buf, data_size,
-                      DMA_DIRECTION_FROM_DEVICE, req);
+    status = nvme_c2h(n, (uint8_t *)buf, data_size, req);
 
     g_free(buf);
 
@@ -2934,8 +2956,7 @@ static uint16_t nvme_smart_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len,
         nvme_clear_events(n, NVME_AER_TYPE_SMART);
     }
 
-    return nvme_dma(n, (uint8_t *) &smart + off, trans_len,
-                    DMA_DIRECTION_FROM_DEVICE, req);
+    return nvme_c2h(n, (uint8_t *) &smart + off, trans_len, req);
 }
 
 static uint16_t nvme_fw_log_info(NvmeCtrl *n, uint32_t buf_len, uint64_t off,
@@ -2953,8 +2974,7 @@ static uint16_t nvme_fw_log_info(NvmeCtrl *n, uint32_t buf_len, uint64_t off,
     strpadcpy((char *)&fw_log.frs1, sizeof(fw_log.frs1), "1.0", ' ');
     trans_len = MIN(sizeof(fw_log) - off, buf_len);
 
-    return nvme_dma(n, (uint8_t *) &fw_log + off, trans_len,
-                    DMA_DIRECTION_FROM_DEVICE, req);
+    return nvme_c2h(n, (uint8_t *) &fw_log + off, trans_len, req);
 }
 
 static uint16_t nvme_error_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len,
@@ -2974,8 +2994,7 @@ static uint16_t nvme_error_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len,
     memset(&errlog, 0x0, sizeof(errlog));
     trans_len = MIN(sizeof(errlog) - off, buf_len);
 
-    return nvme_dma(n, (uint8_t *)&errlog, trans_len,
-                    DMA_DIRECTION_FROM_DEVICE, req);
+    return nvme_c2h(n, (uint8_t *)&errlog, trans_len, req);
 }
 
 static uint16_t nvme_cmd_effects(NvmeCtrl *n, uint8_t csi, uint32_t buf_len,
@@ -3015,8 +3034,7 @@ static uint16_t nvme_cmd_effects(NvmeCtrl *n, uint8_t csi, uint32_t buf_len,
 
     trans_len = MIN(sizeof(log) - off, buf_len);
 
-    return nvme_dma(n, ((uint8_t *)&log) + off, trans_len,
-                    DMA_DIRECTION_FROM_DEVICE, req);
+    return nvme_c2h(n, ((uint8_t *)&log) + off, trans_len, req);
 }
 
 static uint16_t nvme_get_log(NvmeCtrl *n, NvmeRequest *req)
@@ -3185,7 +3203,7 @@ static uint16_t nvme_rpt_empty_id_struct(NvmeCtrl *n, NvmeRequest *req)
 {
     uint8_t id[NVME_IDENTIFY_DATA_SIZE] = {};
 
-    return nvme_dma(n, id, sizeof(id), DMA_DIRECTION_FROM_DEVICE, req);
+    return nvme_c2h(n, id, sizeof(id), req);
 }
 
 static inline bool nvme_csi_has_nvm_support(NvmeNamespace *ns)
@@ -3202,8 +3220,7 @@ static uint16_t nvme_identify_ctrl(NvmeCtrl *n, NvmeRequest *req)
 {
     trace_pci_nvme_identify_ctrl();
 
-    return nvme_dma(n, (uint8_t *)&n->id_ctrl, sizeof(n->id_ctrl),
-                    DMA_DIRECTION_FROM_DEVICE, req);
+    return nvme_c2h(n, (uint8_t *)&n->id_ctrl, sizeof(n->id_ctrl), req);
 }
 
 static uint16_t nvme_identify_ctrl_csi(NvmeCtrl *n, NvmeRequest *req)
@@ -3219,8 +3236,7 @@ static uint16_t nvme_identify_ctrl_csi(NvmeCtrl *n, NvmeRequest *req)
         if (n->params.zasl_bs) {
             id.zasl = n->zasl;
         }
-        return nvme_dma(n, (uint8_t *)&id, sizeof(id),
-                        DMA_DIRECTION_FROM_DEVICE, req);
+        return nvme_c2h(n, (uint8_t *)&id, sizeof(id), req);
     }
 
     return NVME_INVALID_FIELD | NVME_DNR;
@@ -3244,8 +3260,7 @@ static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req)
     }
 
     if (c->csi == NVME_CSI_NVM && nvme_csi_has_nvm_support(ns)) {
-        return nvme_dma(n, (uint8_t *)&ns->id_ns, sizeof(NvmeIdNs),
-                        DMA_DIRECTION_FROM_DEVICE, req);
+        return nvme_c2h(n, (uint8_t *)&ns->id_ns, sizeof(NvmeIdNs), req);
     }
 
     return NVME_INVALID_CMD_SET | NVME_DNR;
@@ -3271,8 +3286,8 @@ static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeRequest *req)
     if (c->csi == NVME_CSI_NVM && nvme_csi_has_nvm_support(ns)) {
         return nvme_rpt_empty_id_struct(n, req);
     } else if (c->csi == NVME_CSI_ZONED && ns->csi == NVME_CSI_ZONED) {
-        return nvme_dma(n, (uint8_t *)ns->id_ns_zoned, sizeof(NvmeIdNsZoned),
-                        DMA_DIRECTION_FROM_DEVICE, req);
+        return nvme_c2h(n, (uint8_t *)ns->id_ns_zoned, sizeof(NvmeIdNsZoned),
+                        req);
     }
 
     return NVME_INVALID_FIELD | NVME_DNR;
@@ -3314,7 +3329,7 @@ static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeRequest *req)
         }
     }
 
-    return nvme_dma(n, list, data_len, DMA_DIRECTION_FROM_DEVICE, req);
+    return nvme_c2h(n, list, data_len, req);
 }
 
 static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n, NvmeRequest *req)
@@ -3354,7 +3369,7 @@ static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n, NvmeRequest *req)
         }
     }
 
-    return nvme_dma(n, list, data_len, DMA_DIRECTION_FROM_DEVICE, req);
+    return nvme_c2h(n, list, data_len, req);
 }
 
 static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeRequest *req)
@@ -3401,7 +3416,7 @@ static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeRequest *req)
     ns_descrs->csi.hdr.nidl = NVME_NIDL_CSI;
     ns_descrs->csi.v = ns->csi;
 
-    return nvme_dma(n, list, sizeof(list), DMA_DIRECTION_FROM_DEVICE, req);
+    return nvme_c2h(n, list, sizeof(list), req);
 }
 
 static uint16_t nvme_identify_cmd_set(NvmeCtrl *n, NvmeRequest *req)
@@ -3414,7 +3429,7 @@ static uint16_t nvme_identify_cmd_set(NvmeCtrl *n, NvmeRequest *req)
     NVME_SET_CSI(*list, NVME_CSI_NVM);
     NVME_SET_CSI(*list, NVME_CSI_ZONED);
 
-    return nvme_dma(n, list, data_len, DMA_DIRECTION_FROM_DEVICE, req);
+    return nvme_c2h(n, list, data_len, req);
 }
 
 static uint16_t nvme_identify(NvmeCtrl *n, NvmeRequest *req)
@@ -3503,8 +3518,7 @@ static uint16_t nvme_get_feature_timestamp(NvmeCtrl *n, NvmeRequest *req)
 {
     uint64_t timestamp = nvme_get_timestamp(n);
 
-    return nvme_dma(n, (uint8_t *)&timestamp, sizeof(timestamp),
-                    DMA_DIRECTION_FROM_DEVICE, req);
+    return nvme_c2h(n, (uint8_t *)&timestamp, sizeof(timestamp), req);
 }
 
 static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeRequest *req)
@@ -3665,8 +3679,7 @@ static uint16_t nvme_set_feature_timestamp(NvmeCtrl *n, NvmeRequest *req)
     uint16_t ret;
     uint64_t timestamp;
 
-    ret = nvme_dma(n, (uint8_t *)&timestamp, sizeof(timestamp),
-                   DMA_DIRECTION_TO_DEVICE, req);
+    ret = nvme_h2c(n, (uint8_t *)&timestamp, sizeof(timestamp), req);
     if (ret) {
         return ret;
     }
-- 
2.30.0