linux-nvme.lists.infradead.org archive mirror
From: Boris Pismenny <borisp@mellanox.com>
To: <dsahern@gmail.com>, <kuba@kernel.org>, <davem@davemloft.net>,
	<saeedm@nvidia.com>, <hch@lst.de>, <sagi@grimberg.me>,
	<axboe@fb.com>, <kbusch@kernel.org>, <viro@zeniv.linux.org.uk>,
	<edumazet@google.com>, <smalin@marvell.com>
Cc: Yoray Zack <yorayz@mellanox.com>,
	Boris Pismenny <borisp@mellanox.com>,
	yorayz@nvidia.com, boris.pismenny@gmail.com, benishay@nvidia.com,
	linux-nvme@lists.infradead.org, netdev@vger.kernel.org,
	Or Gerlitz <ogerlitz@mellanox.com>,
	ogerlitz@nvidia.com
Subject: [PATCH v4 net-next 18/21] net/mlx5e: NVMEoTCP ddp setup and resync
Date: Thu, 11 Feb 2021 23:10:41 +0200
Message-ID: <20210211211044.32701-19-borisp@mellanox.com>
In-Reply-To: <20210211211044.32701-1-borisp@mellanox.com>

From: Ben Ben-Ishay <benishay@nvidia.com>

NVMEoTCP offload uses buffer registration for every NVMe request to
perform direct data placement. The registration is done via KLM UMR
WQEs. The driver's resync handler advertises the software resync
response via a static params WQE. (A minimal sketch of the context
recovery and per-command bookkeeping used here follows the patch
below.)

Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Ben Ben-Ishay <benishay@nvidia.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Yoray Zack <yorayz@mellanox.com>
---
 .../mellanox/mlx5/core/en_accel/nvmeotcp.c    | 33 +++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.c
index f79e26419a89..6fa35f3a8e21 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.c
@@ -754,6 +754,30 @@ mlx5e_nvmeotcp_ddp_setup(struct net_device *netdev,
 			 struct sock *sk,
 			 struct tcp_ddp_io *ddp)
 {
+	struct mlx5e_priv *priv = netdev_priv(netdev);
+	struct scatterlist *sg = ddp->sg_table.sgl;
+	struct mlx5e_nvmeotcp_queue *queue;
+	struct mlx5_core_dev *mdev;
+	int i, size = 0, count = 0;
+
+	queue = container_of(tcp_ddp_get_ctx(sk), struct mlx5e_nvmeotcp_queue, tcp_ddp_ctx);
+
+	mdev = queue->priv->mdev;
+	count = dma_map_sg(mdev->device, ddp->sg_table.sgl, ddp->nents,
+			   DMA_FROM_DEVICE);
+
+	if (WARN_ON(count > mlx5e_get_max_sgl(mdev)))
+		return -ENOSPC;
+
+	for (i = 0; i < count; i++)
+		size += sg[i].length;
+
+	queue->ccid_table[ddp->command_id].size = size;
+	queue->ccid_table[ddp->command_id].ddp = ddp;
+	queue->ccid_table[ddp->command_id].sgl = sg;
+	queue->ccid_table[ddp->command_id].ccid_gen++;
+	queue->ccid_table[ddp->command_id].sgl_length = count;
+
 	return 0;
 }
 
@@ -791,11 +815,11 @@ mlx5e_nvmeotcp_ddp_teardown(struct net_device *netdev,
 			    struct tcp_ddp_io *ddp,
 			    void *ddp_ctx)
 {
-	struct mlx5e_nvmeotcp_queue *queue =
-		(struct mlx5e_nvmeotcp_queue *)tcp_ddp_get_ctx(sk);
+	struct mlx5e_nvmeotcp_queue *queue;
 	struct mlx5e_priv *priv = netdev_priv(netdev);
 	struct nvmeotcp_queue_entry *q_entry;
 
+	queue = container_of(tcp_ddp_get_ctx(sk), struct mlx5e_nvmeotcp_queue, tcp_ddp_ctx);
 	q_entry  = &queue->ccid_table[ddp->command_id];
 	WARN_ON(q_entry->sgl_length == 0);
 
@@ -811,6 +835,11 @@ static void
 mlx5e_nvmeotcp_dev_resync(struct net_device *netdev,
 			  struct sock *sk, u32 seq)
 {
+	struct mlx5e_nvmeotcp_queue *queue =
+		container_of(tcp_ddp_get_ctx(sk), struct mlx5e_nvmeotcp_queue, tcp_ddp_ctx);
+
+	queue->after_resync_cqe = 1;
+	mlx5e_nvmeotcp_rx_post_static_params_wqe(queue, seq);
 }
 
 static const struct tcp_ddp_dev_ops mlx5e_nvmeotcp_ops = {
-- 
2.24.1
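
For readers less familiar with the pattern this patch leans on, below is
a minimal, self-contained userspace C sketch (not driver code) of its
two core ideas: recovering the driver's queue structure from the context
embedded in the socket via container_of(), and recording a command's
total byte count and SGL entry count in a per-command-id table. The
struct and field names mirror the patch, but every type here is a
simplified placeholder; in particular, fake_sg stands in for struct
scatterlist and no DMA mapping is performed.

#include <stddef.h>
#include <stdio.h>

/*
 * Simplified stand-ins for the kernel types used in the patch; the
 * real definitions live in the mlx5 driver and the tcp_ddp layer.
 */
struct tcp_ddp_ctx { int unused; };

struct nvmeotcp_queue_entry {
	int size;	/* total bytes described by the command's SGL */
	int sgl_length;	/* number of mapped SGL entries */
};

struct mlx5e_nvmeotcp_queue {
	struct tcp_ddp_ctx tcp_ddp_ctx;		/* embedded context */
	struct nvmeotcp_queue_entry ccid_table[8];
};

/* Userspace equivalent of the kernel's container_of() helper. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Minimal scatterlist stand-in: only the length field matters here. */
struct fake_sg { int length; };

int main(void)
{
	struct mlx5e_nvmeotcp_queue q = {{ 0 }};
	/* In the patch, this pointer comes from tcp_ddp_get_ctx(sk). */
	struct tcp_ddp_ctx *ctx = &q.tcp_ddp_ctx;
	struct fake_sg sg[2] = { { 4096 }, { 512 } };
	int i, size = 0, count = 2;

	/* Recover the queue from the embedded context, as ddp_setup,
	 * ddp_teardown, and dev_resync all do in the patch. */
	struct mlx5e_nvmeotcp_queue *queue =
		container_of(ctx, struct mlx5e_nvmeotcp_queue, tcp_ddp_ctx);

	/* Sum the SGL entry lengths, mirroring the loop in ddp_setup. */
	for (i = 0; i < count; i++)
		size += sg[i].length;

	queue->ccid_table[0].size = size;
	queue->ccid_table[0].sgl_length = count;

	printf("queue %p: ccid 0 covers %d bytes in %d entries\n",
	       (void *)queue, queue->ccid_table[0].size,
	       queue->ccid_table[0].sgl_length);
	return 0;
}

This compiles with any C99 compiler and prints the recovered pointer
and totals. The real ddp_setup additionally stores the ddp and sgl
pointers and increments ccid_gen so a later teardown (or the async
invalidation of patch 17/21) can validate the entry, and the resync
handler re-advertises the expected TCP sequence number to hardware via
a static params WQE.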


Thread overview: 32+ messages
2021-02-11 21:10 [PATCH v4 net-next 00/21] nvme-tcp receive offloads Boris Pismenny
2021-02-11 21:10 ` [PATCH v4 net-next 01/21] net: Introduce direct data placement tcp offload Boris Pismenny
2021-02-11 21:10 ` [PATCH v4 net-next 02/21] net: Introduce crc offload for tcp ddp ulp Boris Pismenny
2021-02-11 21:10 ` [PATCH v4 net-next 03/21] iov_iter: DDP copy to iter/pages Boris Pismenny
2021-02-11 21:10 ` [PATCH v4 net-next 04/21] net: skb copy(+hash) iterators for DDP offloads Boris Pismenny
2021-02-11 21:10 ` [PATCH v4 net-next 05/21] net/tls: expose get_netdev_for_sock Boris Pismenny
2021-02-11 21:10 ` [PATCH v4 net-next 06/21] nvme-tcp: Add DDP offload control path Boris Pismenny
2021-02-14 18:16   ` David Ahern
2021-02-15  7:57     ` Or Gerlitz
2021-02-17 13:55     ` Or Gerlitz
2021-02-21 11:29       ` Boris Pismenny
2021-02-11 21:10 ` [PATCH v4 net-next 07/21] nvme-tcp: Add DDP data-path Boris Pismenny
2021-02-14 18:27   ` David Ahern
2021-02-17 14:01     ` Or Gerlitz
2021-02-17 17:00       ` David Ahern
2021-02-21 11:44         ` Boris Pismenny
2021-02-11 21:10 ` [PATCH v4 net-next 08/21] nvme-tcp: RX CRC offload Boris Pismenny
2021-02-11 21:10 ` [PATCH v4 net-next 09/21] nvme-tcp: Deal with netdevice DOWN events Boris Pismenny
2021-02-11 21:10 ` [PATCH v4 net-next 10/21] net/mlx5: Header file changes for nvme-tcp offload Boris Pismenny
2021-02-11 21:10 ` [PATCH v4 net-next 11/21] net/mlx5: Add 128B CQE for NVMEoTCP offload Boris Pismenny
2021-02-11 21:10 ` [PATCH v4 net-next 12/21] net/mlx5e: TCP flow steering for nvme-tcp Boris Pismenny
2021-02-11 21:10 ` [PATCH v4 net-next 13/21] net/mlx5e: NVMEoTCP offload initialization Boris Pismenny
2021-02-11 21:10 ` [PATCH v4 net-next 14/21] net/mlx5e: KLM UMR helper macros Boris Pismenny
2021-02-11 21:10 ` [PATCH v4 net-next 15/21] net/mlx5e: NVMEoTCP use KLM UMRs Boris Pismenny
2021-02-11 21:10 ` [PATCH v4 net-next 16/21] net/mlx5e: NVMEoTCP queue init/teardown Boris Pismenny
2021-02-11 21:10 ` [PATCH v4 net-next 17/21] net/mlx5e: NVMEoTCP async ddp invalidation Boris Pismenny
2021-02-11 21:10 ` Boris Pismenny [this message]
2021-02-11 21:10 ` [PATCH v4 net-next 19/21] net/mlx5e: NVMEoTCP, data-path for DDP+CRC offload Boris Pismenny
2021-02-11 21:10 ` [PATCH v4 net-next 20/21] net/mlx5e: NVMEoTCP statistics Boris Pismenny
2021-02-11 21:32 ` [PATCH v4 net-next 00/21] nvme-tcp receive offloads Randy Dunlap
2021-02-12  5:22   ` Boris Pismenny
2021-02-21  8:52 ` Or Gerlitz
