From: Prabhakar Kushwaha <pkushwaha@marvell.com>
To: <linux-nvme@lists.infradead.org>, <sagi@grimberg.me>,
	<hch@lst.de>, <axboe@fb.com>, <kbusch@kernel.org>
Cc: <davem@davemloft.net>, <kuba@kernel.org>, <smalin@marvell.com>,
	<aelior@marvell.com>, <mkalderon@marvell.com>,
	<okulkarni@marvell.com>, <pkushwaha@marvell.com>,
	<prabhakar.pkin@gmail.com>, <malin1024@gmail.com>,
	Nikolay Assa <nassa@marvell.com>
Subject: [PATCH v4 11/20] qedn: Add qedn_claim_dev API support
Date: Tue, 29 Jun 2021 15:47:34 +0300	[thread overview]
Message-ID: <20210629124743.6898-12-pkushwaha@marvell.com> (raw)
In-Reply-To: <20210629124743.6898-1-pkushwaha@marvell.com>

From: Nikolay Assa <nassa@marvell.com>

This patch introduces the qedn_claim_dev() network service, which the
offload device (qedn) uses through its paired net-device (qede).
qedn_claim_dev() returns true if the IP address (IPv4 or IPv6) of the
target server is reachable via the net-device paired with the offload
device.
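
As a rough usage sketch (illustrative only, not part of this patch), the
nvme-tcp-offload ULP is expected to walk its registered offload devices and
claim the first one whose claim_dev() callback accepts the controller's
connection parameters. The device-list, lock, and ops-member names below
are assumed for illustration:

  /* Illustrative only: nvme_tcp_ofld_devices, its rwsem, the list
   * member "entry" and the ops->claim_dev member are assumed names,
   * not taken verbatim from this series.
   */
  static struct nvme_tcp_ofld_dev *
  example_lookup_dev(struct nvme_tcp_ofld_ctrl *ctrl)
  {
  	struct nvme_tcp_ofld_dev *dev;

  	down_read(&nvme_tcp_ofld_devices_rwsem);
  	list_for_each_entry(dev, &nvme_tcp_ofld_devices, entry) {
  		/* claim_dev() (qedn_claim_dev() here) returns true when
  		 * the remote IP is reachable via the paired qede device.
  		 */
  		if (dev->ops->claim_dev(dev, ctrl))
  			goto out;
  	}
  	dev = NULL;
  out:
  	up_read(&nvme_tcp_ofld_devices_rwsem);

  	return dev;
  }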

Acked-by: Igor Russkikh <irusskikh@marvell.com>
Signed-off-by: Nikolay Assa <nassa@marvell.com>
Signed-off-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
Signed-off-by: Omkar Kulkarni <okulkarni@marvell.com>
Signed-off-by: Michal Kalderon <mkalderon@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: Shai Malin <smalin@marvell.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/nvme/hw/qedn/qedn.h      |  4 +++
 drivers/nvme/hw/qedn/qedn_main.c | 55 ++++++++++++++++++++++++++++++--
 2 files changed, 56 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/hw/qedn/qedn.h b/drivers/nvme/hw/qedn/qedn.h
index 931efc3afbaa..0ce1e19d1ba8 100644
--- a/drivers/nvme/hw/qedn/qedn.h
+++ b/drivers/nvme/hw/qedn/qedn.h
@@ -8,6 +8,10 @@
 
 #include <linux/qed/qed_if.h>
 #include <linux/qed/qed_nvmetcp_if.h>
+#include <linux/qed/qed_nvmetcp_ip_services_if.h>
+#include <linux/qed/qed_chain.h>
+#include <linux/qed/storage_common.h>
+#include <linux/qed/nvmetcp_common.h>
 
 /* Driver includes */
 #include "../../host/tcp-offload.h"
diff --git a/drivers/nvme/hw/qedn/qedn_main.c b/drivers/nvme/hw/qedn/qedn_main.c
index 97591797605e..78bc9fe17e7b 100644
--- a/drivers/nvme/hw/qedn/qedn_main.c
+++ b/drivers/nvme/hw/qedn/qedn_main.c
@@ -22,13 +22,62 @@ static struct pci_device_id qedn_pci_tbl[] = {
 	{0, 0},
 };
 
+static int
+qedn_find_dev(struct nvme_tcp_ofld_dev *dev,
+	      struct nvme_tcp_ofld_ctrl *ctrl)
+{
+	struct nvme_tcp_ofld_ctrl_con_params *conn_params;
+	struct pci_dev *qede_pdev = NULL;
+	struct sockaddr remote_mac_addr;
+	struct net_device *ndev = NULL;
+	u16 vlan_id = 0;
+	int rc = 0;
+
+	conn_params = &ctrl->conn_params;
+
+	/* qedn utilizes the host network stack through the paired qede device
+	 * for non-offload traffic. First, verify that there is a valid route
+	 * to the remote peer.
+	 */
+	if (conn_params->remote_ip_addr.ss_family == AF_INET) {
+		rc = qed_route_ipv4(&conn_params->local_ip_addr,
+				    &conn_params->remote_ip_addr,
+				    &remote_mac_addr, &ndev);
+	} else if (conn_params->remote_ip_addr.ss_family == AF_INET6) {
+		rc = qed_route_ipv6(&conn_params->local_ip_addr,
+				    &conn_params->remote_ip_addr,
+				    &remote_mac_addr, &ndev);
+	} else {
+		pr_err("address family %d not supported\n",
+		       conn_params->remote_ip_addr.ss_family);
+
+		return false;
+	}
+
+	if (rc)
+		return false;
+
+	if (!ctrl->private_data && ctrl->ndev &&
+	    strcmp(ctrl->ndev->name, ndev->name))
+		return false;
+
+	ctrl->ndev = ndev;
+
+	qed_vlan_get_ndev(&ctrl->ndev, &vlan_id);
+
+	/* Route found through ndev - validate this is qede */
+	qede_pdev = qed_validate_ndev(ctrl->ndev);
+	if (!qede_pdev)
+		return false;
+
+	return true;
+}
+
 static int
 qedn_claim_dev(struct nvme_tcp_ofld_dev *dev,
 	       struct nvme_tcp_ofld_ctrl *ctrl)
 {
-	/* Placeholder - qedn_claim_dev */
-
-	return 0;
+	return qedn_find_dev(dev, ctrl);
 }
 
 static int qedn_setup_ctrl(struct nvme_tcp_ofld_ctrl *ctrl)
-- 
2.24.1



Thread overview: 37+ messages
2021-06-29 12:47 [PATCH v4 00/20] NVMeTCP Offload ULP Prabhakar Kushwaha
2021-06-29 12:47 ` [PATCH v4 01/20] nvme-tcp-offload: Add nvme-tcp-offload - NVMeTCP HW offload ULP Prabhakar Kushwaha
2021-07-01 13:34   ` Christoph Hellwig
2021-07-05 15:09     ` Shai Malin
2021-07-12 14:39       ` Prabhakar Kushwaha
2021-07-16  7:45       ` Christoph Hellwig
2021-06-29 12:47 ` [PATCH v4 02/20] nvme-fabrics: Move NVMF_ALLOWED_OPTS and NVMF_REQUIRED_OPTS definitions Prabhakar Kushwaha
2021-06-29 12:47 ` [PATCH v4 03/20] nvme-fabrics: Expose nvmf_check_required_opts() globally Prabhakar Kushwaha
2021-07-01 13:35   ` Christoph Hellwig
2021-07-05 15:10     ` Shai Malin
2021-06-29 12:47 ` [PATCH v4 04/20] nvme-tcp-offload: Add device scan implementation Prabhakar Kushwaha
2021-07-01 13:36   ` Christoph Hellwig
2021-07-05 15:10     ` Shai Malin
2021-06-29 12:47 ` [PATCH v4 05/20] nvme-tcp-offload: Add controller level implementation Prabhakar Kushwaha
2021-06-29 12:47 ` [PATCH v4 06/20] nvme-tcp-offload: Add controller level error recovery implementation Prabhakar Kushwaha
2021-06-29 12:47 ` [PATCH v4 07/20] nvme-tcp-offload: Add queue level implementation Prabhakar Kushwaha
2021-06-29 12:47 ` [PATCH v4 08/20] nvme-tcp-offload: Add IO " Prabhakar Kushwaha
2021-06-29 12:47 ` [PATCH v4 09/20] qedn: Add qedn - Marvell's NVMeTCP HW offload device driver Prabhakar Kushwaha
2021-07-01 13:41   ` Christoph Hellwig
2021-07-05 15:13     ` Shai Malin
2021-06-29 12:47 ` [PATCH v4 10/20] qedn: Add qedn probe Prabhakar Kushwaha
2021-07-01 13:48   ` Christoph Hellwig
2021-07-05 15:13     ` Shai Malin
2021-06-29 12:47 ` Prabhakar Kushwaha [this message]
2021-06-29 12:47 ` [PATCH v4 12/20] qedn: Add IRQ and fast-path resources initializations Prabhakar Kushwaha
2021-06-29 12:47 ` [PATCH v4 13/20] qedn: Add connection-level slowpath functionality Prabhakar Kushwaha
2021-06-29 12:47 ` [PATCH v4 14/20] qedn: Add support of configuring HW filter block Prabhakar Kushwaha
2021-06-29 12:47 ` [PATCH v4 15/20] qedn: Add IO level qedn_send_req and fw_cq workqueue Prabhakar Kushwaha
2021-06-29 12:47 ` [PATCH v4 16/20] qedn: Add support of Task and SGL Prabhakar Kushwaha
2021-06-29 12:47 ` [PATCH v4 17/20] qedn: Add support of NVME ICReq & ICResp Prabhakar Kushwaha
2021-06-29 12:47 ` [PATCH v4 18/20] qedn: Add IO level fastpath functionality Prabhakar Kushwaha
2021-06-29 12:47 ` [PATCH v4 19/20] qedn: Add Connection and IO level recovery flows Prabhakar Kushwaha
2021-06-29 12:47 ` [PATCH v4 20/20] qedn: Add support of ASYNC Prabhakar Kushwaha
2021-07-01 13:23 ` [PATCH v4 00/20] NVMeTCP Offload ULP Christoph Hellwig
2021-07-07 14:58   ` Hannes Reinecke
2021-07-07 15:07     ` Keith Busch
2021-07-07 15:25       ` Hannes Reinecke
