From: Sagi Grimberg <sagi@grimberg.me>
To: Boris Pismenny <borisp@mellanox.com>,
kuba@kernel.org, davem@davemloft.net, saeedm@nvidia.com,
hch@lst.de, axboe@fb.com, kbusch@kernel.org,
viro@zeniv.linux.org.uk, edumazet@google.com
Cc: Yoray Zack <yorayz@mellanox.com>,
Ben Ben-Ishay <benishay@mellanox.com>,
boris.pismenny@gmail.com, linux-nvme@lists.infradead.org,
netdev@vger.kernel.org, Or Gerlitz <ogerlitz@mellanox.com>
Subject: Re: [PATCH net-next RFC v1 08/10] nvme-tcp: Deal with netdevice DOWN events
Date: Thu, 8 Oct 2020 15:47:37 -0700 [thread overview]
Message-ID: <67e29f83-5bab-4abd-44c0-9c5ae29d5784@grimberg.me> (raw)
In-Reply-To: <20200930162010.21610-9-borisp@mellanox.com>
On 9/30/20 9:20 AM, Boris Pismenny wrote:
> From: Or Gerlitz <ogerlitz@mellanox.com>
>
> For ddp setup/teardown and resync, the offloading logic
> uses HW resources at the NIC driver such as SQ and CQ.
>
> These resources are destroyed when the netdevice goes down
> and hence we must stop using them before the NIC driver
> destroys them.
>
> Use a netdevice notifier for this -- offloaded connections
> are stopped before the stack goes on to call the NIC driver's
> close ndo.
>
> We use the existing recovery flow, which has the advantage
> of resuming the offload once the connection is re-established.
>
> Since the recovery flow runs in a separate/dedicated WQ,
> we need to wait in the notifier code for an ACK that all
> offloaded queues were stopped, which means that the queue
> offload teardown ndo was called and the NIC no longer holds
> any resources related to that connection.
>
> This also buys us proper handling for the UNREGISTER event,
> because our offloading starts in the UP state, and DOWN always
> comes between UP and UNREGISTER.
>
> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
> Signed-off-by: Ben Ben-Ishay <benishay@mellanox.com>
> Signed-off-by: Yoray Zack <yorayz@mellanox.com>
> ---
> drivers/nvme/host/tcp.c | 39 +++++++++++++++++++++++++++++++++++++--
> 1 file changed, 37 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> index 9a620d1dacb4..7569b47f0414 100644
> --- a/drivers/nvme/host/tcp.c
> +++ b/drivers/nvme/host/tcp.c
> @@ -144,6 +144,7 @@ struct nvme_tcp_ctrl {
>
> static LIST_HEAD(nvme_tcp_ctrl_list);
> static DEFINE_MUTEX(nvme_tcp_ctrl_mutex);
> +static struct notifier_block nvme_tcp_netdevice_nb;
> static struct workqueue_struct *nvme_tcp_wq;
> static const struct blk_mq_ops nvme_tcp_mq_ops;
> static const struct blk_mq_ops nvme_tcp_admin_mq_ops;
> @@ -412,8 +413,6 @@ int nvme_tcp_offload_limits(struct nvme_tcp_queue *queue,
> queue->ctrl->ctrl.max_segments = limits->max_ddp_sgl_len;
> queue->ctrl->ctrl.max_hw_sectors =
> limits->max_ddp_sgl_len << (ilog2(SZ_4K) - 9);
> - } else {
> - queue->ctrl->offloading_netdev = NULL;
Squash this change into the patch that introduced it.
> }
>
> dev_put(netdev);
> @@ -1992,6 +1991,8 @@ static int nvme_tcp_alloc_admin_queue(struct nvme_ctrl *ctrl)
> {
> int ret;
>
> + to_tcp_ctrl(ctrl)->offloading_netdev = NULL;
> +
> ret = nvme_tcp_alloc_queue(ctrl, 0, NVME_AQ_DEPTH);
> if (ret)
> return ret;
> @@ -2885,6 +2886,26 @@ static struct nvme_ctrl *nvme_tcp_create_ctrl(struct device *dev,
> return ERR_PTR(ret);
> }
>
> +static int nvme_tcp_netdev_event(struct notifier_block *this,
> + unsigned long event, void *ptr)
> +{
> + struct net_device *ndev = netdev_notifier_info_to_dev(ptr);
> + struct nvme_tcp_ctrl *ctrl;
> +
> + switch (event) {
> + case NETDEV_GOING_DOWN:
> + mutex_lock(&nvme_tcp_ctrl_mutex);
> + list_for_each_entry(ctrl, &nvme_tcp_ctrl_list, list) {
> + if (ndev != ctrl->offloading_netdev)
> + continue;
> + nvme_tcp_error_recovery(&ctrl->ctrl);
> + }
> + mutex_unlock(&nvme_tcp_ctrl_mutex);
> + flush_workqueue(nvme_reset_wq);
Worth a small comment here noting that we want the err_work to complete
at this point, so that anyone who moves err_work to a different
workqueue will notice that this flush has to follow it; e.g. something
like the sketch below.
> + }
> + return NOTIFY_DONE;
> +}
> +
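An untested sketch of what that could look like (same code as above,
just documenting why the flush is there):

	case NETDEV_GOING_DOWN:
		mutex_lock(&nvme_tcp_ctrl_mutex);
		list_for_each_entry(ctrl, &nvme_tcp_ctrl_list, list) {
			if (ndev != ctrl->offloading_netdev)
				continue;
			nvme_tcp_error_recovery(&ctrl->ctrl);
		}
		mutex_unlock(&nvme_tcp_ctrl_mutex);
		/*
		 * err_work is queued on nvme_reset_wq; wait for it to
		 * complete so that by the time we return, the queue
		 * offload teardown ndo has run and the NIC holds no
		 * resources for these connections. If err_work ever
		 * moves to another workqueue, move this flush with it.
		 */
		flush_workqueue(nvme_reset_wq);
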
> static struct nvmf_transport_ops nvme_tcp_transport = {
> .name = "tcp",
> .module = THIS_MODULE,
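As an aside, this hunk doesn't show where nvme_tcp_netdevice_nb gets
registered; I'd expect something roughly like the following in the
module init/exit paths (the error label is made up for the sketch):

	/* in nvme_tcp_init_module(), after creating nvme_tcp_wq */
	nvme_tcp_netdevice_nb.notifier_call = nvme_tcp_netdev_event;
	ret = register_netdevice_notifier(&nvme_tcp_netdevice_nb);
	if (ret)
		goto err_destroy_wq;

	/* in nvme_tcp_cleanup_module(), before destroying nvme_tcp_wq */
	unregister_netdevice_notifier(&nvme_tcp_netdevice_nb);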