From: Sagi Grimberg <sagi@grimberg.me>
To: Boris Pismenny <borispismenny@gmail.com>,
Boris Pismenny <borisp@mellanox.com>,
kuba@kernel.org, davem@davemloft.net, saeedm@nvidia.com,
hch@lst.de, axboe@fb.com, kbusch@kernel.org,
viro@zeniv.linux.org.uk, edumazet@google.com
Cc: yorayz@nvidia.com, boris.pismenny@gmail.com, benishay@nvidia.com,
linux-nvme@lists.infradead.org, netdev@vger.kernel.org,
ogerlitz@nvidia.com
Subject: Re: [PATCH v1 net-next 00/15] nvme-tcp receive offloads
Date: Thu, 14 Jan 2021 13:07:34 -0800 [thread overview]
Message-ID: <02f12db0-4f66-8691-72cb-7531395c7990@grimberg.me> (raw)
In-Reply-To: <6b9da1e6-c4d0-7853-15fb-edf655ead33f@gmail.com>
>> Hey Boris, sorry for some delays on my end...
>>
>> I saw some long discussions on this set with David; what is
>> the status here?
>>
>
> The main purpose of this series is to address these.
>
>> I'll take some more look into the patches, but if you
>> addressed the feedback from the last iteration I don't
>> expect major issues with this patch set (at least from
>> nvme-tcp side).
>>
>>> Changes since RFC v1:
>>> =========================================
>>> * Split mlx5 driver patches to several commits
>>> * Fix nvme-tcp handling of recovery flows. In particular, move queue offload
>>> init/teardown to the start/stop functions.
>>
>> I'm assuming that you tested controller resets and network hiccups
>> during traffic right?
>>
>
> Network hiccups were tested through netem packet drops and reordering.
> We tested error recovery by taking the controller down and bringing it
> back up while the system is quiescent and during traffic.
>
> If you have another test in mind, please let me know.
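For reference, the netem-based hiccup tests mentioned above can be reproduced with something like the following sketch. The interface name and the loss/delay/reorder percentages are illustrative placeholders, not the exact values used in the testing described:

```shell
#!/bin/sh
# Illustrative netem setup for exercising nvme-tcp offload resync paths.
# "eth0" and all percentages/delays below are placeholders.
DEV=eth0

# Inject random packet loss, delay, and reordering on the host interface.
tc qdisc add dev "$DEV" root netem delay 10ms loss 0.5% reorder 25% 50%

# ... run nvme-tcp I/O against the target here, e.g. with fio ...

# Restore the default qdisc when done.
tc qdisc del dev "$DEV" root
```

Reordering is the interesting case for a DDP offload, since out-of-order segments force the NIC to fall back to software copy and later resynchronize with the TCP stream.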
I suggest also performing interface down/up during traffic, both
on the host and the target sides.
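A minimal sketch of that test, assuming the interface carrying the nvme-tcp connection is named eth0 (the name, iteration count, and sleep durations are placeholders to be tuned to the reconnect timeouts in use):

```shell
#!/bin/sh
# Bounce the offload-capable interface while I/O is in flight.
# Interface name and timings are illustrative only.
DEV=eth0

i=1
while [ "$i" -le 10 ]; do
    ip link set dev "$DEV" down
    sleep 5                      # let nvme-tcp error recovery kick in
    ip link set dev "$DEV" up
    sleep 30                     # allow reconnect and traffic to resume
    i=$((i + 1))
done
```

Run the same loop on the target side as well, so both the host teardown path and the target-side reconnect handling get exercised under load.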
Other than that we should be in decent shape...
Thread overview: 53+ messages
2020-12-07 21:06 [PATCH v1 net-next 00/15] nvme-tcp receive offloads Boris Pismenny
2020-12-07 21:06 ` [PATCH v1 net-next 01/15] iov_iter: Skip copy in memcpy_to_page if src==dst Boris Pismenny
2020-12-08 0:39 ` David Ahern
2020-12-08 14:30 ` Boris Pismenny
2020-12-07 21:06 ` [PATCH v1 net-next 02/15] net: Introduce direct data placement tcp offload Boris Pismenny
2020-12-08 0:42 ` David Ahern
2020-12-08 14:36 ` Boris Pismenny
2020-12-09 0:38 ` David Ahern
2020-12-09 8:15 ` Boris Pismenny
2020-12-10 4:26 ` David Ahern
2020-12-11 2:01 ` Jakub Kicinski
2020-12-11 2:43 ` David Ahern
2020-12-11 18:45 ` Jakub Kicinski
2020-12-11 18:58 ` Eric Dumazet
2020-12-11 19:59 ` David Ahern
2020-12-11 23:05 ` Jonathan Lemon
2020-12-13 18:34 ` Boris Pismenny
2020-12-13 18:21 ` Boris Pismenny
2020-12-15 5:19 ` David Ahern
2020-12-17 19:06 ` Boris Pismenny
2020-12-18 0:44 ` David Ahern
2020-12-09 0:57 ` David Ahern
2020-12-09 1:11 ` David Ahern
2020-12-09 8:28 ` Boris Pismenny
2020-12-09 8:25 ` Boris Pismenny
2020-12-07 21:06 ` [PATCH v1 net-next 03/15] net: Introduce crc offload for tcp ddp ulp Boris Pismenny
2020-12-07 21:06 ` [PATCH v1 net-next 04/15] net/tls: expose get_netdev_for_sock Boris Pismenny
2020-12-09 1:06 ` David Ahern
2020-12-09 7:41 ` Boris Pismenny
2020-12-10 3:39 ` David Ahern
2020-12-11 18:43 ` Boris Pismenny
2020-12-07 21:06 ` [PATCH v1 net-next 05/15] nvme-tcp: Add DDP offload control path Boris Pismenny
2020-12-10 17:15 ` Shai Malin
2020-12-14 6:38 ` Boris Pismenny
2020-12-15 13:33 ` Shai Malin
2020-12-17 18:51 ` Boris Pismenny
2020-12-07 21:06 ` [PATCH v1 net-next 06/15] nvme-tcp: Add DDP data-path Boris Pismenny
2020-12-07 21:06 ` [PATCH v1 net-next 07/15] nvme-tcp : Recalculate crc in the end of the capsule Boris Pismenny
2020-12-15 14:07 ` Shai Malin
2020-12-07 21:06 ` [PATCH v1 net-next 08/15] nvme-tcp: Deal with netdevice DOWN events Boris Pismenny
2020-12-07 21:06 ` [PATCH v1 net-next 09/15] net/mlx5: Header file changes for nvme-tcp offload Boris Pismenny
2020-12-07 21:06 ` [PATCH v1 net-next 10/15] net/mlx5: Add 128B CQE for NVMEoTCP offload Boris Pismenny
2020-12-07 21:06 ` [PATCH v1 net-next 11/15] net/mlx5e: TCP flow steering for nvme-tcp Boris Pismenny
2020-12-07 21:06 ` [PATCH v1 net-next 12/15] net/mlx5e: NVMEoTCP DDP offload control path Boris Pismenny
2020-12-07 21:06 ` [PATCH v1 net-next 13/15] net/mlx5e: NVMEoTCP, data-path for DDP offload Boris Pismenny
2020-12-18 0:57 ` David Ahern
2020-12-07 21:06 ` [PATCH v1 net-next 14/15] net/mlx5e: NVMEoTCP statistics Boris Pismenny
2020-12-07 21:06 ` [PATCH v1 net-next 15/15] net/mlx5e: NVMEoTCP workaround CRC after resync Boris Pismenny
2021-01-14 1:27 ` [PATCH v1 net-next 00/15] nvme-tcp receive offloads Sagi Grimberg
2021-01-14 4:47 ` David Ahern
2021-01-14 19:21 ` Boris Pismenny
2021-01-14 19:17 ` Boris Pismenny
2021-01-14 21:07 ` Sagi Grimberg [this message]