From: Boris Pismenny <borispismenny@gmail.com>
To: Shai Malin <malin1024@gmail.com>,
Boris Pismenny <borisp@mellanox.com>,
"kuba@kernel.org" <kuba@kernel.org>,
"davem@davemloft.net" <davem@davemloft.net>,
"saeedm@nvidia.com" <saeedm@nvidia.com>,
"hch@lst.de" <hch@lst.de>, "sagi@grimberg.me" <sagi@grimberg.me>,
"axboe@fb.com" <axboe@fb.com>,
"kbusch@kernel.org" <kbusch@kernel.org>,
"viro@zeniv.linux.org.uk" <viro@zeniv.linux.org.uk>,
"edumazet@google.com" <edumazet@google.com>
Cc: Yoray Zack <yorayz@mellanox.com>,
"yorayz@nvidia.com" <yorayz@nvidia.com>,
"boris.pismenny@gmail.com" <boris.pismenny@gmail.com>,
Ben Ben-Ishay <benishay@mellanox.com>,
"benishay@nvidia.com" <benishay@nvidia.com>,
"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
Or Gerlitz <ogerlitz@mellanox.com>,
"ogerlitz@nvidia.com" <ogerlitz@nvidia.com>,
Shai Malin <smalin@marvell.com>, Ariel Elior <aelior@marvell.com>,
Michal Kalderon <mkalderon@marvell.com>
Subject: Re: [PATCH v1 net-next 05/15] nvme-tcp: Add DDP offload control path
Date: Thu, 17 Dec 2020 20:51:31 +0200
Message-ID: <c667ccb0-7444-6892-f647-8971d0b0b461@gmail.com>
In-Reply-To: <CAKKgK4xvS9SeM3NmNKDNe5oFxxfi0m_=iHCXeXX0DGcgzG_BBA@mail.gmail.com>
On 15/12/2020 15:33, Shai Malin wrote:
> On 12/14/2020 08:38, Boris Pismenny wrote:
>> On 10/12/2020 19:15, Shai Malin wrote:
>>> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
>>> index c0c33320fe65..ef96e4a02bbd 100644
>>> --- a/drivers/nvme/host/tcp.c
>>> +++ b/drivers/nvme/host/tcp.c
>>> @@ -14,6 +14,7 @@
>>> #include <linux/blk-mq.h>
>>> #include <crypto/hash.h>
>>> #include <net/busy_poll.h>
>>> +#include <net/tcp_ddp.h>
>>>
>>> #include "nvme.h"
>>> #include "fabrics.h"
>>> @@ -62,6 +63,7 @@ enum nvme_tcp_queue_flags {
>>> NVME_TCP_Q_ALLOCATED = 0,
>>> NVME_TCP_Q_LIVE = 1,
>>> NVME_TCP_Q_POLLING = 2,
>>> + NVME_TCP_Q_OFFLOADS = 3,
>>> };
>>>
>>> The same comment from the previous version - we are concerned that perhaps
>>> the generic term "offload" for both the transport type (for the Marvell work)
>>> and for the DDP and CRC offload queue (for the Mellanox work) may be
>>> misleading and confusing to developers and to users.
>>>
>>> As suggested by Sagi, we can call this NVME_TCP_Q_DDP.
>>>
>>
>> While I don't mind changing the naming here, I wonder why you don't call
>> the TOE you use "TOE" instead of "TCP_OFFLOAD"; then "offload" would be free
>> for this?
>
> Thanks - please do change the name to NVME_TCP_Q_DDP.
> The Marvell nvme-tcp-offload patch series introduces the offloading of both
> the TCP layer and the NVMe/TCP layer; therefore it's not a TOE.
>
Will do.
>>
>> Moreover, the most common use of offload in the kernel is for partial offloads
>> like this one, and not for full offloads (such as toe).
>
> Because each vendor might implement a different partial offload I
> suggest naming it
> with the specific technique which is used, as was suggested - NVME_TCP_Q_DDP.
>
IIUC, if TCP/IP is offloaded entirely then it is called TOE. It doesn't matter
that you offload additional stuff (nvme-tcp) on top of it.