From: Shai Malin <smalin@marvell.com>
To: "linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"Sagi Grimberg" <sagi@grimberg.me>,
	Boris Pismenny <borispismenny@gmail.com>,
	"Boris Pismenny" <borisp@mellanox.com>,
	"kuba@kernel.org" <kuba@kernel.org>,
	"davem@davemloft.net" <davem@davemloft.net>,
	"saeedm@nvidia.com" <saeedm@nvidia.com>,
	"hch@lst.de" <hch@lst.de>,
	"axboe@fb.com" <axboe@fb.com>,
	"kbusch@kernel.org" <kbusch@kernel.org>,
	"viro@zeniv.linux.org.uk" <viro@zeniv.linux.org.uk>,
	"edumazet@google.com" <edumazet@google.com>
Cc: Yoray Zack <yorayz@mellanox.com>,
	Ariel Elior <aelior@marvell.com>,
	"Ben Ben-Ishay" <benishay@mellanox.com>,
	Michal Kalderon <mkalderon@marvell.com>,
	"boris.pismenny@gmail.com" <boris.pismenny@gmail.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Or Gerlitz <ogerlitz@mellanox.com>
Subject: FW: [PATCH net-next RFC v1 05/10] nvme-tcp: Add DDP offload control path
Date: Wed, 11 Nov 2020 05:12:10 +0000
Message-ID: <PH0PR18MB38458A4B836AB72B5770C8BACCE80@PH0PR18MB3845.namprd18.prod.outlook.com>
In-Reply-To: <a41ff414-4286-e5e9-5b80-85d87533361e@grimberg.me>

On 11/10/2020 1:24 AM, Sagi Grimberg wrote:
> >>> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> >>> index 8f4f29f18b8c..06711ac095f2 100644
> >>> --- a/drivers/nvme/host/tcp.c
> >>> +++ b/drivers/nvme/host/tcp.c
> >>> @@ -62,6 +62,7 @@ enum nvme_tcp_queue_flags {
> >>> 	NVME_TCP_Q_ALLOCATED	= 0,
> >>> 	NVME_TCP_Q_LIVE		= 1,
> >>> 	NVME_TCP_Q_POLLING	= 2,
> >>> +	NVME_TCP_Q_OFFLOADS	= 3,
> >
> > Sagi - following our discussion and your suggestions regarding the
> > NVMeTCP Offload ULP module that we are working on at Marvell, in
> > which a TCP_OFFLOAD transport type would be added,
>
> We still need to see how this pans out.. it's hard to predict if this
> is the best approach before seeing the code. I'd suggest to share some
> code so others can share their input.

We plan to do this soon.

> > We are concerned that the generic term "offload" for both the
> > transport type (for the Marvell work) and for the DDP and CRC
> > offload queue (for the Mellanox work) may be misleading and
> > confusing to developers and to users. Perhaps the naming should be
> > "direct data placement", e.g. NVME_TCP_Q_DDP or NVME_TCP_Q_DIRECT?
>
> We can call this NVME_TCP_Q_DDP, no issues with that.

Great. Thanks.