linux-nvme.lists.infradead.org archive mirror
From: Shai Malin <smalin@nvidia.com>
To: Sagi Grimberg <sagi@grimberg.me>,
	Aurelien Aptel <aaptel@nvidia.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"davem@davemloft.net" <davem@davemloft.net>,
	"kuba@kernel.org" <kuba@kernel.org>,
	"edumazet@google.com" <edumazet@google.com>,
	"pabeni@redhat.com" <pabeni@redhat.com>,
	Saeed Mahameed <saeedm@nvidia.com>,
	Tariq Toukan <tariqt@nvidia.com>,
	"leon@kernel.org" <leon@kernel.org>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"hch@lst.de" <hch@lst.de>,
	"kbusch@kernel.org" <kbusch@kernel.org>,
	"axboe@fb.com" <axboe@fb.com>,
	Chaitanya Kulkarni <chaitanyak@nvidia.com>
Cc: Or Gerlitz <ogerlitz@nvidia.com>, Yoray Zack <yorayz@nvidia.com>,
	Boris Pismenny <borisp@nvidia.com>,
	"aurelien.aptel@gmail.com" <aurelien.aptel@gmail.com>,
	"malin1024@gmail.com" <malin1024@gmail.com>
Subject: RE: [PATCH v7 00/23] nvme-tcp receive offloads
Date: Thu, 27 Oct 2022 10:45:31 +0000	[thread overview]
Message-ID: <DM6PR12MB35643EF99249C97998510C51BC339@DM6PR12MB3564.namprd12.prod.outlook.com> (raw)
In-Reply-To: <f62c517e-e25e-ad2f-cf31-cba6639735ad@grimberg.me>

On Thu, 27 Oct 2022 at 11:35, Sagi Grimberg <sagi@grimberg.me> wrote:
> > Hi,
> >
> > The nvme-tcp receive offloads series v7 was sent to both net-next and
> > nvme. It is the continuation of v5, which was sent in July 2021:
> > https://lore.kernel.org/netdev/20210722110325.371-1-borisp@nvidia.com/
> > V7 now works on real HW.
> >
> > The feature will also be presented at Netdev this week:
> > https://netdevconf.info/0x16/session.html?NVMeTCP-Offload-%E2%80%93-Implementation-and-Performance-Gains
> >
> > Currently the series is aligned to net-next; please let us know if you
> > prefer otherwise.
> >
> > Thanks,
> > Shai, Aurelien
> 
> Hey Shai & Aurelien
> 
> Can you please, next time, add documentation of this offload's
> compatibility limitations? For example (from
> my own imagination):
> 1. bonding/teaming/other-stacking?
> 2. TLS (sw/hw)?
> 3. any sort of tunneling/overlay?
> 4. VF/PF?
> 5. any nvme features?
> 6. ...
> 
> And what are your plans to address each if at all.
> 
> Also, does this have a path to userspace? For example, almost all
> nvme-tcp targets live in userspace.
> 
> I don't see any limits in the code, such as the maximum number of
> connections that can be offloaded on a single device/port. Can
> you share some details on this?
> 
> Thanks.

Sure, we will add it.
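On the connection-limit question: one way a driver could enforce a per-device cap on offloaded connections is a simple counter checked at offload setup, falling back to the plain software RX path when the device is full. The sketch below is purely illustrative and not code from this series; every name in it (ddp_dev, ddp_try_offload, the counters) is hypothetical.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-device offload accounting; the real series may
 * track device capabilities and active contexts differently. */
struct ddp_dev {
	unsigned int max_offload_queues; /* device capability limit */
	unsigned int active_offloads;    /* currently offloaded connections */
};

/* Try to claim an offload slot for a new connection.
 * Returns false when the device limit is reached, in which case the
 * caller keeps using the non-offloaded TCP receive path. */
static bool ddp_try_offload(struct ddp_dev *dev)
{
	if (dev->active_offloads >= dev->max_offload_queues)
		return false;
	dev->active_offloads++;
	return true;
}

/* Release a slot when a connection tears down its offload context. */
static void ddp_release_offload(struct ddp_dev *dev)
{
	if (dev->active_offloads)
		dev->active_offloads--;
}
```

The key design point such a check illustrates is graceful degradation: exceeding the device limit should never fail the connection, only skip the offload.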


Thread overview: 51+ messages
2022-10-25 13:59 [PATCH v7 00/23] nvme-tcp receive offloads Aurelien Aptel
2022-10-25 13:59 ` [PATCH v7 01/23] net: Introduce direct data placement tcp offload Aurelien Aptel
2022-10-25 22:39   ` Jakub Kicinski
2022-10-26 15:01     ` Shai Malin
2022-10-26 16:24       ` Jakub Kicinski
2022-10-28 10:32         ` Shai Malin
2022-10-28 15:40           ` Jakub Kicinski
2022-10-31 18:13             ` Shai Malin
2022-10-31 23:47               ` Jakub Kicinski
2022-11-03 17:23                 ` Shai Malin
2022-11-03 17:29                   ` Aurelien Aptel
2022-11-04  1:57                     ` Jakub Kicinski
2022-11-04 13:44                       ` Aurelien Aptel
2022-11-04 16:15                         ` Jakub Kicinski
2022-10-25 13:59 ` [PATCH v7 02/23] iov_iter: DDP copy to iter/pages Aurelien Aptel
2022-10-25 16:01   ` Christoph Hellwig
2022-10-26 16:05     ` Aurelien Aptel
2022-10-25 22:40   ` Jakub Kicinski
2022-10-25 13:59 ` [PATCH v7 03/23] net/tls: export get_netdev_for_sock Aurelien Aptel
2022-10-25 16:12   ` Christoph Hellwig
2022-10-26 15:55     ` Aurelien Aptel
2022-10-30 16:06       ` Christoph Hellwig
2022-10-25 13:59 ` [PATCH v7 04/23] Revert "nvme-tcp: remove the unused queue_size member in nvme_tcp_queue" Aurelien Aptel
2022-10-25 16:14   ` Christoph Hellwig
2022-10-26 11:02     ` Sagi Grimberg
2022-10-26 11:52       ` Shai Malin
2022-10-25 13:59 ` [PATCH v7 05/23] nvme-tcp: Add DDP offload control path Aurelien Aptel
2022-10-25 13:59 ` [PATCH v7 06/23] nvme-tcp: Add DDP data-path Aurelien Aptel
2022-10-25 13:59 ` [PATCH v7 07/23] nvme-tcp: RX DDGST offload Aurelien Aptel
2022-10-25 13:59 ` [PATCH v7 08/23] nvme-tcp: Deal with netdevice DOWN events Aurelien Aptel
2022-10-25 13:59 ` [PATCH v7 09/23] nvme-tcp: Add modparam to control the ULP offload enablement Aurelien Aptel
2022-10-25 13:59 ` [PATCH v7 10/23] Documentation: add ULP DDP offload documentation Aurelien Aptel
2022-10-26 22:23   ` kernel test robot
2022-10-25 13:59 ` [PATCH v7 11/23] net/mlx5e: Rename from tls to transport static params Aurelien Aptel
2022-10-25 13:59 ` [PATCH v7 12/23] net/mlx5e: Refactor ico sq polling to get budget Aurelien Aptel
2022-10-25 13:59 ` [PATCH v7 13/23] net/mlx5e: Have mdev pointer directly on the icosq structure Aurelien Aptel
2022-10-25 13:59 ` [PATCH v7 14/23] net/mlx5e: Refactor doorbell function to allow avoiding a completion Aurelien Aptel
2022-10-25 13:59 ` [PATCH v7 15/23] net/mlx5: Add NVMEoTCP caps, HW bits, 128B CQE and enumerations Aurelien Aptel
2022-10-25 13:59 ` [PATCH v7 16/23] net/mlx5e: NVMEoTCP, offload initialization Aurelien Aptel
2022-10-25 13:59 ` [PATCH v7 17/23] net/mlx5e: TCP flow steering for nvme-tcp acceleration Aurelien Aptel
2022-10-25 13:59 ` [PATCH v7 18/23] net/mlx5e: NVMEoTCP, use KLM UMRs for buffer registration Aurelien Aptel
2022-10-25 13:59 ` [PATCH v7 19/23] net/mlx5e: NVMEoTCP, queue init/teardown Aurelien Aptel
2022-10-25 13:59 ` [PATCH v7 20/23] net/mlx5e: NVMEoTCP, ddp setup and resync Aurelien Aptel
2022-10-25 13:59 ` [PATCH v7 21/23] net/mlx5e: NVMEoTCP, async ddp invalidation Aurelien Aptel
2022-10-25 13:59 ` [PATCH v7 22/23] net/mlx5e: NVMEoTCP, data-path for DDP+DDGST offload Aurelien Aptel
2022-10-25 13:59 ` [PATCH v7 23/23] net/mlx5e: NVMEoTCP, statistics Aurelien Aptel
2022-10-25 16:00 ` [PATCH v7 00/23] nvme-tcp receive offloads Christoph Hellwig
2022-10-26  8:28   ` Or Gerlitz
2022-10-26 11:52   ` Aurelien Aptel
2022-10-27 10:35 ` Sagi Grimberg
2022-10-27 10:45   ` Shai Malin [this message]
