From: Ariel Elior <aelior@marvell.com>
To: "hch@lst.de" <hch@lst.de>, Shai Malin <smalin@marvell.com>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"sagi@grimberg.me" <sagi@grimberg.me>,
	"axboe@fb.com" <axboe@fb.com>,
	"kbusch@kernel.org" <kbusch@kernel.org>,
	"davem@davemloft.net" <davem@davemloft.net>,
	"kuba@kernel.org" <kuba@kernel.org>,
	"Erik.Smith@dell.com" <Erik.Smith@dell.com>,
	"Douglas.Farley@dell.com" <Douglas.Farley@dell.com>,
	Michal Kalderon <mkalderon@marvell.com>,
	"Prabhakar Kushwaha" <pkushwaha@marvell.com>,
	Nikolay Assa <nassa@marvell.com>,
	"malin1024@gmail.com" <malin1024@gmail.com>
Subject: RE: [EXT] Re: [RFC PATCH v3 00/11] NVMeTCP Offload ULP and QEDN Device Driver
Date: Fri, 19 Feb 2021 21:28:18 +0000	[thread overview]
Message-ID: <SJ0PR18MB397807DB5295C9D39D93DF8EC4849@SJ0PR18MB3978.namprd18.prod.outlook.com> (raw)
In-Reply-To: <20210219091237.GA4036@lst.de>

> On Thu, Feb 18, 2021 at 06:38:07PM +0000, Shai Malin wrote:
> > So, as there are no more comments / questions, we understand the
> > direction is acceptable and will proceed to the full series.
> 
> I do not think we should support offloads at all, and certainly not ones
> requiring extra drivers.  Those drivers have caused unbelievable pain for iSCSI
> and we should not repeat that mistake.

Hi Christoph,

We are fully aware of the challenges the iSCSI offload faced - I was there too
(in bnx2i and qedi). In our view, the heart of that hardship was the iSCSI uio
design, essentially a thin alternative networking stack, which led to no end of
compatibility problems.

But we were also there for the RoCE and iWARP (TCP-based) RDMA offloads, where a
different approach was taken: working with the networking stack instead of around
it. We feel this is a much better approach, and it is what we are attempting to
implement here.

For exactly this reason we designed this offload to be completely seamless.
There is no alternate user-space stack - we plug directly into the networking
stack, and there are zero changes to the regular nvme-tcp code.

We are simply adding a new transport alongside it, which interacts with the
networking stack when needed and leaves it alone most of the time. Our
intention is to fully own the maintenance of the new transport, including
any compatibility requirements, and we have purposefully designed it to be
streamlined in this respect.
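
To make the "new transport alongside it" point concrete, here is a minimal
sketch of what we mean, hooking into the existing nvme-fabrics transport
interface. This is illustrative only and not lifted from the patch series;
the "tcp_offload" name and the nvme_tcp_ofld_* identifiers are placeholders:

/*
 * Minimal sketch, not the actual series: the offload ULP registers as an
 * additional nvme-fabrics transport next to the existing "tcp" transport,
 * so regular nvme-tcp is not modified at all.
 */
#include <linux/module.h>
#include <linux/err.h>
#include "fabrics.h"	/* drivers/nvme/host/fabrics.h: nvmf_register_transport() */

static struct nvme_ctrl *nvme_tcp_ofld_create_ctrl(struct device *dev,
		struct nvmf_ctrl_options *opts)
{
	/*
	 * Here the ULP would look up an offload-capable device for
	 * opts->traddr and build the controller on top of it; this is
	 * just a placeholder in the sketch.
	 */
	return ERR_PTR(-EOPNOTSUPP);
}

static struct nvmf_transport_ops nvme_tcp_ofld_transport = {
	.name		= "tcp_offload",	/* sits next to the existing "tcp" */
	.module		= THIS_MODULE,
	.required_opts	= NVMF_OPT_TRADDR,
	.allowed_opts	= NVMF_OPT_TRSVCID | NVMF_OPT_NR_IO_QUEUES,
	.create_ctrl	= nvme_tcp_ofld_create_ctrl,
};

static int __init nvme_tcp_ofld_init(void)
{
	return nvmf_register_transport(&nvme_tcp_ofld_transport);
}
module_init(nvme_tcp_ofld_init);

static void __exit nvme_tcp_ofld_exit(void)
{
	nvmf_unregister_transport(&nvme_tcp_ofld_transport);
}
module_exit(nvme_tcp_ofld_exit);

MODULE_LICENSE("GPL v2");

From user space this would simply be selected as a different transport on the
nvme-cli command line (e.g. "nvme connect -t tcp_offload ..."), with nvme-tcp
itself left completely untouched.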

Protocol offload is at the core of our technology, and our device offloads RoCE,
iWARP, iSCSI and FCoE, all already supported by upstream drivers (qedr for
RoCE/iWARP, qedi for iSCSI and qedf for FCoE).

We are especially excited about the NVMeTCP offload because it brings huge
benefits: RDMA-like latency, a tremendous reduction in CPU utilization, and the
reliability of TCP.

We would be more than happy to incorporate any feedback you may have on the
design and on how to make it more robust and correct. We are aware of other work
being done on creating special types of offloaded queues, and we could model our
design similarly, although our thinking was that this would be more intrusive to
regular NVMe/TCP. In our original RFC submission we were not adding a ULP
driver, only our own vendor driver, but Sagi pointed us in the direction of a
vendor-agnostic ULP layer, which made a lot of sense to us and which we think is
a good approach.
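
As for the vendor-agnostic ULP layer itself, the essence is an ops table that
the ULP defines and each vendor driver (qedn in our case) implements and
registers. Purely as an illustration of that split - none of these names are
the actual API from the patches - the boundary would look roughly like this:

/*
 * Illustration only, not the real nvme-tcp-offload API.  The ULP keeps all
 * NVMe/fabrics logic; a vendor driver only provides the hooks needed to
 * drive its hardware queues.
 */
#include <linux/netdevice.h>
#include <linux/blk-mq.h>

struct nvme_tcp_ofld_example_ops {
	const char	*name;
	struct module	*module;
	/* can this device offload a connection for the requested address? */
	bool (*claim_dev)(struct net_device *ndev, const char *traddr);
	int  (*create_queue)(void *queue_ctx, int qid, size_t queue_size);
	int  (*send_req)(void *queue_ctx, struct request *rq);
	void (*destroy_queue)(void *queue_ctx);
};

/* a vendor driver such as qedn would call these at probe/remove time */
int nvme_tcp_ofld_example_register_dev(struct nvme_tcp_ofld_example_ops *ops);
void nvme_tcp_ofld_example_unregister_dev(struct nvme_tcp_ofld_example_ops *ops);

This keeps all the transport and NVMe logic in one vendor-agnostic place, which
is exactly the direction Sagi pointed us to.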

Thanks,
Ariel

Thread overview: 38+ messages in thread
2021-02-07 18:13 [RFC PATCH v3 00/11] NVMeTCP Offload ULP and QEDN Device Driver Shai Malin
2021-02-07 18:13 ` [RFC PATCH v3 01/11] nvme-tcp-offload: Add nvme-tcp-offload - NVMeTCP HW offload ULP Shai Malin
2021-02-07 22:57   ` kernel test robot
2021-02-07 18:13 ` [RFC PATCH v3 02/11] nvme-fabrics: Move NVMF_ALLOWED_OPTS and NVMF_REQUIRED_OPTS definitions Shai Malin
2021-02-07 18:13 ` [RFC PATCH v3 03/11] nvme-tcp-offload: Add device scan implementation Shai Malin
2021-02-08  0:53   ` kernel test robot
2021-02-07 18:13 ` [RFC PATCH v3 04/11] nvme-tcp-offload: Add controller level implementation Shai Malin
2021-02-07 18:13 ` [RFC PATCH v3 05/11] nvme-tcp-offload: Add controller level error recovery implementation Shai Malin
2021-02-07 18:13 ` [RFC PATCH v3 06/11] nvme-tcp-offload: Add queue level implementation Shai Malin
2021-02-07 18:13 ` [RFC PATCH v3 07/11] nvme-tcp-offload: Add IO level implementation Shai Malin
2021-02-07 18:13 ` [RFC PATCH v3 08/11] nvme-qedn: Add qedn - Marvell's NVMeTCP HW offload vendor driver Shai Malin
2021-02-07 18:13 ` [RFC PATCH v3 09/11] net-qed: Add NVMeTCP Offload PF Level FW and HW HSI Shai Malin
2021-02-07 18:13 ` [RFC PATCH v3 10/11] nvme-qedn: Add qedn probe Shai Malin
2021-02-08  2:37   ` kernel test robot
2021-02-08  2:52   ` kernel test robot
2021-02-07 18:13 ` [RFC PATCH v3 11/11] nvme-qedn: Add IRQ and fast-path resources initializations Shai Malin
2021-02-12 18:06 ` [RFC PATCH v3 00/11] NVMeTCP Offload ULP and QEDN Device Driver Chris Leech
2021-02-13 16:47   ` Shai Malin
2021-02-18 18:38 ` Shai Malin
2021-02-19  9:12   ` hch
2021-02-19 21:28     ` Ariel Elior [this message]