* [SPDK] Re: SPDK socket abstraction layer
@ 2019-10-30 18:46 Walker, Benjamin
  0 siblings, 0 replies; 20+ messages in thread
From: Walker, Benjamin @ 2019-10-30 18:46 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 6187 bytes --]

On Wed, 2019-10-30 at 17:54 +0000, Harris, James R wrote:
> Hi Sasha,
> 
> Tomek is only talking about the VPP implementation.  There are no plans to
> remove the socket abstraction layer.  If anything, the project needs to look
> at extending it in ways as you suggested.

To expand on this, there's a lot of activity right now in the SPDK sock
abstraction layer to begin to implement asynchronous operations, zero copy
operations, etc. For example, see:

Asynchronous writev
https://review.gerrithub.io/c/spdk/spdk/+/470523

MSG_ZEROCOPY use in the posix implementation
https://review.gerrithub.io/c/spdk/spdk/+/471752

A new sock implementation based on io_uring/libaio:
https://review.gerrithub.io/c/spdk/spdk/+/471314

And a new sock implementation based on Seastar:
https://review.gerrithub.io/c/spdk/spdk/+/466629

So not only is the sock abstraction layer sticking around, but it's getting a
lot of focus going forward. There is a lot of innovation happening in the Linux
kernel around networking at all layers that we need to keep up with.
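
For anyone not familiar with MSG_ZEROCOPY, the kernel mechanism that the posix
patch builds on boils down to roughly the following. This is just a bare-bones
sketch against the plain Linux socket API, not the SPDK code itself:

#include <sys/types.h>
#include <sys/socket.h>

/* Older toolchain headers may not define these (kernel 4.14+ feature). */
#ifndef SO_ZEROCOPY
#define SO_ZEROCOPY 60
#endif
#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY 0x4000000
#endif

static ssize_t
send_zerocopy(int fd, const void *buf, size_t len)
{
        int one = 1;

        /* Opt in once per socket. */
        if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)) != 0) {
                return -1;
        }

        /*
         * The kernel pins the pages behind 'buf' instead of copying them, so
         * 'buf' must stay untouched until a completion notification arrives
         * on the socket's error queue.
         */
        return send(fd, buf, len, MSG_ZEROCOPY);
}

The catch, as discussed further down in this thread, is tracking those
completion notifications and making sure the kernel actually pins the pages
rather than falling back to a deferred copy.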

One thing I would like community feedback on is what to do about the current VPP
implementation. As we make improvements and additions to the sock abstraction,
it will necessarily require updates to the VPP implementation. We can of course
continue to make those, but does the community see value in maintaining support
here? I'd really love to see someone take up the mantle on VPP if they believe
there is value that we just haven't been able to unlock yet, but absent that
it's just a maintenance burden.

Personally speaking, it would be easier for me, as someone trying to evolve the
sock abstraction layer, to drop VPP. That's one less implementation that I then
have to go update and test each time. But I'm very open to opinions and feedback
here if anyone has something to say. SPDK obviously can't just drop support
without strong consensus and a considerable amount of forewarning.

Thanks,
Ben

> 
> -Jim
> 
> 
> On 10/30/19, 10:50 AM, "Sasha Kotchubievsky" <sashakot(a)dev.mellanox.co.il>
> wrote:
> 
>     Hi Tomek,
>     
>     Are you looking for community feedback regarding VPP implementation of TCP
>     stack, or about having socket abstraction layer in SPDK?
>     I think, socket abstraction layer is critical for future integration
> between
>     SPDK and user-space stacks. In Mellanox, we're evaluating integration
>     between VMA (https://github.com/Mellanox/libvma) and SPDK. Although, VMA
> can
>     be used as replacement for Kernel implementation of Posix socket
> interface,
>     we see great potential in "deep" integration, which definitely needs keep
>     existing abstraction layer. For example, one of potential improvements can
>     be zero-copy in RX (receive) flow. I don't see how that can be implemented
>     on top of Linux Kernel stack. 
>     
>     Best regards
>     Sasha
>     
>     -----Original Message-----
>     From: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com> 
>     Sent: Monday, October 21, 2019 3:01 PM
>     To: Storage Performance Development Kit <spdk(a)lists.01.org>
>     Subject: [SPDK] SPDK socket abstraction layer
>     
>     Hello everyone,
>     
>     Summary:
>     
>     With this message I wanted to update SPDK community on state of VPP socket
>     abstraction as of SPDK 19.07 release.
>     At this time there does not seem to be a clear efficiency improvements
> with
>     VPP. There is no further work planned on SPDK and VPP integration.
>     
>     Details:
>     
>     As some of you may remember, SPDK 18.04 release introduced support for
>     alternative socket types. Along with that release, Vector Packet
> Processing
>     (VPP)<https://wiki.fd.io/view/VPP> 18.01 was integrated with SPDK, by
>     expanding socket abstraction to use VPP Communications Library (VCL).
> TCP/IP
>     stack in VPP<https://wiki.fd.io/view/VPP/HostStack> was in early stages
> back
>     then and has seen improvements throughout the last year.
>     
>     To better use VPP capabilities, following fruitful collaboration with VPP
>     team, in SPDK 19.07, this implementation was changed from VCL to VPP
> Session
>     API from VPP 19.04.2.
>     
>     VPP socket abstraction has met some challenges due to inherent design of
>     both projects, in particular related to running separate processes and
>     memory copies.
>     Seeing improvements from original implementation was encouraging, yet
>     measuring against posix socket abstraction (taking into consideration
> entire
>     system, i.e. both processes), results are comparable. In other words,  at
>     this time there does not seem to be a clear benefit of either socket
>     abstraction from standpoint of CPU efficiency or IOPS.
>     
>     With this message I just wanted to update SPDK community on state of
> socket
>     abstraction layers as of SPDK 19.07 release. Each SPDK release always
> brings
>     improvements to the abstraction and its implementations, with exciting
> work
>     on more efficient use of kernel TCP stack - changes in SPDK 19.10 and SPDK
>     20.01.
>     
>     However there is no active involvement at this point around VPP
>     implementation of socket abstraction in SPDK. Contributions in this area
> are
>     always welcome. In case you're interested in implementing further
>     enhancements of VPP and SPDK integration feel free to reply, or to use one
>     of the many SPDK community communications
>     channels<https://spdk.io/community/>.
>     
>     Thanks,
>     Tomek
>     
>     _______________________________________________
>     SPDK mailing list -- spdk(a)lists.01.org
>     To unsubscribe send an email to spdk-leave(a)lists.01.org
>     _______________________________________________
>     SPDK mailing list -- spdk(a)lists.01.org
>     To unsubscribe send an email to spdk-leave(a)lists.01.org
>     
> 
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2020-06-01 16:55 Zawadzki, Tomasz
  0 siblings, 0 replies; 20+ messages in thread
From: Zawadzki, Tomasz @ 2020-06-01 16:55 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 3832 bytes --]

Hello everyone,

Summary:

The VPP socket abstraction will be deprecated starting with SPDK 20.07 and then removed in SPDK 20.10.

Details:

Nearly a full year of SPDK releases has passed since the previous message. During this time the VPP socket abstraction has not seen any community contributions.

In the meantime the POSIX socket abstraction has seen major improvements: asynchronous writev, which batches writes to reduce syscalls; support for MSG_ZEROCOPY; and batched receives in NVMe-oF TCP.
The enhancements are best demonstrated by comparing the SPDK 19.07 and 20.01 NVMe-oF TCP performance reports:
https://spdk.io/doc/performance_reports.html
The POSIX/kernel stack saw considerable gains, yet the performance and efficiency of the VPP abstraction were left unchanged.
Implementing those changes while keeping compatibility with the VPP socket abstraction has become more complex.
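
To give a rough idea of the syscall batching, the asynchronous writev path
boils down to coalescing queued buffers into a single vectored send instead of
issuing one syscall per buffer. A simplified sketch in plain POSIX terms, not
the actual SPDK code:

#include <sys/types.h>
#include <sys/uio.h>

#define MAX_BATCH 64

struct queued_buf {
        void   *data;
        size_t  len;
};

/* Flush up to MAX_BATCH queued buffers with a single writev() call. */
static ssize_t
flush_queued(int fd, struct queued_buf *bufs, int nbufs)
{
        struct iovec iov[MAX_BATCH];
        int i, cnt = nbufs < MAX_BATCH ? nbufs : MAX_BATCH;

        for (i = 0; i < cnt; i++) {
                iov[i].iov_base = bufs[i].data;
                iov[i].iov_len = bufs[i].len;
        }

        /* One syscall covers the whole batch instead of cnt separate sends. */
        return writev(fd, iov, cnt);
}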

To avoid stifling further work in this area, the VPP socket abstraction will be removed.
The first step will be deprecation in the SPDK 20.07 release - CI will no longer build or test the VPP component.
The SPDK 20.10 release will then remove the VPP socket abstraction implementation from SPDK.

Thanks,
Tomek

> -----Original Message-----
> From: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
> Sent: Monday, October 21, 2019 2:01 PM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: [SPDK] SPDK socket abstraction layer
> 
> Hello everyone,
> 
> Summary:
> 
> With this message I wanted to update SPDK community on state of VPP
> socket abstraction as of SPDK 19.07 release.
> At this time there does not seem to be a clear efficiency improvements with
> VPP. There is no further work planned on SPDK and VPP integration.
> 
> Details:
> 
> As some of you may remember, SPDK 18.04 release introduced support for
> alternative socket types. Along with that release, Vector Packet Processing
> (VPP)<https://wiki.fd.io/view/VPP> 18.01 was integrated with SPDK, by
> expanding socket abstraction to use VPP Communications Library (VCL).
> TCP/IP stack in VPP<https://wiki.fd.io/view/VPP/HostStack> was in early
> stages back then and has seen improvements throughout the last year.
> 
> To better use VPP capabilities, following fruitful collaboration with VPP team,
> in SPDK 19.07, this implementation was changed from VCL to VPP Session API
> from VPP 19.04.2.
> 
> VPP socket abstraction has met some challenges due to inherent design of
> both projects, in particular related to running separate processes and
> memory copies.
> Seeing improvements from original implementation was encouraging, yet
> measuring against posix socket abstraction (taking into consideration entire
> system, i.e. both processes), results are comparable. In other words,  at this
> time there does not seem to be a clear benefit of either socket abstraction
> from standpoint of CPU efficiency or IOPS.
> 
> With this message I just wanted to update SPDK community on state of
> socket abstraction layers as of SPDK 19.07 release. Each SPDK release always
> brings improvements to the abstraction and its implementations, with
> exciting work on more efficient use of kernel TCP stack - changes in SPDK
> 19.10 and SPDK 20.01.
> 
> However there is no active involvement at this point around VPP
> implementation of socket abstraction in SPDK. Contributions in this area are
> always welcome. In case you're interested in implementing further
> enhancements of VPP and SPDK integration feel free to reply, or to use one
> of the many SPDK community communications
> channels<https://spdk.io/community/>.
> 
> Thanks,
> Tomek
> 
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2019-11-07 18:45 Walker, Benjamin
  0 siblings, 0 replies; 20+ messages in thread
From: Walker, Benjamin @ 2019-11-07 18:45 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1525 bytes --]

On Thu, 2019-11-07 at 18:26 +0200, Or Gerlitz wrote:
> On Tue, Nov 5, 2019 at 8:08 PM Walker, Benjamin
> <benjamin.walker(a)intel.com> wrote:
> > On my system currently the network stack is electing to do a deferred copy
> > so the performance is not good.
> 
> Ben, can't prove that instantly, but I tend to think that if you use
> dma buffers for the wire pdu headers,
> things would be better for you in that respect (deferred copy), spdk's
> dma buffers originate (dpdk memory
> allocators) from huge pages which afaik on linux are always pinned.
> Hence the kernel just needs to ref/unref
> when you use them with MSG_ZEROCOPY and nothing beyond that. This is
> the initiator patch [1] but should
> be straight forward to apply it for the target.

I agree and I've had similar thoughts - putting all of the data into pre-pinned
memory is much more likely to hit a fast path in the pinning logic. But I would
have expected it to work even if it wasn't optimal, and it does look like your
colleagues at Mellanox got my patch working to some extent. We're just going to
try it again using the exact setup they used and confirm that it does function.

In parallel I'm going to change the allocations for the PDUs on the target side
to DMA memory like you've done on the initiator.
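
Roughly, the change is just to carve the PDU headers out of SPDK's pinned,
hugepage-backed DMA memory instead of the regular heap. A sketch of the
direction, with illustrative names rather than the actual tcp.c structures:

#include "spdk/env.h"

/* Illustrative stand-in for the wire header; not the real PDU struct. */
struct example_pdu_hdr {
        uint8_t raw[128];
};

static struct example_pdu_hdr *
alloc_pdu_hdr(void)
{
        /* Previously plain heap allocation; now hugepage-backed, pinned DMA
         * memory, so MSG_ZEROCOPY sends of the header can hit the pinned
         * fast path. */
        return spdk_dma_zmalloc(sizeof(struct example_pdu_hdr), 64, NULL);
}

static void
free_pdu_hdr(struct example_pdu_hdr *hdr)
{
        spdk_dma_free(hdr);
}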

> 
> https://review.gerrithub.io/c/spdk/spdk/+/473278/4
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2019-11-07 16:26 Or Gerlitz
  0 siblings, 0 replies; 20+ messages in thread
From: Or Gerlitz @ 2019-11-07 16:26 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 734 bytes --]

On Tue, Nov 5, 2019 at 8:08 PM Walker, Benjamin
<benjamin.walker(a)intel.com> wrote:
> On my system currently the network stack is electing to do a deferred copy so the performance is not good.

Ben, I can't prove that instantly, but I tend to think that if you use DMA
buffers for the wire PDU headers, things would be better for you in that
respect (deferred copy). SPDK's DMA buffers originate from DPDK memory
allocators backed by huge pages, which AFAIK are always pinned on Linux.
Hence the kernel just needs to ref/unref them when you use them with
MSG_ZEROCOPY and nothing beyond that. This is the initiator patch [1], but it
should be straightforward to apply it to the target.

https://review.gerrithub.io/c/spdk/spdk/+/473278/4

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2019-11-06 10:19 Or Gerlitz
  0 siblings, 0 replies; 20+ messages in thread
From: Or Gerlitz @ 2019-11-06 10:19 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 2647 bytes --]

On Tue, Nov 5, 2019 at 5:06 PM Or Gerlitz <gerlitz.or(a)gmail.com> wrote:
>
> On Sun, Nov 3, 2019 at 6:56 PM Walker, Benjamin
> <benjamin.walker(a)intel.com> wrote:
> > On Nov 3, 2019 9:00 AM, Or Gerlitz <gerlitz.or(a)gmail.com> wrote:
> > On Thu, Oct 31, 2019 at 8:54 PM Walker, Benjamin wrote:
>
> >> Here's the patch at the top of the series:
> >> https://review.gerrithub.io/c/spdk/spdk/+/471752
>
> >> Zero copy is getting enabled on the socket and I do see completion
> >> notifications, but it's always doing a deferred copy. If there was some
> >> description somewhere of what causes the kernel to end up doing a deferred copy
> >> instead of page pinning that would be really useful.
>
> > the patch you pointed out doesn't change lib/nvmf/tcp.c only the sock code..
> > can you also refer to the patch that change nvmf
>
> > The patches are all in a series with the one I linked at the end. They show in the "relation chain" section of Gerrit.
> > You can click the "Download" button in Gerrit for the patch I linked and it will give you a git-fetch command that grabs the whole series.
>
> Hi Ben,
>
> yeah, was my bad, I didn't look into the code. Now I see that you have
> encapsulated the Linux TX ZC thing inside the socket/posix
> layer as a follow up change to the async writev concept.

> Recently, I worked on the initiator side, basically my approach is
> different in two aspects:
>
> 1. have also the pdu headers on dma memory (this can apply to nvmf as well)
>
> 2. since the initiator has transactional communication pattern with the target,
> I didn't care for zero-copy buffer reclaim notifications from the socket provider,
> the target response signal that the buffer may be reused (I don't see
> how this can apply to the target side)
>
> Re the approach for 1 && 2 above, please lemmi know if you want to
> discuss it here or in Gerrit :)
>
> I was working on 19.07 and today rebased it to latest master and pushed that here [1]
>
> I see some issues on the rebased bits when we move from in-capsule to
> rt2/h2c and will debug it, just wanted to give you quick access to the bits.

OK, I had a rebase bug and have now sorted it out and pushed the bits [1]. I
will be happy to get feedback on the approach and the implementation before I
go handle the 2-3 small FIXMEs I have there (I listed them in the commit
change-logs).

[1] https://review.gerrithub.io/c/spdk/spdk/+/473280/5

> We saw very nice improvements with the 19.07 + TX ZC bits over SmartNIC ARM CPUs
> and VMA socket accelerator.  On this system copy takes at least 25%
> CPU and we just eliminated them.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2019-11-05 19:56 Sasha Kotchubievsky
  0 siblings, 0 replies; 20+ messages in thread
From: Sasha Kotchubievsky @ 2019-11-05 19:56 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1847 bytes --]

Hi,

Our testing environment is:

SW:
CentOS Linux release 8.0.1905 (Core)
Kernel: 4.18.0-80.el8.x86_64
MLNX_OFED_LINUX-4.7-1.0.0.1

HW:
Mellanox Technologies MT27800 Family [ConnectX-5]
CPU:          Intel(R) Xeon(R) Gold 6136 CPU @ 3.00GHz

We used null device in target. In both sides (Target, Initiator) we ran the
same SPDK version.

Best regards
Sasha

-----Original Message-----
From: Walker, Benjamin <benjamin.walker(a)intel.com> 
Sent: Tuesday, November 5, 2019 8:08 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] Re: SPDK socket abstraction layer

Can you outline your system set up so I can reproduce this? On my system
currently the network stack is electing to do a deferred copy so the
performance is not good. I'd love to reproduce your exact set up.

I'm not surprised there are bugs at higher queue depth. The patch is
definitely still a work in progress.

On Nov 4, 2019 9:26 PM, allenz(a)mellanox.com wrote:
Hi,

With suggestion of Sasha, we synced the codes to patch
(https://review.gerrithub.io/c/spdk/spdk/+/471752). We run tests on two
X86-64 servers connected with Mellanox CX-5 100G.

For Perf with 16 QD/4K IO, we found 14% improvement with the zero-copy patch
when 1 or 2 cores were used, and 6% improvement when more cores (e.g., 8)
were used.

Unfortunately,  when we tried to use queue depth more than 16, and bigger IO
than 4K, Perf hung or got CQ error. But without the zero-copy patch, 64QD or
64K IO was ok.

Best regards,
Allen
_______________________________________________
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org
_______________________________________________
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2019-11-05 18:08 Walker, Benjamin
  0 siblings, 0 replies; 20+ messages in thread
From: Walker, Benjamin @ 2019-11-05 18:08 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1070 bytes --]

Can you outline your system setup so I can reproduce this? On my system, the network stack is currently electing to do a deferred copy, so the performance is not good. I'd love to reproduce your exact setup.

I'm not surprised there are bugs at higher queue depth. The patch is definitely still a work in progress.

On Nov 4, 2019 9:26 PM, allenz(a)mellanox.com wrote:
Hi,

With suggestion of Sasha, we synced the codes to patch (https://review.gerrithub.io/c/spdk/spdk/+/471752). We run tests on two X86-64 servers connected with Mellanox CX-5 100G.

For Perf with 16 QD/4K IO, we found 14% improvement with the zero-copy patch when 1 or 2 cores were used, and 6% improvement when more cores (e.g., 8) were used.

Unfortunately,  when we tried to use queue depth more than 16, and bigger IO than 4K, Perf hung or got CQ error. But without the zero-copy patch, 64QD or 64K IO was ok.

Best regards,
Allen
_______________________________________________
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2019-11-05 15:06 Or Gerlitz
  0 siblings, 0 replies; 20+ messages in thread
From: Or Gerlitz @ 2019-11-05 15:06 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 2264 bytes --]

On Sun, Nov 3, 2019 at 6:56 PM Walker, Benjamin
<benjamin.walker(a)intel.com> wrote:
> On Nov 3, 2019 9:00 AM, Or Gerlitz <gerlitz.or(a)gmail.com> wrote:
> On Thu, Oct 31, 2019 at 8:54 PM Walker, Benjamin wrote:

>> Here's the patch at the top of the series:
>> https://review.gerrithub.io/c/spdk/spdk/+/471752

>> Zero copy is getting enabled on the socket and I do see completion
>> notifications, but it's always doing a deferred copy. If there was some
>> description somewhere of what causes the kernel to end up doing a deferred copy
>> instead of page pinning that would be really useful.

> the patch you pointed out doesn't change lib/nvmf/tcp.c only the sock code..
> can you also refer to the patch that change nvmf

> The patches are all in a series with the one I linked at the end. They show in the "relation chain" section of Gerrit.
> You can click the "Download" button in Gerrit for the patch I linked and it will give you a git-fetch command that grabs the whole series.

Hi Ben,

Yeah, that was my bad - I didn't look into the code. Now I see that you have
encapsulated the Linux TX zero-copy support inside the socket/posix layer as a
follow-up change to the async writev concept.

Recently I worked on the initiator side; basically my approach is different in
two aspects:

1. Have the PDU headers also in DMA memory (this can apply to nvmf as well).

2. Since the initiator has a transactional communication pattern with the
target, I didn't care about zero-copy buffer reclaim notifications from the
socket provider; the target's response signals that the buffer may be reused
(I don't see how this can apply to the target side).

Regarding the approach for 1 && 2 above, please let me know if you want to
discuss it here or in Gerrit :)

I was working on 19.07 and today rebased it to the latest master and pushed it
here [1].

I see some issues in the rebased bits when we move from in-capsule data to
r2t/h2c and will debug them; I just wanted to give you quick access to the
bits.

We saw very nice improvements with the 19.07 + TX zero-copy bits on SmartNIC
ARM CPUs with the VMA socket accelerator. On this system the copy takes at
least 25% CPU, and we just eliminated it.

Or.

[1] https://review.gerrithub.io/c/spdk/spdk/+/473280/3

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2019-11-05  5:29 allenz
  0 siblings, 0 replies; 20+ messages in thread
From: allenz @ 2019-11-05  5:29 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 546 bytes --]

Hi,

At Sasha's suggestion, we synced the code to the patch (https://review.gerrithub.io/c/spdk/spdk/+/471752). We ran tests on two x86-64 servers connected with Mellanox CX-5 100G NICs.

For Perf with 16 QD/4K IO, we saw a 14% improvement with the zero-copy patch when 1 or 2 cores were used, and a 6% improvement when more cores (e.g., 8) were used.

Unfortunately, when we tried to use a queue depth greater than 16 and IO larger than 4K, Perf hung or got a CQ error. But without the zero-copy patch, 64 QD or 64K IO was OK.

Best regards,
Allen

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2019-11-03 16:56 Walker, Benjamin
  0 siblings, 0 replies; 20+ messages in thread
From: Walker, Benjamin @ 2019-11-03 16:56 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 935 bytes --]

On Nov 3, 2019 9:00 AM, Or Gerlitz <gerlitz.or(a)gmail.com> wrote:
On Thu, Oct 31, 2019 at 8:54 PM Walker, Benjamin
<benjamin.walker(a)intel.com> wrote:

> Here's the patch at the top of the series:
> https://review.gerrithub.io/c/spdk/spdk/+/471752

> Zero copy is getting enabled on the socket and I do see completion
> notifications, but it's always doing a deferred copy. If there was some
> description somewhere of what causes the kernel to end up doing a deferred copy
> instead of page pinning that would be really useful.

the patch you pointed out doesn't change lib/nvmf/tcp.c only the sock code..
can you also refer to the patch that change nvmf


The patches are all in a series with the one I linked at the end. They show in the "relation chain" section of Gerrit. You can click the "Download" button in Gerrit for the patch I linked and it will give you a git-fetch command that grabs the whole series.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2019-11-03 15:59 Or Gerlitz
  0 siblings, 0 replies; 20+ messages in thread
From: Or Gerlitz @ 2019-11-03 15:59 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 602 bytes --]

On Thu, Oct 31, 2019 at 8:54 PM Walker, Benjamin
<benjamin.walker(a)intel.com> wrote:

> Here's the patch at the top of the series:
> https://review.gerrithub.io/c/spdk/spdk/+/471752

> Zero copy is getting enabled on the socket and I do see completion
> notifications, but it's always doing a deferred copy. If there was some
> description somewhere of what causes the kernel to end up doing a deferred copy
> instead of page pinning that would be really useful.

the patch you pointed out doesn't change lib/nvmf/tcp.c, only the sock code -
can you also point to the patch that changes nvmf?

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2019-10-31 21:11 Andrey Kuzmin
  0 siblings, 0 replies; 20+ messages in thread
From: Andrey Kuzmin @ 2019-10-31 21:11 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1021 bytes --]

On Thu, Oct 31, 2019, 21:54 Walker, Benjamin <benjamin.walker(a)intel.com>
wrote:

>
>
> Here's the patch at the top of the series:
> https://review.gerrithub.io/c/spdk/spdk/+/471752
>
> Zero copy is getting enabled on the socket and I do see completion
> notifications, but it's always doing a deferred copy. If there was some
> description somewhere of what causes the kernel to end up doing a deferred
> copy
> instead of page pinning that would be really useful.
>

FWIW (note the tx-scatter-gather-fraglist discussion there):
https://stackoverflow.com/questions/48378534/sending-bufs-with-msg-zerocopy-flag-and-so-zerocopy-option-in-kernel-4-14-be

Regards,
Andrey


> >
> > _______________________________________________
> > SPDK mailing list -- spdk(a)lists.01.org
> > To unsubscribe send an email to spdk-leave(a)lists.01.org
>
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org
>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2019-10-31 18:54 Walker, Benjamin
  0 siblings, 0 replies; 20+ messages in thread
From: Walker, Benjamin @ 2019-10-31 18:54 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1870 bytes --]

On Thu, 2019-10-31 at 16:21 +0200, Sasha Kotchubievsky wrote:
> > However, I just figured out what's wrong, so maybe you can help me fix it.
> > The
> > zero copy is all working mechanically, except when I get the zero copy
> > completion notification, ee_code is set to SO_EE_CODE_ZEROCOPY_COPIED. So
> > something is not configured correctly in my networking stack and the kernel
> > is
> > doing deferred copies instead. Any ideas what I'd need to do in order to
> > enable
> > this? I'm fairly certain I have it working at my desk on Fedora 30 in
> > loopback,
> > based on the CPU traces I'm seeing, so maybe it's just a matter of
> > installing
> > Fedora 30 on the benchmark system instead.
> 
> I don't know about any specific setting for enabling ZERO-COPY. But, let 
> me double check that.
> 
> > Is it enough to apply those two patches for zero-copy in target ?
> > 
> > Asynchronous writevhttps://review.gerrithub.io/c/spdk/spdk/+/470523
> > 
> > MSG_ZEROCOPY use in the posix implementation
> > https://review.gerrithub.io/c/spdk/spdk/+/471752
> > Let me get the patches sorted out in a nice series and then you can grab the
> > top
> > of the series for testing.
> 
> This will be great.  On Sunday, we will create an environment like 
> yours  (except OS) and will check the feature.


Here's the patch at the top of the series:
https://review.gerrithub.io/c/spdk/spdk/+/471752

Zero copy is getting enabled on the socket and I do see completion
notifications, but it's always doing a deferred copy. If there was some
description somewhere of what causes the kernel to end up doing a deferred copy
instead of page pinning that would be really useful.
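
For reference, this is roughly how the completion notifications are read and
how the deferred-copy case shows up. A trimmed-down sketch of the standard
error-queue handling (not the patch itself), assuming SO_ZEROCOPY is already
enabled on the socket:

#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/errqueue.h>

/* Drain one MSG_ZEROCOPY completion and report whether the kernel fell
 * back to a deferred copy (SO_EE_CODE_ZEROCOPY_COPIED). */
static int
zcopy_completion_was_copied(int fd)
{
        char control[128];
        struct msghdr msg = { 0 };
        struct cmsghdr *cm;
        struct sock_extended_err *serr;

        msg.msg_control = control;
        msg.msg_controllen = sizeof(control);

        if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0) {
                return -1;
        }

        for (cm = CMSG_FIRSTHDR(&msg); cm != NULL; cm = CMSG_NXTHDR(&msg, cm)) {
                if (cm->cmsg_level != SOL_IP && cm->cmsg_level != SOL_IPV6) {
                        continue;
                }
                serr = (struct sock_extended_err *)CMSG_DATA(cm);
                if (serr->ee_origin != SO_EE_ORIGIN_ZEROCOPY) {
                        continue;
                }
                /* ee_info..ee_data is the range of completed zero-copy sends. */
                return (serr->ee_code & SO_EE_CODE_ZEROCOPY_COPIED) ? 1 : 0;
        }

        return 0;
}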

> 
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2019-10-31 14:21 Sasha Kotchubievsky
  0 siblings, 0 replies; 20+ messages in thread
From: Sasha Kotchubievsky @ 2019-10-31 14:21 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1197 bytes --]


> However, I just figured out what's wrong, so maybe you can help me fix it. The
> zero copy is all working mechanically, except when I get the zero copy
> completion notification, ee_code is set to SO_EE_CODE_ZEROCOPY_COPIED. So
> something is not configured correctly in my networking stack and the kernel is
> doing deferred copies instead. Any ideas what I'd need to do in order to enable
> this? I'm fairly certain I have it working at my desk on Fedora 30 in loopback,
> based on the CPU traces I'm seeing, so maybe it's just a matter of installing
> Fedora 30 on the benchmark system instead.


I don't know about any specific setting for enabling ZERO-COPY. But, let 
me double check that.

> Is it enough to apply those two patches for zero-copy in target ?
>
> Asynchronous writevhttps://review.gerrithub.io/c/spdk/spdk/+/470523
>
> MSG_ZEROCOPY use in the posix implementation
> https://review.gerrithub.io/c/spdk/spdk/+/471752
> Let me get the patches sorted out in a nice series and then you can grab the top
> of the series for testing.

This will be great.  On Sunday, we will create an environment like 
yours  (except OS) and will check the feature.


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2019-10-30 23:20 Walker, Benjamin
  0 siblings, 0 replies; 20+ messages in thread
From: Walker, Benjamin @ 2019-10-30 23:20 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 12482 bytes --]

On Wed, 2019-10-30 at 23:47 +0200, Sasha Kotchubievsky wrote:
> We started from following use-case: Initiator running on ARM above 
> user-space stack (VMA) + TSO optimization . TSO -TCP segmentation 
> offload. This optimization should improve sending large packets (storage 
> case). In this case, we see benefit after applying zero-copy in any 
> block size bigger than 512B.
> 
> Obviously, you test x86 and target side. We didn't tested that yet. So, 
> I can't say what is a  bottleneck in target (x86) case. I'd suggest to 
> reduce overhead for sending big buffer in network card by configuring 
> TSO, or jubmo-frames. After that retest zero-copy solution. Maybe 
> without TSO, memcopy is not a real bottleneck.
> 
> memcpy in target side (on ARM) after applying TSO takes about 18% (4K 
> IO). So, I believe, zero-copy should improve performance.

The copy is taking ~25% of CPU in our traces normally. I agree that it should be
a huge improvement.

> 
> We can test your configuration too. It's very interesting understanding 
> what's real bottleneck.
> 
> What configuration do you use?
> - What's network card?
> - What's OS and kernel?
> - NULL devices, or real NVME disks?
> - queue depth and number of cores?

It's a Mellanox CX-5 100GbE card with Ubuntu 18.10 and kernel 5.4.0-rc4 that we
compiled ourselves. TSO and jumbo frames are enabled. The I/O is going to real
NVMe devices on the backend (there are 12 P4500 Intel NVMe SSDs attached). We're
running 8 cores on the initiator side, each sending queue depth 64 worth of 4k
I/O. The target is running just one core.

However, I just figured out what's wrong, so maybe you can help me fix it. The
zero copy is all working mechanically, except when I get the zero copy
completion notification, ee_code is set to SO_EE_CODE_ZEROCOPY_COPIED. So
something is not configured correctly in my networking stack and the kernel is
doing deferred copies instead. Any ideas what I'd need to do in order to enable
this? I'm fairly certain I have it working at my desk on Fedora 30 in loopback,
based on the CPU traces I'm seeing, so maybe it's just a matter of installing
Fedora 30 on the benchmark system instead.

> 
> Is it enough to apply those two patches for zero-copy in target ?
> 
> Asynchronous writev https://review.gerrithub.io/c/spdk/spdk/+/470523
> 
> MSG_ZEROCOPY use in the posix implementation 
> https://review.gerrithub.io/c/spdk/spdk/+/471752

Let me get the patches sorted out in a nice series and then you can grab the top
of the series for testing.

> 
> 
> BR,
> 
> Sasha
> 
> On 30-Oct-19 10:28 PM, Walker, Benjamin wrote:
> > On Wed, 2019-10-30 at 21:55 +0200, Sasha Kotchubievsky wrote:
> > > Hi Ben,
> > > 
> > > Great list of patches,
> > > This work will , definitely, take NVME-OF TCP at the next level.
> > > 
> > > We, also, work on zero-copy (TX) in initiator side based on MSG_ZEROCOPY.
> > > Preliminary results are great. In the target side, we still investigate
> > > the
> > > solution.
> > I'm very interested to hear about your work here. I've been doing it
> > primarily
> > on the target side and the preliminary results we're seeing are that it's
> > slower
> > for 4K I/O. That's not really what I expected at all, and I feel like I must
> > be
> > missing something with getting the page pinning to hit the fast path
> > consistently. It's early days with all of these things, so it's a safe
> > assumption that I either coded something incorrectly or the system isn't
> > configured right.
> > 
> > > We run a lot of tests and see great potential for zero-copy. It look like
> > > a
> > > real bottleneck. On the send side, it can be removed with POSIX interface
> > > (MSG_ZEROCOPY), in receive side, it needs deep integration between TCP
> > > stack
> > > and SPDK. Next week we will have internal brain-storming, and, I hope, on
> > > dev
> > > meetup, I'll be ready for discussions.
> > > 
> > > We will invest in both directions: user-space path and in Linux Kernel
> > > path.  But, in user-space area, at this stage, VPP is out of our interest.
> > > 
> > > Best regards
> > > Sasha
> > > 
> > > -----Original Message-----
> > > From: Walker, Benjamin <benjamin.walker(a)intel.com>
> > > Sent: Wednesday, October 30, 2019 8:47 PM
> > > To: spdk(a)lists.01.org
> > > Subject: [SPDK] Re: SPDK socket abstraction layer
> > > 
> > > On Wed, 2019-10-30 at 17:54 +0000, Harris, James R wrote:
> > > > Hi Sasha,
> > > > 
> > > > Tomek is only talking about the VPP implementation.  There are no
> > > > plans to remove the socket abstraction layer.  If anything, the
> > > > project needs to look at extending it in ways as you suggested.
> > > To expand on this, there's a lot of activity right now in the SPDK sock
> > > abstraction layer to begin to implement asynchronous operations, zero copy
> > > operations, etc. For example, see:
> > > 
> > > Asynchronous writev
> > > https://review.gerrithub.io/c/spdk/spdk/+/470523
> > > 
> > > MSG_ZEROCOPY use in the posix implementation
> > > https://review.gerrithub.io/c/spdk/spdk/+/471752
> > > 
> > > A new sock implementation based on io_uring/libaio:
> > > https://review.gerrithub.io/c/spdk/spdk/+/471314
> > > 
> > > And a new sock implementation based on Seastar:
> > > https://review.gerrithub.io/c/spdk/spdk/+/466629
> > > 
> > > So not only is the sock abstraction layer sticking around, but it's
> > > getting a
> > > lot of focus going forward. There is a lot of innovation happening in the
> > > Linux kernel around networking at all layers that we need to keep up with.
> > > 
> > > One thing I would like community feedback on is what to do about the
> > > current
> > > VPP implementation. As we make improvements and additions to the sock
> > > abstraction, it will necessarily require updates to the VPP
> > > implementation. We
> > > can of course continue to make those, but does the community see value in
> > > maintaining support here? I'd really love to see someone take up the
> > > mantle on
> > > VPP if they believe there is value that we just haven't been able to
> > > unlock
> > > yet, but absent that it's just a maintenance burden.
> > > 
> > > Personally speaking, it would be easier for me, as someone trying to
> > > evolve
> > > the sock abstraction layer, to drop VPP. That's one less implementation
> > > that I
> > > then have to go update and test each time. But I'm very open to opinions
> > > and
> > > feedback here if anyone has something to say. SPDK obviously can't just
> > > drop
> > > support without strong consensus and a considerable amount of forewarning.
> > > 
> > > Thanks,
> > > Ben
> > > 
> > > > -Jim
> > > > 
> > > > 
> > > > On 10/30/19, 10:50 AM, "Sasha Kotchubievsky"
> > > > <sashakot(a)dev.mellanox.co.il>
> > > > wrote:
> > > > 
> > > >      Hi Tomek,
> > > >      
> > > >      Are you looking for community feedback regarding VPP implementation
> > > > of
> > > > TCP
> > > >      stack, or about having socket abstraction layer in SPDK?
> > > >      I think, socket abstraction layer is critical for future
> > > > integration between
> > > >      SPDK and user-space stacks. In Mellanox, we're evaluating
> > > > integration
> > > >      between VMA (https://github.com/Mellanox/libvma) and SPDK.
> > > > Although, VMA can
> > > >      be used as replacement for Kernel implementation of Posix socket
> > > > interface,
> > > >      we see great potential in "deep" integration, which definitely
> > > > needs
> > > > keep
> > > >      existing abstraction layer. For example, one of potential
> > > > improvements
> > > > can
> > > >      be zero-copy in RX (receive) flow. I don't see how that can be
> > > > implemented
> > > >      on top of Linux Kernel stack.
> > > >      
> > > >      Best regards
> > > >      Sasha
> > > >      
> > > >      -----Original Message-----
> > > >      From: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
> > > >      Sent: Monday, October 21, 2019 3:01 PM
> > > >      To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > >      Subject: [SPDK] SPDK socket abstraction layer
> > > >      
> > > >      Hello everyone,
> > > >      
> > > >      Summary:
> > > >      
> > > >      With this message I wanted to update SPDK community on state of VPP
> > > > socket
> > > >      abstraction as of SPDK 19.07 release.
> > > >      At this time there does not seem to be a clear efficiency
> > > > improvements with
> > > >      VPP. There is no further work planned on SPDK and VPP integration.
> > > >      
> > > >      Details:
> > > >      
> > > >      As some of you may remember, SPDK 18.04 release introduced support
> > > > for
> > > >      alternative socket types. Along with that release, Vector Packet
> > > > Processing
> > > >      (VPP)<https://wiki.fd.io/view/VPP> 18.01 was integrated with SPDK,
> > > > by
> > > >      expanding socket abstraction to use VPP Communications Library
> > > > (VCL).
> > > > TCP/IP
> > > >      stack in VPP<https://wiki.fd.io/view/VPP/HostStack> was in early
> > > > stages back
> > > >      then and has seen improvements throughout the last year.
> > > >      
> > > >      To better use VPP capabilities, following fruitful collaboration
> > > > with
> > > > VPP
> > > >      team, in SPDK 19.07, this implementation was changed from VCL to
> > > > VPP Session
> > > >      API from VPP 19.04.2.
> > > >      
> > > >      VPP socket abstraction has met some challenges due to inherent
> > > > design of
> > > >      both projects, in particular related to running separate processes
> > > > and
> > > >      memory copies.
> > > >      Seeing improvements from original implementation was encouraging,
> > > > yet
> > > >      measuring against posix socket abstraction (taking into
> > > > consideration entire
> > > >      system, i.e. both processes), results are comparable. In other
> > > > words,  at
> > > >      this time there does not seem to be a clear benefit of either
> > > > socket
> > > >      abstraction from standpoint of CPU efficiency or IOPS.
> > > >      
> > > >      With this message I just wanted to update SPDK community on state
> > > > of socket
> > > >      abstraction layers as of SPDK 19.07 release. Each SPDK release
> > > > always brings
> > > >      improvements to the abstraction and its implementations, with
> > > > exciting work
> > > >      on more efficient use of kernel TCP stack - changes in SPDK 19.10
> > > > and
> > > > SPDK
> > > >      20.01.
> > > >      
> > > >      However there is no active involvement at this point around VPP
> > > >      implementation of socket abstraction in SPDK. Contributions in
> > > > this area are
> > > >      always welcome. In case you're interested in implementing further
> > > >      enhancements of VPP and SPDK integration feel free to reply, or to
> > > > use
> > > > one
> > > >      of the many SPDK community communications
> > > >      channels<https://spdk.io/community/>.
> > > >      
> > > >      Thanks,
> > > >      Tomek
> > > >      
> > > >      _______________________________________________
> > > >      SPDK mailing list -- spdk(a)lists.01.org
> > > >      To unsubscribe send an email to spdk-leave(a)lists.01.org
> > > >      _______________________________________________
> > > >      SPDK mailing list -- spdk(a)lists.01.org
> > > >      To unsubscribe send an email to spdk-leave(a)lists.01.org
> > > >      
> > > > 
> > > > _______________________________________________
> > > > SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email to
> > > > spdk-leave(a)lists.01.org
> > > _______________________________________________
> > > SPDK mailing list -- spdk(a)lists.01.org
> > > To unsubscribe send an email to spdk-leave(a)lists.01.org
> > > _______________________________________________
> > > SPDK mailing list -- spdk(a)lists.01.org
> > > To unsubscribe send an email to spdk-leave(a)lists.01.org
> > _______________________________________________
> > SPDK mailing list -- spdk(a)lists.01.org
> > To unsubscribe send an email to spdk-leave(a)lists.01.org
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2019-10-30 21:47 Sasha Kotchubievsky
  0 siblings, 0 replies; 20+ messages in thread
From: Sasha Kotchubievsky @ 2019-10-30 21:47 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 10057 bytes --]

We started from the following use case: an initiator running on ARM on top of
a user-space stack (VMA) with TSO (TCP segmentation offload) enabled. This
optimization should improve the sending of large packets (the storage case).
In that setup we see a benefit from zero-copy at any block size bigger than
512B.

Obviously, you are testing x86 and the target side. We haven't tested that
yet, so I can't say what the bottleneck is in the target (x86) case. I'd
suggest reducing the overhead of sending big buffers in the network card by
configuring TSO or jumbo frames, and then retesting the zero-copy solution.
Maybe without TSO, memcpy is not the real bottleneck.

memcpy on the target side (on ARM) after applying TSO takes about 18% of CPU
(4K IO), so I believe zero-copy should improve performance.

We can test your configuration too. It would be very interesting to understand
what the real bottleneck is.

What configuration do you use?
- What's network card?
- What's OS and kernel?
- NULL devices, or real NVME disks?
- queue depth and number of cores?

Is it enough to apply those two patches for zero-copy on the target?

Asynchronous writev https://review.gerrithub.io/c/spdk/spdk/+/470523

MSG_ZEROCOPY use in the posix implementation 
https://review.gerrithub.io/c/spdk/spdk/+/471752


BR,

Sasha

On 30-Oct-19 10:28 PM, Walker, Benjamin wrote:
> On Wed, 2019-10-30 at 21:55 +0200, Sasha Kotchubievsky wrote:
>> Hi Ben,
>>
>> Great list of patches,
>> This work will , definitely, take NVME-OF TCP at the next level.
>>
>> We, also, work on zero-copy (TX) in initiator side based on MSG_ZEROCOPY.
>> Preliminary results are great. In the target side, we still investigate the
>> solution.
> I'm very interested to hear about your work here. I've been doing it primarily
> on the target side and the preliminary results we're seeing are that it's slower
> for 4K I/O. That's not really what I expected at all, and I feel like I must be
> missing something with getting the page pinning to hit the fast path
> consistently. It's early days with all of these things, so it's a safe
> assumption that I either coded something incorrectly or the system isn't
> configured right.
>
>> We run a lot of tests and see great potential for zero-copy. It look like a
>> real bottleneck. On the send side, it can be removed with POSIX interface
>> (MSG_ZEROCOPY), in receive side, it needs deep integration between TCP stack
>> and SPDK. Next week we will have internal brain-storming, and, I hope, on dev
>> meetup, I'll be ready for discussions.
>>
>> We will invest in both directions: user-space path and in Linux Kernel
>> path.  But, in user-space area, at this stage, VPP is out of our interest.
>>
>> Best regards
>> Sasha
>>
>> -----Original Message-----
>> From: Walker, Benjamin <benjamin.walker(a)intel.com>
>> Sent: Wednesday, October 30, 2019 8:47 PM
>> To: spdk(a)lists.01.org
>> Subject: [SPDK] Re: SPDK socket abstraction layer
>>
>> On Wed, 2019-10-30 at 17:54 +0000, Harris, James R wrote:
>>> Hi Sasha,
>>>
>>> Tomek is only talking about the VPP implementation.  There are no
>>> plans to remove the socket abstraction layer.  If anything, the
>>> project needs to look at extending it in ways as you suggested.
>> To expand on this, there's a lot of activity right now in the SPDK sock
>> abstraction layer to begin to implement asynchronous operations, zero copy
>> operations, etc. For example, see:
>>
>> Asynchronous writev
>> https://review.gerrithub.io/c/spdk/spdk/+/470523
>>
>> MSG_ZEROCOPY use in the posix implementation
>> https://review.gerrithub.io/c/spdk/spdk/+/471752
>>
>> A new sock implementation based on io_uring/libaio:
>> https://review.gerrithub.io/c/spdk/spdk/+/471314
>>
>> And a new sock implementation based on Seastar:
>> https://review.gerrithub.io/c/spdk/spdk/+/466629
>>
>> So not only is the sock abstraction layer sticking around, but it's getting a
>> lot of focus going forward. There is a lot of innovation happening in the
>> Linux kernel around networking at all layers that we need to keep up with.
>>
>> One thing I would like community feedback on is what to do about the current
>> VPP implementation. As we make improvements and additions to the sock
>> abstraction, it will necessarily require updates to the VPP implementation. We
>> can of course continue to make those, but does the community see value in
>> maintaining support here? I'd really love to see someone take up the mantle on
>> VPP if they believe there is value that we just haven't been able to unlock
>> yet, but absent that it's just a maintenance burden.
>>
>> Personally speaking, it would be easier for me, as someone trying to evolve
>> the sock abstraction layer, to drop VPP. That's one less implementation that I
>> then have to go update and test each time. But I'm very open to opinions and
>> feedback here if anyone has something to say. SPDK obviously can't just drop
>> support without strong consensus and a considerable amount of forewarning.
>>
>> Thanks,
>> Ben
>>
>>> -Jim
>>>
>>>
>>> On 10/30/19, 10:50 AM, "Sasha Kotchubievsky"
>>> <sashakot(a)dev.mellanox.co.il>
>>> wrote:
>>>
>>>      Hi Tomek,
>>>      
>>>      Are you looking for community feedback regarding VPP implementation of
>>> TCP
>>>      stack, or about having socket abstraction layer in SPDK?
>>>      I think, socket abstraction layer is critical for future
>>> integration between
>>>      SPDK and user-space stacks. In Mellanox, we're evaluating integration
>>>      between VMA (https://github.com/Mellanox/libvma) and SPDK.
>>> Although, VMA can
>>>      be used as replacement for Kernel implementation of Posix socket
>>> interface,
>>>      we see great potential in "deep" integration, which definitely needs
>>> keep
>>>      existing abstraction layer. For example, one of potential improvements
>>> can
>>>      be zero-copy in RX (receive) flow. I don't see how that can be
>>> implemented
>>>      on top of Linux Kernel stack.
>>>      
>>>      Best regards
>>>      Sasha
>>>      
>>>      -----Original Message-----
>>>      From: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
>>>      Sent: Monday, October 21, 2019 3:01 PM
>>>      To: Storage Performance Development Kit <spdk(a)lists.01.org>
>>>      Subject: [SPDK] SPDK socket abstraction layer
>>>      
>>>      Hello everyone,
>>>      
>>>      Summary:
>>>      
>>>      With this message I wanted to update SPDK community on state of VPP
>>> socket
>>>      abstraction as of SPDK 19.07 release.
>>>      At this time there does not seem to be a clear efficiency
>>> improvements with
>>>      VPP. There is no further work planned on SPDK and VPP integration.
>>>      
>>>      Details:
>>>      
>>>      As some of you may remember, SPDK 18.04 release introduced support for
>>>      alternative socket types. Along with that release, Vector Packet
>>> Processing
>>>      (VPP)<https://wiki.fd.io/view/VPP> 18.01 was integrated with SPDK, by
>>>      expanding socket abstraction to use VPP Communications Library (VCL).
>>> TCP/IP
>>>      stack in VPP<https://wiki.fd.io/view/VPP/HostStack> was in early
>>> stages back
>>>      then and has seen improvements throughout the last year.
>>>      
>>>      To better use VPP capabilities, following fruitful collaboration with
>>> VPP
>>>      team, in SPDK 19.07, this implementation was changed from VCL to
>>> VPP Session
>>>      API from VPP 19.04.2.
>>>      
>>>      VPP socket abstraction has met some challenges due to inherent design of
>>>      both projects, in particular related to running separate processes and
>>>      memory copies.
>>>      Seeing improvements from original implementation was encouraging, yet
>>>      measuring against posix socket abstraction (taking into
>>> consideration entire
>>>      system, i.e. both processes), results are comparable. In other
>>> words,  at
>>>      this time there does not seem to be a clear benefit of either socket
>>>      abstraction from standpoint of CPU efficiency or IOPS.
>>>      
>>>      With this message I just wanted to update SPDK community on state
>>> of socket
>>>      abstraction layers as of SPDK 19.07 release. Each SPDK release
>>> always brings
>>>      improvements to the abstraction and its implementations, with
>>> exciting work
>>>      on more efficient use of kernel TCP stack - changes in SPDK 19.10 and
>>> SPDK
>>>      20.01.
>>>      
>>>      However there is no active involvement at this point around VPP
>>>      implementation of socket abstraction in SPDK. Contributions in
>>> this area are
>>>      always welcome. In case you're interested in implementing further
>>>      enhancements of VPP and SPDK integration feel free to reply, or to use
>>> one
>>>      of the many SPDK community communications
>>>      channels<https://spdk.io/community/>.
>>>      
>>>      Thanks,
>>>      Tomek
>>>      
>>>      _______________________________________________
>>>      SPDK mailing list -- spdk(a)lists.01.org
>>>      To unsubscribe send an email to spdk-leave(a)lists.01.org
>>>      _______________________________________________
>>>      SPDK mailing list -- spdk(a)lists.01.org
>>>      To unsubscribe send an email to spdk-leave(a)lists.01.org
>>>      
>>>
>>> _______________________________________________
>>> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email to
>>> spdk-leave(a)lists.01.org
>> _______________________________________________
>> SPDK mailing list -- spdk(a)lists.01.org
>> To unsubscribe send an email to spdk-leave(a)lists.01.org
>> _______________________________________________
>> SPDK mailing list -- spdk(a)lists.01.org
>> To unsubscribe send an email to spdk-leave(a)lists.01.org
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2019-10-30 20:28 Walker, Benjamin
  0 siblings, 0 replies; 20+ messages in thread
From: Walker, Benjamin @ 2019-10-30 20:28 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 8451 bytes --]

On Wed, 2019-10-30 at 21:55 +0200, Sasha Kotchubievsky wrote:
> Hi Ben, 
> 
> Great list of patches,
> This work will , definitely, take NVME-OF TCP at the next level.
> 
> We, also, work on zero-copy (TX) in initiator side based on MSG_ZEROCOPY.
> Preliminary results are great. In the target side, we still investigate the
> solution.

I'm very interested to hear about your work here. I've been doing it primarily
on the target side and the preliminary results we're seeing are that it's slower
for 4K I/O. That's not really what I expected at all, and I feel like I must be
missing something with getting the page pinning to hit the fast path
consistently. It's early days with all of these things, so it's a safe
assumption that I either coded something incorrectly or the system isn't
configured right.

> 
> We run a lot of tests and see great potential for zero-copy. It look like a
> real bottleneck. On the send side, it can be removed with POSIX interface
> (MSG_ZEROCOPY), in receive side, it needs deep integration between TCP stack
> and SPDK. Next week we will have internal brain-storming, and, I hope, on dev
> meetup, I'll be ready for discussions.
> 
> We will invest in both directions: user-space path and in Linux Kernel
> path.  But, in user-space area, at this stage, VPP is out of our interest. 
> 
> Best regards
> Sasha  
> 
> -----Original Message-----
> From: Walker, Benjamin <benjamin.walker(a)intel.com> 
> Sent: Wednesday, October 30, 2019 8:47 PM
> To: spdk(a)lists.01.org
> Subject: [SPDK] Re: SPDK socket abstraction layer
> 
> On Wed, 2019-10-30 at 17:54 +0000, Harris, James R wrote:
> > Hi Sasha,
> > 
> > Tomek is only talking about the VPP implementation.  There are no 
> > plans to remove the socket abstraction layer.  If anything, the 
> > project needs to look at extending it in ways as you suggested.
> 
> To expand on this, there's a lot of activity right now in the SPDK sock
> abstraction layer to begin to implement asynchronous operations, zero copy
> operations, etc. For example, see:
> 
> Asynchronous writev
> https://review.gerrithub.io/c/spdk/spdk/+/470523
> 
> MSG_ZEROCOPY use in the posix implementation
> https://review.gerrithub.io/c/spdk/spdk/+/471752
> 
> A new sock implementation based on io_uring/libaio:
> https://review.gerrithub.io/c/spdk/spdk/+/471314
> 
> And a new sock implementation based on Seastar:
> https://review.gerrithub.io/c/spdk/spdk/+/466629
> 
> So not only is the sock abstraction layer sticking around, but it's getting a
> lot of focus going forward. There is a lot of innovation happening in the
> Linux kernel around networking at all layers that we need to keep up with.
> 
> One thing I would like community feedback on is what to do about the current
> VPP implementation. As we make improvements and additions to the sock
> abstraction, it will necessarily require updates to the VPP implementation. We
> can of course continue to make those, but does the community see value in
> maintaining support here? I'd really love to see someone take up the mantle on
> VPP if they believe there is value that we just haven't been able to unlock
> yet, but absent that it's just a maintenance burden.
> 
> Personally speaking, it would be easier for me, as someone trying to evolve
> the sock abstraction layer, to drop VPP. That's one less implementation that I
> then have to go update and test each time. But I'm very open to opinions and
> feedback here if anyone has something to say. SPDK obviously can't just drop
> support without strong consensus and a considerable amount of forewarning.
> 
> Thanks,
> Ben
> 
> > -Jim
> > 
> > 
> > On 10/30/19, 10:50 AM, "Sasha Kotchubievsky" 
> > <sashakot(a)dev.mellanox.co.il>
> > wrote:
> > 
> >     Hi Tomek,
> >     
> >     Are you looking for community feedback regarding VPP implementation of
> > TCP
> >     stack, or about having socket abstraction layer in SPDK?
> >     I think, socket abstraction layer is critical for future 
> > integration between
> >     SPDK and user-space stacks. In Mellanox, we're evaluating integration
> >     between VMA (https://github.com/Mellanox/libvma) and SPDK. 
> > Although, VMA can
> >     be used as replacement for Kernel implementation of Posix socket 
> > interface,
> >     we see great potential in "deep" integration, which definitely needs
> > keep
> >     existing abstraction layer. For example, one of potential improvements
> > can
> >     be zero-copy in RX (receive) flow. I don't see how that can be
> > implemented
> >     on top of Linux Kernel stack. 
> >     
> >     Best regards
> >     Sasha
> >     
> >     -----Original Message-----
> >     From: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com> 
> >     Sent: Monday, October 21, 2019 3:01 PM
> >     To: Storage Performance Development Kit <spdk(a)lists.01.org>
> >     Subject: [SPDK] SPDK socket abstraction layer
> >     
> >     Hello everyone,
> >     
> >     Summary:
> >     
> >     With this message I wanted to update SPDK community on state of VPP
> > socket
> >     abstraction as of SPDK 19.07 release.
> >     At this time there does not seem to be a clear efficiency 
> > improvements with
> >     VPP. There is no further work planned on SPDK and VPP integration.
> >     
> >     Details:
> >     
> >     As some of you may remember, SPDK 18.04 release introduced support for
> >     alternative socket types. Along with that release, Vector Packet 
> > Processing
> >     (VPP)<https://wiki.fd.io/view/VPP> 18.01 was integrated with SPDK, by
> >     expanding socket abstraction to use VPP Communications Library (VCL).
> > TCP/IP
> >     stack in VPP<https://wiki.fd.io/view/VPP/HostStack> was in early 
> > stages back
> >     then and has seen improvements throughout the last year.
> >     
> >     To better use VPP capabilities, following fruitful collaboration with
> > VPP
> >     team, in SPDK 19.07, this implementation was changed from VCL to 
> > VPP Session
> >     API from VPP 19.04.2.
> >     
> >     VPP socket abstraction has met some challenges due to inherent design of
> >     both projects, in particular related to running separate processes and
> >     memory copies.
> >     Seeing improvements from original implementation was encouraging, yet
> >     measuring against posix socket abstraction (taking into 
> > consideration entire
> >     system, i.e. both processes), results are comparable. In other
> > words,  at
> >     this time there does not seem to be a clear benefit of either socket
> >     abstraction from standpoint of CPU efficiency or IOPS.
> >     
> >     With this message I just wanted to update SPDK community on state 
> > of socket
> >     abstraction layers as of SPDK 19.07 release. Each SPDK release 
> > always brings
> >     improvements to the abstraction and its implementations, with 
> > exciting work
> >     on more efficient use of kernel TCP stack - changes in SPDK 19.10 and
> > SPDK
> >     20.01.
> >     
> >     However there is no active involvement at this point around VPP
> >     implementation of socket abstraction in SPDK. Contributions in 
> > this area are
> >     always welcome. In case you're interested in implementing further
> >     enhancements of VPP and SPDK integration feel free to reply, or to use
> > one
> >     of the many SPDK community communications
> >     channels<https://spdk.io/community/>.
> >     
> >     Thanks,
> >     Tomek
> >     
> >     _______________________________________________
> >     SPDK mailing list -- spdk(a)lists.01.org
> >     To unsubscribe send an email to spdk-leave(a)lists.01.org
> >     _______________________________________________
> >     SPDK mailing list -- spdk(a)lists.01.org
> >     To unsubscribe send an email to spdk-leave(a)lists.01.org
> >     
> > 
> > _______________________________________________
> > SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email to 
> > spdk-leave(a)lists.01.org
> 
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2019-10-30 19:55 Sasha Kotchubievsky
  0 siblings, 0 replies; 20+ messages in thread
From: Sasha Kotchubievsky @ 2019-10-30 19:55 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 7335 bytes --]

Hi Ben, 

Great list of patches,
This work will definitely take NVMe-oF TCP to the next level.

We are also working on zero-copy (TX) on the initiator side based on MSG_ZEROCOPY. Preliminary results are great. On the target side, we are still investigating the solution.

We have run a lot of tests and see great potential for zero-copy; copying looks like a real bottleneck. On the send side it can be removed with the POSIX interface (MSG_ZEROCOPY); on the receive side it needs deep integration between the TCP stack and SPDK. Next week we will have an internal brainstorming session, and I hope to be ready for discussions at the dev meetup.
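
Just to make the send side concrete, a minimal sketch of the POSIX interface
I mean - illustrative only, not our actual initiator code:

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Opt the socket in once, right after it is created/connected. */
static int enable_zcopy(int fd)
{
    int one = 1;

    return setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one));
}

/* Zero-copy variant of a vectored send. The buffers in iov must stay valid
 * until the matching completion is read back from the error queue with
 * recvmsg(fd, ..., MSG_ERRQUEUE). */
static ssize_t writev_zcopy(int fd, struct iovec *iov, int iovcnt)
{
    struct msghdr msg;

    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = iov;
    msg.msg_iovlen = iovcnt;

    return sendmsg(fd, &msg, MSG_ZEROCOPY);
}

The deferred completion is the awkward part: buffers cannot be reused until the
notification arrives. The receive side is harder still, because incoming data
lands in kernel-owned buffers first, which is exactly why it needs the deeper
integration I mentioned above.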

We will invest in both directions: the user-space path and the Linux kernel path. But in the user-space area, at this stage, VPP is not of interest to us.

Best regards
Sasha  

-----Original Message-----
From: Walker, Benjamin <benjamin.walker(a)intel.com> 
Sent: Wednesday, October 30, 2019 8:47 PM
To: spdk(a)lists.01.org
Subject: [SPDK] Re: SPDK socket abstraction layer

On Wed, 2019-10-30 at 17:54 +0000, Harris, James R wrote:
> Hi Sasha,
> 
> Tomek is only talking about the VPP implementation.  There are no 
> plans to remove the socket abstraction layer.  If anything, the 
> project needs to look at extending it in ways as you suggested.

To expand on this, there's a lot of activity right now in the SPDK sock abstraction layer to begin to implement asynchronous operations, zero copy operations, etc. For example, see:

Asynchronous writev
https://review.gerrithub.io/c/spdk/spdk/+/470523

MSG_ZEROCOPY use in the posix implementation
https://review.gerrithub.io/c/spdk/spdk/+/471752

A new sock implementation based on io_uring/libaio:
https://review.gerrithub.io/c/spdk/spdk/+/471314

And a new sock implementation based on Seastar:
https://review.gerrithub.io/c/spdk/spdk/+/466629

So not only is the sock abstraction layer sticking around, but it's getting a lot of focus going forward. There is a lot of innovation happening in the Linux kernel around networking at all layers that we need to keep up with.

One thing I would like community feedback on is what to do about the current VPP implementation. As we make improvements and additions to the sock abstraction, it will necessarily require updates to the VPP implementation. We can of course continue to make those, but does the community see value in maintaining support here? I'd really love to see someone take up the mantle on VPP if they believe there is value that we just haven't been able to unlock yet, but absent that it's just a maintenance burden.

Personally speaking, it would be easier for me, as someone trying to evolve the sock abstraction layer, to drop VPP. That's one less implementation that I then have to go update and test each time. But I'm very open to opinions and feedback here if anyone has something to say. SPDK obviously can't just drop support without strong consensus and a considerable amount of forewarning.

Thanks,
Ben

> 
> -Jim
> 
> 
> On 10/30/19, 10:50 AM, "Sasha Kotchubievsky" 
> <sashakot(a)dev.mellanox.co.il>
> wrote:
> 
>     Hi Tomek,
>     
>     Are you looking for community feedback regarding VPP implementation of TCP
>     stack, or about having socket abstraction layer in SPDK?
>     I think, socket abstraction layer is critical for future 
> integration between
>     SPDK and user-space stacks. In Mellanox, we're evaluating integration
>     between VMA (https://github.com/Mellanox/libvma) and SPDK. 
> Although, VMA can
>     be used as replacement for Kernel implementation of Posix socket 
> interface,
>     we see great potential in "deep" integration, which definitely needs keep
>     existing abstraction layer. For example, one of potential improvements can
>     be zero-copy in RX (receive) flow. I don't see how that can be implemented
>     on top of Linux Kernel stack. 
>     
>     Best regards
>     Sasha
>     
>     -----Original Message-----
>     From: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com> 
>     Sent: Monday, October 21, 2019 3:01 PM
>     To: Storage Performance Development Kit <spdk(a)lists.01.org>
>     Subject: [SPDK] SPDK socket abstraction layer
>     
>     Hello everyone,
>     
>     Summary:
>     
>     With this message I wanted to update SPDK community on state of VPP socket
>     abstraction as of SPDK 19.07 release.
>     At this time there does not seem to be a clear efficiency 
> improvements with
>     VPP. There is no further work planned on SPDK and VPP integration.
>     
>     Details:
>     
>     As some of you may remember, SPDK 18.04 release introduced support for
>     alternative socket types. Along with that release, Vector Packet 
> Processing
>     (VPP)<https://wiki.fd.io/view/VPP> 18.01 was integrated with SPDK, by
>     expanding socket abstraction to use VPP Communications Library (VCL).
> TCP/IP
>     stack in VPP<https://wiki.fd.io/view/VPP/HostStack> was in early 
> stages back
>     then and has seen improvements throughout the last year.
>     
>     To better use VPP capabilities, following fruitful collaboration with VPP
>     team, in SPDK 19.07, this implementation was changed from VCL to 
> VPP Session
>     API from VPP 19.04.2.
>     
>     VPP socket abstraction has met some challenges due to inherent design of
>     both projects, in particular related to running separate processes and
>     memory copies.
>     Seeing improvements from original implementation was encouraging, yet
>     measuring against posix socket abstraction (taking into 
> consideration entire
>     system, i.e. both processes), results are comparable. In other words,  at
>     this time there does not seem to be a clear benefit of either socket
>     abstraction from standpoint of CPU efficiency or IOPS.
>     
>     With this message I just wanted to update SPDK community on state 
> of socket
>     abstraction layers as of SPDK 19.07 release. Each SPDK release 
> always brings
>     improvements to the abstraction and its implementations, with 
> exciting work
>     on more efficient use of kernel TCP stack - changes in SPDK 19.10 and SPDK
>     20.01.
>     
>     However there is no active involvement at this point around VPP
>     implementation of socket abstraction in SPDK. Contributions in 
> this area are
>     always welcome. In case you're interested in implementing further
>     enhancements of VPP and SPDK integration feel free to reply, or to use one
>     of the many SPDK community communications
>     channels<https://spdk.io/community/>.
>     
>     Thanks,
>     Tomek
>     
>     _______________________________________________
>     SPDK mailing list -- spdk(a)lists.01.org
>     To unsubscribe send an email to spdk-leave(a)lists.01.org
>     _______________________________________________
>     SPDK mailing list -- spdk(a)lists.01.org
>     To unsubscribe send an email to spdk-leave(a)lists.01.org
>     
> 
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email to 
> spdk-leave(a)lists.01.org

_______________________________________________
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2019-10-30 17:54 Harris, James R
  0 siblings, 0 replies; 20+ messages in thread
From: Harris, James R @ 2019-10-30 17:54 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 4027 bytes --]

Hi Sasha,

Tomek is only talking about the VPP implementation.  There are no plans to remove the socket abstraction layer.  If anything, the project needs to look at extending it in ways as you suggested.

-Jim


On 10/30/19, 10:50 AM, "Sasha Kotchubievsky" <sashakot(a)dev.mellanox.co.il> wrote:

    Hi Tomek,
    
    Are you looking for community feedback regarding VPP implementation of TCP
    stack, or about having socket abstraction layer in SPDK?
    I think, socket abstraction layer is critical for future integration between
    SPDK and user-space stacks. In Mellanox, we're evaluating integration
    between VMA (https://github.com/Mellanox/libvma) and SPDK. Although, VMA can
    be used as replacement for Kernel implementation of Posix socket interface,
    we see great potential in "deep" integration, which definitely needs keep
    existing abstraction layer. For example, one of potential improvements can
    be zero-copy in RX (receive) flow. I don't see how that can be implemented
    on top of Linux Kernel stack. 
    
    Best regards
    Sasha
    
    -----Original Message-----
    From: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com> 
    Sent: Monday, October 21, 2019 3:01 PM
    To: Storage Performance Development Kit <spdk(a)lists.01.org>
    Subject: [SPDK] SPDK socket abstraction layer
    
    Hello everyone,
    
    Summary:
    
    With this message I wanted to update SPDK community on state of VPP socket
    abstraction as of SPDK 19.07 release.
    At this time there does not seem to be a clear efficiency improvements with
    VPP. There is no further work planned on SPDK and VPP integration.
    
    Details:
    
    As some of you may remember, SPDK 18.04 release introduced support for
    alternative socket types. Along with that release, Vector Packet Processing
    (VPP)<https://wiki.fd.io/view/VPP> 18.01 was integrated with SPDK, by
    expanding socket abstraction to use VPP Communications Library (VCL). TCP/IP
    stack in VPP<https://wiki.fd.io/view/VPP/HostStack> was in early stages back
    then and has seen improvements throughout the last year.
    
    To better use VPP capabilities, following fruitful collaboration with VPP
    team, in SPDK 19.07, this implementation was changed from VCL to VPP Session
    API from VPP 19.04.2.
    
    VPP socket abstraction has met some challenges due to inherent design of
    both projects, in particular related to running separate processes and
    memory copies.
    Seeing improvements from original implementation was encouraging, yet
    measuring against posix socket abstraction (taking into consideration entire
    system, i.e. both processes), results are comparable. In other words,  at
    this time there does not seem to be a clear benefit of either socket
    abstraction from standpoint of CPU efficiency or IOPS.
    
    With this message I just wanted to update SPDK community on state of socket
    abstraction layers as of SPDK 19.07 release. Each SPDK release always brings
    improvements to the abstraction and its implementations, with exciting work
    on more efficient use of kernel TCP stack - changes in SPDK 19.10 and SPDK
    20.01.
    
    However there is no active involvement at this point around VPP
    implementation of socket abstraction in SPDK. Contributions in this area are
    always welcome. In case you're interested in implementing further
    enhancements of VPP and SPDK integration feel free to reply, or to use one
    of the many SPDK community communications
    channels<https://spdk.io/community/>.
    
    Thanks,
    Tomek
    
    _______________________________________________
    SPDK mailing list -- spdk(a)lists.01.org
    To unsubscribe send an email to spdk-leave(a)lists.01.org
    _______________________________________________
    SPDK mailing list -- spdk(a)lists.01.org
    To unsubscribe send an email to spdk-leave(a)lists.01.org
    


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [SPDK] Re: SPDK socket abstraction layer
@ 2019-10-30 17:50 Sasha Kotchubievsky
  0 siblings, 0 replies; 20+ messages in thread
From: Sasha Kotchubievsky @ 2019-10-30 17:50 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 3259 bytes --]

Hi Tomek,

Are you looking for community feedback regarding VPP implementation of TCP
stack, or about having socket abstraction layer in SPDK?
I think, socket abstraction layer is critical for future integration between
SPDK and user-space stacks. In Mellanox, we're evaluating integration
between VMA (https://github.com/Mellanox/libvma) and SPDK. Although, VMA can
be used as replacement for Kernel implementation of Posix socket interface,
we see great potential in "deep" integration, which definitely needs keep
existing abstraction layer. For example, one of potential improvements can
be zero-copy in RX (receive) flow. I don't see how that can be implemented
on top of Linux Kernel stack. 
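
To make the point about "deep" integration concrete, here is a purely
hypothetical sketch of a pluggable operations table (names are illustrative,
not the real SPDK sock layer structures). Each stack registers its own table,
and an RX zero-copy path would add operations that lend stack-owned buffers to
the application instead of copying into caller-provided ones:

#include <sys/types.h>
#include <sys/uio.h>

struct sock;    /* opaque per-connection handle */

/* Hypothetical ops table for a pluggable socket implementation. */
struct sock_ops {
    int     (*connect)(struct sock *s, const char *ip, int port);
    ssize_t (*readv)(struct sock *s, struct iovec *iov, int iovcnt);
    ssize_t (*writev)(struct sock *s, struct iovec *iov, int iovcnt);

    /* Zero-copy RX: the stack lends one of its own buffers to the caller,
     * and the caller returns it when done - no copy into a caller buffer.
     * A backend sitting on top of the kernel stack cannot easily offer
     * this, while a user-space stack such as VMA could. */
    ssize_t (*recv_zcopy)(struct sock *s, void **buf);
    void    (*recv_zcopy_done)(struct sock *s, void *buf);
};

A VMA-backed implementation would register its own table alongside the existing
posix one, which is why keeping the abstraction layer is so important for us.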

Best regards
Sasha

-----Original Message-----
From: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com> 
Sent: Monday, October 21, 2019 3:01 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] SPDK socket abstraction layer

Hello everyone,

Summary:

With this message I wanted to update SPDK community on state of VPP socket
abstraction as of SPDK 19.07 release.
At this time there does not seem to be a clear efficiency improvements with
VPP. There is no further work planned on SPDK and VPP integration.

Details:

As some of you may remember, SPDK 18.04 release introduced support for
alternative socket types. Along with that release, Vector Packet Processing
(VPP)<https://wiki.fd.io/view/VPP> 18.01 was integrated with SPDK, by
expanding socket abstraction to use VPP Communications Library (VCL). TCP/IP
stack in VPP<https://wiki.fd.io/view/VPP/HostStack> was in early stages back
then and has seen improvements throughout the last year.

To better use VPP capabilities, following fruitful collaboration with VPP
team, in SPDK 19.07, this implementation was changed from VCL to VPP Session
API from VPP 19.04.2.

VPP socket abstraction has met some challenges due to inherent design of
both projects, in particular related to running separate processes and
memory copies.
Seeing improvements from original implementation was encouraging, yet
measuring against posix socket abstraction (taking into consideration entire
system, i.e. both processes), results are comparable. In other words,  at
this time there does not seem to be a clear benefit of either socket
abstraction from standpoint of CPU efficiency or IOPS.

With this message I just wanted to update SPDK community on state of socket
abstraction layers as of SPDK 19.07 release. Each SPDK release always brings
improvements to the abstraction and its implementations, with exciting work
on more efficient use of kernel TCP stack - changes in SPDK 19.10 and SPDK
20.01.

However there is no active involvement at this point around VPP
implementation of socket abstraction in SPDK. Contributions in this area are
always welcome. In case you're interested in implementing further
enhancements of VPP and SPDK integration feel free to reply, or to use one
of the many SPDK community communications
channels<https://spdk.io/community/>.

Thanks,
Tomek

_______________________________________________
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2020-06-01 16:55 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-30 18:46 [SPDK] Re: SPDK socket abstraction layer Walker, Benjamin
  -- strict thread matches above, loose matches on Subject: below --
2020-06-01 16:55 Zawadzki, Tomasz
2019-11-07 18:45 Walker, Benjamin
2019-11-07 16:26 Or Gerlitz
2019-11-06 10:19 Or Gerlitz
2019-11-05 19:56 Sasha Kotchubievsky
2019-11-05 18:08 Walker, Benjamin
2019-11-05 15:06 Or Gerlitz
2019-11-05  5:29 allenz
2019-11-03 16:56 Walker, Benjamin
2019-11-03 15:59 Or Gerlitz
2019-10-31 21:11 Andrey Kuzmin
2019-10-31 18:54 Walker, Benjamin
2019-10-31 14:21 Sasha Kotchubievsky
2019-10-30 23:20 Walker, Benjamin
2019-10-30 21:47 Sasha Kotchubievsky
2019-10-30 20:28 Walker, Benjamin
2019-10-30 19:55 Sasha Kotchubievsky
2019-10-30 17:54 Harris, James R
2019-10-30 17:50 Sasha Kotchubievsky
