linux-rdma.vger.kernel.org archive mirror
* Should I expect lower bandwidth when using IBV_QPT_RAW_PACKET and steering rule?
@ 2020-03-17 21:07 Terry Toole
  2020-03-17 23:40 ` Mark Bloch
  0 siblings, 1 reply; 9+ messages in thread
From: Terry Toole @ 2020-03-17 21:07 UTC (permalink / raw)
  To: linux-rdma

Hi,
I am trying to understand whether I should expect lower bandwidth from
a setup that uses IBV_QPT_RAW_PACKET and a steering rule that matches
on the source and destination NICs' MAC addresses. I am trying out an
example program from the Mellanox community website which uses these
features. For the code example, please see

https://community.mellanox.com/s/article/raw-ethernet-programming--basic-introduction---code-example
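
For context, the steering rule in that example reduces to roughly the
following sketch (my paraphrase rather than the article's exact code;
the helper name and the hard-coded port number are illustrative):

/* Attach a MAC-based steering rule to a raw packet QP. Matches only
 * on the destination MAC; minimal error handling. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct raw_eth_flow_attr {
        struct ibv_flow_attr     attr;
        struct ibv_flow_spec_eth spec_eth;
} __attribute__((packed));

struct ibv_flow *attach_mac_rule(struct ibv_qp *qp, const uint8_t dst_mac[6])
{
        struct raw_eth_flow_attr flow = {
                .attr = {
                        .type         = IBV_FLOW_ATTR_NORMAL,
                        .size         = sizeof(flow),
                        .num_of_specs = 1,
                        .port         = 1, /* physical port of the NIC */
                },
                .spec_eth = {
                        .type = IBV_FLOW_SPEC_ETH,
                        .size = sizeof(struct ibv_flow_spec_eth),
                },
        };
        struct ibv_flow *rule;

        /* Match value plus all-ones mask: exact match on dst MAC. */
        memcpy(flow.spec_eth.val.dst_mac, dst_mac, 6);
        memset(flow.spec_eth.mask.dst_mac, 0xff, 6);

        rule = ibv_create_flow(qp, &flow.attr);
        if (!rule)
                perror("ibv_create_flow");
        return rule;
}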

In my test setup I am using two Mellanox MCX515A-CCAT NICs, which have
a maximum bandwidth of 100 Gbps. They are installed in two Linux
computers connected by a single cable (no switch or router).
Previously, when I ran tests such as ib_send_bw or ib_write_bw with
the UD, UC, or RC transport, I saw bandwidths of ~90 Gbps or higher.
With the example from "Raw Ethernet Programming: Basic Introduction",
after adding some code to count packets and measure time, I am seeing
bandwidths of around 10 Gbps. I have been varying parameters such as
the MTU, the packet size, and IBV_SEND_INLINE. Is the reduction in
bandwidth due to the packet filtering done by the steering
rule? Or should I expect bandwidths similar to my earlier tests
(~90 Gbps), the problem being a lack of optimization in my setup?
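
For reference, the packet counting and timing I added amounts to
roughly the following (a simplified sketch, helper names are mine;
note that on-the-wire bytes per frame include preamble/SFD, the
inter-frame gap, and the FCS, about 24 bytes on top of the posted
frame, which matters at small frame sizes):

#include <stdint.h>
#include <time.h>

static double elapsed_seconds(struct timespec start, struct timespec end)
{
        return (end.tv_sec - start.tv_sec) +
               (end.tv_nsec - start.tv_nsec) / 1e9;
}

/* Gbit/s from a frame count, bytes per frame, and elapsed seconds. */
static double rate_gbps(uint64_t frames, uint64_t bytes_per_frame,
                        double seconds)
{
        return (double)frames * bytes_per_frame * 8.0 / (seconds * 1e9);
}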

Thanks for any help you can provide.


* Re: Should I expect lower bandwidth when using IBV_QPT_RAW_PACKET and steering rule?
  2020-03-17 21:07 Should I expect lower bandwidth when using IBV_QPT_RAW_PACKET and steering rule? Terry Toole
@ 2020-03-17 23:40 ` Mark Bloch
  2020-03-18 11:21   ` Terry Toole
  2020-03-20 15:40   ` Dimitris Dimitropoulos
  0 siblings, 2 replies; 9+ messages in thread
From: Mark Bloch @ 2020-03-17 23:40 UTC (permalink / raw)
  To: Terry Toole, linux-rdma

Hey Terry,

On 3/17/20 2:07 PM, Terry Toole wrote:
> Hi,
> I am trying to understand whether I should expect lower bandwidth from
> a setup that uses IBV_QPT_RAW_PACKET and a steering rule that matches
> on the source and destination NICs' MAC addresses. I am trying out an
> example program from the Mellanox community website which uses these
> features. For the code example, please see
> 
> https://community.mellanox.com/s/article/raw-ethernet-programming--basic-introduction---code-example
> 
> In my test setup I am using two Mellanox MCX515A-CCAT NICs, which have
> a maximum bandwidth of 100 Gbps. They are installed in two Linux
> computers connected by a single cable (no switch or router).
> Previously, when I ran tests such as ib_send_bw or ib_write_bw with
> the UD, UC, or RC transport, I saw bandwidths of ~90 Gbps or higher.
> With the example from "Raw Ethernet Programming: Basic Introduction",
> after adding some code to count packets and measure time, I am seeing
> bandwidths of around 10 Gbps. I have been varying parameters such as
> the MTU, the packet size, and IBV_SEND_INLINE. Is the reduction in
> bandwidth due to the packet filtering done by the steering

While steering requires more work from the HW (if a packet hits too
many steering rules before being directed to a TIR/RQ, it might
affect the BW), a single steering rule shouldn't.

> rule? Or should I expect bandwidths similar to my earlier tests
> (~90 Gbps), the problem being a lack of optimization in my setup?

More likely a lack of optimization in the test program you've used.
When sending traffic over a RAW_ETHERNET QP, there are a lot of
optimizations that can/should be done.

If you would like to have a look at a highly optimized datapath from userspace:
https://github.com/DPDK/dpdk/blob/master/drivers/net/mlx5/mlx5_rxtx.c

With the right code you should have no issue reaching line-rate speeds with raw_ethernet QPs.
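
To give a flavor of what I mean, here is a rough, untested sketch (not
from any real application; BATCH and the inline cutoff are
illustrative): chain a batch of WRs so a single ibv_post_send() rings
one doorbell, request a completion only for the last WR of each batch,
and inline small frames:

#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

#define BATCH 32

static int post_tx_batch(struct ibv_qp *qp, struct ibv_mr *mr,
                         char *frames, uint32_t frame_len)
{
        struct ibv_sge sge[BATCH];
        struct ibv_send_wr wr[BATCH], *bad_wr;

        memset(wr, 0, sizeof(wr));
        for (int i = 0; i < BATCH; i++) {
                sge[i].addr   = (uintptr_t)(frames + (size_t)i * frame_len);
                sge[i].length = frame_len;
                sge[i].lkey   = mr->lkey;

                wr[i].wr_id   = i;
                wr[i].sg_list = &sge[i];
                wr[i].num_sge = 1;
                wr[i].opcode  = IBV_WR_SEND;
                wr[i].next    = (i + 1 < BATCH) ? &wr[i + 1] : NULL;
                /* One CQE per batch instead of one per frame; assumes
                 * the QP was created with sq_sig_all = 0. */
                wr[i].send_flags = (i == BATCH - 1) ? IBV_SEND_SIGNALED : 0;
                /* Inlining copies the frame into the WQE and skips the
                 * DMA read; check qp cap.max_inline_data first. */
                if (frame_len <= 128)
                        wr[i].send_flags |= IBV_SEND_INLINE;
        }
        /* One doorbell for the whole chain. The caller must poll the
         * CQ for the signaled completions before the SQ fills up. */
        return ibv_post_send(qp, &wr[0], &bad_wr);
}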

Mark
 
> 
> Thanks for any help you can provide.
> 


* Re: Should I expect lower bandwidth when using IBV_QPT_RAW_PACKET and steering rule?
  2020-03-17 23:40 ` Mark Bloch
@ 2020-03-18 11:21   ` Terry Toole
  2020-03-20 15:40   ` Dimitris Dimitropoulos
  1 sibling, 0 replies; 9+ messages in thread
From: Terry Toole @ 2020-03-18 11:21 UTC (permalink / raw)
  To: Mark Bloch; +Cc: linux-rdma

Hi Mark,

Thanks for the response and the example.

Terry

On Tue, Mar 17, 2020 at 7:40 PM Mark Bloch <markb@mellanox.com> wrote:
>
> Hey Terry,
>
> On 3/17/20 2:07 PM, Terry Toole wrote:
> > Hi,
> > I am trying to understand whether I should expect lower bandwidth from
> > a setup that uses IBV_QPT_RAW_PACKET and a steering rule that matches
> > on the source and destination NICs' MAC addresses. I am trying out an
> > example program from the Mellanox community website which uses these
> > features. For the code example, please see
> >
> > https://community.mellanox.com/s/article/raw-ethernet-programming--basic-introduction---code-example
> >
> > In my test setup I am using two Mellanox MCX515A-CCAT NICs, which have
> > a maximum bandwidth of 100 Gbps. They are installed in two Linux
> > computers connected by a single cable (no switch or router).
> > Previously, when I ran tests such as ib_send_bw or ib_write_bw with
> > the UD, UC, or RC transport, I saw bandwidths of ~90 Gbps or higher.
> > With the example from "Raw Ethernet Programming: Basic Introduction",
> > after adding some code to count packets and measure time, I am seeing
> > bandwidths of around 10 Gbps. I have been varying parameters such as
> > the MTU, the packet size, and IBV_SEND_INLINE. Is the reduction in
> > bandwidth due to the packet filtering done by the steering
>
> While steering requires more work from the HW (if a packet hits too
> many steering rules before being directed to a TIR/RQ, it might
> affect the BW), a single steering rule shouldn't.
>
> > rule? Or should I expect bandwidths similar to my earlier tests
> > (~90 Gbps), the problem being a lack of optimization in my setup?
>
> More likely a lack of optimization in the test program you've used.
> When sending traffic over a RAW_ETHERNET QP, there are a lot of
> optimizations that can/should be done.
>
> If you would like to have a look at a highly optimized datapath from userspace:
> https://github.com/DPDK/dpdk/blob/master/drivers/net/mlx5/mlx5_rxtx.c
>
> With the right code you should have no issue reaching line-rate speeds with raw_ethernet QPs.
>
> Mark
>
> >
> > Thanks for any help you can provide.
> >


* Re: Should I expect lower bandwidth when using IBV_QPT_RAW_PACKET and steering rule?
  2020-03-17 23:40 ` Mark Bloch
  2020-03-18 11:21   ` Terry Toole
@ 2020-03-20 15:40   ` Dimitris Dimitropoulos
  2020-03-20 16:04     ` Dimitris Dimitropoulos
  2020-03-20 16:05     ` Mark Bloch
  1 sibling, 2 replies; 9+ messages in thread
From: Dimitris Dimitropoulos @ 2020-03-20 15:40 UTC (permalink / raw)
  To: Mark Bloch; +Cc: Terry Toole, linux-rdma

Hi Mark,

Just a clarification: when you say reach line-rate speeds, do you mean
with no packet drops?

Thanks
Dimitris

On Tue, Mar 17, 2020 at 4:40 PM Mark Bloch <markb@mellanox.com> wrote:
> If you would like to have a look at a highly optimized datapath from userspace:
> https://github.com/DPDK/dpdk/blob/master/drivers/net/mlx5/mlx5_rxtx.c
>
> With the right code you should have no issue reaching line-rate speeds with raw_ethernet QPs.
>
> Mark


* Re: Should I expect lower bandwidth when using IBV_QPT_RAW_PACKET and steering rule?
  2020-03-20 15:40   ` Dimitris Dimitropoulos
@ 2020-03-20 16:04     ` Dimitris Dimitropoulos
  2020-03-20 16:34       ` Jason Gunthorpe
  2020-03-20 16:05     ` Mark Bloch
  1 sibling, 1 reply; 9+ messages in thread
From: Dimitris Dimitropoulos @ 2020-03-20 16:04 UTC (permalink / raw)
  To: Mark Bloch; +Cc: Terry Toole, linux-rdma

I should make the question more precise: in a peer-to-peer setup where
RDMA UC can achieve line rate with no packet drops, can we expect the
same result with UDP over IB verbs?

Are there any IB verbs library restrictions that would cause a
difference in line rate or drop rate?

Dimitris

On Fri, Mar 20, 2020 at 8:40 AM Dimitris Dimitropoulos
<d.dimitropoulos@imatrex.com> wrote:
>
> Hi Mark,
>
> Just a clarification: when you say reach line-rate speeds, do you mean
> with no packet drops?
>
> Thanks
> Dimitris
>
> On Tue, Mar 17, 2020 at 4:40 PM Mark Bloch <markb@mellanox.com> wrote:
> > If you would like to have a look at a highly optimized datapath from userspace:
> > https://github.com/DPDK/dpdk/blob/master/drivers/net/mlx5/mlx5_rxtx.c
> >
> > With the right code you should have no issue reaching line-rate speeds with raw_ethernet QPs.
> >
> > Mark


* Re: Should I expect lower bandwidth when using IBV_QPT_RAW_PACKET and steering rule?
  2020-03-20 15:40   ` Dimitris Dimitropoulos
  2020-03-20 16:04     ` Dimitris Dimitropoulos
@ 2020-03-20 16:05     ` Mark Bloch
  2020-03-21  1:47       ` Dimitris Dimitropoulos
  1 sibling, 1 reply; 9+ messages in thread
From: Mark Bloch @ 2020-03-20 16:05 UTC (permalink / raw)
  To: Dimitris Dimitropoulos; +Cc: Terry Toole, linux-rdma

Hey Dimitris,

On 3/20/2020 08:40, Dimitris Dimitropoulos wrote:
> Hi Mark,
> 
> Just a clarification: when you say reach line-rate speeds, do you mean
> with no packet drops?

Yep*. You can have a look at the following PDF for some numbers:
https://fast.dpdk.org/doc/perf/DPDK_19_08_Mellanox_NIC_performance_report.pdf

*I just want to point out that in the real world (as opposed to the
tests/applications used for the measurements in the PDF) there is
usually a lot more processing inside the application itself, which
can make it harder to handle a large number of packets and still
saturate the wire. Once you take packet sizes, MTU, and server
configuration/hardware into account, it can get a bit tricky, but
not impossible.

I should also add that we currently support inserting millions of
flow steering rules with a very high update rate, but you have to
use the DV APIs to achieve that:
https://github.com/linux-rdma/rdma-core/blob/master/providers/mlx5/man/mlx5dv_dr_flow.3.md
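
The call sequence looks roughly like this (a skeleton only: error
handling is omitted, and the match mask/value buffers, which use the
device-specific PRM match-parameter layout, are left for the caller
to fill):

/* Skeleton of the mlx5dv_dr call sequence for steering matched
 * packets to a QP; mask/value must follow the mlx5 PRM layout. */
#include <infiniband/mlx5dv.h>

static struct mlx5dv_dr_rule *
steer_to_qp(struct ibv_context *ctx, struct ibv_qp *qp,
            struct mlx5dv_flow_match_parameters *mask,
            struct mlx5dv_flow_match_parameters *value)
{
        struct mlx5dv_dr_domain *dom =
                mlx5dv_dr_domain_create(ctx, MLX5DV_DR_DOMAIN_TYPE_NIC_RX);
        struct mlx5dv_dr_table *tbl = mlx5dv_dr_table_create(dom, 0);
        /* Criteria bit 0 enables matching on the outer headers. */
        struct mlx5dv_dr_matcher *matcher =
                mlx5dv_dr_matcher_create(tbl, 0, 1 << 0, mask);
        struct mlx5dv_dr_action *dest =
                mlx5dv_dr_action_create_dest_ibv_qp(qp);
        struct mlx5dv_dr_action *actions[] = { dest };

        return mlx5dv_dr_rule_create(matcher, value, 1, actions);
}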

Mark

> 
> Thanks
> Dimitris
> 
> On Tue, Mar 17, 2020 at 4:40 PM Mark Bloch <markb@mellanox.com> wrote:
>> If you would like to have a look at a highly optimized datapath from userspace:
>> https://github.com/DPDK/dpdk/blob/master/drivers/net/mlx5/mlx5_rxtx.c
>>
>> With the right code you should have no issue reaching line-rate speeds with raw_ethernet QPs.
>>
>> Mark


* Re: Should I expect lower bandwidth when using IBV_QPT_RAW_PACKET and steering rule?
  2020-03-20 16:04     ` Dimitris Dimitropoulos
@ 2020-03-20 16:34       ` Jason Gunthorpe
  2020-03-21  1:43         ` Dimitris Dimitropoulos
  0 siblings, 1 reply; 9+ messages in thread
From: Jason Gunthorpe @ 2020-03-20 16:34 UTC (permalink / raw)
  To: Dimitris Dimitropoulos; +Cc: Mark Bloch, Terry Toole, linux-rdma

On Fri, Mar 20, 2020 at 09:04:34AM -0700, Dimitris Dimitropoulos wrote:
> I should make the question more precise: in a peer-to-peer setup where
> RDMA UC can achieve line rate with no packet drops, can we expect the
> same result with UDP over IB verbs?

I would expect UDP and UD to be similar.

But verbs will be slower than DPDK.

Jason


* Re: Should I expect lower bandwidth when using IBV_QPT_RAW_PACKET and steering rule?
  2020-03-20 16:34       ` Jason Gunthorpe
@ 2020-03-21  1:43         ` Dimitris Dimitropoulos
  0 siblings, 0 replies; 9+ messages in thread
From: Dimitris Dimitropoulos @ 2020-03-21  1:43 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Mark Bloch, Terry Toole, linux-rdma

Got it. Thank you.

Dimitris

On Fri, Mar 20, 2020 at 9:34 AM Jason Gunthorpe <jgg@ziepe.ca> wrote:
>
> On Fri, Mar 20, 2020 at 09:04:34AM -0700, Dimitris Dimitropoulos wrote:
> > I should make the question more precise: in a peer-to-peer setup where
> > RDMA UC can achieve line rate with no packet drops, can we expect the
> > same result with UDP over IB verbs?
>
> I would expect UDP and UD to be similar.
>
> But verbs will be slower than DPDK.
>
> Jason


* Re: Should I expect lower bandwidth when using IBV_QPT_RAW_PACKET and steering rule?
  2020-03-20 16:05     ` Mark Bloch
@ 2020-03-21  1:47       ` Dimitris Dimitropoulos
  0 siblings, 0 replies; 9+ messages in thread
From: Dimitris Dimitropoulos @ 2020-03-21  1:47 UTC (permalink / raw)
  To: Mark Bloch; +Cc: Terry Toole, linux-rdma

Thanks Mark. We haven't been using DPDK. We're using IB verbs, but
we've had difficulty reaching these rates with UDP, even though with
UC it has been straightforward.

Dimitris

On Fri, Mar 20, 2020 at 9:05 AM Mark Bloch <markb@mellanox.com> wrote:
>
> Hey Dimitris,
>
> On 3/20/2020 08:40, Dimitris Dimitropoulos wrote:
> > Hi Mark,
> >
> > Just a clarification: when you say reach line-rate speeds, do you mean
> > with no packet drops?
>
> Yep*. You can have a look at the following PDF for some numbers:
> https://fast.dpdk.org/doc/perf/DPDK_19_08_Mellanox_NIC_performance_report.pdf
>
> *I just want to point out that in the real world (as opposed to the
> tests/applications used for the measurements in the PDF) there is
> usually a lot more processing inside the application itself, which
> can make it harder to handle a large number of packets and still
> saturate the wire. Once you take packet sizes, MTU, and server
> configuration/hardware into account, it can get a bit tricky, but
> not impossible.
>
> I should also add that we currently support inserting millions of
> flow steering rules with a very high update rate, but you have to
> use the DV APIs to achieve that:
> https://github.com/linux-rdma/rdma-core/blob/master/providers/mlx5/man/mlx5dv_dr_flow.3.md
>
> Mark
>
> >
> > Thanks
> > Dimitris
> >
> > On Tue, Mar 17, 2020 at 4:40 PM Mark Bloch <markb@mellanox.com> wrote:
> >> If you would like to have a look at a highly optimized datapath from userspace:
> >> https://github.com/DPDK/dpdk/blob/master/drivers/net/mlx5/mlx5_rxtx.c
> >>
> >> With the right code you should have no issue reaching line-rate speeds with raw_ethernet QPs.
> >>
> >> Mark

