* Mellanox Flow Steering
@ 2015-04-12  5:10 Raghav Sethi
       [not found] ` <CAC2O_6o88Bq=zYgqK8M=YYWFuJ5Jsc5xBcs5Eqb8+2OGzXcN1g-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 7+ messages in thread
From: Raghav Sethi @ 2015-04-12  5:10 UTC (permalink / raw)
  To: dev-VfR2kkLFssw

Hi folks,

I'm trying to use the flow steering features of the Mellanox card to make
effective use of a multicore server for a benchmark.

The system has a single-port Mellanox ConnectX-3 EN, and I want to use 4 of
the 32 cores present and 4 of the 16 RX queues supported by the hardware
(i.e. one RX queue per core).

I assign one RX queue to each of the cores, but obviously without flow
steering all the packets land on a single core, since they all carry the
same IP and UDP headers and differ only in the destination MAC of the
Ethernet header. I've set up the client so that it sends packets with a
different destination MAC for each RX queue (e.g. RX queue 1 should get
10:00:00:00:00:00, RX queue 2 should get 10:00:00:00:00:01, and so on).
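
For context, the receive side of my code is set up roughly like the sketch
below (a simplified illustration of the l2fwd-derived setup, not the exact
code from the gist; the mbuf pool and the usual EAL init are assumed):

    #include <rte_ethdev.h>
    #include <rte_launch.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    #define PORT_ID    0
    #define NB_QUEUES  4
    #define BURST_SIZE 32

    static uint16_t queue_ids[NB_QUEUES] = { 0, 1, 2, 3 };

    /* Each worker lcore polls exactly one RX queue. */
    static int
    rx_loop(void *arg)
    {
            uint16_t q = *(uint16_t *)arg;
            struct rte_mbuf *bufs[BURST_SIZE];

            for (;;) {
                    uint16_t nb = rte_eth_rx_burst(PORT_ID, q, bufs, BURST_SIZE);
                    for (uint16_t i = 0; i < nb; i++) {
                            /* ... per-packet work ... */
                            rte_pktmbuf_free(bufs[i]);
                    }
            }
            return 0;
    }

    /* In main(), after rte_eal_init() and mbuf_pool creation: */
    struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = ETH_MQ_RX_NONE } };
    rte_eth_dev_configure(PORT_ID, NB_QUEUES, NB_QUEUES, &port_conf);
    for (uint16_t q = 0; q < NB_QUEUES; q++)
            rte_eth_rx_queue_setup(PORT_ID, q, 128,
                                   rte_eth_dev_socket_id(PORT_ID),
                                   NULL, mbuf_pool);
    /* (TX queue setup omitted for brevity.) */
    rte_eth_dev_start(PORT_ID);

    unsigned lcore; int next_q = 0;
    RTE_LCORE_FOREACH_SLAVE(lcore) {
            if (next_q < NB_QUEUES)
                    rte_eal_remote_launch(rx_loop, &queue_ids[next_q++], lcore);
    }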

I try to accomplish this by setting flow steering rules with ethtool, e.g.:

  ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:00 action 1 loc 1
  ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:01 action 2 loc 2
  ...

As soon as I set up these rules, though, packets matching them simply stop
reaching my application. All other packets still get through, and removing
the rules makes the matching packets get through again as well. I'm fairly
sure my application is polling all the queues, but I also tried adding a
rule for every single destination RX queue (0-16), and that doesn't work
either.

If it helps, my code is based on the l2fwd sample application, and is here:
https://gist.github.com/raghavsethi/416fb77d74ccf81bd93e

Also, I added the line "options mlx4_core log_num_mgm_entry_size=-1" to my
configuration under /etc/init.d and reloaded the driver before running any
of these tests.

Any ideas what might be causing my packets to drop? In case this is a
Mellanox issue, should I be talking to their customer support?

Best,
Raghav Sethi


* Re: Mellanox Flow Steering
       [not found] ` <CAC2O_6o88Bq=zYgqK8M=YYWFuJ5Jsc5xBcs5Eqb8+2OGzXcN1g-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2015-04-12 11:47   ` Zhou, Danny
       [not found]     ` <DFDF335405C17848924A094BC35766CF0AB54724-0J0gbvR4kTg/UvCtAeCM4rfspsVTdybXVpNB7YpNyf8@public.gmane.org>
  0 siblings, 1 reply; 7+ messages in thread
From: Zhou, Danny @ 2015-04-12 11:47 UTC (permalink / raw)
  To: Raghav Sethi, dev-VfR2kkLFssw

Currently, a DPDK PMD and the NIC's kernel driver cannot drive the same NIC device simultaneously. When you
use ethtool to set up flow steering rules, the rules are written to the NIC through the kernel driver's ethtool support. But once
the DPDK PMD takes over the same device, the rules previously written via ethtool/the kernel driver no longer apply, so
you would have to use the DPDK APIs to program your rules into the NIC again.
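
For example, on NICs whose PMD implements the DPDK filter_ctrl API (the Intel PMDs do; I have not checked mlx4), a
MAC-based steering rule could in principle be re-applied from the application roughly like the sketch below. This is only an
illustration of the API shape; which filter types and ether_type values a given driver accepts is driver-specific:

    #include <rte_ethdev.h>
    #include <rte_eth_ctrl.h>
    #include <rte_ether.h>

    /* Steer frames with this destination MAC to RX queue 1. */
    struct rte_eth_ethertype_filter filter = {
            .mac_addr   = { .addr_bytes = { 0x10, 0x00, 0x00, 0x00, 0x00, 0x00 } },
            .ether_type = ETHER_TYPE_IPv4,
            .flags      = RTE_ETHTYPE_FLAGS_MAC,   /* also match on the MAC */
            .queue      = 1,
    };

    if (rte_eth_dev_filter_supported(0 /* port */, RTE_ETH_FILTER_ETHERTYPE) == 0)
            rte_eth_dev_filter_ctrl(0, RTE_ETH_FILTER_ETHERTYPE,
                                    RTE_ETH_FILTER_ADD, &filter);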

The bifurcated driver was designed to provide a solution for scenarios where the kernel driver and DPDK coexist, but
it raised security concerns, so the netdev maintainers rejected it.

This should not be a Mellanox hardware problem; you would see the same result on an Intel NIC.

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Raghav Sethi
> Sent: Sunday, April 12, 2015 1:10 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] Mellanox Flow Steering
> 
> Hi folks,
> 
> I'm trying to use the flow steering features of the Mellanox card to
> effectively use a multicore server for a benchmark.
> 
> The system has a single-port Mellanox ConnectX-3 EN, and I want to use 4 of
> the 32 cores present and 4 of the 16 RX queues supported by the hardware
> (i.e. one RX queue per core).
> 
> I assign RX queues to each of the cores, but obviously without flow
> steering (all the packets have the same IP and UDP headers, but different
> dest MACs in the ethernet headers) each of the packets hits one core. I've
> set up the client such that it sends packets with a different destination
> MAC for each RX queue (e.g. RX queue 1 should get 10:00:00:00:00:00, RX
> queue 2 should get 10:00:00:00:00:01 and so on).
> 
> I try to accomplish this by using ethtool to set flow steering rules (e.g.
> ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:00 action 1 loc 1,
> ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:01 action 2 loc 2..).
> 
> As soon as I set up these rules though, packets matching them just stop
> hitting my application. All other packets go through, and removing the
> rules also causes the packets to go through. I'm pretty sure my application
> is looking at all the queues, but I tried changing the rules to try a rule
> for every single destination RX queue (0-16), and that doesn't work either.
> 
> If it helps, my code is based on the l2fwd sample application, and is here:
> https://gist.github.com/raghavsethi/416fb77d74ccf81bd93e
> 
> Also, I added the following to my /etc/init.d: options mlx4_core
> log_num_mgm_entry_size=-1, and restarted the driver before any of these
> tests.
> 
> Any ideas what might be causing my packets to drop? In case this is a
> Mellanox issue, should I be talking to their customer support?
> 
> Best,
> Raghav Sethi


* Re: Mellanox Flow Steering
       [not found]     ` <DFDF335405C17848924A094BC35766CF0AB54724-0J0gbvR4kTg/UvCtAeCM4rfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2015-04-12 16:17       ` Raghav Sethi
       [not found]         ` <CAC2O_6oPEZEDoXi5kCq=0=5wNNL7t7-Y4W=UBPu+eaybDSaeHA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 7+ messages in thread
From: Raghav Sethi @ 2015-04-12 16:17 UTC (permalink / raw)
  To: Zhou, Danny, dev-VfR2kkLFssw

Hi Danny,

Thanks, that's helpful. However, Mellanox cards don't support Intel's Flow
Director, so how would one go about installing these rules in the NIC? The
only technique the Mellanox User Manual (
http://www.mellanox.com/related-docs/prod_software/Mellanox_EN_for_Linux_User_Manual_v2_0-3_0_0.pdf)
lists for using flow steering is the ethtool-based method.

Additionally, the mlx4_core driver is used both by the DPDK PMD and by the
kernel (unlike igb_uio, which has to be loaded to use the PMD on Intel
NICs), and it seems odd that only the packets matched by the rules fail to
reach the DPDK application. That suggests to me that the NIC is still
acting on the rules somehow even while a DPDK application is running.

Best,
Raghav

On Sun, Apr 12, 2015 at 7:47 AM Zhou, Danny <danny.zhou-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org> wrote:

> Currently, the DPDK PMD and NIC kernel driver cannot drive a same NIC
> device simultaneously. When you
> use ethtool to setup flow director filter, the rules are written to NIC
> via ethtool support in kernel driver. But when
> DPDK PMD is loaded to drive same device, the rules previously written by
> ethtool/kernel_driver will be invalid, so
> you may have to use DPDK APIs to rewrite your rules to the NIC again.
>
> The bifurcated driver is designed to provide a solution to support the
> kernel driver and DPDK coexist scenarios, but
> it has security concern so netdev maintainer rejects it.
>
> It should not be a Mellanox hardware problem, if you try it on Intel NIC
> the result is same.
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces-VfR2kkLFssw@public.gmane.org] On Behalf Of Raghav Sethi
> > Sent: Sunday, April 12, 2015 1:10 PM
> > To: dev-VfR2kkLFssw@public.gmane.org
> > Subject: [dpdk-dev] Mellanox Flow Steering
> >
> > Hi folks,
> >
> > I'm trying to use the flow steering features of the Mellanox card to
> > effectively use a multicore server for a benchmark.
> >
> > The system has a single-port Mellanox ConnectX-3 EN, and I want to use 4
> of
> > the 32 cores present and 4 of the 16 RX queues supported by the hardware
> > (i.e. one RX queue per core).
> >
> > I assign RX queues to each of the cores, but obviously without flow
> > steering (all the packets have the same IP and UDP headers, but different
> > dest MACs in the ethernet headers) each of the packets hits one core.
> I've
> > set up the client such that it sends packets with a different destination
> > MAC for each RX queue (e.g. RX queue 1 should get 10:00:00:00:00:00, RX
> > queue 2 should get 10:00:00:00:00:01 and so on).
> >
> > I try to accomplish this by using ethtool to set flow steering rules
> (e.g.
> > ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:00 action 1 loc 1,
> > ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:01 action 2 loc 2..).
> >
> > As soon as I set up these rules though, packets matching them just stop
> > hitting my application. All other packets go through, and removing the
> > rules also causes the packets to go through. I'm pretty sure my
> application
> > is looking at all the queues, but I tried changing the rules to try a
> rule
> > for every single destination RX queue (0-16), and that doesn't work
> either.
> >
> > If it helps, my code is based on the l2fwd sample application, and is
> here:
> > https://gist.github.com/raghavsethi/416fb77d74ccf81bd93e
> >
> > Also, I added the following to my /etc/init.d: options mlx4_core
> > log_num_mgm_entry_size=-1, and restarted the driver before any of these
> > tests.
> >
> > Any ideas what might be causing my packets to drop? In case this is a
> > Mellanox issue, should I be talking to their customer support?
> >
> > Best,
> > Raghav Sethi
>


* Re: Mellanox Flow Steering
       [not found]         ` <CAC2O_6oPEZEDoXi5kCq=0=5wNNL7t7-Y4W=UBPu+eaybDSaeHA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2015-04-12 20:39           ` Olga Shern
       [not found]             ` <AM2PR05MB099547FBAF1A483765B024B7D3F80-Wc3DjHnhGidGpxLZXf4xbdqRiQSDpxhJvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
  0 siblings, 1 reply; 7+ messages in thread
From: Olga Shern @ 2015-04-12 20:39 UTC (permalink / raw)
  To: Raghav Sethi, Zhou, Danny, dev-VfR2kkLFssw

Hi Raghav, 

Your observations are correct: the Mellanox PMD and the mlx4_en (kernel) driver coexist.
While a DPDK application is running, all traffic is redirected to the DPDK application. When the DPDK application exits, traffic is again received by the mlx4_en driver.

The ethtool configuration you did only affects the mlx4_en driver; it has no influence on the Mellanox PMD queues.

As you mention, the Mellanox PMD doesn't support Flow Director, and we are working to add it.
Currently the only way to spread traffic between different PMD queues is RSS.
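
For reference, enabling RSS from the application side looks roughly like the sketch below (generic ethdev configuration; with the Mellanox PMD the hash key and hash functions currently stay at their built-in defaults, so the rss_conf values are effectively placeholders):

    #include <rte_ethdev.h>

    /* Spread incoming packets across the configured RX queues with RSS.
     * Exact hash behaviour is PMD-specific. */
    struct rte_eth_conf port_conf = {
            .rxmode = {
                    .mq_mode = ETH_MQ_RX_RSS,
            },
            .rx_adv_conf = {
                    .rss_conf = {
                            .rss_key = NULL,               /* default key */
                            .rss_hf  = ETH_RSS_IP | ETH_RSS_UDP,
                    },
            },
    };

    /* then: rte_eth_dev_configure(port, nb_rx_queues, nb_tx_queues, &port_conf); */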

Best Regards,
Olga

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Raghav Sethi
Sent: Sunday, April 12, 2015 7:18 PM
To: Zhou, Danny; dev@dpdk.org
Subject: Re: [dpdk-dev] Mellanox Flow Steering

Hi Danny,

Thanks, that's helpful. However, Mellanox cards don't support Intel Flow Director, so how would one go about installing these rules in the NIC? The only technique the Mellanox User Manual (
http://www.mellanox.com/related-docs/prod_software/Mellanox_EN_for_Linux_User_Manual_v2_0-3_0_0.pdf)
lists to use Flow Steering is the ethtool based method.

Additionally, the mlx4_core driver is used both by DPDK PMD and otherwise (unlike the igb_uio driver, which needs to be loaded to use PMD) and it seems weird that only the packets affected by the rules don't hit the DPDK application. That indicates to me that the NIC is dealing with the rules somehow even though a DPDK application is running.

Best,
Raghav

On Sun, Apr 12, 2015 at 7:47 AM Zhou, Danny <danny.zhou@intel.com> wrote:

> Currently, the DPDK PMD and NIC kernel driver cannot drive a same NIC 
> device simultaneously. When you use ethtool to setup flow director 
> filter, the rules are written to NIC via ethtool support in kernel 
> driver. But when DPDK PMD is loaded to drive same device, the rules 
> previously written by ethtool/kernel_driver will be invalid, so you 
> may have to use DPDK APIs to rewrite your rules to the NIC again.
>
> The bifurcated driver is designed to provide a solution to support the 
> kernel driver and DPDK coexist scenarios, but it has security concern 
> so netdev maintainer rejects it.
>
> It should not be a Mellanox hardware problem, if you try it on Intel 
> NIC the result is same.
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Raghav Sethi
> > Sent: Sunday, April 12, 2015 1:10 PM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] Mellanox Flow Steering
> >
> > Hi folks,
> >
> > I'm trying to use the flow steering features of the Mellanox card to 
> > effectively use a multicore server for a benchmark.
> >
> > The system has a single-port Mellanox ConnectX-3 EN, and I want to 
> > use 4
> of
> > the 32 cores present and 4 of the 16 RX queues supported by the 
> > hardware (i.e. one RX queue per core).
> >
> > I assign RX queues to each of the cores, but obviously without flow 
> > steering (all the packets have the same IP and UDP headers, but 
> > different dest MACs in the ethernet headers) each of the packets hits one core.
> I've
> > set up the client such that it sends packets with a different 
> > destination MAC for each RX queue (e.g. RX queue 1 should get 
> > 10:00:00:00:00:00, RX queue 2 should get 10:00:00:00:00:01 and so on).
> >
> > I try to accomplish this by using ethtool to set flow steering rules
> (e.g.
> > ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:00 action 1 loc 
> > 1, ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:01 action 2 loc 2..).
> >
> > As soon as I set up these rules though, packets matching them just 
> > stop hitting my application. All other packets go through, and 
> > removing the rules also causes the packets to go through. I'm pretty 
> > sure my
> application
> > is looking at all the queues, but I tried changing the rules to try 
> > a
> rule
> > for every single destination RX queue (0-16), and that doesn't work
> either.
> >
> > If it helps, my code is based on the l2fwd sample application, and 
> > is
> here:
> > https://gist.github.com/raghavsethi/416fb77d74ccf81bd93e
> >
> > Also, I added the following to my /etc/init.d: options mlx4_core 
> > log_num_mgm_entry_size=-1, and restarted the driver before any of 
> > these tests.
> >
> > Any ideas what might be causing my packets to drop? In case this is 
> > a Mellanox issue, should I be talking to their customer support?
> >
> > Best,
> > Raghav Sethi
>


* Re: Mellanox Flow Steering
       [not found]             ` <AM2PR05MB099547FBAF1A483765B024B7D3F80-Wc3DjHnhGidGpxLZXf4xbdqRiQSDpxhJvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
@ 2015-04-12 23:29               ` Zhou, Danny
       [not found]                 ` <DFDF335405C17848924A094BC35766CF0AB54B69-0J0gbvR4kTg/UvCtAeCM4rfspsVTdybXVpNB7YpNyf8@public.gmane.org>
  2015-04-13 18:01               ` Raghav Sethi
  1 sibling, 1 reply; 7+ messages in thread
From: Zhou, Danny @ 2015-04-12 23:29 UTC (permalink / raw)
  To: Olga Shern, Raghav Sethi, dev-VfR2kkLFssw

Thanks for the clarification, Olga. I assume that once the PMD is upgraded to support flow director, the rules should be set
only by the PMD while the DPDK application is running, right? Also, when the DPDK application exits, the rules previously
written by the PMD become invalid, and the user then needs to re-apply rules with ethtool via the mlx4_en driver.

I don't think it makes sense to allow two drivers, one in the kernel and another in user space, to control the same
NIC device simultaneously; otherwise a control-plane synchronization mechanism is needed between the two drivers.
A single master driver solely responsible for NIC control is what I would expect.

> -----Original Message-----
> From: Olga Shern [mailto:olgas@mellanox.com]
> Sent: Monday, April 13, 2015 4:39 AM
> To: Raghav Sethi; Zhou, Danny; dev@dpdk.org
> Subject: RE: [dpdk-dev] Mellanox Flow Steering
> 
> Hi Raghav,
> 
> You are right with your observations,  Mellanox PMD and mlx4_en (kernel driver) are co-exist.
> When DPDK application run, all traffic is redirected to DPDK application. When DPDK application exit the traffic is received by
> mlx4_en driver.
> 
> Regarding ethtool configuration you did, it influence only mlx4_en driver, it doesn't influence Mellanox PMD queues.
> 
> Mellanox PMD doesn't support Flow Director, like you mention, and we are working to add it.
> Currently the only way to spread traffic between different PMD queues is using RSS.
> 
> Best Regards,
> Olga
> 
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Raghav Sethi
> Sent: Sunday, April 12, 2015 7:18 PM
> To: Zhou, Danny; dev@dpdk.org
> Subject: Re: [dpdk-dev] Mellanox Flow Steering
> 
> Hi Danny,
> 
> Thanks, that's helpful. However, Mellanox cards don't support Intel Flow Director, so how would one go about installing these
> rules in the NIC? The only technique the Mellanox User Manual (
> http://www.mellanox.com/related-docs/prod_software/Mellanox_EN_for_Linux_User_Manual_v2_0-3_0_0.pdf)
> lists to use Flow Steering is the ethtool based method.
> 
> Additionally, the mlx4_core driver is used both by DPDK PMD and otherwise (unlike the igb_uio driver, which needs to be loaded
> to use PMD) and it seems weird that only the packets affected by the rules don't hit the DPDK application. That indicates to me
> that the NIC is dealing with the rules somehow even though a DPDK application is running.
> 
> Best,
> Raghav
> 
> On Sun, Apr 12, 2015 at 7:47 AM Zhou, Danny <danny.zhou@intel.com> wrote:
> 
> > Currently, the DPDK PMD and NIC kernel driver cannot drive a same NIC
> > device simultaneously. When you use ethtool to setup flow director
> > filter, the rules are written to NIC via ethtool support in kernel
> > driver. But when DPDK PMD is loaded to drive same device, the rules
> > previously written by ethtool/kernel_driver will be invalid, so you
> > may have to use DPDK APIs to rewrite your rules to the NIC again.
> >
> > The bifurcated driver is designed to provide a solution to support the
> > kernel driver and DPDK coexist scenarios, but it has security concern
> > so netdev maintainer rejects it.
> >
> > It should not be a Mellanox hardware problem, if you try it on Intel
> > NIC the result is same.
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Raghav Sethi
> > > Sent: Sunday, April 12, 2015 1:10 PM
> > > To: dev@dpdk.org
> > > Subject: [dpdk-dev] Mellanox Flow Steering
> > >
> > > Hi folks,
> > >
> > > I'm trying to use the flow steering features of the Mellanox card to
> > > effectively use a multicore server for a benchmark.
> > >
> > > The system has a single-port Mellanox ConnectX-3 EN, and I want to
> > > use 4
> > of
> > > the 32 cores present and 4 of the 16 RX queues supported by the
> > > hardware (i.e. one RX queue per core).
> > >
> > > I assign RX queues to each of the cores, but obviously without flow
> > > steering (all the packets have the same IP and UDP headers, but
> > > different dest MACs in the ethernet headers) each of the packets hits one core.
> > I've
> > > set up the client such that it sends packets with a different
> > > destination MAC for each RX queue (e.g. RX queue 1 should get
> > > 10:00:00:00:00:00, RX queue 2 should get 10:00:00:00:00:01 and so on).
> > >
> > > I try to accomplish this by using ethtool to set flow steering rules
> > (e.g.
> > > ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:00 action 1 loc
> > > 1, ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:01 action 2 loc 2..).
> > >
> > > As soon as I set up these rules though, packets matching them just
> > > stop hitting my application. All other packets go through, and
> > > removing the rules also causes the packets to go through. I'm pretty
> > > sure my
> > application
> > > is looking at all the queues, but I tried changing the rules to try
> > > a
> > rule
> > > for every single destination RX queue (0-16), and that doesn't work
> > either.
> > >
> > > If it helps, my code is based on the l2fwd sample application, and
> > > is
> > here:
> > > https://gist.github.com/raghavsethi/416fb77d74ccf81bd93e
> > >
> > > Also, I added the following to my /etc/init.d: options mlx4_core
> > > log_num_mgm_entry_size=-1, and restarted the driver before any of
> > > these tests.
> > >
> > > Any ideas what might be causing my packets to drop? In case this is
> > > a Mellanox issue, should I be talking to their customer support?
> > >
> > > Best,
> > > Raghav Sethi
> >


* Re: Mellanox Flow Steering
       [not found]                 ` <DFDF335405C17848924A094BC35766CF0AB54B69-0J0gbvR4kTg/UvCtAeCM4rfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2015-04-13 16:59                   ` Olga Shern
  0 siblings, 0 replies; 7+ messages in thread
From: Olga Shern @ 2015-04-13 16:59 UTC (permalink / raw)
  To: Zhou, Danny, Raghav Sethi, dev-VfR2kkLFssw

Hi Danny, 

Please see below

Best Regards,
Olga

-----Original Message-----
From: Zhou, Danny [mailto:danny.zhou@intel.com] 
Sent: Monday, April 13, 2015 2:30 AM
To: Olga Shern; Raghav Sethi; dev@dpdk.org
Subject: RE: [dpdk-dev] Mellanox Flow Steering

Thanks for the clarification, Olga. I assume that once the PMD is upgraded to support flow director, the rules should be set only by the PMD while the DPDK application is running, right?
[Olga] Right.
Also, when the DPDK application exits, the rules previously written by the PMD become invalid, and the user then needs to re-apply rules with ethtool via the mlx4_en driver.
[Olga] Right.

I don't think it makes sense to allow two drivers, one in the kernel and another in user space, to control the same NIC device simultaneously; otherwise a control-plane synchronization mechanism is needed between the two drivers.
[Olga] Agree :) We are looking for a solution.

A single master driver solely responsible for NIC control is what I would expect.
[Olga] Or there should be a synchronization mechanism, as you mentioned before.

> -----Original Message-----
> From: Olga Shern [mailto:olgas@mellanox.com]
> Sent: Monday, April 13, 2015 4:39 AM
> To: Raghav Sethi; Zhou, Danny; dev@dpdk.org
> Subject: RE: [dpdk-dev] Mellanox Flow Steering
> 
> Hi Raghav,
> 
> You are right with your observations,  Mellanox PMD and mlx4_en (kernel driver) are co-exist.
> When DPDK application run, all traffic is redirected to DPDK 
> application. When DPDK application exit the traffic is received by mlx4_en driver.
> 
> Regarding ethtool configuration you did, it influence only mlx4_en driver, it doesn't influence Mellanox PMD queues.
> 
> Mellanox PMD doesn't support Flow Director, like you mention, and we are working to add it.
> Currently the only way to spread traffic between different PMD queues is using RSS.
> 
> Best Regards,
> Olga
> 
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Raghav Sethi
> Sent: Sunday, April 12, 2015 7:18 PM
> To: Zhou, Danny; dev@dpdk.org
> Subject: Re: [dpdk-dev] Mellanox Flow Steering
> 
> Hi Danny,
> 
> Thanks, that's helpful. However, Mellanox cards don't support Intel 
> Flow Director, so how would one go about installing these rules in the 
> NIC? The only technique the Mellanox User Manual (
> http://www.mellanox.com/related-docs/prod_software/Mellanox_EN_for_Lin
> ux_User_Manual_v2_0-3_0_0.pdf) lists to use Flow Steering is the 
> ethtool based method.
> 
> Additionally, the mlx4_core driver is used both by DPDK PMD and 
> otherwise (unlike the igb_uio driver, which needs to be loaded to use 
> PMD) and it seems weird that only the packets affected by the rules don't hit the DPDK application. That indicates to me that the NIC is dealing with the rules somehow even though a DPDK application is running.
> 
> Best,
> Raghav
> 
> On Sun, Apr 12, 2015 at 7:47 AM Zhou, Danny <danny.zhou@intel.com> wrote:
> 
> > Currently, the DPDK PMD and NIC kernel driver cannot drive a same 
> > NIC device simultaneously. When you use ethtool to setup flow 
> > director filter, the rules are written to NIC via ethtool support in 
> > kernel driver. But when DPDK PMD is loaded to drive same device, the 
> > rules previously written by ethtool/kernel_driver will be invalid, 
> > so you may have to use DPDK APIs to rewrite your rules to the NIC again.
> >
> > The bifurcated driver is designed to provide a solution to support 
> > the kernel driver and DPDK coexist scenarios, but it has security 
> > concern so netdev maintainer rejects it.
> >
> > It should not be a Mellanox hardware problem, if you try it on Intel 
> > NIC the result is same.
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Raghav Sethi
> > > Sent: Sunday, April 12, 2015 1:10 PM
> > > To: dev@dpdk.org
> > > Subject: [dpdk-dev] Mellanox Flow Steering
> > >
> > > Hi folks,
> > >
> > > I'm trying to use the flow steering features of the Mellanox card 
> > > to effectively use a multicore server for a benchmark.
> > >
> > > The system has a single-port Mellanox ConnectX-3 EN, and I want to 
> > > use 4
> > of
> > > the 32 cores present and 4 of the 16 RX queues supported by the 
> > > hardware (i.e. one RX queue per core).
> > >
> > > I assign RX queues to each of the cores, but obviously without 
> > > flow steering (all the packets have the same IP and UDP headers, 
> > > but different dest MACs in the ethernet headers) each of the packets hits one core.
> > I've
> > > set up the client such that it sends packets with a different 
> > > destination MAC for each RX queue (e.g. RX queue 1 should get 
> > > 10:00:00:00:00:00, RX queue 2 should get 10:00:00:00:00:01 and so on).
> > >
> > > I try to accomplish this by using ethtool to set flow steering 
> > > rules
> > (e.g.
> > > ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:00 action 1 loc 
> > > 1, ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:01 action 2 loc 2..).
> > >
> > > As soon as I set up these rules though, packets matching them just 
> > > stop hitting my application. All other packets go through, and 
> > > removing the rules also causes the packets to go through. I'm 
> > > pretty sure my
> > application
> > > is looking at all the queues, but I tried changing the rules to 
> > > try a
> > rule
> > > for every single destination RX queue (0-16), and that doesn't 
> > > work
> > either.
> > >
> > > If it helps, my code is based on the l2fwd sample application, and 
> > > is
> > here:
> > > https://gist.github.com/raghavsethi/416fb77d74ccf81bd93e
> > >
> > > Also, I added the following to my /etc/init.d: options mlx4_core 
> > > log_num_mgm_entry_size=-1, and restarted the driver before any of 
> > > these tests.
> > >
> > > Any ideas what might be causing my packets to drop? In case this 
> > > is a Mellanox issue, should I be talking to their customer support?
> > >
> > > Best,
> > > Raghav Sethi
> >


* Re: Mellanox Flow Steering
       [not found]             ` <AM2PR05MB099547FBAF1A483765B024B7D3F80-Wc3DjHnhGidGpxLZXf4xbdqRiQSDpxhJvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
  2015-04-12 23:29               ` Zhou, Danny
@ 2015-04-13 18:01               ` Raghav Sethi
  1 sibling, 0 replies; 7+ messages in thread
From: Raghav Sethi @ 2015-04-13 18:01 UTC (permalink / raw)
  To: Olga Shern, Zhou, Danny, dev-VfR2kkLFssw

Hi Olga,

Thanks for clarifying. It appears that the mlx4 driver does not allow me to
modify the RSS options; lib/librte_pmd_mlx4/mlx4.c states that the RSS hash
key and options cannot be modified. However, for my application I would need
the hash function to be an identity/mask function and the key to be the
destination MAC.

Would it be correct to conclude that I cannot route packets to cores based
on destination MAC using the Mellanox card?

If so, given that I have complete control over the packet headers, is there
any other way to ensure a deterministic but equal partitioning of the
5-tuple space across cores using the Mellanox card? My application uses UDP,
so I'm not really concerned about flows. I assume the default RSS function
attempts to do just this, but some links to documentation/code for the
default DPDK+mlx4 RSS behaviour would be great.
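
In the meantime, one driver-agnostic fallback I'm considering is doing the
partitioning in software: a single lcore drains the NIC queue(s) and hands
each packet to a per-worker ring selected from the last byte of the
destination MAC. A rough sketch (standard rte_ring/rte_mbuf calls; names
like worker_rings are placeholders):

    #include <rte_ethdev.h>
    #include <rte_ether.h>
    #include <rte_mbuf.h>
    #include <rte_ring.h>

    #define NB_WORKERS 4
    #define BURST_SIZE 32

    /* worker_rings[i] created elsewhere with rte_ring_create() */
    extern struct rte_ring *worker_rings[NB_WORKERS];

    static void
    dispatch_once(uint8_t port)
    {
            struct rte_mbuf *bufs[BURST_SIZE];
            uint16_t nb = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);

            for (uint16_t i = 0; i < nb; i++) {
                    struct ether_hdr *eth =
                            rte_pktmbuf_mtod(bufs[i], struct ether_hdr *);
                    /* last byte of the destination MAC selects the worker */
                    unsigned w = eth->d_addr.addr_bytes[5] % NB_WORKERS;
                    if (rte_ring_enqueue(worker_rings[w], bufs[i]) != 0)
                            rte_pktmbuf_free(bufs[i]);   /* ring full: drop */
            }
    }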

Best,
Raghav

On Sun, Apr 12, 2015 at 4:39 PM Olga Shern <olgas-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org> wrote:

> Hi Raghav,
>
> You are right with your observations,  Mellanox PMD and mlx4_en (kernel
> driver) are co-exist.
> When DPDK application run, all traffic is redirected to DPDK application.
> When DPDK application exit the traffic is received by mlx4_en driver.
>
> Regarding ethtool configuration you did, it influence only mlx4_en driver,
> it doesn't influence Mellanox PMD queues.
>
> Mellanox PMD doesn't support Flow Director, like you mention, and we are
> working to add it.
> Currently the only way to spread traffic between different PMD queues is
> using RSS.
>
> Best Regards,
> Olga
>
> -----Original Message-----
> From: dev [mailto:dev-bounces-VfR2kkLFssw@public.gmane.org] On Behalf Of Raghav Sethi
> Sent: Sunday, April 12, 2015 7:18 PM
> To: Zhou, Danny; dev-VfR2kkLFssw@public.gmane.org
> Subject: Re: [dpdk-dev] Mellanox Flow Steering
>
> Hi Danny,
>
> Thanks, that's helpful. However, Mellanox cards don't support Intel Flow
> Director, so how would one go about installing these rules in the NIC? The
> only technique the Mellanox User Manual (
>
> http://www.mellanox.com/related-docs/prod_software/Mellanox_EN_for_Linux_User_Manual_v2_0-3_0_0.pdf
> )
> lists to use Flow Steering is the ethtool based method.
>
> Additionally, the mlx4_core driver is used both by DPDK PMD and otherwise
> (unlike the igb_uio driver, which needs to be loaded to use PMD) and it
> seems weird that only the packets affected by the rules don't hit the DPDK
> application. That indicates to me that the NIC is dealing with the rules
> somehow even though a DPDK application is running.
>
> Best,
> Raghav
>
> On Sun, Apr 12, 2015 at 7:47 AM Zhou, Danny <danny.zhou-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org> wrote:
>
> > Currently, the DPDK PMD and NIC kernel driver cannot drive a same NIC
> > device simultaneously. When you use ethtool to setup flow director
> > filter, the rules are written to NIC via ethtool support in kernel
> > driver. But when DPDK PMD is loaded to drive same device, the rules
> > previously written by ethtool/kernel_driver will be invalid, so you
> > may have to use DPDK APIs to rewrite your rules to the NIC again.
> >
> > The bifurcated driver is designed to provide a solution to support the
> > kernel driver and DPDK coexist scenarios, but it has security concern
> > so netdev maintainer rejects it.
> >
> > It should not be a Mellanox hardware problem, if you try it on Intel
> > NIC the result is same.
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces-VfR2kkLFssw@public.gmane.org] On Behalf Of Raghav Sethi
> > > Sent: Sunday, April 12, 2015 1:10 PM
> > > To: dev-VfR2kkLFssw@public.gmane.org
> > > Subject: [dpdk-dev] Mellanox Flow Steering
> > >
> > > Hi folks,
> > >
> > > I'm trying to use the flow steering features of the Mellanox card to
> > > effectively use a multicore server for a benchmark.
> > >
> > > The system has a single-port Mellanox ConnectX-3 EN, and I want to
> > > use 4
> > of
> > > the 32 cores present and 4 of the 16 RX queues supported by the
> > > hardware (i.e. one RX queue per core).
> > >
> > > I assign RX queues to each of the cores, but obviously without flow
> > > steering (all the packets have the same IP and UDP headers, but
> > > different dest MACs in the ethernet headers) each of the packets hits
> one core.
> > I've
> > > set up the client such that it sends packets with a different
> > > destination MAC for each RX queue (e.g. RX queue 1 should get
> > > 10:00:00:00:00:00, RX queue 2 should get 10:00:00:00:00:01 and so on).
> > >
> > > I try to accomplish this by using ethtool to set flow steering rules
> > (e.g.
> > > ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:00 action 1 loc
> > > 1, ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:01 action 2 loc
> 2..).
> > >
> > > As soon as I set up these rules though, packets matching them just
> > > stop hitting my application. All other packets go through, and
> > > removing the rules also causes the packets to go through. I'm pretty
> > > sure my
> > application
> > > is looking at all the queues, but I tried changing the rules to try
> > > a
> > rule
> > > for every single destination RX queue (0-16), and that doesn't work
> > either.
> > >
> > > If it helps, my code is based on the l2fwd sample application, and
> > > is
> > here:
> > > https://gist.github.com/raghavsethi/416fb77d74ccf81bd93e
> > >
> > > Also, I added the following to my /etc/init.d: options mlx4_core
> > > log_num_mgm_entry_size=-1, and restarted the driver before any of
> > > these tests.
> > >
> > > Any ideas what might be causing my packets to drop? In case this is
> > > a Mellanox issue, should I be talking to their customer support?
> > >
> > > Best,
> > > Raghav Sethi
> >
>


end of thread, other threads:[~2015-04-13 18:01 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-04-12  5:10 Mellanox Flow Steering Raghav Sethi
     [not found] ` <CAC2O_6o88Bq=zYgqK8M=YYWFuJ5Jsc5xBcs5Eqb8+2OGzXcN1g-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2015-04-12 11:47   ` Zhou, Danny
     [not found]     ` <DFDF335405C17848924A094BC35766CF0AB54724-0J0gbvR4kTg/UvCtAeCM4rfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2015-04-12 16:17       ` Raghav Sethi
     [not found]         ` <CAC2O_6oPEZEDoXi5kCq=0=5wNNL7t7-Y4W=UBPu+eaybDSaeHA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2015-04-12 20:39           ` Olga Shern
     [not found]             ` <AM2PR05MB099547FBAF1A483765B024B7D3F80-Wc3DjHnhGidGpxLZXf4xbdqRiQSDpxhJvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
2015-04-12 23:29               ` Zhou, Danny
     [not found]                 ` <DFDF335405C17848924A094BC35766CF0AB54B69-0J0gbvR4kTg/UvCtAeCM4rfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2015-04-13 16:59                   ` Olga Shern
2015-04-13 18:01               ` Raghav Sethi
