DPDK-dev Archive on lore.kernel.org
* [dpdk-dev] RFC - vdev_netvsc automatic blacklisting
@ 2019-06-04 19:54 Stephen Hemminger
  2019-06-11  6:21 ` Matan Azrad
  0 siblings, 1 reply; 6+ messages in thread
From: Stephen Hemminger @ 2019-06-04 19:54 UTC (permalink / raw)
  To: matan; +Cc: dev

When using DPDK on Azure it is common to have one non-DPDK interface.
If that non-DPDK interface is present, vdev_netvsc correctly skips it.
But if the non-DPDK interface has accelerated networking, the Mellanox driver will
still get associated with DPDK (and break connectivity).

The current process is to tell users to whitelist or blacklist the PCI
device(s) not used for DPDK. But vdev_netvsc already does a lot of
inspection of devices and VF devices.

Could vdev_netvsc just do this automatically by setting devargs to blacklist the VF?
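The proposal above could be sketched like this. This is illustrative Python only (vdev_netvsc itself is a C driver, and the helper name and interface/VF tuples here are hypothetical); it shows the intended logic: for every routed (non-DPDK) NetVSC interface, emit an EAL `-b`/`--pci-blacklist` argument for its VF so the PCI device is never probed by DPDK.

```python
# Illustrative sketch only -- vdev_netvsc is a C driver; the
# build_blacklist_args() helper and the example data are hypothetical.

def build_blacklist_args(netvsc_interfaces):
    """Given (iface_name, has_route, vf_pci_addr) tuples, return EAL
    blacklist arguments for the VFs backing routed (non-DPDK) interfaces."""
    args = []
    for name, has_route, vf_pci in netvsc_interfaces:
        if has_route and vf_pci is not None:
            # Routed interface: keep its accelerated-networking VF
            # out of DPDK by blacklisting its PCI address.
            args += ["-b", vf_pci]
    return args

# Example: eth0 is the routed management interface backed by an MLX VF,
# eth1 is the unrouted interface intended for DPDK.
interfaces = [
    ("eth0", True, "0002:00:02.0"),
    ("eth1", False, "0003:00:02.0"),
]
print(build_blacklist_args(interfaces))  # ['-b', '0002:00:02.0']
```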



* Re: [dpdk-dev] RFC - vdev_netvsc automatic blacklisting
  2019-06-04 19:54 [dpdk-dev] RFC - vdev_netvsc automatic blacklisting Stephen Hemminger
@ 2019-06-11  6:21 ` Matan Azrad
  2019-06-11 14:26   ` Stephen Hemminger
  0 siblings, 1 reply; 6+ messages in thread
From: Matan Azrad @ 2019-06-11  6:21 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev

Hi Stephen

From: Stephen Hemminger
> When using DPDK on Azure it is common to have one non-DPDK interface.
> If that non-DPDK interface is present, vdev_netvsc correctly skips it.
> But if the non-DPDK interface has accelerated networking, the Mellanox driver will
> still get associated with DPDK (and break connectivity).
> 
> The current process is to tell users to whitelist or blacklist the PCI
> device(s) not used for DPDK. But vdev_netvsc already does a lot of
> inspection of devices and VF devices.
> 
> Could vdev_netvsc just do this automatically by setting devargs to
> blacklist the VF?


There is a way to blacklist a device by setting a route/IP/IPv6 address on it; from the VDEV_NETVSC doc:
"Not specifying either iface or mac makes this driver attach itself to all unrouted NetVSC interfaces found on the system. Specifying the device makes this driver attach itself to the device regardless the device routes."

So, we expect that in-use VFs will have a route and DPDK-owned VFs will not.
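The route check described above could be sketched as follows. This is illustrative Python (the real check in vdev_netvsc is done in C); it assumes the Linux `/proc/net/route` layout, and the sample table below is made up for the example.

```python
# Sketch of an "is this interface routed?" check, assuming the Linux
# /proc/net/route format (tab-separated, one header line, Iface first).
# The real vdev_netvsc logic is implemented in C; this is only a model.

SAMPLE_ROUTE_TABLE = (
    "Iface\tDestination\tGateway\tFlags\tRefCnt\tUse\tMetric\tMask\tMTU\tWindow\tIRTT\n"
    "eth0\t00000000\t0100000A\t0003\t0\t0\t0\t00000000\t0\t0\t0\n"
    "eth0\t0000000A\t00000000\t0001\t0\t0\t0\t00FFFFFF\t0\t0\t0\n"
)

def routed_interfaces(route_table_text):
    """Return the set of interface names that appear in the route table."""
    lines = route_table_text.strip().splitlines()[1:]  # skip the header line
    return {line.split("\t")[0] for line in lines}

routed = routed_interfaces(SAMPLE_ROUTE_TABLE)
print("eth0" in routed, "eth1" in routed)  # True False
```

With this model, eth0 (routed, kept for the kernel) would be skipped by vdev_netvsc, while an unrouted eth1 would be attached for DPDK.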

Isn't that enough?


Matan


* Re: [dpdk-dev] RFC - vdev_netvsc automatic blacklisting
  2019-06-11  6:21 ` Matan Azrad
@ 2019-06-11 14:26   ` Stephen Hemminger
  2019-06-12  5:15     ` Matan Azrad
  0 siblings, 1 reply; 6+ messages in thread
From: Stephen Hemminger @ 2019-06-11 14:26 UTC (permalink / raw)
  To: Matan Azrad; +Cc: dev

On Tue, 11 Jun 2019 06:21:21 +0000
Matan Azrad <matan@mellanox.com> wrote:

> Hi Stephen
> 
> From: Stephen Hemminger
> > When using DPDK on Azure it is common to have one non-DPDK interface.
> > If that non-DPDK interface is present, vdev_netvsc correctly skips it.
> > But if the non-DPDK interface has accelerated networking, the Mellanox driver will
> > still get associated with DPDK (and break connectivity).
> > 
> > The current process is to tell users to whitelist or blacklist the PCI
> > device(s) not used for DPDK. But vdev_netvsc already does a lot of
> > inspection of devices and VF devices.
> > 
> > Could vdev_netvsc just do this automatically by setting devargs to
> > blacklist the VF?
> 
> 
> There is a way to blacklist a device by setting a route/IP/IPv6 address on it; from the VDEV_NETVSC doc:
> "Not specifying either iface or mac makes this driver attach itself to all unrouted NetVSC interfaces found on the system. Specifying the device makes this driver attach itself to the device regardless the device routes."
> 
> So, we expect that in-use VFs will have a route and DPDK-owned VFs will not.
> 
> Isn't that enough?
> 
> 
> Matan

I am talking about the case where eth0 has a route: it gets skipped, but the associated MLX
SR-IOV device does not. When the MLX device is then configured for DPDK, it breaks its use
by the kernel, and therefore connectivity to the VM is lost.


* Re: [dpdk-dev] RFC - vdev_netvsc automatic blacklisting
  2019-06-11 14:26   ` Stephen Hemminger
@ 2019-06-12  5:15     ` Matan Azrad
  2019-06-12 13:26       ` Stephen Hemminger
  0 siblings, 1 reply; 6+ messages in thread
From: Matan Azrad @ 2019-06-12  5:15 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev



 From: Stephen Hemminger 
> > Hi Stephen
> >
> > From: Stephen Hemminger
> > > When using DPDK on Azure it is common to have one non-DPDK interface.
> > > If that non-DPDK interface is present, vdev_netvsc correctly skips it.
> > > But if the non-DPDK interface has accelerated networking, the Mellanox
> > > driver will still get associated with DPDK (and break connectivity).
> > >
> > > The current process is to tell users to whitelist or blacklist
> > > the PCI device(s) not used for DPDK. But vdev_netvsc already does
> > > a lot of inspection of devices and VF devices.
> > >
> > > Could vdev_netvsc just do this automatically by setting devargs to
> > > blacklist the VF?
> >
> >
> > There is a way to blacklist a device by setting a route/IP/IPv6 address
> > on it; from the VDEV_NETVSC doc:
> > "Not specifying either iface or mac makes this driver attach itself to all
> > unrouted NetVSC interfaces found on the system. Specifying the device
> > makes this driver attach itself to the device regardless the device routes."
> >
> > So, we expect that in-use VFs will have a route and DPDK-owned VFs will not.
> >
> > Isn't that enough?
> >
> >
> > Matan
> 
> I am talking about the case where eth0 has a route: it gets skipped, but the
> associated MLX SR-IOV device does not. When the MLX device is then configured
> for DPDK, it breaks its use by the kernel, and therefore connectivity to the VM is lost.

Ok, I think I got you.
You want to blacklist the PCI device whose NetVSC net-device is detected as routed, right?

If so,

I don't think that probing the PCI device hurts connectivity; only configuring it should.

That means the application configures the device and breaks it.
Isn't that an application issue?

Matan




* Re: [dpdk-dev] RFC - vdev_netvsc automatic blacklisting
  2019-06-12  5:15     ` Matan Azrad
@ 2019-06-12 13:26       ` Stephen Hemminger
  2019-06-12 16:13         ` Matan Azrad
  0 siblings, 1 reply; 6+ messages in thread
From: Stephen Hemminger @ 2019-06-12 13:26 UTC (permalink / raw)
  To: Matan Azrad; +Cc: dev

On Wed, 12 Jun 2019 05:15:47 +0000
Matan Azrad <matan@mellanox.com> wrote:

>  From: Stephen Hemminger 
> > > Hi Stephen
> > >
> > > From: Stephen Hemminger  
> > > > When using DPDK on Azure it is common to have one non-DPDK interface.
> > > > If that non-DPDK interface is present, vdev_netvsc correctly skips it.
> > > > But if the non-DPDK interface has accelerated networking, the Mellanox
> > > > driver will still get associated with DPDK (and break connectivity).
> > > >
> > > > The current process is to tell users to whitelist or blacklist
> > > > the PCI device(s) not used for DPDK. But vdev_netvsc already does
> > > > a lot of inspection of devices and VF devices.
> > > >
> > > > Could vdev_netvsc just do this automatically by setting devargs to
> > > > blacklist the VF?
> > >
> > >
> > > There is a way to blacklist a device by setting a route/IP/IPv6 address
> > > on it; from the VDEV_NETVSC doc:
> > > "Not specifying either iface or mac makes this driver attach itself to all
> > > unrouted NetVSC interfaces found on the system. Specifying the device
> > > makes this driver attach itself to the device regardless the device routes."
> > >
> > > So, we expect that in-use VFs will have a route and DPDK-owned VFs will not.
> > >
> > > Isn't that enough?
> > >
> > >
> > > Matan  
> > 
> > I am talking about the case where eth0 has a route: it gets skipped, but the
> > associated MLX SR-IOV device does not. When the MLX device is then configured
> > for DPDK, it breaks its use by the kernel, and therefore connectivity to the VM is lost.
> 
> Ok, I think I got you.
> You want to blacklist the PCI device whose NetVSC net-device is detected as routed, right?
> 
> If so,
> 
> I don't think that probing the PCI device hurts connectivity; only configuring it should.
> 
> That means the application configures the device and breaks it.
> Isn't that an application issue?
> 
> Matan

Actually, probing does hurt: it corrupts the MLX driver.
In theory, the driver supports bifurcation, but in practice it is greedy and grabs all flows.


* Re: [dpdk-dev] RFC - vdev_netvsc automatic blacklisting
  2019-06-12 13:26       ` Stephen Hemminger
@ 2019-06-12 16:13         ` Matan Azrad
  0 siblings, 0 replies; 6+ messages in thread
From: Matan Azrad @ 2019-06-12 16:13 UTC (permalink / raw)
  To: Stephen Hemminger, Shahaf Shuler; +Cc: dev

Hi

+ Shahaf

 From: Stephen Hemminger 
> On Wed, 12 Jun 2019 05:15:47 +0000
> Matan Azrad <matan@mellanox.com> wrote:
> 
> >  From: Stephen Hemminger
> > > > Hi Stephen
> > > >
> > > > From: Stephen Hemminger
> > > > > When using DPDK on Azure it is common to have one non-DPDK interface.
> > > > > If that non-DPDK interface is present, vdev_netvsc correctly skips it.
> > > > > But if the non-DPDK interface has accelerated networking, the Mellanox
> > > > > driver will still get associated with DPDK (and break connectivity).
> > > > >
> > > > > The current process is to tell users to whitelist or blacklist
> > > > > the PCI device(s) not used for DPDK. But vdev_netvsc already does
> > > > > a lot of inspection of devices and VF devices.
> > > > >
> > > > > Could vdev_netvsc just do this automatically by setting devargs
> > > > > to blacklist the VF?
> > > >
> > > >
> > > > There is a way to blacklist a device by setting a route/IP/IPv6
> > > > address on it; from the VDEV_NETVSC doc:
> > > > "Not specifying either iface or mac makes this driver attach
> > > > itself to all unrouted NetVSC interfaces found on the system.
> > > > Specifying the device makes this driver attach itself to the
> > > > device regardless the device routes."
> > > >
> > > > So, we expect that in-use VFs will have a route and DPDK-owned
> > > > VFs will not.
> > > >
> > > > Isn't that enough?
> > > >
> > > >
> > > > Matan
> > >
> > > I am talking about the case where eth0 has a route: it gets skipped,
> > > but the associated MLX SR-IOV device does not. When the MLX device is
> > > then configured for DPDK, it breaks its use by the kernel, and
> > > therefore connectivity to the VM is lost.
> >
> > Ok, I think I got you.
> > You want to blacklist the PCI device whose NetVSC net-device is detected
> > as routed, right?
> >
> > If so,
> >
> > I don't think that probing the PCI device hurts connectivity; only
> > configuring it should.
> >
> > That means the application configures the device and breaks it.
> > Isn't that an application issue?
> >
> > Matan
> 
> Actually, probing does hurt: it corrupts the MLX driver.
> In theory, the driver supports bifurcation, but in practice it is greedy
> and grabs all flows.


I can't see any promiscuous configuration in the probing code.
It looks like an application configuration.

Shahaf, do you agree?

Matan



