linux-rdma.vger.kernel.org archive mirror
From: Leon Romanovsky <leon@kernel.org>
To: "Ertman, David M" <david.m.ertman@intel.com>
Cc: "Saleem, Shiraz" <shiraz.saleem@intel.com>,
	"davem@davemloft.net" <davem@davemloft.net>,
	"kuba@kernel.org" <kuba@kernel.org>,
	"yongxin.liu@windriver.com" <yongxin.liu@windriver.com>,
	"Nguyen, Anthony L" <anthony.l.nguyen@intel.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"Brandeburg, Jesse" <jesse.brandeburg@intel.com>,
	"intel-wired-lan@lists.osuosl.org"
	<intel-wired-lan@lists.osuosl.org>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	"jgg@ziepe.ca" <jgg@ziepe.ca>,
	"Williams, Dan J" <dan.j.williams@intel.com>,
	"Singhai, Anjali" <anjali.singhai@intel.com>,
	"Parikh, Neerav" <neerav.parikh@intel.com>,
	"Samudrala, Sridhar" <sridhar.samudrala@intel.com>
Subject: Re: [PATCH RESEND net] ice: Correctly deal with PFs that do not support RDMA
Date: Tue, 14 Sep 2021 06:16:27 +0300	[thread overview]
Message-ID: <YUAUC1AJP6JVMxBr@unreal> (raw)
In-Reply-To: <PH0PR11MB49667F5B029D37D0E257A256DDD99@PH0PR11MB4966.namprd11.prod.outlook.com>

On Mon, Sep 13, 2021 at 04:07:28PM +0000, Ertman, David M wrote:
> > -----Original Message-----
> > From: Saleem, Shiraz <shiraz.saleem@intel.com>
> > Sent: Monday, September 13, 2021 8:50 AM
> > To: Leon Romanovsky <leon@kernel.org>; Ertman, David M
> > <david.m.ertman@intel.com>
> > Cc: davem@davemloft.net; kuba@kernel.org; yongxin.liu@windriver.com;
> > Nguyen, Anthony L <anthony.l.nguyen@intel.com>;
> > netdev@vger.kernel.org; linux-kernel@vger.kernel.org; Brandeburg, Jesse
> > <jesse.brandeburg@intel.com>; intel-wired-lan@lists.osuosl.org; linux-
> > rdma@vger.kernel.org; jgg@ziepe.ca; Williams, Dan J
> > <dan.j.williams@intel.com>; Singhai, Anjali <anjali.singhai@intel.com>;
> > Parikh, Neerav <neerav.parikh@intel.com>; Samudrala, Sridhar
> > <sridhar.samudrala@intel.com>
> > Subject: RE: [PATCH RESEND net] ice: Correctly deal with PFs that do not
> > support RDMA
> > 
> > > Subject: Re: [PATCH RESEND net] ice: Correctly deal with PFs that do not
> > > support RDMA
> > >
> > > On Thu, Sep 09, 2021 at 08:12:23AM -0700, Dave Ertman wrote:
> > > > There are two cases where the current PF does not support RDMA
> > > > functionality.  The first is if the NVM loaded on the device is set to
> > > > not support RDMA (common_caps.rdma is false).  The second is if the
> > > > kernel bonding driver has included the current PF in an active link
> > > > aggregate.
> > > >
> > > > When the driver has determined that this PF does not support RDMA,
> > > > then auxiliary devices should not be created on the auxiliary bus.
> > >
> > > This part is wrong; auxiliary devices should always be created. In your
> > > case it will be one eth device only, without an extra irdma device.
> > 
> > It is worth considering having an eth aux device/driver, but is it a
> > hard-and-fast rule? In this case, the RDMA-capable PCI network device
> > spawns an auxiliary device for RDMA, and the core driver is a network
> > driver.
> > 
> > >
> > > Your "bug" is that you mixed auxiliary bus devices with "regular" ones
> > > and created the eth device as a non-auxiliary one. This is why you call
> > > auxiliary_device_init() for RDMA only and fall back to non-auxiliary
> > > mode.
> > 
> > It's a design choice in how you carve function(s) off your PCI core
> > device to be managed by auxiliary driver(s), not a bug.
> > 
> > Shiraz
> 
> Also, regardless of whether netdev functionality is carved out into an auxiliary device or not, this code would still be necessary.

Right

> 
> We don't want to carve out an auxiliary device to support functionality
> that the base PCI device does not support. Not having the RDMA auxiliary
> device for an auxiliary driver to bind to is how we differentiate between
> devices that support RDMA and those that don't.

This is right too.

My complaint is that you mixed enumeration logic into the eth driver and
create the auxiliary bus device only if your RDMA device exists. That is
wrong.

Thanks

> 
> Thanks,
> DaveE
> 

Thread overview: 7+ messages
2021-09-09 15:12 [PATCH RESEND net] ice: Correctly deal with PFs that do not support RDMA Dave Ertman
2021-09-10  9:19 ` Leon Romanovsky
2021-09-13 15:49   ` Saleem, Shiraz
2021-09-13 16:07     ` Ertman, David M
2021-09-14  3:16       ` Leon Romanovsky [this message]
2021-09-14  3:10     ` Leon Romanovsky
2021-09-24 14:10 ` Jason Gunthorpe
