From: "Ertman, David M" <david.m.ertman@intel.com>
To: Jason Gunthorpe <jgg@ziepe.ca>,
	"Saleem, Shiraz" <shiraz.saleem@intel.com>
Cc: "dledford@redhat.com" <dledford@redhat.com>,
	"davem@davemloft.net" <davem@davemloft.net>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"Ismail, Mustafa" <mustafa.ismail@intel.com>,
	"Kirsher, Jeffrey T" <jeffrey.t.kirsher@intel.com>,
	"Patil, Kiran" <kiran.patil@intel.com>
Subject: RE: [RFC v1 01/19] net/i40e: Add peer register/unregister to struct i40e_netdev_priv
Date: Fri, 22 Feb 2019 20:13:58 +0000	[thread overview]
Message-ID: <2B0E3F215D1AB84DA946C8BEE234CCC97B11DF23@ORSMSX101.amr.corp.intel.com> (raw)
In-Reply-To: <20190221193523.GO17500@ziepe.ca>

> -----Original Message-----
> From: Jason Gunthorpe [mailto:jgg@ziepe.ca]
> Sent: Thursday, February 21, 2019 11:35 AM
> To: Saleem, Shiraz <shiraz.saleem@intel.com>
> Cc: dledford@redhat.com; davem@davemloft.net; linux-
> rdma@vger.kernel.org; netdev@vger.kernel.org; Ismail, Mustafa
> <mustafa.ismail@intel.com>; Kirsher, Jeffrey T <jeffrey.t.kirsher@intel.com>;
> Patil, Kiran <kiran.patil@intel.com>; Ertman, David M
> <david.m.ertman@intel.com>
> Subject: Re: [RFC v1 01/19] net/i40e: Add peer register/unregister to struct
> i40e_netdev_priv
> 
> On Thu, Feb 21, 2019 at 02:19:33AM +0000, Saleem, Shiraz wrote:
> > >Subject: Re: [RFC v1 01/19] net/i40e: Add peer register/unregister to
> > >struct i40e_netdev_priv
> > >
> > >On Fri, Feb 15, 2019 at 11:10:48AM -0600, Shiraz Saleem wrote:
> > >> Expose the register/unregister function pointers in the struct
> > >> i40e_netdev_priv which is accessible via the netdev_priv() interface
> > >> in the RDMA driver. On a netdev notification in the RDMA driver,
> > >> the appropriate LAN driver register/unregister functions are
> > >> invoked from the struct i40e_netdev_priv structure.
> > >
> > >Why? In later patches we get an entire device_add() based thing. Why
> > >do you need two things?
> > >
> > >The RDMA driver should bind to the thing that device_add created and
> > >from there reliably get the netdev. It should not listen to netdev notifiers for
> attachment.
> >
> > In the new IDC mechanism between ice<->irdma, the LAN driver setups up
> > the device for us and attaches it to a software bus via device_add() based
> mechanism.
> > However, RDMA driver binds to the device only when the LAN 'register'
> > function is called in irdma.
> 
> That doesn't make sense. The PCI driver should always create the required
> struct device attachment point when attachment becomes possible.
> 
> > There is no ordering guarantee in which irdma, i40e and ice modules load.
> > The netdev notifier is for the case where the irdma loads before i40e
> > or ice.
> 
> You are supposed to use the driver core to handle this ordering.
> 
> The PCI driver creates the attachment points in the correct order, when they
> are ready for use, and the driver core will automatically attach registered
> device drivers to the attachment points, no matter the module load order.
> 
> You will have a netdev and a rdma attachment point, sounds like the RDMA one
> is created once the netdev is happy.
> 
> Maybe what you are missing is a struct device_driver?
> 
> Jason

I am assuming that the term "PCI driver" here means the PCI subsystem in the
kernel.  If that assumption is wrong, please disregard the next paragraph; the
points after it still apply.

The bus that the irdma driver is registering with is a software (pseudo) bus,
not a hardware bus.  Since this software bus is defined by the LAN driver, the
bus does not exist until a LAN driver is loaded, up, and ready to receive a
registration from the irdma peer.  The PCI driver does not have anything to do
with this bus and has no ability to perform the described functions.  The irdma
driver cannot register with the software bus unless it registers with the LAN
driver that controls the bus.  The LAN driver's register function will call
"driver_register(&drv->driver)" for the registering irdma driver.

Since the irdma driver is a consolidated driver (it supports both the ice and
i40e LAN drivers), we cannot guarantee that a given LAN driver will load before
the irdma driver.  Even if we used module dependencies to make irdma depend on
(ice || i40e), we would still have to consider a machine that contains both an
ice-supported LAN device and an i40e-supported LAN device.  In that case, the
load order could be (e.g.) i40e -> irdma -> ice.  The irdma driver can check
all present netdevs when it loads to find the ones that have the correct
function pointers in them, but it has no way of knowing that a new software bus
was later created by the second LAN driver to load.

This is why irdma listens for netdev notifiers: whenever a new netdev appears
because a LAN driver loaded after irdma, the irdma driver can evaluate whether
that netdev was created by a LAN driver that irdma supports.

If I have misunderstood your concerns, I apologize and look forward to clarification.

Thanks,
-Dave Ertman

Thread overview: 59+ messages
2019-02-15 17:10 [RFC v1 00/19] Add unified Intel Ethernet RDMA driver (irdma) Shiraz Saleem
2019-02-15 17:10 ` [RFC v1 01/19] net/i40e: Add peer register/unregister to struct i40e_netdev_priv Shiraz Saleem
2019-02-15 17:22   ` Jason Gunthorpe
2019-02-21  2:19     ` Saleem, Shiraz
2019-02-21 19:35       ` Jason Gunthorpe
2019-02-22 20:13         ` Ertman, David M [this message]
2019-02-22 20:23           ` Jason Gunthorpe
2019-03-13  2:11             ` Jeff Kirsher
2019-03-13 13:28               ` Jason Gunthorpe
2019-05-10 13:31                 ` Shiraz Saleem
2019-05-10 18:17                   ` Jason Gunthorpe
2019-02-15 17:10 ` [RFC v1 02/19] net/ice: Create framework for VSI queue context Shiraz Saleem
2019-02-15 17:10 ` [RFC v1 03/19] net/ice: Add support for ice peer devices and drivers Shiraz Saleem
2019-02-15 17:10 ` [RFC v1 04/19] RDMA/irdma: Add driver framework definitions Shiraz Saleem
2019-02-24 15:02   ` Gal Pressman
2019-02-24 15:02     ` Gal Pressman
2019-02-26 21:08     ` Saleem, Shiraz
2019-02-15 17:10 ` [RFC v1 05/19] RDMA/irdma: Implement device initialization definitions Shiraz Saleem
2019-02-15 17:10 ` [RFC v1 06/19] RDMA/irdma: Implement HW Admin Queue OPs Shiraz Saleem
2019-02-15 17:10 ` [RFC v1 07/19] RDMA/irdma: Add HMC backing store setup functions Shiraz Saleem
2019-02-15 17:10 ` [RFC v1 08/19] RDMA/irdma: Add privileged UDA queue implementation Shiraz Saleem
2019-02-24 11:42   ` Gal Pressman
2019-02-24 11:42     ` Gal Pressman
2019-02-15 17:10 ` [RFC v1 09/19] RDMA/irdma: Add QoS definitions Shiraz Saleem
2019-02-15 17:10 ` [RFC v1 10/19] RDMA/irdma: Add connection manager Shiraz Saleem
2019-02-24 11:21   ` Gal Pressman
2019-02-24 11:21     ` Gal Pressman
2019-02-25 18:46     ` Jason Gunthorpe
2019-02-26 21:07       ` Saleem, Shiraz
2019-02-15 17:10 ` [RFC v1 11/19] RDMA/irdma: Add PBLE resource manager Shiraz Saleem
2019-02-27  6:58   ` Leon Romanovsky
2019-02-15 17:10 ` [RFC v1 12/19] RDMA/irdma: Implement device supported verb APIs Shiraz Saleem
2019-02-15 17:35   ` Jason Gunthorpe
2019-02-15 22:19     ` Shiraz Saleem
2019-02-15 22:32       ` Jason Gunthorpe
2019-02-20 14:52     ` Saleem, Shiraz
2019-02-20 16:51       ` Jason Gunthorpe
2019-02-24 14:35   ` Gal Pressman
2019-02-24 14:35     ` Gal Pressman
2019-02-25 18:50     ` Jason Gunthorpe
2019-02-26 21:09       ` Saleem, Shiraz
2019-02-26 21:09     ` Saleem, Shiraz
2019-02-27  7:31       ` Gal Pressman
2019-02-15 17:11 ` [RFC v1 13/19] RDMA/irdma: Add RoCEv2 UD OP support Shiraz Saleem
2019-02-27  6:50   ` Leon Romanovsky
2019-02-15 17:11 ` [RFC v1 14/19] RDMA/irdma: Add user/kernel shared libraries Shiraz Saleem
2019-02-15 17:11 ` [RFC v1 15/19] RDMA/irdma: Add miscellaneous utility definitions Shiraz Saleem
2019-02-15 17:47   ` Jason Gunthorpe
2019-02-20  7:51     ` Leon Romanovsky
2019-02-20 14:53     ` Saleem, Shiraz
2019-02-20 16:53       ` Jason Gunthorpe
2019-02-15 17:11 ` [RFC v1 16/19] RDMA/irdma: Add dynamic tracing for CM Shiraz Saleem
2019-02-15 17:11 ` [RFC v1 17/19] RDMA/irdma: Add ABI definitions Shiraz Saleem
2019-02-15 17:16   ` Jason Gunthorpe
2019-02-20 14:52     ` Saleem, Shiraz
2019-02-20 16:50       ` Jason Gunthorpe
2019-02-15 17:11 ` [RFC v1 18/19] RDMA/irdma: Add Kconfig and Makefile Shiraz Saleem
2019-02-15 17:11 ` [RFC v1 19/19] RDMA/irdma: Update MAINTAINERS file Shiraz Saleem
2019-02-15 17:20 ` [RFC v1 00/19] Add unified Intel Ethernet RDMA driver (irdma) Jason Gunthorpe
