On Tue, 2019-02-12 at 20:41 -0700, Jason Gunthorpe wrote:
> On Tue, Feb 12, 2019 at 03:43:46PM -0600, Shiraz Saleem wrote:
> > From: Anirudh Venkataramanan
> >
> > The E800 series of Ethernet devices has multiple hardware blocks,
> > of which RDMA is one. The RDMA block isn't interfaced directly to
> > PCI or any other bus. The RDMA driver (irdma) thus depends on the
> > ice driver to provide access to the RDMA hardware block.
> >
> > The ice driver first creates a pseudo bus, then creates and
> > attaches a new device to the pseudo bus using device_register().
> > This new device is referred to as a "peer device" and the
> > associated driver (i.e. irdma) is a "peer driver" to ice. Once the
> > peer driver loads, it can call ice driver functions exposed to it
> > via struct ice_ops. Similarly, ice can call peer driver functions
> > exposed to it via struct ice_peer_ops.
>
> This seems quite big for this straightforward description..
>
> I was going to say I like the idea of using the driver model to
> connect the drivers, but if it takes so much code ...

Part of the reason the ice driver patches are much larger than the
i40e patch is that the ice driver currently has no RDMA support at
all. The ice developers wanted to wait for the new RDMA interface
implementation before adding RDMA support to the ice driver.

> > +	/* check for reset in progress before proceeding */
> > +	pf = pci_get_drvdata(peer_dev->pdev);
> > +	for (i = 0; i < ICE_MAX_RESET_WAIT; i++) {
> > +		if (!ice_is_reset_in_progress(pf->state))
> > +			break;
> > +		msleep(100);
> > +	}
>
> Use proper locking, not loops with sleeps.
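
For reference, one way to drop the polling loop would be a wait queue.
This is only a rough sketch, not the actual patch: it assumes a wait
queue (hypothetical name "reset_wait") added to struct ice_pf, woken
from the reset path once the in-progress bits in pf->state are
cleared:

	/* Sketch: sleep until the reset path signals completion,
	 * instead of sampling pf->state on a 100 ms timer. Assumes
	 * wake_up(&pf->reset_wait) is called after the reset bits
	 * in pf->state are cleared.
	 */
	pf = pci_get_drvdata(peer_dev->pdev);
	if (!wait_event_timeout(pf->reset_wait,
				!ice_is_reset_in_progress(pf->state),
				msecs_to_jiffies(ICE_MAX_RESET_WAIT * 100)))
		return -EBUSY;

wait_event_timeout() returns 0 on timeout, so the caller gets a real
error instead of silently proceeding, and waiters are unblocked as
soon as the reset completes rather than on the next msleep() tick.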