From: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
To: Jason Gunthorpe <jgg@ziepe.ca>, Shiraz Saleem <shiraz.saleem@intel.com>
Cc: dledford@redhat.com, davem@davemloft.net,
	linux-rdma@vger.kernel.org, netdev@vger.kernel.org,
	mustafa.ismail@intel.com,
	Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Subject: Re: [RFC 03/19] net/ice: Add support for ice peer devices and drivers
Date: Wed, 13 Feb 2019 07:40:15 -0800	[thread overview]
Message-ID: <8612b6c404633f930de5ceb090647ae0910644b2.camel@intel.com> (raw)
In-Reply-To: <20190213034104.GA8751@ziepe.ca>


On Tue, 2019-02-12 at 20:41 -0700, Jason Gunthorpe wrote:
> On Tue, Feb 12, 2019 at 03:43:46PM -0600, Shiraz Saleem wrote:
> > From: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
> > 
> > The E800 series of Ethernet devices has multiple hardware blocks,
> > of which RDMA is one. The RDMA block isn't interfaced directly to
> > PCI or any other bus. The RDMA driver (irdma) thus depends on the
> > ice driver to provide access to the RDMA hardware block.
> > 
> > The ice driver first creates a pseudo bus and then creates and
> > attaches a new device to the pseudo bus using device_register().
> > This new device is referred to as a "peer device" and the
> > associated driver (i.e. irdma) is a "peer driver" to ice. Once the
> > peer driver loads, it can call ice driver functions exposed to it
> > via struct ice_ops. Similarly, ice can call peer driver functions
> > exposed to it via struct ice_peer_ops.
> 
> This seems quite big for this straightforward description...
>  
> I was going to say I like the idea of using the driver model to
> connect the drivers, but if it takes so much code ...

Part of the reason the ice driver patches are much larger than the
i40e patch is that the ice driver currently has no RDMA support at
all. The ice developers wanted to wait for the new RDMA interface
implementation before adding RDMA support to the ice driver.

> 
> > +     /* check for reset in progress before proceeding */
> > +     pf = pci_get_drvdata(peer_dev->pdev);
> > +     for (i = 0; i < ICE_MAX_RESET_WAIT; i++) {
> > +             if (!ice_is_reset_in_progress(pf->state))
> > +                     break;
> > +             msleep(100);
> > +     }
> 
> Use proper locking, not loops with sleeps.



Thread overview: 25+ messages
2019-02-12 21:43 [RFC 00/19] Add unified Intel Ethernet RDMA driver (irdma) Shiraz Saleem
2019-02-12 21:43 ` [RFC 01/19] net/i40e: Add peer register/unregister to struct i40e_netdev_priv Shiraz Saleem
2019-02-12 21:43 ` [RFC 02/19] net/ice: Create framework for VSI queue context Shiraz Saleem
2019-02-12 21:43 ` [RFC 03/19] net/ice: Add support for ice peer devices and drivers Shiraz Saleem
2019-02-13  3:41   ` Jason Gunthorpe
2019-02-13 15:40     ` Jeff Kirsher [this message]
2019-02-12 21:43 ` [RFC 04/19] RDMA/irdma: Add driver framework definitions Shiraz Saleem
2019-02-12 21:43 ` [RFC 05/19] RDMA/irdma: Implement device initialization definitions Shiraz Saleem
2019-02-12 21:43 ` [RFC 06/19] RDMA/irdma: Implement HW Admin Queue OPs Shiraz Saleem
2019-02-12 21:43 ` [RFC 07/19] RDMA/irdma: Add HMC backing store setup functions Shiraz Saleem
2019-02-12 21:43 ` [RFC 08/19] RDMA/irdma: Add privileged UDA queue implementation Shiraz Saleem
2019-02-12 21:43 ` [RFC 09/19] RDMA/irdma: Add QoS definitions Shiraz Saleem
2019-02-12 21:43 ` [RFC 10/19] RDMA/irdma: Add connection manager Shiraz Saleem
2019-02-12 21:43 ` [RFC 11/19] RDMA/irdma: Add PBLE resource manager Shiraz Saleem
2019-02-12 21:43 ` [RFC 12/19] RDMA/irdma: Implement device supported verb APIs Shiraz Saleem
2019-02-12 22:27   ` Jason Gunthorpe
2019-02-15 17:18     ` Shiraz Saleem
2019-02-12 21:43 ` [RFC 13/19] RDMA/irdma: Add RoCEv2 UD OP support Shiraz Saleem
2019-02-12 21:43 ` [RFC 14/19] RDMA/irdma: Add user/kernel shared libraries Shiraz Saleem
2019-02-12 21:43 ` [RFC 15/19] RDMA/irdma: Add miscellaneous utility definitions Shiraz Saleem
2019-02-12 21:43 ` [RFC 16/19] RDMA/irdma: Add dynamic tracing for CM Shiraz Saleem
2019-02-12 21:44 ` [RFC 17/19] RDMA/irdma: Add ABI definitions Shiraz Saleem
2019-02-12 23:05   ` Jason Gunthorpe
2019-02-12 21:44 ` [RFC 18/19] RDMA/irdma: Add Kconfig and Makefile Shiraz Saleem
2019-02-12 21:44 ` [RFC 19/19] RDMA/irdma: Update MAINTAINERS file Shiraz Saleem
