From: "Saleem, Shiraz" <shiraz.saleem@intel.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: "dledford@redhat.com" <dledford@redhat.com>,
	"kuba@kernel.org" <kuba@kernel.org>,
	"davem@davemloft.net" <davem@davemloft.net>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"Ertman, David M" <david.m.ertman@intel.com>,
	"Nguyen, Anthony L" <anthony.l.nguyen@intel.com>,
	"Williams, Dan J" <dan.j.williams@intel.com>,
	"Hefty, Sean" <sean.hefty@intel.com>,
	"Lacombe, John S" <john.s.lacombe@intel.com>
Subject: RE: [PATCH v4 01/23] iidc: Introduce iidc.h
Date: Mon, 12 Apr 2021 14:50:43 +0000
Message-ID: <2339b8bb35b74aabbb708fcd1a6ab40f@intel.com>
In-Reply-To: <20210407224324.GH282464@nvidia.com>

> Subject: Re: [PATCH v4 01/23] iidc: Introduce iidc.h
> 
> On Wed, Apr 07, 2021 at 08:58:49PM +0000, Saleem, Shiraz wrote:
> > > Subject: Re: [PATCH v4 01/23] iidc: Introduce iidc.h
> > >
> > > On Tue, Apr 06, 2021 at 04:01:03PM -0500, Shiraz Saleem wrote:
> > >
> > > > +/* Following APIs are implemented by core PCI driver */
> > > > +struct iidc_core_ops {
> > > > +	/* APIs to allocate resources such as VEB, VSI, Doorbell queues,
> > > > +	 * completion queues, Tx/Rx queues, etc...
> > > > +	 */
> > > > +	int (*alloc_res)(struct iidc_core_dev_info *cdev_info,
> > > > +			 struct iidc_res *res,
> > > > +			 int partial_acceptable);
> > > > +	int (*free_res)(struct iidc_core_dev_info *cdev_info,
> > > > +			struct iidc_res *res);
> > > > +
> > > > +	int (*request_reset)(struct iidc_core_dev_info *cdev_info,
> > > > +			     enum iidc_reset_type reset_type);
> > > > +
> > > > +	int (*update_vport_filter)(struct iidc_core_dev_info *cdev_info,
> > > > +				   u16 vport_id, bool enable);
> > > > +	int (*vc_send)(struct iidc_core_dev_info *cdev_info, u32 vf_id, u8 *msg,
> > > > +		       u16 len);
> > > > +};
> > >
> > > What is this? There is only one implementation:
> > >
> > > static const struct iidc_core_ops ops = {
> > > 	.alloc_res			= ice_cdev_info_alloc_res,
> > > 	.free_res			= ice_cdev_info_free_res,
> > > 	.request_reset			= ice_cdev_info_request_reset,
> > > 	.update_vport_filter		= ice_cdev_info_update_vsi_filter,
> > > 	.vc_send			= ice_cdev_info_vc_send,
> > > };
> > >
> > > So export and call the functions directly.
> >
> > No. Then we end up requiring ice to be loaded even when we just want to
> > use irdma with x722 [whose ethernet driver is "i40e"].
> 
> So what? What does it matter to load a few extra kb of modules?

Because it is unnecessary to force a user to build/load drivers for HW they
don't have. The problem compounds if we have to do it for all future Intel HW
PCI drivers, i.e. irdma would have to depend on ICE && ....

IIDC is Intel's converged, generic RDMA <--> PCI driver channel interface, which
we intend to use moving forward. These .ops callbacks are part of that interface
and will have a different implementation in each HW generation's PCI core driver.
It is extensible: new ops can be added to the table for new HW, and on some HW
certain ops will simply be left NULL.
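
To illustrate (hypothetical gen-2 driver and function names, not code from any
actual submission), a future HW generation's PCI core driver would just register
its own table and leave any op it does not support as NULL:

/* Hypothetical gen-2 PCI core driver providing its own iidc_core_ops */
static const struct iidc_core_ops gen2_ops = {
	.alloc_res		= gen2_cdev_info_alloc_res,
	.free_res		= gen2_cdev_info_free_res,
	.request_reset		= gen2_cdev_info_request_reset,
	/* .update_vport_filter left NULL: no such filter on this HW */
	.vc_send		= gen2_cdev_info_vc_send,
};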

There is a near-term Intel Ethernet VF driver which will use IIDC to provide RDMA
in the VF and implement some of these .ops callbacks. There is also intent to move
i40e to IIDC.

And yes, it allows a unified irdma driver to be loaded without requiring all of the
multi-gen PCI core drivers to be built/loaded as a prerequisite, which solves a
pain point for the user and avoids an unnecessary memory footprint.
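
On the irdma side the call site stays generation-agnostic. A minimal sketch of the
dispatch (assuming cdev_info exposes the table through an ops pointer, and that an
unimplemented op is reported as unsupported; irdma_alloc_hw_res is a made-up name):

/* Hypothetical consumer-side wrapper calling through the ops table */
static int irdma_alloc_hw_res(struct iidc_core_dev_info *cdev_info,
			      struct iidc_res *res)
{
	if (!cdev_info->ops || !cdev_info->ops->alloc_res)
		return -EOPNOTSUPP;

	/* partial_acceptable = 0: require the full resource request */
	return cdev_info->ops->alloc_res(cdev_info, res, 0);
}

The same irdma binary works whether the parent is ice, the future VF driver, or a
converted i40e; only the table behind the auxiliary device's parent changes.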

In the past, with i40e <-> i40iw, I acknowledge such a dependency was decoupled
for the wrong reasons [1], and I understand where your frustration is coming from.
But in a unified irdma driver model connecting to multiple PCI-gen drivers, I do
think the decoupling serves a purpose. This has also been voiced over the years in
some of our discussions [2] leading up to the auxiliary bus, and it has been part
of our submissions from the get-go. In fact, using such domain-specific .ops from
the parent device was an assumption baked into the design when the auxiliary bus
was conceived, and it is reflected in the documentation [3] (see Example Usage).

[1] https://lore.kernel.org/linux-rdma/20180522205612.GD7502@mellanox.com/
[2] https://lore.kernel.org/linux-rdma/2B0E3F215D1AB84DA946C8BEE234CCC97B2E1D29@ORSMSX101.amr.corp.intel.com/
[3] https://www.kernel.org/doc/html/latest/driver-api/auxiliary_bus.html

Shiraz
