Linux-RDMA Archive on lore.kernel.org
From: "Saleem, Shiraz" <shiraz.saleem@intel.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: "dledford@redhat.com" <dledford@redhat.com>,
	"kuba@kernel.org" <kuba@kernel.org>,
	"davem@davemloft.net" <davem@davemloft.net>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"Ertman, David M" <david.m.ertman@intel.com>,
	"Nguyen, Anthony L" <anthony.l.nguyen@intel.com>,
	"Williams, Dan J" <dan.j.williams@intel.com>,
	"Hefty, Sean" <sean.hefty@intel.com>
Subject: RE: [PATCH v4 01/23] iidc: Introduce iidc.h
Date: Mon, 12 Apr 2021 14:51:03 +0000
Message-ID: <be4f52362019468b90cd5998fb5cb8b5@intel.com> (raw)
In-Reply-To: <20210407173547.GB502757@nvidia.com>

> Subject: Re: [PATCH v4 01/23] iidc: Introduce iidc.h
> 
> On Tue, Apr 06, 2021 at 04:01:03PM -0500, Shiraz Saleem wrote:
> 
> > +struct iidc_res_base {
> > +	/* Union for future provision e.g. other res_type */
> > +	union {
> > +		struct iidc_rdma_qset_params qsets;
> > +	} res;
> 
> Use an anonymous union?
> 
> There is a lot of confusing provisioning for future types; do you have concrete
> plans here? I'm a bit confused why this is so different from how mlx5 ended up
> when it already has multiple types.

It was initially designed to be extensible to more resource types, but at this point
there is no concrete plan, so it doesn't need to be a union.
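With no other resource types planned, the wrapper can hold the qset params directly. A minimal sketch of that simplification, using portable stdint stand-ins for the kernel's u16/u32 and a hypothetical layout for iidc_rdma_qset_params (the real fields live in iidc.h):

```c
#include <stdint.h>

/* Hypothetical stand-in for the real qset params; illustrative only. */
struct iidc_rdma_qset_params {
	uint32_t teid;      /* L2 scheduler node teid */
	uint16_t qs_handle; /* qset handle */
	uint16_t vport_id;  /* vport id */
};

/* Without the union, the base resource is just the qset params.
 * An anonymous union could be reintroduced later, if a second
 * resource type ever materializes, without churning callers.
 */
struct iidc_res_base {
	struct iidc_rdma_qset_params qsets;
};
```

Since the struct now has a single member, it adds no indirection or size over using the qset params directly.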

> 
> > +};
> > +
> > +struct iidc_res {
> > +	/* Type of resource. */
> > +	enum iidc_res_type res_type;
> > +	/* Count requested */
> > +	u16 cnt_req;
> > +
> > +	/* Number of resources allocated. Filled in by callee.
> > +	 * Based on this value, caller to fill up "resources"
> > +	 */
> > +	u16 res_allocated;
> > +
> > +	/* Unique handle to resources allocated. Zero if call fails.
> > +	 * Allocated by callee and for now used by caller for internal
> > +	 * tracking purpose.
> > +	 */
> > +	u32 res_handle;
> > +
> > +	/* Peer driver has to allocate sufficient memory, to accommodate
> > +	 * cnt_requested before calling this function.
> 
> Calling what function?

Leftover cruft from the rewrite of IIDC in v2.
> 
> > +	 * Memory has to be zero initialized. It is input/output param.
> > +	 * As a result of alloc_res API, this structures will be populated.
> > +	 */
> > +	struct iidc_res_base res[1];
> 
> So it is a wrongly defined flex array? Confused

Needs fixing.
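One way to fix it is a proper C99 flexible array member with the caller sizing the allocation up front (the kernel would use struct_size() from overflow.h; plain arithmetic is shown here so the sketch is self-contained, and the field names are illustrative):

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-in for the real qset params; illustrative only. */
struct iidc_rdma_qset_params {
	uint32_t teid;
	uint16_t qs_handle;
	uint16_t vport_id;
};

/* res[1] replaced with a flexible array member. */
struct iidc_qset_res {
	uint16_t cnt_req;       /* count requested */
	uint16_t res_allocated; /* filled in by callee */
	uint32_t res_handle;    /* unique handle, zero on failure */
	struct iidc_rdma_qset_params qsets[]; /* flexible array member */
};

/* Caller allocates header plus n entries, zero-initialized, in one shot. */
static struct iidc_qset_res *iidc_qset_res_alloc(uint16_t n)
{
	struct iidc_qset_res *r =
		calloc(1, sizeof(*r) + n * sizeof(r->qsets[0]));

	if (r)
		r->cnt_req = n;
	return r;
}
```

This removes the "allocate sufficient memory to accommodate cnt_requested" comment's ambiguity: the allocation helper is the single place where the size is computed.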

> 
> The usages are all using this as some super-complicated function argument:
> 
> 	struct iidc_res rdma_qset_res = {};
> 
> 	rdma_qset_res.res_allocated = 1;
> 	rdma_qset_res.res_type = IIDC_RDMA_QSETS_TXSCHED;
> 	rdma_qset_res.res[0].res.qsets.vport_id = vsi->vsi_idx;
> 	rdma_qset_res.res[0].res.qsets.teid = tc_node->l2_sched_node_id;
> 	rdma_qset_res.res[0].res.qsets.qs_handle = tc_node->qs_handle;
> 
> 	if (cdev_info->ops->free_res(cdev_info, &rdma_qset_res))
> 
> So the answer here is to make your function calls sane and well architected. If you
> have to pass a union to call a function then something is very wrong with the
> design.
> 

Based on the previous comment, the union will be removed.
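With the union gone, the op can also take the qset params directly instead of a wrapper struct, so callers no longer assemble an iidc_res just to make a call. A sketch under those assumptions (op and type names are hypothetical, not the final iidc.h API; a stub implementation is included to show the call shape):

```c
#include <stdint.h>
#include <stddef.h>

struct iidc_core_dev_info; /* opaque here */

/* Hypothetical stand-in for the real qset params; illustrative only. */
struct iidc_rdma_qset_params {
	uint32_t teid;
	uint16_t qs_handle;
	uint16_t vport_id;
};

/* Ops that take the qset params directly; no union in the argument. */
struct iidc_core_ops {
	int (*alloc_rdma_qset)(struct iidc_core_dev_info *cdev_info,
			       struct iidc_rdma_qset_params *qset);
	int (*free_rdma_qset)(struct iidc_core_dev_info *cdev_info,
			      struct iidc_rdma_qset_params *qset);
};

/* Trivial stub used only to exercise the interface shape. */
static int demo_alloc_rdma_qset(struct iidc_core_dev_info *cdev_info,
				struct iidc_rdma_qset_params *qset)
{
	(void)cdev_info;
	qset->teid = 0x100; /* pretend the core driver programmed a node */
	return 0;
}

static const struct iidc_core_ops demo_ops = {
	.alloc_rdma_qset = demo_alloc_rdma_qset,
};
```

The caller then fills one plainly typed struct and calls through, which is the "sane and well architected" shape the review is asking for.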

> You aren't trying to achieve ABI decoupling of the rdma/ethernet modules with an
> obfuscated complicated function pointer indirection, are you?

As discussed in the other thread, this is part of the IIDC interface exporting the core device .ops callbacks.
> 
> > +/* Following APIs are implemented by auxiliary drivers and invoked by
> > + * core PCI driver
> > + */
> > +struct iidc_auxiliary_ops {
> > +	/* This event_handler is meant to be a blocking call.  For instance,
> > +	 * when a BEFORE_MTU_CHANGE event comes in, the event_handler will not
> > +	 * return until the auxiliary driver is ready for the MTU change to
> > +	 * happen.
> > +	 */
> > +	void (*event_handler)(struct iidc_core_dev_info *cdev_info,
> > +			      struct iidc_event *event);
> > +
> > +	int (*vc_receive)(struct iidc_core_dev_info *cdev_info, u32 vf_id,
> > +			  u8 *msg, u16 len);
> > +};
> 
> This is not the normal pattern:
> 
> > +struct iidc_auxiliary_drv {
> > +	struct auxiliary_driver adrv;
> > +	struct iidc_auxiliary_ops *ops;
> > +};
> 
> Just put the two functions above in the drv directly:

Ok.


> 
> struct iidc_auxiliary_drv {
>         struct auxiliary_driver adrv;
> 	void (*event_handler)(struct iidc_core_dev_info *cdev_info,
> 			      struct iidc_event *event);
> 
> 	int (*vc_receive)(struct iidc_core_dev_info *cdev_info, u32 vf_id,
> 			  u8 *msg, u16 len);
> };
> 
> > +
> > +#define IIDC_RDMA_NAME	"intel_rdma"
> > +#define IIDC_RDMA_ID	0x00000010
> > +#define IIDC_MAX_NUM_AUX	4
> > +
> > +/* The const struct that instantiates cdev_info_id needs to be
> > + * initialized in the .c with the macro ASSIGN_IIDC_INFO.
> > + * For example:
> > + * static const struct cdev_info_id cdev_info_ids[] = ASSIGN_IIDC_INFO;
> > + */
> > +struct cdev_info_id {
> > +	char *name;
> > +	int id;
> > +};
> > +
> > +#define IIDC_RDMA_INFO	{ .name = IIDC_RDMA_NAME, .id = IIDC_RDMA_ID },
> > +
> > +#define ASSIGN_IIDC_INFO	\
> > +{				\
> > +	IIDC_RDMA_INFO		\
> > +}
> 
> I tried to figure out what all this was for and came up short. There is only one user
> and all this seems unnecessary in this series, add it later when you need it.

There is no plan for a new user, so this should go.

> 
> > +
> > +#define iidc_priv(x) ((x)->auxiliary_priv)
> 
> Use a static inline function
> 
Ok
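A minimal sketch of the static inline replacement, using a hypothetical one-field stand-in for struct iidc_core_dev_info (the real struct carries much more); unlike the macro, this type-checks its argument:

```c
/* Hypothetical stand-in; the real struct is defined in iidc.h. */
struct iidc_core_dev_info {
	void *auxiliary_priv;
};

/* Type-safe replacement for: #define iidc_priv(x) ((x)->auxiliary_priv) */
static inline void *iidc_priv(struct iidc_core_dev_info *cdev_info)
{
	return cdev_info->auxiliary_priv;
}
```

Passing anything other than a struct iidc_core_dev_info pointer now fails at compile time instead of producing a confusing macro-expansion error.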

Thread overview: 52+ messages
2021-04-06 21:01 [PATCH v4 00/23] Add Intel Ethernet Protocol Driver for RDMA (irdma) Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 01/23] iidc: Introduce iidc.h Shiraz Saleem
2021-04-07 15:44   ` Jason Gunthorpe
2021-04-07 20:58     ` Saleem, Shiraz
2021-04-07 22:43       ` Jason Gunthorpe
2021-04-08  7:14         ` Leon Romanovsky
2021-04-09  1:38           ` Saleem, Shiraz
2021-04-11 11:48             ` Leon Romanovsky
2021-04-12 14:50         ` Saleem, Shiraz
2021-04-12 16:12           ` Jason Gunthorpe
2021-04-15 17:36             ` Saleem, Shiraz
2021-04-07 17:35   ` Jason Gunthorpe
2021-04-12 14:51     ` Saleem, Shiraz [this message]
2021-04-06 21:01 ` [PATCH v4 02/23] ice: Initialize RDMA support Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 03/23] ice: Implement iidc operations Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 04/23] ice: Register auxiliary device to provide RDMA Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 05/23] ice: Add devlink params support Shiraz Saleem
2021-04-07 14:57   ` Jason Gunthorpe
2021-04-07 20:58     ` Saleem, Shiraz
2021-04-07 22:46       ` Jason Gunthorpe
2021-04-12 14:50         ` Saleem, Shiraz
2021-04-12 19:07           ` Parav Pandit
2021-04-13  4:03             ` Parav Pandit
2021-04-13 14:40             ` Saleem, Shiraz
2021-04-13 17:36               ` Parav Pandit
2021-04-14  0:21                 ` Saleem, Shiraz
2021-04-14  5:27                   ` Parav Pandit
2021-04-18 11:51                   ` Leon Romanovsky
2021-04-06 21:01 ` [PATCH v4 06/23] i40e: Prep i40e header for aux bus conversion Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 07/23] i40e: Register auxiliary devices to provide RDMA Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 08/23] RDMA/irdma: Register auxiliary driver and implement private channel OPs Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 09/23] RDMA/irdma: Implement device initialization definitions Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 10/23] RDMA/irdma: Implement HW Admin Queue OPs Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 11/23] RDMA/irdma: Add HMC backing store setup functions Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 12/23] RDMA/irdma: Add privileged UDA queue implementation Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 13/23] RDMA/irdma: Add QoS definitions Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 14/23] RDMA/irdma: Add connection manager Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 15/23] RDMA/irdma: Add PBLE resource manager Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 16/23] RDMA/irdma: Implement device supported verb APIs Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 17/23] RDMA/irdma: Add RoCEv2 UD OP support Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 18/23] RDMA/irdma: Add user/kernel shared libraries Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 19/23] RDMA/irdma: Add miscellaneous utility definitions Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 20/23] RDMA/irdma: Add dynamic tracing for CM Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 21/23] RDMA/irdma: Add ABI definitions Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 22/23] RDMA/irdma: Add irdma Kconfig/Makefile and remove i40iw Shiraz Saleem
2021-04-06 21:01 ` [PATCH v4 23/23] RDMA/irdma: Update MAINTAINERS file Shiraz Saleem
2021-04-06 21:05 ` [PATCH v4 00/23] Add Intel Ethernet Protocol Driver for RDMA (irdma) Saleem, Shiraz
2021-04-06 23:15 ` Jason Gunthorpe
2021-04-06 23:30   ` Saleem, Shiraz
2021-04-07  0:18     ` Saleem, Shiraz
2021-04-07 11:31     ` Jason Gunthorpe
2021-04-07 15:06       ` Saleem, Shiraz
