linux-rdma.vger.kernel.org archive mirror
From: Martin Habets <mhabets@solarflare.com>
To: Jason Gunthorpe <jgg@ziepe.ca>, Parav Pandit <parav@mellanox.com>
Cc: "Saleem, Shiraz" <shiraz.saleem@intel.com>,
	"Kirsher, Jeffrey T" <jeffrey.t.kirsher@intel.com>,
	"davem@davemloft.net" <davem@davemloft.net>,
	"gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>,
	"Ismail, Mustafa" <mustafa.ismail@intel.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	"nhorman@redhat.com" <nhorman@redhat.com>,
	"sassmann@redhat.com" <sassmann@redhat.com>
Subject: Re: [RFC PATCH v4 10/25] RDMA/irdma: Add driver framework definitions
Date: Thu, 19 Mar 2020 11:49:52 +0000
Message-ID: <745514bf-80a0-db20-044b-220a9f49e71f@solarflare.com>
In-Reply-To: <20200221180427.GL31668@ziepe.ca>

On 21/02/2020 18:04, Jason Gunthorpe wrote:
> On Fri, Feb 21, 2020 at 05:23:31PM +0000, Parav Pandit wrote:
>> On 2/21/2020 11:01 AM, Saleem, Shiraz wrote:
>>>> Subject: RE: [RFC PATCH v4 10/25] RDMA/irdma: Add driver framework
>>>> definitions
>>>>
>>>
>>> [....]
>>>
>>>>>>> +static int irdma_devlink_reload_up(struct devlink *devlink,
>>>>>>> +				   struct netlink_ext_ack *extack) {
>>>>>>> +	struct irdma_dl_priv *priv = devlink_priv(devlink);
>>>>>>> +	union devlink_param_value saved_value;
>>>>>>> +	const struct virtbus_dev_id *id = priv->vdev->matched_element;
>>>>>>
>>>>>> Like irdma_probe(), struct iidc_virtbus_object *vo is accessible for
>>>>>> the given priv.
>>>>>> Please use struct iidc_virtbus_object for any sharing between the two drivers.
>>>>>> Modifying matched_element inside the virtbus match() function and
>>>>>> accessing a pointer to some driver data between two drivers through
>>>>>> this matched_element is not appropriate.
>>>>>
>>>>> We can possibly avoid the matched_element and driver data lookup here.
>>>>> But fundamentally, at probe time (see irdma_gen_probe) the irdma
>>>>> driver needs to know which generation of vdev we bound to, i.e. i40e or ice,
>>>>> since we support both.
>>>>> Based on that, it extracts the driver-specific virtbus device object,
>>>>> i.e. i40e_virtbus_device vs. iidc_virtbus_object, and initializes that device.
>>>>>
>>>>> Accessing driver_data off the vdev matched entry in
>>>>> irdma_virtbus_id_table is how we know this generation info and make the
>>>>> decision.
>>>>>
>>>> If there is a single irdma driver for two different virtbus device types, it is better to
>>>> have two instances of virtbus_register_driver() with different matching strings/ids.
>>>> That way, based on the probe(), it will be clear which virtbus device of interest got added,
>>>> and the code will have a clear separation between the two device types.
>>>
>>> Thanks for the feedback!
>>> Is it commonplace to have multiple driver_register instances of the same bus type
>>> in a driver to support different devices? Seems odd.
>>> Typically a single driver that supports multiple device types for a specific bus type
>>> would do a single bus-specific driver_register, pass in an array of bus-specific
>>> device IDs, and let the bus do the match-up for you, right? And in the probe(), a
>>> driver could do device-specific quirks for the device types. Isn't that the purpose
>>> of device ID tables for pci, platform, usb etc.?
>>> Why are we trying to handle multiple virtbus device types from a driver any differently?
>>>
>>
>> If the differences in treating the two devices are small and you have a lot
>> of common code, it makes sense to do a single virtbus_register_driver()
>> with two different ids.
>> In that case, struct virtbus_device_id should have some device-specific
>> field, like the driver_data that pci has.
>>
>> It should not be set by the match() function in the virtbus core.
>> This field should be set up in the id table by the hw driver which
>> invokes virtbus_register_device().
> 
> Yes
> 
> I think the basic point here is that the 'id' should specify what
> container_of() is valid on the virtbus_device
> 
> And for things like this, where we want to make a many-to-one
> connection, it makes sense to permute the id for each 'connection
> point'.
> 
> i.e., if the id were a string like OF uses, maybe you'd have
> 
>  intel,i40e,rdma
>  intel,i40e,ethernet
>  intel,ice,rdma
> 
> etc
> 
> A string for the match id is often a good idea.
> 
> And I'd suggest introducing a matching alloc so it is all clear and
> type safe:
> 
>    struct mydev_struct mydev;
> 
>    mydev = virtbus_alloc(parent, "intel,i40e,rdma", struct mydev_struct,
>                          vbus_dev);
> 
> 
>    [..]
>    virtbus_register(&mydev->vbus_dev);
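
For reference, this is roughly how I picture the id table Parav and Jason are
describing ending up. It is only a sketch: struct virtbus_dev_id has no
driver_data field today (that is exactly what is being proposed here), and the
string ids and IRDMA_GEN_* values below are made up for illustration:

	/* Hypothetical: the hw driver fills in driver_data so that probe()
	 * knows which container_of() is valid on the virtbus_device it got.
	 */
	static const struct virtbus_dev_id irdma_virtbus_id_table[] = {
		{ .name = "intel,i40e,rdma", .driver_data = IRDMA_GEN_1 },
		{ .name = "intel,ice,rdma",  .driver_data = IRDMA_GEN_2 },
		{ },
	};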

I'd like to see something like this as well. In my experiments for a single
type of device I've been doing this, which works fine but is not future-proof:

	struct sfc_rdma_dev *rdev;

	rdev = kzalloc(sizeof(*rdev), GFP_KERNEL);
	if (!rdev)
		return -ENOMEM;

	/* This is like virtbus_dev_alloc() but using our own memory. */
	rdev->vdev.name = SFC_RDMA_DEVNAME;
	rdev->vdev.data = (void *) &rdma_devops;
	rdev->vdev.dev.release = efx_rdma_dev_release;
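
With a helper along the lines Jason sketched above, that boilerplate could
collapse into something like the following (again only a sketch: virtbus_alloc()
does not exist today, and the parent argument here is a placeholder):

	struct sfc_rdma_dev *rdev;

	/* Allocate the containing structure and initialise the embedded
	 * virtbus_device in one type-safe step.
	 */
	rdev = virtbus_alloc(parent, SFC_RDMA_DEVNAME,
			     struct sfc_rdma_dev, vdev);
	if (!rdev)
		return -ENOMEM;

	rdev->vdev.data = (void *)&rdma_devops;
	virtbus_register(&rdev->vdev);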

Martin

Thread overview: 53+ messages
2020-02-12 19:13 [RFC PATCH v4 00/25] Intel Wired LAN/RDMA Driver Updates 2020-02-11 Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 01/25] virtual-bus: Implementation of Virtual Bus Jeff Kirsher
2020-02-14 17:02   ` Greg KH
2020-02-14 20:34     ` Jason Gunthorpe
2020-02-14 20:43       ` Greg KH
2020-02-15  0:01         ` Jason Gunthorpe
2020-02-15  0:53           ` Greg KH
2020-02-14 20:45       ` Greg KH
2020-02-20 18:55         ` Ertman, David M
2020-02-20 19:27           ` Jason Gunthorpe
2020-02-14 21:22   ` Parav Pandit
2020-02-15  0:08   ` Jason Gunthorpe
2020-02-12 19:14 ` [RFC PATCH v4 02/25] ice: Create and register virtual bus for RDMA Jeff Kirsher
2020-02-14 20:39   ` Jason Gunthorpe
2020-02-20 18:48     ` Ertman, David M
2020-02-20 20:58       ` Jason Gunthorpe
2020-02-12 19:14 ` [RFC PATCH v4 03/25] ice: Complete RDMA peer registration Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 04/25] ice: Support resource allocation requests Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 05/25] ice: Enable event notifications Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 06/25] ice: Allow reset operations Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 07/25] ice: Pass through communications to VF Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 08/25] i40e: Move client header location Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 09/25] i40e: Register a virtbus device to provide RDMA Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 10/25] RDMA/irdma: Add driver framework definitions Jeff Kirsher
2020-02-14 22:13   ` Parav Pandit
2020-02-18 20:42     ` Saleem, Shiraz
2020-02-20 22:24       ` Parav Pandit
2020-02-20 23:06         ` Jason Gunthorpe
2020-02-21 17:01         ` Saleem, Shiraz
2020-02-21 17:23           ` Parav Pandit
2020-02-21 18:04             ` Jason Gunthorpe
2020-03-19 11:49               ` Martin Habets [this message]
2020-02-12 19:14 ` [RFC PATCH v4 11/25] RDMA/irdma: Implement device initialization definitions Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 12/25] RDMA/irdma: Implement HW Admin Queue OPs Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 13/25] RDMA/irdma: Add HMC backing store setup functions Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 14/25] RDMA/irdma: Add privileged UDA queue implementation Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 15/25] RDMA/irdma: Add QoS definitions Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 16/25] RDMA/irdma: Add connection manager Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 17/25] RDMA/irdma: Add PBLE resource manager Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 18/25] RDMA/irdma: Implement device supported verb APIs Jeff Kirsher
2020-02-14 14:54   ` Jason Gunthorpe
2020-02-14 15:49     ` Andrew Boyer
2020-02-14 16:45       ` Jason Gunthorpe
2020-02-18 20:43     ` Saleem, Shiraz
2020-02-12 19:14 ` [RFC PATCH v4 19/25] RDMA/irdma: Add RoCEv2 UD OP support Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 20/25] RDMA/irdma: Add user/kernel shared libraries Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 21/25] RDMA/irdma: Add miscellaneous utility definitions Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 22/25] RDMA/irdma: Add dynamic tracing for CM Jeff Kirsher
2020-02-14 14:53   ` Jason Gunthorpe
2020-02-18 20:43     ` Saleem, Shiraz
2020-02-12 19:14 ` [RFC PATCH v4 23/25] RDMA/irdma: Add ABI definitions Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 24/25] RDMA: Add irdma Kconfig/Makefile and remove i40iw Jeff Kirsher
2020-02-12 19:14 ` [RFC PATCH v4 25/25] RDMA/irdma: Update MAINTAINERS file Jeff Kirsher
