linux-rdma.vger.kernel.org archive mirror
From: Jason Gunthorpe <jgg@nvidia.com>
To: "Nikolova, Tatyana E" <tatyana.e.nikolova@intel.com>
Cc: "dledford@redhat.com" <dledford@redhat.com>,
	"leon@kernel.org" <leon@kernel.org>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>
Subject: Re: [PATCH v2 rdma-core] irdma: Add ice and irdma to kernel-boot rules
Date: Mon, 20 Sep 2021 20:23:30 -0300
Message-ID: <20210920232330.GH327412@nvidia.com>
In-Reply-To: <DM6PR11MB4692517FBBC9AFD046990DCDCBA09@DM6PR11MB4692.namprd11.prod.outlook.com>

On Mon, Sep 20, 2021 at 07:41:21PM +0000, Nikolova, Tatyana E wrote:
> 
> 
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Thursday, September 2, 2021 10:40 AM
> > To: Nikolova, Tatyana E <tatyana.e.nikolova@intel.com>
> > Cc: dledford@redhat.com; leon@kernel.org; linux-rdma@vger.kernel.org
> > Subject: Re: [PATCH v2 rdma-core] irdma: Add ice and irdma to kernel-boot
> > rules
> > 
> > On Thu, Sep 02, 2021 at 03:29:43PM +0000, Nikolova, Tatyana E wrote:
> > > > Given that ice is both iwarp and roce, is there some better way to
> > > > detect this? Doesn't the aux device encode it?
> > >
> > > Hi Jason,
> > >
> > > We tried a few experiments without success. The auxiliary devices
> > > alias with our driver and not ice, so maybe this is the reason?
> > >
> > > Here is an example of what we tried.
> > >
> > > udevadm info
> > > /sys/devices/pci0000:2e/0000:2e:00.0/0000:2f:00.0/ice.roce.0
> > > P: /devices/pci0000:2e/0000:2e:00.0/0000:2f:00.0/ice.roce.0
> > > E: DEVPATH=/devices/pci0000:2e/0000:2e:00.0/0000:2f:00.0/ice.roce.0
> > > E: DRIVER=irdma
> > > E: MODALIAS=auxiliary:ice.roce
> > > E: SUBSYSTEM=auxiliary
> > >
> > > udevadm info /sys/bus/auxiliary/devices/ice.roce.0
> > > P: /devices/pci0000:2e/0000:2e:00.0/0000:2f:00.0/ice.roce.0
> > > E: DEVPATH=/devices/pci0000:2e/0000:2e:00.0/0000:2f:00.0/ice.roce.0
> > > E: DRIVER=irdma
> > > E: MODALIAS=auxiliary:ice.roce
> > > E: SUBSYSTEM=auxiliary
> > >
> > > Given the udevadm output, we put the following line in the udev rdma-
> > description.rules:
> > >
> > > SUBSYSTEMS=="auxiliary",
> > DEVPATH=="*/devices/pci0000:2e/0000:2e:00.0/0000:2f:00.0/ice.roce.0/*",
> > ENV{ID_RDMA_ROCE}="1"
> > 
> > What is the SUBSYSTEM=="infiniband" device like?
> > 
> > This seems like the right direction, you need to wrangle udev though..
> > 
> 
> Hi Jason,
> 
> After more research and given the udevadm output, we revised the irdma udev rule to make it work. Could you please review the patch below?
> 
> diff --git a/kernel-boot/rdma-description.rules b/kernel-boot/rdma-description.rules
> index 48a7cede..09deb451 100644
> --- a/kernel-boot/rdma-description.rules
> +++ b/kernel-boot/rdma-description.rules
> @@ -1,7 +1,7 @@
>  # This is a version of net-description.rules for /sys/class/infiniband devices
>  
>  ACTION=="remove", GOTO="rdma_description_end"
> -SUBSYSTEM!="infiniband", GOTO="rdma_description_end"
> +SUBSYSTEM!="infiniband", GOTO="rdma_infiniband_end"
>  
>  # NOTE: DRIVERS searches up the sysfs path to find the driver that is bound to
>  # the PCI/etc device that the RDMA device is linked to. This is not the kernel
> @@ -40,4 +40,9 @@ DEVPATH=="*/infiniband/rxe*", ATTR{parent}=="*", ENV{ID_RDMA_ROCE}="1"
>  SUBSYSTEMS=="pci", ENV{ID_BUS}="pci", ENV{ID_VENDOR_ID}="$attr{vendor}", ENV{ID_MODEL_ID}="$attr{device}"
>  SUBSYSTEMS=="pci", IMPORT{builtin}="hwdb --subsystem=pci"
>  
> +LABEL="rdma_infiniband_end"
> +
> +SUBSYSTEM!="auxiliary", GOTO="rdma_description_end"
> +KERNEL=="ice.iwarp.?", ENV{ID_RDMA_IWARP}="1" 
> +KERNEL=="ice.roce.?", ENV{ID_RDMA_ROCE}="1"
>  LABEL="rdma_description_end"

This doesn't seem right, the ID_* must be applied to an infiniband
device or the other stuff that consumes this won't work right.

What does the udev debugging say about these ID tags?
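Something like this should show whether they actually end up on the
infiniband device (the device name here is only an example, substitute
the irdma ibdev on your system):

  udevadm info -q property /sys/class/infiniband/rocep47s0f0 | grep ID_RDMA
  udevadm test /sys/class/infiniband/rocep47s0f0 2>&1 | grep ID_RDMA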

The SUBSYSTEMS=="" is the right approach, as shown above for the other
metadata. If you are having trouble, I'm wondering if there is some
kind of kernel problem creating the wrong sysfs?
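For the rule itself I'd expect something roughly along these lines to
work (untested sketch; the ice.iwarp/ice.roce name patterns are only
taken from your udevadm output above, so treat them as assumptions):

  # sketch: run against the SUBSYSTEM=="infiniband" device, match the auxiliary parent
  SUBSYSTEMS=="auxiliary", KERNELS=="ice.iwarp.*", ENV{ID_RDMA_IWARP}="1"
  SUBSYSTEMS=="auxiliary", KERNELS=="ice.roce.*", ENV{ID_RDMA_ROCE}="1"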

Jason

Thread overview: 18+ messages
2021-08-23 15:48 [PATCH v2 rdma-core] irdma: Add ice and irdma to kernel-boot rules Tatyana Nikolova
2021-08-23 16:11 ` Jason Gunthorpe
2021-09-02 15:29   ` Nikolova, Tatyana E
2021-09-02 15:40     ` Jason Gunthorpe
2021-09-20 19:41       ` Nikolova, Tatyana E
2021-09-20 23:23         ` Jason Gunthorpe [this message]
2021-10-14 20:11           ` Nikolova, Tatyana E
2021-10-14 23:36             ` Jason Gunthorpe
2022-11-02 16:40               ` Nikolova, Tatyana E
2022-11-09 13:45                 ` Jason Gunthorpe
2023-01-13 23:57                   ` Saleem, Shiraz
2023-01-14  0:01                     ` Jason Gunthorpe
2023-01-17 20:27                       ` Saleem, Shiraz
2023-01-20 18:02                         ` Jason Gunthorpe
2021-09-02 16:03     ` Leon Romanovsky
2021-09-02 16:13       ` Nikolova, Tatyana E
2021-09-02 23:23         ` Leon Romanovsky
2021-10-10  9:42 ` Leon Romanovsky
