* mlx5_core Issue
@ 2016-02-04 15:09 Marc Smith
       [not found] ` <CAHkw+Lc3h5-0Km6J5j7xOHa0d-5gx1RO_dBRXYLfgGoyKzHkoA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 4+ messages in thread
From: Marc Smith @ 2016-02-04 15:09 UTC (permalink / raw)
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA; +Cc: alidsmith-Re5JQEeQqe8AvxtiuMwx3w

Hi,

I'm using Linux 4.1.11 and trying to get Mellanox 100 Gb adapters
working. When loading 'mlx5_core' I get the following messages (no
usable interfaces):

--snip--
Jan 25 12:18:08 myhost kernel: [  209.537341] mlx5_core 0000:41:00.0:
firmware version: 12.12.1100
Jan 25 12:18:09 myhost kernel: [  210.104954] mlx5_core 0000:41:00.1:
firmware version: 12.12.1100
Jan 25 12:18:09 myhost kernel: [  210.679413] mlx5_ib: Mellanox
Connect-IB Infiniband driver v2.2-1 (Feb 2014)
Jan 25 12:18:09 myhost kernel: [  210.692783]
0000:41:00.0:mlx5_core_create_qp:202:(pid 769): current num of QPs 0x0
Jan 25 12:18:09 myhost kernel: [  210.692788] command failed, status
bad parameter(0x3), syndrome 0x48b5c0
Jan 25 12:18:09 myhost kernel: [  210.692799] infiniband mlx5_0:
Couldn't create ib_mad QP0
Jan 25 12:18:09 myhost kernel: [  210.693374] infiniband mlx5_0:
Couldn't open port 1
Jan 25 12:18:09 myhost kernel: [  210.693381] infiniband mlx5_0:
ib_register_mad_agent: Invalid port
Jan 25 12:18:09 myhost kernel: [  210.693579] infiniband mlx5_0:
ib_register_mad_agent: Invalid port
Jan 25 12:18:09 myhost kernel: [  210.695172] infiniband mlx5_0:
ib_register_mad_agent: Invalid port
Jan 25 12:18:09 myhost kernel: [  210.696323]
0000:41:00.0:mlx5_core_create_qp:202:(pid 769): current num of QPs 0x0
Jan 25 12:18:09 myhost kernel: [  210.696326] command failed, status
bad resource(0x5), syndrome 0x15a3c9
Jan 25 12:18:09 myhost kernel: [  210.697452] infiniband mlx5_0: Port
1 not found
Jan 25 12:18:09 myhost kernel: [  210.697456] infiniband mlx5_0:
Couldn't close port 1 for agents
Jan 25 12:18:09 myhost kernel: [  210.697460] infiniband mlx5_0: Port
1 not found
Jan 25 12:18:09 myhost kernel: [  210.697463] infiniband mlx5_0:
Couldn't close port 1
Jan 25 12:18:09 myhost kernel: [  210.710739]
0000:41:00.1:mlx5_core_create_qp:202:(pid 769): current num of QPs 0x0
Jan 25 12:18:09 myhost kernel: [  210.710742] command failed, status
bad parameter(0x3), syndrome 0x48b5c0
Jan 25 12:18:09 myhost kernel: [  210.710751] infiniband mlx5_0:
Couldn't create ib_mad QP0
Jan 25 12:18:09 myhost kernel: [  210.711171] infiniband mlx5_0:
Couldn't open port 1
Jan 25 12:18:09 myhost kernel: [  210.711175] infiniband mlx5_0:
ib_register_mad_agent: Invalid port
Jan 25 12:18:09 myhost kernel: [  210.711357] infiniband mlx5_0:
ib_register_mad_agent: Invalid port
Jan 25 12:18:09 myhost kernel: [  210.712729] infiniband mlx5_0:
ib_register_mad_agent: Invalid port
Jan 25 12:18:09 myhost kernel: [  210.713834]
0000:41:00.1:mlx5_core_create_qp:202:(pid 769): current num of QPs 0x0
Jan 25 12:18:09 myhost kernel: [  210.713837] command failed, status
bad resource(0x5), syndrome 0x15a3c9
Jan 25 12:18:09 myhost kernel: [  210.714960] infiniband mlx5_0: Port
1 not found
Jan 25 12:18:09 myhost kernel: [  210.714964] infiniband mlx5_0:
Couldn't close port 1 for agents
Jan 25 12:18:09 myhost kernel: [  210.714968] infiniband mlx5_0: Port
1 not found
Jan 25 12:18:09 myhost kernel: [  210.714971] infiniband mlx5_0:
Couldn't close port 1
--snip--

Here is the Mellanox adapter information:
41:00.0 Network controller [0207]: Mellanox Technologies MT27620
Family [15b3:1013]
41:00.1 Network controller [0207]: Mellanox Technologies MT27620
Family [15b3:1013]
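For reference, a sketch of how the same information could be gathered
from the shell (the log sample in the script is copied from the
messages above; the availability of `lspci` on the system is an
assumption):

```shell
# List Mellanox PCI devices with numeric IDs (15b3 is the Mellanox
# PCI vendor ID); this reproduces the listing above when lspci exists.
command -v lspci >/dev/null && lspci -nn -d 15b3:

# Extract the firmware version the driver reported; a saved log
# sample (copied from the messages above) stands in for 'dmesg'.
printf '%s\n' \
  'kernel: [  209.537341] mlx5_core 0000:41:00.0: firmware version: 12.12.1100' \
  'kernel: [  210.104954] mlx5_core 0000:41:00.1: firmware version: 12.12.1100' \
  > /tmp/mlx5.log
grep -o 'firmware version: [0-9.]*' /tmp/mlx5.log | sort -u
# prints: firmware version: 12.12.1100
```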

I tried using the Mellanox-provided driver
(http://www.mellanox.com/page/products_dyn?product_family=27) and it
works fine.

Obviously the Mellanox-provided driver is newer than what's in my
kernel version (4.1.11). Is there some type of configuration issue on
my end, or is this an issue that is fixed in newer kernel versions
(e.g., a later 4.1.x release or some other branch)? Or maybe some
firmware/driver incompatibility (e.g., firmware on the adapters that
is too new for the vanilla Linux kernel driver)?

I'd prefer to use the vanilla Linux kernel provided mlx5_core driver
and not the Mellanox third-party driver.


Thanks for your time.


--Marc
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: mlx5_core Issue
       [not found] ` <CAHkw+Lc3h5-0Km6J5j7xOHa0d-5gx1RO_dBRXYLfgGoyKzHkoA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2016-02-06 15:36   ` Leon Romanovsky
       [not found]     ` <20160206153613.GB8584-2ukJVAZIZ/Y@public.gmane.org>
  0 siblings, 1 reply; 4+ messages in thread
From: Leon Romanovsky @ 2016-02-06 15:36 UTC (permalink / raw)
  To: Marc Smith; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Or Gerlitz, Matan Barak

On Thu, Feb 04, 2016 at 10:09:19AM -0500, Marc Smith wrote:
> Hi,

For some reason, I found this email in my SPAM folder.

> 
> --snip--
> 
> I'd prefer to use the vanilla Linux kernel provided mlx5_core driver
> and not the Mellanox third-party driver.

Can you please try 4.5-rc2 version [1]?

[1] https://git.kernel.org/cgit/linux/kernel/git/dledford/rdma.git/log/?h=k.o/for-4.5-rc


^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: mlx5_core Issue
       [not found]     ` <20160206153613.GB8584-2ukJVAZIZ/Y@public.gmane.org>
@ 2016-02-07  8:52       ` Leon Romanovsky
       [not found]         ` <20160207085221.GB17225-2ukJVAZIZ/Y@public.gmane.org>
  0 siblings, 1 reply; 4+ messages in thread
From: Leon Romanovsky @ 2016-02-07  8:52 UTC (permalink / raw)
  To: Marc Smith, linux-rdma-u79uwXL29TY76Z2rM5mHXA, Or Gerlitz, Matan Barak
  Cc: Majd Dibbiny

On Sat, Feb 06, 2016 at 05:36:13PM +0200, Leon Romanovsky wrote:
> > I'm using Linux 4.1.11 and trying to get Mellanox 100 Gb adapters
> > working. When loading 'mlx5_core'.

I assume that this accepted patch [1] will fix your issue.

[1] https://patchwork.kernel.org/patch/7929551/

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: mlx5_core Issue
       [not found]         ` <20160207085221.GB17225-2ukJVAZIZ/Y@public.gmane.org>
@ 2016-04-19 19:17           ` Marc Smith
  0 siblings, 0 replies; 4+ messages in thread
From: Marc Smith @ 2016-04-19 19:17 UTC (permalink / raw)
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Hi,

Sorry for the long delay... we tested with 4.5-rc2 after you sent this
email, and the error messages no longer appeared, but the
driver/hardware still didn't function; after loading the module we got
only this:

--snip--
Mar  3 15:05:43 localhost kernel: [  294.632663] mlx5_core
0000:41:00.0: firmware version: 12.12.1100
Mar  3 15:05:43 localhost kernel: [  295.229653] mlx5_core
0000:41:00.1: firmware version: 12.12.1100
Mar  3 15:05:44 localhost kernel: [  295.805140] mlx5_ib: Mellanox
Connect-IB Infiniband driver v2.2-1 (Feb 2014)
--snip--

No Ethernet/IB interfaces are visible, and there is no additional
output in the kernel messages.
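When the modules load silently like this, a few generic checks can
narrow down where things stop; a hedged sketch (these are standard
iproute2/sysfs locations, not anything specific to this report):

```shell
# Confirm the mlx5 modules actually loaded.
grep mlx5 /proc/modules || echo "mlx5 modules not loaded"

# RDMA devices registered by mlx5_ib, if any, show up here.
ls /sys/class/infiniband/ 2>/dev/null || echo "no RDMA devices registered"

# Network interfaces known to the kernel (a missing mlx5 netdev here
# matches the symptom described above).
ls /sys/class/net/
```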


I looked at applying that patch to 4.1.11, but it appears the driver
has changed too much between the 4.1 and 4.5 branches, at least in
drivers/infiniband/core/sysfs.c (different function names, changed
parameters, etc.).

Should I focus on moving to a newer kernel, or is it possible to get
the patch reworked for 4.1.x? Or is there another solution that would
let us stick with the upstream mlx5 kernel drivers for these 100 Gb/s
VPI adapters?
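On the back-port question: before reworking anything, `git apply
--check` is a cheap way to see whether a patch applies to a given
tree. A minimal sketch on a throwaway repo (the file name and patch
content here are invented for illustration; the real patch would come
from the patchwork link above):

```shell
# Build a tiny throwaway repo with one file.
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
printf 'old line\n' > sysfs.c
git add sysfs.c
git -c user.email=you@example.com -c user.name=you commit -qm base

# A unified diff written against the file as it exists in this tree:
cat > fix.patch <<'EOF'
--- a/sysfs.c
+++ b/sysfs.c
@@ -1 +1 @@
-old line
+new line
EOF

# --check reports whether the patch would apply, without applying it.
git apply --check fix.patch && echo "applies cleanly" || echo "needs rework"
# prints: applies cleanly
```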



Thanks for your time.


--Marc

On Sun, Feb 7, 2016 at 3:52 AM, Leon Romanovsky <leon-2ukJVAZIZ/Y@public.gmane.org> wrote:
> On Sat, Feb 06, 2016 at 05:36:13PM +0200, Leon Romanovsky wrote:
>> > I'm using Linux 4.1.11 and trying to get Mellanox 100 Gb adapters
>> > working. When loading 'mlx5_core'.
>
> I assume that this accepted patch [1] will fix your issue.
>
> [1] https://patchwork.kernel.org/patch/7929551/

^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2016-04-19 19:17 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-02-04 15:09 mlx5_core Issue Marc Smith
     [not found] ` <CAHkw+Lc3h5-0Km6J5j7xOHa0d-5gx1RO_dBRXYLfgGoyKzHkoA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2016-02-06 15:36   ` Leon Romanovsky
     [not found]     ` <20160206153613.GB8584-2ukJVAZIZ/Y@public.gmane.org>
2016-02-07  8:52       ` Leon Romanovsky
     [not found]         ` <20160207085221.GB17225-2ukJVAZIZ/Y@public.gmane.org>
2016-04-19 19:17           ` Marc Smith
